Molecular characterization of Dictyocaulus nematodes in wild red deer Cervus elaphus in two areas of the Italian Alps Nematodes of the genus Dictyocaulus are the causative agents of parasitic bronchitis and pneumonia in several domestic and wild ungulates. Various species have been described in wild cervids, as the case of Dictyocaulus cervi in red deer, recently described as a separate species from Dictyocaulus eckerti. In Italy, information on dictyocaulosis in wildlife is limited and often outdated. In this work, 250 red deer were examined for the presence of Dictyocaulus spp. in two areas of the Italian Alps (n = 104 from Valle d’Aosta, n = 146 from Stelvio National Park), and the retrieved lungworms were molecularly characterized. Lungworms were identified in 23 and 32 animals from Valle d’Aosta and Stelvio National Park, respectively. The nematodes, morphologically identified as D. cervi, were characterized molecularly (18S rDNA, ITS2, and coxI). Consistently, almost all specimens were found to be phylogenetically related to D. cervi. Three individuals, detected from both study sites and assigned to an undescribed Dictyocaulus sp., clustered with Dictyocaulus specimens isolated from red deer and fallow deer in previous studies. Within each of D. cervi and the undescribed Dictyocaulus sp., the newly isolated nematodes phylogenetically clustered based on their geographical origin. This study revealed the presence of D. cervi in Italian red deer, and an undetermined Dictyocaulus sp. that should be more deeply investigated. The results suggest that further analyses should be focused on population genetics of cervids and their lungworms to assess how they evolved, or co-evolved, throughout time and space and to assess the potential of transmission towards farmed animals. Supplementary Information The online version contains supplementary material available at 10.1007/s00436-022-07773-4. Introduction Parasitic nematodes of the genus Dictyocaulus (family Dictyocaulidae, superfamily Trichostrongyloidea) are known to affect domestic and wild ruminants (Ács et al. 2016;Höglund et al. 2003). The adult lungworms are found in the small and large airways of the host, potentially causing parasitic, often fatal, bronchitis (dictyocaulosis), especially in cattle, sheep, and farmed red deer Cervus elaphus. This condition, in turn, can lead to potential economic losses in livestock production (Jackson 2008;Pyziel et al. 2017). In free-living cervids, the disease should be considered noteworthy for wildlife management with respect to the interaction with livestock (Pyziel et al. 2015), as well as to potential threats to biodiversity, wildlife conservation, and the development of the game industry (Pyziel et al. 2017;Pyziel 2018). Some species of the genus Dictyocaulus show host-specificity, as in the case of Dictyocaulus viviparus Bloch, 1782 (cattle) and Dictyocaulus filaria Rudolphi, 1 3 1809 (sheep and goat) (Bangoura et al. 2020). On the other side, Dictyocaulus eckerti Skrjabin, 1931 was until recently considered a complex of species associated with a range of cervid species (Divina et al. 2000;Pyziel et al. 2017). The Dictyocaulus eckerti complex was then split following the description of additional cervid-specific species, such as Dictyocaulus capreolus Gibbons & Höglund, 2002 in roe deer Capreolus capreolus, moose Alces alces (Gibbons and Höglund 2002) and Dictyocaulus cervi Pyziel 2017 in red deer (Pyziel et al. 2017). However, the host range and epidemiology of Dictyocaulus spp. 
in cervids still need to be elucidated. In fact, a higher morphological variability than currently recognized may occur in the cervid-related lungworm species (Bangoura et al. 2020), especially from the perspective of possible cross-transmission events between cervids and livestock. Species identification is generally based on morphology for all Dictyocaulus species, although significant skilled labor is required for the exact identification. Additionally, in the light of some recently described species, as for D. cervi, the slight differences between populations suggest that the identification should be reinforced by other tools, for example molecular analyses (Bangoura et al. 2020). Generally, dictyocaulosis in wildlife has been sporadically investigated in Italy, leading in turn to limited and/or often outdated data on the infection (Romano et al. 1980;Bregoli et al. 2006;Zanet et al. 2021). The morphological identification of D. eckerti and D. cervi requires a careful evaluation, since misidentification between the two could have occurred in the past. This aspect thus calls for updated knowledge of the epidemiology of these parasites in wild cervids. The present work is aimed to investigate the presence of Dictyocaulus nematodes in wild red deer in two areas of the Italian Alps and characterize them from the molecular standpoint. The respiratory tracts from the trachea to bronchioles of freshly hunted animals were dissected, cut open, and flushed with tap water. The nematodes were recovered directly from the respiratory tract or decanted washing water and observed under a Leica MZ95 stereomicroscope (Leica Microsystems, Wetzlar, Germany), as described in Pyziel et al. (2017). For further analyses, the recovered nematodes were preserved individually in 70% ethanol at + 4 °C. The collected lungworms were cleared with lactophenol to perform morphological identification. Lungworms were observed and measured using a Leica DMLS light microscope (magnification from 100 × to 400 × ; Leica Microsystems, Wetzlar, Germany), and the identification was based on specific keys (Skrjabin et al. 1954;Gibbons et al. 1988;Pyziel et al. 2017;Bangoura et al. 2020) for all recovered lungworms. Parasite prevalence, mean intensity, mean abundance, and confidence limits were generated with the software package Quantitative Parasitology 3.0 (QP 3.0) (Rózsa et al. 2000). Parasite prevalence and mean intensity were compared using a Bootstrap 2-sample t-test and Fisher's exact test, both available in QP 3.0. For the molecular investigations, worms were selected from a subsample of positive red deer chosen randomly. One worm per red deer was subsequently randomly selected, for a total of 17 and 21 nematodes from VdA and SNP, respectively. A portion of ~ 1 mm from the central part of the nematode (lacking useful morphological features) was collected and stored at − 18 °C until further analyses. Raw lysates were obtained for genomic DNA extraction of single nematode portions and used as templates in PCR reactions (Romeo et al. 2021). Molecular analyses were performed on three marker gene sequences: 18S rDNA gene, nuclear internal transcribed spacer 2: ITS2, and cytochrome oxidase I: coxI. Amplifications were performed, according to Pyziel et al. (2017) using the following primers: NF50 (5′-TGA AAT GGG AAC GGC TCA T-3′) and BNR1 (ACC TAC AGA TAC CTT GTT ACGAC-3′) targeting the 18S rDNA region; ITS2F (5′-ACG TCT GGT TCA GGG TTG TT-3′) and BD3R (5′-TAT GCT TAA GTT CAG CGG GT-3′) targeting the ITS2 region. 
For the coxI region, the primer pair described in Bowles et al. (1992) were used: coxI_F (5'-TTT TTT GGG CAT CCT GAG GTT TAT -3') and coxI_R (5′-TAA AGA AAG AAC ATA ATG AAA ATG -3′). PCRs were carried out following the original protocol described for each primer pair. The obtained amplicons were run on 2% agarose gel; gel bands were excised and purified using the Wizard® SV Gel and PCR Clean-Up System (Promega, Madison, USA) following the manufacturer's instructions and Sanger sequenced bidirectionally. Sequencing was performed using internal primers NF890 (CCT AAA GCG AAA GCA TTT GCC) and NR1040 (CAT ACC CCA GGA ACC GAA ) (Pyziel et al. 2017) for the 18S rDNA fragment and using the primers given above for the ITS2 and coxI amplicons. The obtained gene sequences were deposited in GenBank (see Supplementary Table S1). Their evolutionary distances, and with respect to Dictyocaulus spp. sequences present in GenBank, were estimated using MEGA X version 10.1.8 (Kumar et al. 2018) software and compared. Concerning phylogenetic analyses, for each marker (18S rDNA, ITS2, and coxI) a representative selection of sequences of other Dictyocaulus spp. from previous studies was downloaded, together with outgroup sequences. Next, for 18 rDNA, sequences were aligned with the automated aligner of the ARB software package (Ludwig et al. 2004), and the alignment was manually refined to optimize basepairing. For ITS2 and coxI, sequences were aligned with MUSCLE (Edgar 2004). The optimal substitution model was then selected for each aligned marker with jModelTest (Darriba et al. 2012), and a maximum likelihood phylogeny was inferred with PhyML (Guindon and Gascuel 2003) with 100 bootstrap pseudo-replicates. Bayesian inference trees were inferred with MrBayes (Ronquist et al. 2012), employing three runs, each with one cold and three heated Markov chains Monte Carlo, iterating for 1,000,000 generations, with 25% burn-in. According to the morphology, all the lungworms could be ascribed to D. cervi. No significant differences between parasite prevalence and mean intensity in hosts from the two study areas were found. Overall, the prevalence in the two areas was lower than previously reported for other European countries. For example, recent investigations on D. cervi in red deer showed prevalence levels of about 44-68% in Poland (Pyziel et al. 2017;Pyziel 2018). Lungworm surveys in Sweden and Hungary recorded a putative D. eckerti prevalence of about 33% in red deer (Divina et al. 2002;Ács et al. 2016). In our study, parasite prevalence was higher in calves than in adults, confirming data reported in the literature (David 1997;Divina et al. 2002;Ács et al. 2016). In the molecular analyses, all the examined lungworms tested positive for the three PCR targets (data on samples, accession numbers, ascribed species, and isolates are reported in Supplementary Table 1). The sequenced 18S rDNA and ITS2 amplicons showed that 35 out of 38 sequences were 100% identical with sequences of D. cervi available in GenBank (18S rDNA -accession MH183394; ITS2 -accession KM374673). Consistently, they were included in the D. cervi clade with high branch support in the respective phylogenies ( Fig. 1; Supplementary Fig. 1). The 18S rDNA gene sequences of three out of 38 specimens (two from VdA and one from SNP, respectively indicated as "Dictyocaulus sp. VdA isolate 4," "Dictyocaulus sp. VdA isolate 5", and "Dictyocaulus sp. SNP isolate 6") showed 100% identity with undescribed Dictyocaulus sp. 
found in red deer and fallow deer Dama dama, forming the sister group of D. viviparus (Fig. 1). For these three undescribed Dictyocaulus specimens, the obtained ITS2 amplicons showed 93.83-100% identity with sequences belonging to Dictyocaulus sp. isolates present in GenBank. Contrary to the 18S rDNA gene-based phylogenetic inference, the ITS2 gene sequences of these three Dictyocaulus sp. specimens from the present work did not form a sister clade with D. viviparus but with D. capreolus instead ( Supplementary Fig. 1). Interestingly, the phylogenetic inferences based on the mitochondrial coxI gene revealed population subdivisions based on the geographical area for what concerns lungworms molecularly identified as D. cervi using the two nuclear markers. Furthermore, coxI gene analysis again confirmed the presence of a separate, well-supported clade belonging to a Dictyocaulus sp. (Fig. 2). The D. cervi coxI gene sequences from SNP clustered with high support within the D. cervi clade (Fig. 2), together with sequences obtained from red deer and moose from central-eastern European countries (e.g., Poland and Hungary). On the contrary, sequences from VdA, previously identified as belonging to D. cervi based on morphology and the two nuclear markers, clustered with D. eckerti sequences with strong support (Fig. 2). On the one hand, a higher phylogenetic resolution within the D. cervi/D. eckerti clade obtained with coxI is not surprising. It should be considered that mitochondrial DNA is extensively employed as a marker in genetic diversity and phylogenetic analyses at various taxonomic levels for its maternal inheritance, lack of recombination, and, specifically, its fast evolutionary rate (Boore 1999;Blouin 2002;Yong et al. 2015;Zhao et al. 2022). On the other hand, the non-congruence in the species assignment of some specimens to D. eckerti (or D. cervi) is possibly only apparent, considering that D. cervi was only recently circumscribed from the original D. eckerti complex (Pyziel et al. 2017). Therefore, to address this point, a careful and comprehensive comparison of the morphological and genetic data obtained in previous studies would be required, which is far beyond the aims of the present study. Thus, we will treat those genetic variations mostly from a bio-geographical perspective. The geographical separation may associate with the genetic differentiation between lungworms in the two areas, and this is partly supported by the historical dynamics of red deer in the Italian Alps. Red deer populations in Italy faced a drastic reduction over the past centuries, nearly reaching extinction, until the half of the twentieth century when the species recolonized the southern Alps both spontaneously and through reintroductions and restocking (Mattioli et al. 2001). Recolonization first occurred spontaneously in the central-eastern Alps towards the end of the 1940s, with red deer entering from neighboring countries (Mattioli et al. 2001). This aspect may explain why SNP coxI gene sequences cluster with high support with D. cervi sequences from central-eastern Europe. In the western Alps, including Valle d'Aosta, red deer demographic recovery was possible thanks to spontaneous recolonization from the neighboring Swiss regions as well as to occasional reintroductions of subjects from the eastern Alps (Tarello 1991;Mattioli et al. 2001;Carnevali et al. 2009). This might have impacted the genetic differences observed between the lungworm populations from the two study areas. 
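The evolutionary distances discussed below were estimated with MEGA X, as described in the methods. Purely as an illustration of what such a pairwise comparison involves, the following sketch computes an uncorrected p-distance between aligned coxI fragments; the isolate names and sequences are hypothetical placeholders, not the deposited data.

```python
# Illustration only: an uncorrected p-distance between two aligned sequences.
from itertools import combinations

def p_distance(seq_a, seq_b):
    """Proportion of differing sites, ignoring gaps and ambiguous bases."""
    sites = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a in "ACGT" and b in "ACGT"]
    if not sites:
        raise ValueError("no comparable sites in the alignment")
    return sum(a != b for a, b in sites) / len(sites)

aligned = {  # hypothetical aligned coxI fragments
    "isolate_VdA_1": "ATGAGTTTTGGTCACCC",
    "isolate_SNP_3": "ATGAGCTTTGGTTACCC",
}
for (n1, s1), (n2, s2) in combinations(aligned.items(), 2):
    print(n1, n2, round(p_distance(s1, s2), 4))
```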
Previous studies on the population genetics of large lungworms in wild deer in Hungary raised the possibility of divergence between Dictyocaulus species linked to their host populations (Ács et al. 2016). Sequence differences between the two nematode populations from VdA and SNP are about 8.7% (Supplementary Table 2), which is lower than the empirical 10% threshold applied to species differentiation in nematodes (Ács et al. 2016; Blouin 2002). This is in accordance with the sequence variation observed in other studies (Pyziel 2018). No coxI gene sequences of D. cervi/D. eckerti from central-western European areas (e.g., France, Switzerland, and Germany) are currently available, thus limiting the geographical resolution of phylogenetic inferences for nematodes belonging to this complex. The three novel sequences forming a separate clade from the D. cervi/D. eckerti complex showed nucleotide differences of the coxI gene > 12% with respect to D. cervi/D. eckerti. Geographical clustering was also observed in the coxI gene for this putative undescribed Dictyocaulus species, with the two VdA isolates more closely related to each other than to the SNP one, confirming the clustering results obtained with the ITS2 gene (Supplementary Fig. 1). [Figure caption: for each sequence, the NCBI accession number and the host species of origin are reported; numbers on branches are bootstrap supports from 100 pseudo-replicates and Bayesian posterior probabilities after 1,000,000 iterations; the scale bar indicates proportional sequence divergence; support values below 70|0.85 are omitted.] In the coxI analysis, the undescribed Dictyocaulus sp. was defined as a sister group of D. viviparus, supporting the result obtained through the 18S rDNA phylogenetic inference. Interestingly, a previous study reported coxI sequences from an undescribed Dictyocaulus sp. clustering as a sister group of D. viviparus as well (namely Dictyocaulus sp. S-HU in Hungary; Ács et al. 2016). However, a direct comparison is currently not possible since the coxI region analyzed in the present study neighbors, rather than overlaps, the one available for Dictyocaulus sp. S-HU (Ács et al. 2016). In any case, regarding the phylogenetic positioning of the Dictyocaulus sp. from this study within the genus, the three markers did not provide fully congruent reconstructions (Supplementary Fig. 1). Such differences may be explained by insufficient phylogenetic resolution (e.g., the available ITS2 sequences, which provided the alternative reconstruction, were on average relatively short compared to the other two markers and not fully reciprocally overlapping due to the usage of different primers among studies). Thus, these results should be considered preliminary and taken with some caution. Nevertheless, all three markers consistently indicated that the three Dictyocaulus specimens likely belong to a separate, still undescribed, species. In summary, the present results provide the first detection of D. cervi in red deer in Italy, adding a piece to the epidemiological picture of dictyocaulosis in European wildlife. Our study underlines the importance of molecular analyses for lungworm identification, specifically for Dictyocaulus species in cervids. The presence of the Dictyocaulus sp. in red deer indicates that further efforts are needed to investigate and define the host range of this still undescribed species and to evaluate its potential ability to reach domestic livestock. Future studies should investigate the taxonomy, phylogeny, ecology, and epidemiology of Dictyocaulus spp. 
at large spatial scale, as this would provide essential indications for both wildlife and lungworm management. Funding Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement. This research was partially supported by PRIN_MIUR2012 (code 2012A4F828) to C. B. Data availability The obtained sequences are deposited in Gen-Bank under the accession numbers OP617687-OP617698 and OP628626-OP628632. Declarations Ethics approval Animals were not sacrificed for research purposes specific to this study. No ethical approval was required, and ethical statement is not applicable as sample collection from animals has been gathered after animals were hunted according to Italian national hunting law 157/1992 or culled for management purposes, according to the official culling plan to reduce red deer density that has been authorized by Istituto Superiore per la Protezione e la Ricerca Ambientale (ISPRA), the Italian Institute for Environmental Protection and Research (Prot. 48585/T-A25-Ispra), in the Lombardy sector of the Stelvio National Park starting from 2011. Consent to participate Not applicable. Consent for publication All the authors provided their consent for the publication of this manuscript. Conflict of interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
FNDC5 Promotes Adipogenic Differentiation of Primary Preadipocytes in Mashen Pigs Fibronectin type III domain-containing protein 5 (FNDC5) plays an important role in fat deposition, which can be cut to form Irisin to promote fat thermogenesis, resulting in a decrease in fat content. However, the mechanism of FNDC5 related to fat deposition in pigs is still unclear. In this research, we studied the expression of FNDC5 on different adiposes and its function in the adipogenic differentiation of primary preadipocytes in Mashen pigs. The expression pattern of FNDC5 was detected by qRT-PCR and Western blotting in Mashen pigs. FNDC5 overexpression and interference vectors were constructed and transfected into porcine primary preadipocytes by lentivirus. Then, the expression of key adipogenic genes was detected by qRT-PCR and the content of lipid droplets was detected by Oil Red O staining. The results showed that the expression of FNDC5 in abdominal fat was higher than that in back subcutaneous fat in Mashen pigs, whereas the expression in back subcutaneous fat of Mashen pigs was significantly higher than that of Large White pigs. In vitro, FNDC5 promoted the adipogenic differentiation of primary preadipocytes of Mashen pigs and upregulated the expression of genes related to adipogenesis, but did not activate the extracellular signal-regulated kinase (ERK) signaling pathway. This study can provide a theoretical basis for FNDC5 in adipogenic differentiation in pigs. Introduction Fat deposition is one of the important economic characters of pigs. Adipose tissue is mainly composed of mature adipocytes, which are distributed throughout the animal body and play a critical role in energy metabolism and homeostasis [1,2]. Although adipose tissue is significant for maintaining the energy metabolism of animals, excessive accumulation of fat will affect the carcass quality and reduce the value of pork [3]. In addition, there are many factors affecting the carcass quality of pigs, such as pig breed [4]. The Large Yorkshire pig, also known as the Large White pig, has a high lean rate and less fat deposition. The Mashen pig, a local breed in China, has the advantages of strong stress resistance, high fecundity, strong adaptability and high meat quality compared with the Large White pig [5]. However, it possesses a low feed conversion ratio, slow growth rate and more fat deposition. Adipogenesis in adult pigs is a highly regulated process that includes preadipocyte proliferation and subsequent differentiation into mature adipocytes [6]. However, limited research has been conducted on adipocyte differentiation in pigs. Adipogenic differentiation requires precise regulation of genes. Fibronectin type III domain-containing protein 5 (FNDC5) was first discovered in 2002 and can be hydrolyzed by proteases to form Irisin, which performs its function by circulating throughout the body [7]. Previous research revealed that FNDC5 is mainly expressed in skeletal muscle, but also in adipose, liver and cardiovascular tissues [8]. For myogenesis, existing research has demonstrated that FNDC5 can promote myogenic differentiation by activating the IL6 signaling pathway in C2C12 cells [9]. Similarly, FNDC5 can promote skeletal muscle regeneration by activating the proliferation and differentiation of skeletal muscle satellite cells after skeletal muscle injury [9]. 
In one study, the activities of oxidative metabolic enzymes and the expression of oxidative muscle fiber genes were increased by Irisin treatment in C2C12 myotubes [10]. The content of FNDC5 and Irisin decreased in skeletal muscle with hydrogen sulfide (H 2 S) deficiency, which can alter glucose metabolism [11]. Regarding adipose, FNDC5 can promote the browning of white adipose tissue [12]. Furthermore, the expression of FNDC5 in visceral adipose and epididymal adipose of mice increased significantly after endurance training [13]. The expression of FNDC5 in subcutaneous adipose tissue of obese patients was significantly higher than that of normal weight people [14]. Nevertheless, the specific function of FNDC5 during the adipogenic differentiation of preadipocytes in pigs remains poorly understood. In the present study, we focused on the difference in fat deposition between Mashen pigs and Large White pigs. FNDC5 caught our attention, because it is expressed differentially in subcutaneous adipose of the two breeds. This study explored the regulatory effect of FNDC5 during the adipogenic differentiation of preadipocytes in pigs. Ethics Statement All procedures performed on animals were approved by the Shanxi Agricultural University Animal Care and Ethical Committee, China (approval no. SXAU-EAW-P002003). Sample Preparation Under the same feeding and management conditions, 1-, 90-and 180-day-old Mashen pigs (castrated males, n = 12; females, n = 12) and 90-day-old Large White pigs (castrated males, n = 4; females, n = 4) were selected from Datong Pig Farm (Datong, Shanxi, China). All the tissue samples from Mashen pigs and Large White pigs were collected strictly according to the anatomy of the pigs. The Mashen pig was used for the isolation of porcine preadipocytes based on previously established methods [18]. Porcine preadipocytes were isolated and cultured from subcutaneous adipose tissue of Mashen pigs at 7 days of age. The tissues were dissected and digested with collagenase II (2 mg/mL, Gibco, Carlsbad, CA, USA, Cat. 17101015) for 1 h. After digestion was terminated with Dulbecco's Modified Eagle Medium (DMEM) (Gibco, Carlsbad, CA, USA, Cat. 11965118) including 10% fetal bovine serum (FBS) (Gibco, Carlsbad, CA, USA, Cat. 10099141), the mixture was centrifuged at 1200 rpm for 15 min, and then the supernatant was discarded and the cells were re-suspended. After filtration, the filtrate was inoculated in a 60 mm Petri dish. The obtained cells were cultured in low-glucose DMEM containing 10% FBS and 1% penicillin streptomycin (Gibco, Carlsbad, CA, USA, Cat. 15140122) in a 5% CO 2 incubator at 37 • C. The culture medium was changed every 2 days. To stain the lipids, the cells were washed with ice-cold PBS (Solarbio, Beijing, China, Cat. P1010) and fixed with 4% paraformaldehyde at 4 • C for 30 min. After that, the cells were rinsed with 60% isopropyl alcohol for 1 min and incubated with Oil Red O working solution (Solarbio, Beijing, China, Cat. O8010) for 1 h. After the cells were washed several times with double-distilled water, the images were captured using a microscope magnified 100 times. RNA Extraction and cDNA Synthesis Total RNA was extracted using RNAiso Plus (Takara, Shiga, Japan, Cat. 9108) according to the manufacturer s protocol. The RNA concentration and integrity were determined using an ND-2000 spectrophotometer (NanoDrop Technologies, DE) and 1% agarose gel electrophoresis, respectively. 
Thereafter, total RNA (500 ng) was converted into complementary DNA (cDNA) using a PrimeScript RT reagent Kit with gDNA Eraser (Takara, Shiga, Japan, Cat. RR047A) under the following conditions: 37 • C for 15 min and 85 • C for 15 s. Quantitative Real-Time PCR (qRT-PCR) Quantitative real-time PCR was performed using TB Green Premix Ex Taq II (Takara, Shiga, Japan, Cat. RR820A) on an ABI-7500 (Life Technologies) under the following conditions: pre-denaturation at 95 • C for 30 s; 45 cycles of 95 • C for 5 s and 60 • C for 34 s; one cycle of 95 • C for 15 s and 60 • C for 1 min; 95 • C for 30 s. The expressions of all genes were normalized to 18S rRNA. The 2 −∆∆Ct formula was used to estimate relative expression levels. Primer sequences used for qRT-PCR analyses were listed in Table 1. Paraffin Section and HE Staining The adipose tissues were immersed with 4% paraformaldehyde for 24 h, and dehydrated with gradient alcohol. The tissue was hyalinized with xylene, then immersed in wax to be embedded and trimmed. The repaired wax block was placed into the slicer, and then picked up with slides after spreading. After drying, HE staining was carried out. The slices were placed in xylene and alcohol for dewaxing, then washed with PBS 3 times and stained in hematoxylin dye for 5 min. The excess dye was washed with PBS, and the slices were placed in gradient alcohol and eosin dye for 5 min. Finally, the slices were dehydrated and sealed with neutral resin glue. Lentiviral-Mediated Transfection The pair of short hairpin oligonucleotides (GCGATGCACAACTTTGCAAGT) targeting the open reading frame (ORF) of FNDC5 was designed and synthesized by GenePharma. Both vector construction (OE-FNDC5 and sh-FNDC5) and lentivirus package were also commissioned by GenePharma. The preadipocytes were inoculated into 12-well plates. When the cell density was about 50%, the appropriate amount of lentivirus was directly added to the medium for infection. The medium was changed after 24 h. Statistical Analysis The two groups of samples were compared by the Student's t-test, where p < 0.05 (*) and p < 0.01 (**) indicate statistically significant differences. Comparisons between three or more samples were performed using one-way analysis of variance (ANOVA), and Duncan's method was used for multiple comparisons. GraphPad Prism (Version 8, San Diego, CA, USA) was used to conduct the statistical analysis and plotting. The Expression Profile of FNDC5 in Mashen Pigs The expression of FNDC5 was higher in muscle, liver and heart, but lower in fat in 90-day-old Mashen pigs ( Figure 1A). The FNDC5 expression in back fat was higher at 90 days old than that at 1 and 180 days old by qRT-PCR ( Figure 1B) and Western blotting ( Figure 1C) (p < 0.01). Furthermore, the qRT-PCR ( Figure 1D) and Western blotting ( Figure 1E) demonstrated that the FNDC5 expression in abdominal fat of Mashen pigs was significantly higher than that in back fat (p < 0.01). Difference of Back Fat between Mashen and Large White Pigs The fat area in the back fat tissue of Mashen pigs was obviously larger than that in Large White pigs (Figure 2A). The FNDC5 expression of back fat in Mashen pigs was significantly higher than that in Large White pigs (p < 0.05) (Figure 2B), and the result of Western blotting was similar (p < 0.05) ( Figure 2C). Moreover, qRT-PCR was performed to determine the expression of the adipogenic marker genes ( Figure 2D). 
Compared with Large White pigs, the expressions of C/EBPβ (p < 0.01), PPARγ (p < 0.01), FABP3 (p < 0.05) and FABP4 (p < 0.01) in Mashen pigs were significantly increased. FNDC5 Promoted the Adipogenic Differentiation of Porcine Preadipocytes in Mashen Pigs The expression of FNDC5 was detected during the adipogenic differentiation stage of porcine preadipocytes. The results indicate that the expression of FNDC5 increased gradually over the adipogenic differentiation time (Figure 3A). To further confirm the function of FNDC5 during the adipogenic differentiation of porcine preadipocytes, the overexpression and interference vectors of FNDC5 were transfected into porcine preadipocytes by lentivirus. The expression of FNDC5 was significantly increased and decreased after transfecting the overexpression and interference vectors, respectively (Figure 3B,C). Western blotting showed similar results (Figure 3D,E). After the overexpression and interference of FNDC5 in porcine preadipocytes, the expression of adipogenic marker genes was detected on the fourth day of adipogenic differentiation. The expression of C/EBPβ, C/EBPα, PPARγ and FABP4 was significantly upregulated by FNDC5 overexpression (Figure 3F) and downregulated by FNDC5 interference (Figure 3G). Furthermore, Oil Red O staining on the 8th day of adipogenic differentiation showed a significant increase in lipid droplet formation upon FNDC5 overexpression (Figure 3H). In contrast, FNDC5 interference reduced lipid accumulation (Figure 3I). 
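The relative expression values underlying these comparisons were obtained with the 2^(−ΔΔCt) method described in the qRT-PCR section, normalized to 18S rRNA. A minimal sketch of that calculation, using made-up Ct values for illustration only:

```python
# 2^-ddCt relative quantification with 18S rRNA as the reference gene.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene versus the control group by 2^-ddCt."""
    delta_ct_sample = ct_target - ct_ref             # normalise sample to 18S rRNA
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalise control to 18S rRNA
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# e.g. FNDC5 in an OE-FNDC5 well versus an empty-vector control (made-up numbers)
print(relative_expression(ct_target=24.1, ct_ref=12.3,
                          ct_target_ctrl=26.0, ct_ref_ctrl=12.4))  # ~3.5-fold
```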
FNDC5 Had No Effect on ERK1/2 Phosphorylation during Adipogenic Differentiation in Porcine Preadipocytes Western blotting of ERK1/2 and phosphorylated ERK1/2 was performed on the 8th day of adipogenic differentiation after transfecting the FNDC5 overexpression and interference vectors in porcine preadipocytes. There was no significant difference in P-ERK1/2 content after overexpressing or interfering with FNDC5 in porcine preadipocytes (Figure 4A,B). [Figure caption: OE-FNDC5 and sh-FNDC5 denote transfection with the FNDC5 overexpression and interference vectors, respectively (the same applies below); scale bars: 100 µm; "**" indicates an extremely significant difference (p < 0.01) and "*" a significant difference (p < 0.05).]
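The significance markers used throughout these results follow the Statistical Analysis section (Student's t-test, * p < 0.05, ** p < 0.01; the paper's analysis was done in GraphPad Prism). A minimal sketch of such a two-group comparison, with purely hypothetical replicate values:

```python
from scipy import stats

control  = [1.00, 0.92, 1.07, 1.01]   # relative FNDC5 expression, empty vector (hypothetical)
oe_fndc5 = [3.41, 2.98, 3.65, 3.22]   # relative FNDC5 expression, OE-FNDC5 (hypothetical)

t_stat, p_value = stats.ttest_ind(oe_fndc5, control)
marker = "**" if p_value < 0.01 else "*" if p_value < 0.05 else "ns"
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({marker})")
```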
Discussion FNDC5 is mainly expressed in skeletal muscle and less in adipose tissue [19,20]. However, FNDC5 plays an important role in regulating body fat deposition and cell adipogenic differentiation. Irisin can promote the differentiation of mesenchymal stem cells (MSCs) into beige adipocytes [21]. Moreover, Perez-Sotelo et al. reported that the expression level of FNDC5 in subcutaneous adipose tissue of obese patients was significantly higher than that of normal weight subjects [14]. Furthermore, FNDC5 upregulated the expression of genes related to lipolysis and promoted browning in 3T3-L1 adipocytes [22]. In our study, we found that the expression of FNDC5 in back subcutaneous fat of Mashen pigs was significantly higher than that of Large White pigs. Presumably, this is because Mashen pigs have a stronger ability to deposit fat compared to Large White pigs. Meanwhile, our study suggested that the expression of FNDC5 in abdominal fat was higher than that in back subcutaneous fat in Mashen pigs. These findings suggest that FNDC5 plays an important role in adipogenesis of Mashen pigs. At present, there are inconsistent reports on the regulation of FNDC5 during the differentiation of preadipocytes. Huh et al. found that FNDC5 inhibited the adipogenic differentiation of human adipocytes by downregulating the expression of fatty acid synthase (FAS) [23]. In addition, interference with FNDC5 promoted adipogenic differentiation of C3H10T1/2 cells, accompanied by a pronounced increase in lipid droplet content [14]. On the contrary, Dong et al. reported that the lipid content was visibly increased after overexpressing FNDC5 in goat preadipocytes [24]. Therefore, the function of FNDC5 during the differentiation of porcine preadipocytes requires more in-depth research. This study found that overexpressing FNDC5 in porcine preadipocytes resulted in upregulated expression of adipogenic marker genes and increased lipid accumulation. In brief, FNDC5 promoted the differentiation of porcine preadipocytes. It is universally accepted that FNDC5 can be cleaved to form irisin protein, which is secreted into blood circulation and then enters the adipose tissues [25]. Irisin can alleviate adipogenesis and increase energy consumption [26]. Consistently, overexpression of Irisin in 3T3-L1 cells downregulated the expression of PPARγ and FABP4, and significantly reduced lipid droplet content [27]. Li et al. also demonstrated that the addition of Irisin recombinant protein to human preadipocytes predominantly attenuates the lipid accumulation [28]. 
Therefore, the regulatory roles of FNDC5 and Irisin in adipogenic differentiation remain controversial, and further investigation is required to clarify this issue. The ERK1/2 signaling pathway was shown to play a key role in regulating porcine adipogenesis. It was observed that progranulin inhibits adipogenesis in porcine preadipocytes partially through ERK1/2 activation-mediated PPARγ phosphorylation [29]. Moreover, vascular endothelial growth factor receptor type 2 (VEGFR2) can activate the ERK pathway to induce the differentiation of adipose-derived mesenchymal stem cells (AMSCs) to endothelial cells (ECs) [30]. Sun et al. found that platelet-derived growth factor receptor α (PDGFRα) promoted adipogenesis in porcine intramuscular preadipocytes through activating the ERK1/2 signaling pathway [31]. Various reports found that FNDC5 can regulate many biological functions through the ERK signaling pathway. Exogenous addition of Irisin recombinant protein can enhance endothelial cell proliferation through regulating the ERK1/2 signaling pathway and improve cell angiogenesis [32]. In addition, FNDC5 can stimulate transient activation of ERK1/2 in Alzheimer's disease mouse models [33]. Previous studies found that FNDC5 can promote C2C12 cell proliferation via the ERK1/2 signaling pathway [34]. However, there is no report that FNDC5 regulates the differentiation of porcine preadipocytes by the ERK1/2 signaling pathway. Our results showed that FNDC5 had no significant effect on the ERK1/2 signaling pathway in adipogenic differentiation of porcine preadipocytes. Based on this result, we propose that FNDC5 regulates the differentiation of porcine preadipocytes through other signaling pathways. For example, overexpression of FNDC5 can activate the p38 MAPK pathway in 3T3-L1 adipocytes [35]. Additionally, the mTORC1 signaling pathway was activated and fat content was significantly increased in FNDC5 knockout mice [36]. In addition, FNDC5 has been reported to improve insulin sensitivity in mice by activating the AMPK signaling pathway [37]. In conclusion, our study suggests that FNDC5 promotes the differentiation of porcine adipocytes, which may be one of the causes of the difference in fat deposition between Mashen pigs and Large White pigs. Furthermore, FNDC5 has no significant effect on the ERK1/2 signaling pathway in porcine preadipocytes. These results provide a theoretical basis for the regulation of FNDC5 in adipose development. Institutional Review Board Statement: The animal study protocol was approved by the Animal Ethics Committee of Shanxi Agricultural University, China (SXAU-EAW-P002003). Informed Consent Statement: Not applicable. Data Availability Statement: All data generated or analyzed during this study are included.
Online Ramsey Theory for Planar Graphs An online Ramsey game (G, H) is a game between Builder and Painter, played in turns. During each turn, Builder draws an edge, and Painter colors it blue or red. Builder's goal is to force Painter to create a monochromatic copy of G, while Painter's goal is to prevent this. The only limitation for Builder is that after each of his moves, the resulting graph has to belong to the class of graphs H. It was conjectured by Grytczuk, Hałuszczak, and Kierstead (2004) that if H is the class of planar graphs, then Builder can force a monochromatic copy of a planar graph G if and only if G is outerplanar. Here we show that the “only if” part does not hold while the “if” part does. Introduction For a fixed graph G and a class of graphs H such that G ∈ H, an online Ramsey game (G, H), defined by Grytczuk, Hałuszczak, and Kierstead [5], is a game between Builder and Painter with the following rules. The game starts with the empty graph on infinitely many vertices. On the i-th turn, Builder adds a new edge to the graph created in the first i − 1 turns so that the resulting graph belongs to H (we say that Builder plays on H), and Painter colors this edge blue or red. Builder wins if he can always force Painter to create a monochromatic copy of G (or force G for short). We then say that G is unavoidable on H. A graph G is unavoidable if it is unavoidable on planar graphs. On the other hand, if Painter can ensure that a monochromatic copy of G is never created, then G is avoidable on H. A class of graphs H is self-unavoidable if every graph of H is unavoidable on H. According to Ramsey's theorem, for every t ∈ N there exists n ∈ N such that every 2-coloring of the edges of K_n contains a monochromatic copy of K_t. Thus, without restricting to H, Builder would always win the online Ramsey game by creating a sufficiently large complete graph. The size Ramsey number r̂(G) for a graph G is the minimum number of edges of a graph that contains a monochromatic copy of G in every 2-coloring of its edges. The online size Ramsey number r̃(G) is the minimum m such that Builder can force G by playing on the class of graphs with at most m edges. Clearly, r̃(G) ≤ r̂(G) (Builder wins by presenting a graph of size r̂(G) that contains a monochromatic copy of G for any 2-edge-coloring). However, Builder may be able to win using fewer than r̂(G) edges since he can adapt his strategy to Painter's coloring. One can then ask whether or not r̃(G) = o(r̂(G)). The basic conjecture in the field, attributed to Rödl by Kurek and Ruciński [9], is that r̃(K_t) = o(r̂(K_t)). In 2009, Conlon [3] showed that r̃(K_t) ≤ 1.001^(−t) r̂(K_t) for infinitely many t. On the other hand, if G is a path or a cycle, then both r̂(G) and r̃(G) are linear in |V(G)| (see [1], [6], [7]). Butterfield et al. [2] studied online Ramsey games played on the class S_k of graphs with maximum degree at most k. The authors introduce an online degree Ramsey number of G as the least k for which G is unavoidable on S_k. Online Ramsey games played on various classes of graphs were studied by Grytczuk et al. 
[5].They proved that the class of k-colorable graphs as well as the class of forests are self-unavoidable.(It was later shown by Kierstead and Konjevod [8] that the k-colorable graphs are self-unavoidable even if Painter uses more colors.)Various games played on planar graphs were investigated in [5].It was shown, for example, that every cycle, as well as the graph K 4 − e, is unavoidable on planar graphs.They made the following conjecture: Conjecture ( [5]).The class of graphs unavoidable on planar graphs is exactly the class of outerplanar graphs. Here we show that the conjecture is only partially true.In particular, it is true that the class of outerplanar graphs is a subclass of the class of graphs unavoidable on planar graphs. Theorem 1.Every outerplanar graph is unavoidable on planar graphs. However, we show that there exists an infinite family of planar but not outerplanar graphs which are unavoidable on planar graphs.Let θ i,j,k denote the union of three internally disjoint paths of lengths i, j, k, respectively.Theorem 2. The graph θ 2,j,k is unavoidable for even j, k. The paper is organized as follows.In Section 2, we introduce notation.Section 3 gives a proof of Theorem 1, and Section 4 gives a proof of Theorem 2. Notation In this section, we first mention several notions that are particularly important for the next discussion.Besides these, we follow standard graph theory terminology (see Diestel [4]). G : All graphs considered here are simple and undirected.For a graph G, the set of vertices is denoted by V (G) and the set of edges by E(G).The length of a path is the number of its edges.If we replace an edge e of G with a path of length k + 1 (i.e.we place k vertices of degree 2 on e), then we say that e is subdivided k-times.For a fixed graph G, a copy When we say that a graph is a disjoint union of G 1 and G 2 , we are automatically assuming that V 1 ∩ V 2 = ∅.A planar graph is a graph that can be drawn in the plane without edge crossings.An outerplanar graph is a planar graph that can be embedded so that all its vertices belong to the boundary of the outer face.A red-blue graph is a graph with its edges colored red or blue.A red-blue graph will often use the same name as its underlying (uncolored) graph. Let G 1 and G 2 be two disjoint graphs containing cliques G 2 is a graph formed from the disjoint union of G 1 and G 2 by identifying the vertex v j (G 1 ) with v j (G 2 ) for each j = 1, . . ., k.To simply notation, we write G 1 ⊕G 2 if k 2. Note that G 1 ⊕ G 2 does not specify the appending cliques, and so it is not a well-defined operation.However, if k = 1, then we can make this notation precise and specify the appending vertex v by writing G 1 ⊕ v G 2 (which we will do often).For k = 2, we sometimes write G 1 ⊕ e G 2 , where e(G i ) is a non-oriented edge v 1 (G i )v 2 (G i ) (the resulting graph is again not always unique).Also, we abbreviate (( Let G be a graph, H a subgraph of G.If there exist planar graphs X 1 , . . ., X n such that G = H ⊕ X 1 ⊕ • • • ⊕ X n , then we say that G is reducible to H, and we write G H. It is a well known fact that for k 2, a k-sum of two planar graphs is planar, thus the following holds: Remark 3. If H is a planar graph, and a graph G is reducible to H, then G is planar. Informally, G is reducible to H if G can be formed from H by successively "appending" planar graphs on edges/vertices.So, Remark 3 says that if the starting graph H is planar, then so is G. 
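The k-sum definition in the notation paragraph above lost part of its symbols during text extraction; the following is a reconstruction consistent with the surrounding text and with the standard notion (the clique labels are an assumption, not a verbatim quote):

```latex
% Reconstructed k-sum definition (assumption based on the standard notion)
Let $G_1$ and $G_2$ be two disjoint graphs containing cliques
$\{v_1(G_1),\dots,v_k(G_1)\}$ and $\{v_1(G_2),\dots,v_k(G_2)\}$ of the same
size $k$. A $k$-sum $G_1 \oplus_k G_2$ is a graph formed from the disjoint
union of $G_1$ and $G_2$ by identifying the vertex $v_j(G_1)$ with $v_j(G_2)$
for each $j = 1,\dots,k$.
```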
Consider an online Ramsey game on planar graph.A strategy (for Builder) X is a finite sequence of rules that tell Builder how to move on any given turn of the game, no matter how Painter plays.If a monochromatic copy of the target graph G arises, the game ends and Builder wins (provided that the final red-blue graph is planar).The output graph of strategy X is then the final red-blue graph with a fixed monochromatic copy of G, called a winning copy (of G by X) and denoted simply by G if no confusion can arise.This winning copy adopts all notation from the target graph.For example, for a target graph G with vertices u, v and a cycle C, the two corresponding vertices and the corresponding cycle are again denoted by u, v, C in the chosen winning copy G.If Builder always wins when following strategy X, then we say that G is unavoidable by strategy X.The set of all output graphs of a strategy X is denoted X (the calligraphic version of the name of the strategy). Outerplanar graphs In this section we show that every outerplanar graph is unavoidable on the class of planar graphs.The idea behind our proof is based on the inductive proof of the self-unavoidability of forests presented by Grytczuk et al. [5].Suppose that Builder's goal is to force a forest T .We can assume that T is a tree (since every forest is contained in some tree).Choose a leaf u of T , let v be the neighbor of u in T , and let T = T − u.Builder forces 2|T | − 1 monochromatic copies of T (where the corresponding final graphs are pairwise disjoint), from which at least |T | are of the same color, say blue.On those copies, Builder builds a new copy of T by adding edges between copies of v.If any one of the new added edges is blue, then that edge and a blue copy of T appended to one of its endpoints form a blue copy of T .Otherwise, those edges form a red copy of T .We will call this strategy the tree strategy. Since trees are planar, the tree strategy shows that forests are unavoidable (on planar graphs).Moreover, a generalized version of the tree strategy can be used for forcing a graph formed from a tree T by appending a copy of an unavoidable graph G to each vertex of T .Before presenting this strategy we need some notation. Let T be a tree on vertices v 1 (T ), . . ., v n (T ), and let G be a graph with an arbitrary vertex labeled by v.The ordered triple (T, G, v) denotes the graph Next, let S be any set of red-blue graphs X such that each has a fixed monochromatic copy G X of G. Let A be a red-blue graph with a fixed monochromatic subgraph (T, In our proofs we take S to be the set of all final graphs of some strategy.For example, the set of all output graphs X of a strategy X is a set of red-blue graphs, each with a fixed monochromatic copy of G, and so, for any given tree T , we can talk about (T, X )-reducible graphs. Suppose that G is unavoidable by strategy X.We consider the following Builder's strategy for forcing a monochromatic copy of (T, G, v). Figure 2: Forcing a monochromatic copy of (T, G, v), where T is a path of length 2, G a triangle, and v is an arbitrary vertex of V (G). strategy A (T, G, v, X) of the same color, and in ith of them label the vertex that corresponds to the neighbor of u in T by u i .Add an edge e ij = u i u j if and only if v i v j is an edge in (T, G, v). 
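Both the tree strategy and strategy A above rest on the same inductive idea: force many disjoint monochromatic copies of a smaller graph, keep a majority colour, and join the copies with fresh edges so that either colour of the new edges completes the target. A minimal, illustrative Python sketch of the tree strategy against an arbitrary Painter is given below; the data structures and the Painter callback are assumptions made for illustration, not the paper's formalism, and planarity bookkeeping is omitted since the cumulative graph remains a forest, hence planar.

```python
# Toy simulation of the inductive tree strategy: to force a tree T, force
# 2|T| - 1 disjoint monochromatic copies of T' = T - u (u a leaf with
# neighbour v), keep |T| copies of one colour, and draw T on their v-vertices.
import itertools
import random

def force_tree(tree, painter, fresh):
    """Force a monochromatic copy of `tree` (a symmetric adjacency dict).

    Returns (colour, phi): phi maps labels of `tree` to game vertices, and every
    edge of the copy was coloured `colour` by `painter`. `colour` is None for a
    single-vertex tree, since there is nothing to colour."""
    if all(not nbrs for nbrs in tree.values()):                  # single vertex
        return None, {next(iter(tree)): next(fresh)}
    u = next(x for x, nbrs in tree.items() if len(nbrs) == 1)    # a leaf of T
    v = next(iter(tree[u]))                                      # its neighbour
    sub = {x: nbrs - {u} for x, nbrs in tree.items() if x != u}  # T' = T - u
    n = len(tree)
    copies = [force_tree(sub, painter, fresh) for _ in range(2 * n - 1)]
    for colour in ("blue", "red"):
        usable = [phi for c, phi in copies if c in (colour, None)]
        if len(usable) < n:
            continue
        usable = usable[:n]
        labels = list(tree)
        spot = {lab: usable[i][v] for i, lab in enumerate(labels)}  # v-vertices
        drawn = {}
        for x in tree:                                   # draw T on the spots
            for y in tree[x]:
                if (y, x) not in drawn:
                    drawn[(x, y)] = painter((spot[x], spot[y]))
        bad = [e for e, c in drawn.items() if c == colour]
        if not bad:                        # every new edge got the other colour:
            other = "red" if colour == "blue" else "blue"
            return other, spot             # the new edges themselves form T
        x, y = bad[0]                      # a new `colour` edge: extend the copy
        phi = dict(usable[labels.index(x)])   # of T' whose v-vertex is spot[x]
        phi[u] = spot[y]                      # the other endpoint plays the leaf u
        return colour, phi
    raise AssertionError("pigeonhole: one colour always has enough copies")

# Example: force a monochromatic path on three vertices against a random Painter.
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
colour, phi = force_tree(path3, lambda e: random.choice(("red", "blue")),
                         itertools.count())
print(colour, phi)
```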
To prove that (T, G, v) is unavoidable by strategy A(T, G, v, X), we have to ensure that no matter how Painter plays, a monochromatic copy of the target graph (T, G, v) eventually appears, and that the final graph is planar.Both parts are shown below using induction and reduction arguments that rely on Remark 3. Lemma 4. Let T be a tree, G a graph, and v a vertex of V (G).If G is unavoidable by strategy X, then (T, G, v) is unavoidable by strategy A(T, G, v, X), and every graph Proof.We use all the notation introduced in strategy A. The proof is by induction on the number n of vertices of T .If n = 1, then (T, G, v) = G, which is unavoidable by strategy X by the assumption.Since A(T, G, v, X) = X , the graph A is (T, X )-reducible.Now let n > 1.The following two cases can arise. Case 1: All edges e ij are red.These edges form a red (T, G, v).Every final graph for forcing H i is planar by the induction hypothesis.Observe that each such graph is appended to (T, G, v) by one vertex only.Thus, A is reducible directly to (T, G, v), and hence is (T, X )-reducible, which proves the planarity as well as the second part of the claim.See Figure 2. Case 2: Some edge e ij is blue.The graph H i , the edge e ij , and one copy of G contained in H j form a blue (T, G, v).The graph A is planar by previous discussion, so the first part of the claim is complete.Let A i , A j ∈ A(T , G, v, X) be subgraphs of A that were used for forcing H i and H j , respectively.By the induction hypothesis, A i is (T , X )-reducible. the electronic journal of combinatorics 21(1) (2014), #P1.64 Similarly A j is (T , X )-reducible, and therefore is (u , X )-reducible.Since the rest of A is reducible to e ij , and e ij shares with each of H i and H j only one vertex (the vertex u i and u j , respectively), we get the second part of the claim. A block is a maximal 2-connected subgraph.For a graph G with a vertex set V = {v 1 , . . ., v k } and blocks B 1 , . . ., B l , the complete block graph B(G) is a graph on V ∪ {B 1 , . . ., B l } formed by the edges v i B j with v i ∈ V (B j ) (see Figure 3).Notice that B(G) can be obtained from the block graph B(G) of G by adding edges with one endpoint of degree 1, and thus, B(G) is a tree for every connected graph G. Remark 5.The union of an outerplanar graph G and its complete block graph B(G) is planar. Let H be an outerplanar graph.The weak dual H * of H is the graph obtained from the plane dual of H by removing the vertex that corresponds to the outer face of H.It is easy to see that H * is a forest, which is a tree whenever H is 2-connected.If there exists a vertex r ∈ V (H * ) such that H * rooted in r (denoted by H * (r)) is a full binary tree, then we call H a full outerplanar graph.The height h(H) of a full outerplanar graph H is the number of levels in its full binary tree H * (r).The edge of a full outerplanar graph H incident to the face that corresponds to r, as well as to the the outer face, is the central edge e H of H (see Figure 4, left).For the sake of convenience, a graph that consists of a single edge is also considered to be full outerplanar.Its height is then defined to be 0 and its central edge is the only edge of the graph.Lemma 6.For every outerplanar graph G there exists a full outerplanar graph H such that G ⊆ H. Proof.Let G T be an almost triangulation of G, i.e. an outerplanar graph formed by triangulating the inner faces of G.The maximum degree of G * T is at most 3, and there exists a vertex r ∈ V (G * T ) of degree 1 or 2. 
Let H * (r) be a full binary tree of height h(G * T (r)) containing G * T .The graph H is then the desired full outerplanar graph.Recall that for a tree T with n vertices and m edges, a graph G, and a vertex v ∈ V (G), we have (T, Let H be a full outerplanar graph, and for i = 1, . . ., m, let H i be a copy of H with the central edge e i (H i ).Then we define (T, G, v, H) as a graph , (T, G, v, H) is simply the graph that arises from (T, G, v) if we "glue" a copy of H by its central edge to every edge of T (cf. Figure 4, right). We now present a strategy B for forcing a monochromatic copy of (T, G, v, H), assuming that G is unavoidable by a strategy X. strategy B (T, G, v, H, X) 3. Choose a leaf u of T and call its neighbor u .Call strategy B(T , G , v , H , X ), where • H is the full outerplanar graph of height h − 1, Let {u 1 , . . ., u k } be the vertex set of (T, G, v, H), and thus also a subset of a vertex set of B(T, G, v, H) = T .Adopt this notation to the subgraph T of the winning copy (T , G , v , H ) found by strategy B(T , G , v , H , X ).Add an edge e ij = u i u j in (T , G , v , H ) if and only if u i u j is an edge in (T, G, v, H). Let S be a set of red-blue graphs such that each X ∈ S contains a fixed monochromatic graph G. Then we set S = S ∪ {G ∪ B(G)}, where G is the fixed monochromatic graph. Claim.Let T be a tree, G an outerplanar graph, and v ∈ V (G).If G is unavoidable by strategy X, then (T, G, v, H) is unavoidable by strategy B(T, G, v, H, X), and every graph B ∈ B(T, G, v, H, X) is (T, X )-reducible. This statement implies Theorem 1 since every outerplanar graph G is contained in some full outerplanar graph H by Lemma 6, which can be written as (e H , ({v}, ∅), v, H), and is therefore unavoidable by the above claim. We adopt all the notation used in strategy B. Let S be the set of all 2-tuples (h, t) ∈ (N ∪ {0}) × N. On S, we define the lexicographic order , i.e. (h 1 , t 1 ) (h 2 , t 2 ) exactly when h 1 < h 2 , or h 1 = h 2 and t 1 t 2 for all h 1 , h 2 ∈ N ∪ {0} and t 1 , t 2 ∈ N. The set S together with the relation is linear, and we can apply induction. We start with the basis.Suppose first that h 0, and t = 1.Then (T, G, v, H) = G and the claim is trivially satisfied.Let now h = 0, and t 1.In this case we have (T, G, v, H) = (T, G, v).By Lemma 4, (T, G, v) is unavoidable by A(T, G, v, X).So, every graph of A(T, G, v, X) is (T, X )-reducible, and thus (T, X )-reducible since X ⊆ X . Suppose now that h 1, t 2. By the induction hypothesis ((h, t − 1) (h, t)), G = (T − u, G, v, H) is unavoidable by strategy X = B(T − u, G, v, H, X), and every graph of X is (T − u, X )-reducible.Since G is unavoidable by strategy X , it holds by the induction hypothesis ((h − 1, t) (h, t)) that (T , G , v , H ) is unavoidable by strategy B = B(T , G , v , H , X ), and every graph B of B is (T , X )-reducible.Say that the winning copy (T , G , v , H ) in B is blue.We distinguish the following two cases: Case 1: All edges e ij are red.These edges form a red copy of (T, G, v, H).The graph B is (T , X )-reducible, and thus reducible to T = B(T, G, v, H).Since B arose from B by adding the edges forming (T, G, v, H), B is reducible to (T, G ∪ B(G), v), and thus (T, X )-reducible. the electronic journal of combinatorics 21(1) (2014), #P1.64 Case 2: At least one edge e ij = u i u j is blue.The endpoints of e ij are connected by a path P in T of length 2. 
There is a copy of H appended along each of the edges of P .Those two copies of H together with e ij form a full outerplanar graph H of height h with central edge e ij (see Figure 5).Let G i and G j be the blue copies of G = (T − u, G, v, H) appended to u i and u j , respectively.Then H, G i , and the copy of G in G j that is appended to u j form a blue copy of (T, G, v, H).We can assume that Builder chooses this copy as the winning copy.We now prove the second part of the claim.Recall that B is (T , X )-reducible.Let X i , X j be the graphs of X appended to u i , u j , respectively.So, X i is (T − u, X )-reducible, and X j is ({u j }, X )-reducible.Since the rest of the graph B is reducible to e ij , we find that B is (T, X )-reducible.See the diagram on the right in Figure 5. Non-outerplanar graphs We now show that an infinite subclass of theta-graphs is unavoidable on planar graphs.Recall that a theta-graph (θ-graph) is the union of three internally disjoint paths that have the same two end vertices.We write θ i,j,k for the theta-graph with paths of length i, j, k.For example, K 2,3 is the graph θ 2,2,2 . Before stating the main theorem, we introduce a strategy for forcing even cycles.The unavoidability of cycles was proven in [5], but here we need the final graph to have a special type of plane embedding that we utilize in the proof of the main theorem. Let C be a cycle of even length n that is unavoidable by strategy X.If for every graph X of X there is a plane embedding of X such that (G1) all vertices of V (C) belong to the boundary of one common face, and (G2) there exists a path P ⊂ C of length n 2 such that all vertices of V (P ) lie the boundary of another face, then we say that strategy X is a good strategy.The path P is then called a good path in C. 2. In P , Connect the vertices v 0 and v a 2 by an edge e. If e has the other color than P , add the path Lemma 7. Let C be an even cycle.Then strategy C(C) is a good strategy.Proof.We will follow the notation introduced in strategy C. We fix a planar embedding of an output graph of strategy C(C) as shown in Figure 6.By [5], every final graph of the tree strategy for forcing P is a forest, which is reducible to (the chosen monochromatic) P .Assume that P is blue.The following two cases can arise. Case 1: The edge e is red.If Painter colors some edge of P blue, a blue copy of C arises since there is a blue path of length n − 1 between such two vertices.Otherwise, P ∪ e is a red cycle C of length n.In both cases, all vertices of the monochromatic copy of C belong to two common faces.See Figure 6, left. Case 2: The edge e is blue.Suppose that Painter colors some edge e of C − v b v (a 2 +b) blue.Since each such pair is connected by a blue path of length a = n − 1, a blue cycle of length n arises.Condition (G1) is then satisfied by the face bounded by this cycle, and (G2) is satisfied by the face f bounded by the cycle v 0 v a v 2a . . .v a 2 v 0 , which contains a good path on n/2 + 1 vertices if e = v a(a−1) v a 2 and on all n vertices in all the other cases.Suppose now that Painter colors the edge v b v (a 2 +b) blue.Then the blue copy of C is formed by this edge and the blue path starting at v b , going through e, and ending at v (a 2 +b) .All of the vertices of the blue copy of C belong to the outer face, and there is a good path v b v b−1 . . 
.v 0 v a 2 of length b + 1 = n 2 that belongs to f .Consider the last possibility when C is red.Now, all of the vertices of V (C) belong to the boundary of f , and all but the vertex v (a 2 +b) of V (C) belong to the boundary of f .See Figure 6, right. Theorem 2. The graph θ 2,j,k is unavoidable for even j, k. Proof.For fixed j and k, let j = j 2 , k = k 2 .We consider disjoint cycles C 1 , . . ., C j +k +1 of length k + 2. In ith of them, we label an arbitrary vertex by c i and one of the two vertices in distance 2 from c i by v 0 (C i ) if i j + 1, and by v 1 (C i ) otherwise.Let P 1 , . . ., P j +k +2 be paths of length j − 1, where in each P i , one end is labeled by p i , and another one by v 0 (P i ) if i j + 1, and by v 1 (P i ) otherwise.Let Then we write H for a graph that is formed from the disjoint union of L and R by identifying p 1 with p j +k +2 , and p j +1 with p j +2 (see Figure 7, left).The cycle consisting of the paths P 1 , P j +1 , P j +2 , and P j +k +2 is denoted C 0 . the electronic journal of combinatorics 21(1) (2014), #P1.64 H: θ 2,2,4 : Observe first that having a monochromatic copy of a H, Builder could easily force θ 2,j,k (cf. Figure 7).The graph H is outerplanar, and hence unavoidable by Theorem 1.The problem is that by connecting the proper vertices of the monochromatic copy of H in the resulting graph, the planarity condition would be violated.Therefore, we have to change the strategy for forcing H. For n = 0, . . ., j So, the graph G j +k +1 is the graph H without the paths P 2 , . . ., P j , P j +3 , . . ., P j +k +1 .Let us refer to the blocks of G n and the corresponding vertices of the complete block graph simply by C 0 , C 1 , . . ., C n .Next, let V be the set of vertices of G n (and thus also of B(G n )) for which the distance from v 0 in G n is even.For G n , we define a subdivided complete block graph B S (G n ) as a graph that arises from B(G n ) by subdividing each edge joining C i (i = 1, . . ., n) and a vertex of V (k − 1)-times.Observe that B S (G) is a tree, and that G ∪ B S (G) is planar. We now present strategy D for forcing G n . strategy D (G n ) 1.If n = 0, call strategy C(C 0 ).In C 0 , find a good path P 0 , denote the middle vertex of P 0 by v 0 and its opposite vertex in C 0 by v 1 .We show by induction on n that G n is unavoidable by strategy D(G n ), and that every graph D of D(G n ) can be embedded in the plane so that (1) all vertices v 0 , v 1 , p 1 , p j +1 , c 1 , . . ., c n belong to some face f 1 , and If (2) (a) the vertices v 0 , p 1 , p j +1 belong to some face f 2 , other than f 1 , or (b) there is a path P = p 1 c 1 p j +1 of the other color than G n . The base case is n = 0.By Lemma 7, strategy C(C 0 ) is a good strategy, i.e. every graph of C(C 0 ) can be embedded in such a way that all vertices of C 0 belong to one common face, and there is a path P 0 ⊂ C 0 of length 4(j−1) face and either v 0 , p 1 , p j +1 belong to another common face or there is red path of p 1 c 1 p j +1 .Also, D can be embedded so that all vertices of C n lie in the boundary of a common face.Now, G can be drawn inside that face, which gives both Condition (1) and Condition (2). 2. Add edges of the cycle p 1 c 2 p 2 c 3 . . .p j +1 (= p j +2 )c j +2 p j +3 . . .c j +k +1 p 1 to H.If there is not the path p 1 c 1 p j +1 , also add the edges p 1 c 1 and c 1 p j +1 . As a consequence of Lemma 4, every graph of A(T 1 , (T 0 , G, v 0 ), v 1 , X) can be embedded in such a way that Conditions (1) and ( 2) hold for G, and that all the vertices p 2 , . 
. ., p j +k +1 lie in the boundary of the face f 1 .This means that adding the cycle in Step 3 of strategy E does not violate the planarity of the final graph.Finally, Condition (2) ensures that either there already is a path of length 2 connecting p 1 and p j +1 of the desired color, or Builder can add it by connecting p 1 to c 1 and c 1 to p j +1 . Further problems The question of whether the class of planar graphs is self-unavoidable is still open.To disprove it, it suffices to find a single planar graph G such that Painter can ensure that a monochromatic copy of G never occurs when playing on planar graphs.The graph K 4 seems to be a good candidate.Conjecture 8. K 4 is avoidable on the class of planar graphs. Unfortunately, Painter's winning strategies seem to be much harder to find.So far, only one such strategy has been presented; namely a strategy showing that a triangle is avoidable on the class of outerplanar graphs given in [5]. Figure 3 : Figure 3: A graph G and its complete block graph B(G). Figure 4 : Figure 4: A full outerplanar graph H with its full binary tree H * (r) of height 3 (left).The structure of (T, G, v, h) (right). Figure 5 : Figure 5: Forcing a monochromatic copy of (T, G, v, H), where T is a path of length 2, G is a cycle of length 4, v is any vertex of V (G), and H is the full outerplanar graph of height 1. 2 Figure 6 : Figure 6: Forcing cycle of length 4 by strategy C. 3 . In (T , G , v ), connect two vertices of T = B S (G n ) by an edge if and only if the corresponding vertices are connected by an edge in G n . Figure 8 : Figure 8: Left: The graphs G 3 (dashed) and B S (G 3 ) (solid) for j = 2, k = 2. Black vertices represent the vertices of B(G 3 ) whereas white ones are the subdividing vertices.Right: Forcing G 3 , Case 2 -one of the edges added to (T , G , v ) = (B S (G 3 ), G 2 , v 2 ) is blue.
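The complete block graph B̂(G) used above (Remark 5 and Figure 3) can be computed directly from the biconnected components of G; the following is a minimal, hypothetical sketch in Python/networkx (the block labels are arbitrary, and bridges count as blocks).

```python
import networkx as nx

def complete_block_graph(G: nx.Graph) -> nx.Graph:
    """Complete block graph: one node per vertex of G, one node per block
    (maximal 2-connected subgraph, bridges included), and an edge v -- B
    whenever vertex v lies in block B."""
    B = nx.Graph()
    B.add_nodes_from(G.nodes, kind="vertex")
    for i, block_vertices in enumerate(nx.biconnected_components(G)):
        block = ("block", i)
        B.add_node(block, kind="block", vertices=frozenset(block_vertices))
        B.add_edges_from((v, block) for v in block_vertices)
    return B

# As noted after the definition, B^(G) is a tree whenever G is connected:
G = nx.lollipop_graph(4, 3)        # a K4 block with a pendant path attached
assert nx.is_tree(complete_block_graph(G))
```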
7,740.6
2014-03-24T00:00:00.000
[ "Mathematics" ]
An investigation on solving cooperative problem solving Article history: Received October 28, 2013 Received in revised format 25 November 2013 Accepted 15 January 2014 Available online January 17 2014 One of the most important techniques to improve teaching skills is to use cooperative problem solving (CPS) approach. Implementing CPS techniques in elementary schools helps us train more creative generations. This paper presents an empirical investigation to find out how much elementary teachers use CPS techniques at different schools located in city of Zanjan, Iran. The study designs a questionnaire and distributes it among 90 volunteers out of 120 teachers who were enrolled in elementary schools. The study analyzes the data using some basic statistics and the result indicates that teachers maintain an average CPS score of 39.37, which is well above the average level. The study provides some guidelines for exploring teachers CPS’s capabilities. © 2014 Growing Science Ltd. All rights reserved. Introduction One of the most important methods for teaching effectively is to encourage students to participate in detailed discussion and share their thoughts (Johnston et al., 2000;Nelson, 1999).Cooperative problem solving (CPS) is one of the most popular methods for this purpose (Redish, 2003).Giangreco et al. (1994) presented various techniques of planning, adapting, and implementing inclusive educational experiences for students with different capabilities.Inclusive education can be defined differently.Heterogeneous grouping is the first definition when some students are educated together in groups and student could develop most when in the physical, emotional as well as social presence of having no disabilities. Inclusive education can be stated as a sense of belonging to a particular group where all students are members of a class, simultaneously (Stainback & Stainback, 1996).Although they have a common objective for cooperation but they have various aims of learning, which is stated as multilevel instruction (Campbell et al., 1988;Collicott, 1991).Inclusive education looks an individualized balance between the academic and social characteristics of schooling (Giangreco, 1992).There are several advantages of encouraging students to involve in teaching progress through problem solving procedures.First, most problem solvers are optimistic people and they enter the process with knowledge that every challenge they encounter could be an opportunity to facilitate inclusive education.Second, problem solvers have the right to alter between divergent and convergent thinking.Third, problem solvers also actively defer and engage their judgment.According to Firestien (1989) effective problem solvers refrain from this practice and detect times to actively defer judgment and times to involve judgment, purposefully and they are associated with divergent and convergent thinking.In a divergent phase, judgment is actively deferred while in a convergent phase, judgment is engaged, intentionally.In addition, problem solvers consider challenges as fun and they take necessary action when needed. Problem-based learning (PBL) has been implemented within health care professional educational programs to help critical thinking skills via a learner-centered technique.Hammel et al. 
(1999) examined student evaluations of the first three class cohorts taking part in a PBL-based curriculum.They reported that students perceived that a PBL method adopted consistently across the curriculum contributed to the development of information management, critical reasoning, communication, and team-building skills; however, they also detected some challenges such as time and role management, information access, instructor versus PBL expectations and practices, and coping with the ambiguity of knowledge and reasoning. The proposed study This paper presents an empirical investigation to find out how much elementary teachers use CPS techniques at different schools located in city of Zanjan, Iran.The sample size is calculated as follows, where N is the population size, and N=120, the number of sample size is calculated as n=90.The proposed study of this paper considers the following two questions. 1. How much do teachers use CPS activities at school? 2. Is there any difference between the level of CPS activities and teachers' job experiences? We have performed normality test using Kolmogorov-Smirnov test where the null hypothesis states that all data are normally distributed and the alternative hypothesis states that data are not normally distributed.Table 1 demonstrates the results of our findings, The results In this section, we present details of our findings on testing two hypotheses of the survey. Examining the level of CPS We first look at the level of CPS among teachers who were enrolled in elementary schools in city of Zanjan, Iran.Table 2 presents details of our findings on mean and max of numbers.As we can observe from the results of Table 2, teachers maintain relatively a high level of CPS, which indicates they all have good capabilities of asking students to participate in different discussions. The relationship between job experience and CPS capabilities The second question of the survey is associated with the relationship between teachers' job experience and CPS capabilities.We have categorized teachers in terms of their job experiences and performed an ANOVA test between two groups.Table 3 demonstrates the summary of our findings.Based on the results of Table 3 we can conclude that there was no difference between two groups and job experience did not play essential role on CPS development skills. Discussion and conclusion In this paper, we have performed an empirical investigation to study the level of CPS among elementary teachers who were enrolled in different regions of city of Zanjan, Iran.The study has detected that teachers have maintained a good CPS level, which is a good sign of modern teaching skills and they must be supported to improve and retain such capabilities.Unfortunately, in Iran, there are some barriers of CPS implementation.First, CPS implementation may create different discussions and many parents blame CPS adoption because of having noisy classes.Second, many parents ask teachers not to use CPS methods because they believe this method wastes teachers' times.There is no doubt that CPS method is time consuming compared with traditional one but there are many advantages on CPS implementation as discussed earlier in this study.There are also teachers who are not fully familiar with CPS methods and the study recommends offering a short course programs for helping teachers become more familiar with this technique. 
As we can observe from the results of Table 1, all data are normally distributed, and we may therefore use standard parametric tests to examine the two hypotheses of the survey. Next, we present details of our findings on testing these hypotheses. Table 3. The summary of the ANOVA test
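A hedged sketch of the two analyses described in this study — the Kolmogorov–Smirnov normality check and the one-way ANOVA across job-experience groups — using scipy. The scores and the grouping below are synthetic placeholders, since the study's raw data are not reproduced in the paper.

```python
import numpy as np
from scipy import stats

# Synthetic placeholders: 90 CPS scores centred on the reported mean of 39.37
# and a hypothetical two-level job-experience grouping.
rng = np.random.default_rng(0)
cps_scores = rng.normal(loc=39.37, scale=5.0, size=90)
experience_group = rng.integers(0, 2, size=90)   # e.g. 0 = less experienced, 1 = more

# 1) Kolmogorov-Smirnov test against a Normal with the sample mean and sd
#    (null hypothesis: the CPS scores are normally distributed).
ks_stat, ks_p = stats.kstest(cps_scores, "norm",
                             args=(cps_scores.mean(), cps_scores.std(ddof=1)))

# 2) One-way ANOVA comparing mean CPS between the experience groups
#    (null hypothesis: no difference between the groups).
groups = [cps_scores[experience_group == g] for g in np.unique(experience_group)]
f_stat, anova_p = stats.f_oneway(*groups)

print(f"KS: D = {ks_stat:.3f}, p = {ks_p:.3f};  ANOVA: F = {f_stat:.2f}, p = {anova_p:.3f}")
```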
1,450.4
2014-01-01T00:00:00.000
[ "Education", "Computer Science" ]
Imputing missing covariate values for the Cox model Multiple imputation is commonly used to impute missing data, and is typically more efficient than complete cases analysis in regression analysis when covariates have missing values. Imputation may be performed using a regression model for the incomplete covariates on other covariates and, importantly, on the outcome. With a survival outcome, it is a common practice to use the event indicator D and the log of the observed event or censoring time T in the imputation model, but the rationale is not clear. We assume that the survival outcome follows a proportional hazards model given covariates X and Z. We show that a suitable model for imputing binary or Normal X is a logistic or linear regression on the event indicator D, the cumulative baseline hazard H0(T), and the other covariates Z. This result is exact in the case of a single binary covariate; in other cases, it is approximately valid for small covariate effects and/or small cumulative incidence. If we do not know H0(T), we approximate it by the Nelson–Aalen estimator of H(T) or estimate it by Cox regression. We compare the methods using simulation studies. We find that using log T biases covariate-outcome associations towards the null, while the new methods have lower bias. Overall, we recommend including the event indicator and the Nelson–Aalen estimator of H(T) in the imputation model. Copyright © 2009 John Wiley & Sons, Ltd. INTRODUCTION Multiple imputation (MI) [1] is commonly used to perform statistical inference in the presence of missing data. Unlike simpler imputation methods, it can yield inferences that accurately reflect the uncertainty due to the missing data. MI is typically more efficient than complete cases analysis when covariates have missing values. Implementations in Stata [2,3], SAS [4] and R [5] have led to its widespread use. broadly missing at hospital level: that is, its measurement appears to have been largely a matter of hospital policy, and it appears to be approximately MCAR. We exclude three patients with no follow-up. A number of possible prognostic variables were available, from which five were selected by analysis of the patients with observed ESR. The variables are listed in Table I. Analysis by multivariable fractional polynomials [16] suggested that wcc and t_mt should be entered into the analysis model as wccˆ3 and log(t_mt+1), respectively. Because of the large number of missing values of ESR, complete cases analysis uses less than half the data set. However, the rest of the data set carries information about the associations between the other covariates and the outcome, so it is sensible to use MI for the analysis of these data. We will use the data to compare different ways to incorporate the outcome in the imputation model. Multiple imputation We briefly describe MI for a single incomplete variable X , a vector of complete variables Z and complete outcome Y . We assume that we have an imputation model p(X |Y, Z ; ) parameterized by . Formally, MI involves drawing values of the missing data X mis from the predictive distribution p(X mis |X obs , Y, Z ) = p(X mis |X obs , Y, Z ; ) p( |X obs , Y, Z )d , where p( |X obs , Y, Z ) is the Bayesian posterior distribution of [1]. 
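Restated in standard notation (the integral sign and the parameter symbol, written here as θ, were lost during extraction), the predictive distribution above is

```latex
p(X_{\mathrm{mis}} \mid X_{\mathrm{obs}}, Y, Z)
  = \int p(X_{\mathrm{mis}} \mid X_{\mathrm{obs}}, Y, Z;\, \theta)\,
         p(\theta \mid X_{\mathrm{obs}}, Y, Z)\, \mathrm{d}\theta .
```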
In practice, this may be achieved (with implicit vague priors) by (1) fitting the model p(X |Y, Z ; ) to the cases with observed X , yielding an estimate (typically an MLE)ˆ with estimated variance-covariance matrix S ; (2) drawing a value of , * , say, from its posterior, perhaps approximated as N (ˆ , S ); and (3) drawing values of X mis from p(X |Y, Z ; * ) [6]. Where some of the Z variables are also incomplete, the method of MI by chained equations (MICE) [10] starts by filling in missing values arbitrarily, then applies the above univariate method for each incomplete variable in turn, using the current imputed values of Z when drawing new values of X , and vice versa. The procedure is iterated until convergence, which often requires fewer than 10 cycles [2]. An alternative non-iterative procedure is available if the data are monotonically missing [8]. Once imputed data sets have been created, analysis is performed on each data set separately. Let Q r be the point estimate of a (scalar or vector) parameter of interest for the r th imputed data set (r = 1, . . . , m) with variance-covariance matrix U r . These values are then combined by Rubin's rules [1]: the overall point estimate isQ = (1/m) r Q r with varianceŪ +(1+1/m)B, wherē U = (1/m) r U r and B = (1/(m −1)) r (Q r −Q)(Q r −Q) T . Tests and confidence intervals for a scalar parameter are constructed using a t-distribution with degrees of freedom given by Rubin's formula [1] or an alternative [17]. Conditional distribution of covariates We now focus on the case of a survival outcome T with event indicator D (1 for events, 0 for censored observations). We assume that the outcome follows the Cox proportional hazards model where again X is incomplete and Z is complete. We also need an 'exposure model' p(X |Z ; ) in order to allow for the incomplete X . In the appendix, we prove a number of exact and approximate results about the imputation model p(X |T, D, Z ) in terms of the model parameters = ( , X , Z , h 0 (.)). These results are used to motivate regression models p(X |T, D, Z ; ), where the parameter is some function of . In practice, we do not know , but we can estimate the parameters directly from the complete cases. Therefore, the models below are stated in terms of the unknown parameters , which typically differ across different models. First, with binary X and no Z , we have In other words, the missing X may be imputed by fitting a logistic regression of X on D and H 0 (T ) to the complete cases. Second, with binary X and binary or categorical Z , if we take the most general exposure model logit p(X = 1|Z ) = Z , then we get where terms such as 3Z represent a set of dummy variables with their coefficients. In other cases we can only obtain approximate results. For binary X with more general (possibly vector-valued) Z , we make a Taylor series approximation for exp ( Z Z ) that is valid when Z Z has small variance. Using the exposure model logit p(X = 1|Z ) = 0 + 1 Z gives and the addition of an interaction term 4 H 0 (T )Z improves the accuracy of the approximation. Further, if the user believed that a particular transformation of Z was needed for predicting X , then this transformation should be entered in the imputation model. For Normal X , we make a fuller Taylor series approximation for exp ( X X + Z Z ) that is valid when X X + Z Z has small variance. 
Using the exposure model X |Z ∼ N( 0 + 1 Z , 2 ) gives using a first-order approximation, and again the addition of an interaction term 4 H 0 (T )Z improves the accuracy of the approximation. Equations (A7) and (A8) in the appendix suggest that departures from the above model will be most marked when both var( X X ) and H 0 (t) exp ( XX + ZZ ) (roughly the overall cumulative hazard at the event time T ) are large. A small empirical investigation We explored the distribution of X |T, D empirically using 100 000 simulated data points and the model described above with standard Normal X , X = 0.7, h 0 (t) = 1 (so H 0 (t) = t) and censoring times uniformly distributed on [0, 2]. Figure 1 shows smoothed graphs of the conditional mean and standard deviation, E[X |T, D] and SD(X |T, D). A linear regression on D and H 0 (T ) would be shown by parallel straight lines for the mean and a constant SD. Some departures from linear regression are seen: the mean graphs are somewhat curved and converging, and the SD declines with T . Taken together, these results suggest that logistic or linear regression of X on D, H 0 (T ) and Z may be appropriate in many situations, and that including an interaction between Z and H 0 (T ) may improve the approximation, but that the approximation will not work well in situations with strong covariate effects and large cumulative incidences. Implementation In practice, H 0 (T ) is unknown and must be estimated. We consider three possible methods. Substantive knowledge. In many applications, the baseline hazard may be approximately known: for example, in following a cohort of healthy individuals over a small number of years, the baseline hazard could be assumed to be roughly constant. In this case it would be reasonable to assume H 0 (T ) ∝ T . This may be a useful 'off-the-shelf' method. Nelson-Aalen method. When the covariate effects X and Z are small, we may approximate H 0 (T ) ≈ H (T ), which is easily estimated before imputation using the Nelson-Aalen estimator. It seems possible that this method will perform well for moderate sized X and Z because small errors in estimating H 0 (T ) are unlikely to have much impact on the imputations. Cox method. We also propose estimating H 0 (T ) iteratively: first, imputing X using the current estimate of H 0 (T ), then fitting the Cox proportional hazards model to the data using the current values of the covariates X, Z and extracting a revised estimate of the baseline hazard function H 0 (T ). This fits conveniently within the MICE algorithm: in each imputation cycle, as well as updating each incomplete variable in turn, we also update H 0 (T ) by fitting the Cox model. Because it is unlikely that H 0 (T ) will change much from one iteration to the next, we also consider a less computationally intensive version in which H 0 (T ) is updated only on the first k cycles. Here we will use k = 2. Theoretical properties. We note that the methods described in Sections 3.4.2 and 3.4.3 do not acknowledge the uncertainty in estimating H 0 (T ). As a result, they are not Bayesianly proper [1], so that standard errors may be too small and confidence intervals may be too narrow. However, we do allow for uncertainty in the coefficient of H 0 (T ), so we do not expect any undercoverage to be important. SIMULATION STUDY We now present simulation studies to compare the methods introduced in Section 3. These are summarized in Table II. We first consider the simple case of binary or Normal X and no Z . 
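As a minimal, hedged illustration (not the authors' code), the sketch below estimates H(T) with the Nelson–Aalen estimator, fits a linear imputation model for a Normal X on D, Ĥ(T) and Z among the complete cases — the NA method — and pools per-imputation estimates by Rubin's rules. The parameter draw that makes MI "proper" is omitted for brevity, and tied event times are handled naively.

```python
import numpy as np

def nelson_aalen(time, event):
    """Nelson-Aalen estimate of the cumulative hazard H(t), returned at each
    subject's own observed (event or censoring) time.  Ties handled naively."""
    time, event = np.asarray(time, float), np.asarray(event, float)
    order = np.argsort(time)
    n_at_risk = len(time) - np.arange(len(time))        # risk set just before each time
    H_sorted = np.cumsum(event[order] / n_at_risk)      # sum of dN_i / Y_i
    H = np.empty_like(H_sorted)
    H[order] = H_sorted
    return H

def impute_once(X, D, H, Z, rng):
    """One stochastic imputation of a Normal covariate X from a linear
    regression on (D, H(T), Z) fitted to the complete cases (NA method).
    A proper-MI implementation would also draw the coefficients from their
    approximate posterior before imputing; that step is omitted here."""
    obs = ~np.isnan(X)
    A = np.column_stack([np.ones(len(D)), D, H, Z])
    beta, *_ = np.linalg.lstsq(A[obs], X[obs], rcond=None)
    sigma = (X[obs] - A[obs] @ beta).std(ddof=A.shape[1])
    X_imp = X.copy()
    X_imp[~obs] = A[~obs] @ beta + rng.normal(0.0, sigma, size=(~obs).sum())
    return X_imp

def rubin_pool(estimates, variances):
    """Rubin's rules for a scalar parameter: pooled estimate and total variance
    W-bar + (1 + 1/m) B, where B is the between-imputation variance."""
    q, u = np.asarray(estimates, float), np.asarray(variances, float)
    m = len(q)
    return q.mean(), u.mean() + (1 + 1 / m) * q.var(ddof=1)
```

In practice one would loop: impute X, fit the Cox model on the completed data (for example with lifelines or R's survival package), store the coefficient and its variance, and feed the m results to `rubin_pool`.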
One covariate: design of simulation study The covariate X was either binary with P(X = 1) = X or standard Normal, so that its standard deviation = √ X (1− X ) or 1, respectively. X was missing completely at random with probability M . Survival times were drawn from a Weibull distribution h T (t) = T t −1 exp ( X X ). Random censoring times were drawn from a Weibull distribution with the same shape parameter, h C (t) = C t −1 . The parameter values used were M = 0.5; X = 0.5; T = 0.002; = 1; X = 0, 0.5, 1; and C = 0.002 (corresponding to approximately 50 per cent censoring). When X = 0, Table II the sample size n was chosen to give 90 per cent power to detect a significant association between binary X and survival at the 5 per cent level, using Collett's formula [18]. For Normal X , the sample size was chosen to be the same as for binary X with the same value of X . When X = 0, the sample size was chosen to be the same as that for X = 0.5. In sensitivity analyses, one parameter was varied at a time: 'High censoring', C = 0.01 (corresponding to approximately 83 per cent censoring); 'low missing', M = 0.2; 'shape 2', = 2; and 'administrative censoring', censoring at a fixed time computed to give the same censored fraction as random censoring when X = 0. The imputation methods described above were used to construct m = 10 imputed data sets. The analysis model was a Cox regression on X . Results from the imputed data sets were combined using Rubin's rules as described in Section 3.1. For comparison, we also analysed each simulated data set before introducing missing values (PERFECT) and using complete cases only (CC). In each case we estimated the bias and the empirical standard error of the point estimate; the relative error in the average model-based standard error, defined as its difference from the empirical standard error of the point estimate minus 1; the coverage of a nominal 95 per cent Normal-theory confidence interval; and the power of a Normal-theory 5 per cent significance test of the null hypothesis of =0. Table III shows the key results. NO-T is strongly biased towards the null, the proportionate bias equalling the proportion of missing data. LOGT is mildly biased (up to 10 per cent) towards the null. All other methods have no appreciable bias. All methods except NO-T have similar empirical standard errors (results not shown). NO-T has smaller empirical standard error as a result of its bias towards the null. One covariate: results for binary X . All methods except NO-T have model-based standard errors that compare well with the empirical standard errors. NO-T has a standard error that is up to 70 per cent too large. All methods except NO-T have coverage between 93 and 96 per cent in all cases, while coverage of NO-T varies from 73 to 100 per cent (results not shown). Power was very low for NO-T, reduced by up to 6 per cent for LOGT and by up to 3 per cent for T2 (only when = 1), compared with other methods. Differences in power between other methods appear to be consistent with chance. Results with 'high censoring' were very similar to the base case; results with 'low missing' showed weaker patterns than the base case; and with 'Shape 2' and 'Administrative censoring', results with < 1 showed weaker patterns than those shown with = 1. Because of its poor properties, we do not consider NO-T in further simulation studies. Table IV show that some methods, notably COX but also LOGT and T2 (when = 1), show small bias towards the null. 
(Note that when n=84 the PERFECT and CC methods show small-sample bias away from the null.) We did not explore precision because of the presence of bias. Coverage was 93-96 per cent for LOGT, T, T2 and NA, and 92-97 per cent for COX and COX* (results not shown). Power was greatest with T, NA and COX* methods. T2 had noticeably less power than T when =1, but was not superior when = 2. We conclude that LOGT is somewhat suspect because of potential bias towards the null. All other methods considered are adequate, and T, NA and COX* may be the best. There is no gain from the extra computational burden in COX, which if anything performs worse than COX*. Two covariates: design of simulation study We next added in a complete covariate Z . We took X and Z to be standard Normal with correlation . The analysis model was now h T (t) = T t −1 exp ( X X + Z Z ). We were especially interested in seeing what happens as X and Z get larger, since Section 3.2 suggested that this is where our approximations may break down. We induced missing data in X only, using a MCAR mechanism as before. We took M = 0.5, T = 0.002, = 1 and random censoring with C = 0.002 in all simulations: these choices for M and C were found in the univariate study to be most sensitive to different analysis methods. Further, we took all combinations of = 0, 0.5; X = 0, 0.25, 0.5; and Z = 0, 0.25, 0.5. To explore how the missing data mechanism affects the results, we repeated the bivariate simulation under the MAR mechanism logit P(M X |X, Z ) = Z , where M X indicates missingness of X : this yielded 50 per cent missing values. We did this in the case X = Z = 1 only. We used all the methods proposed before, with the exception of NO-T, which had performed very poorly, and COX, which had not performed well enough to justify its computational burden in the univariate study. In addition, we introduced a modification of the NA method that includes the interaction of Z with H (T ) in the imputation model: we call this method NA-INT. Two covariates: results. Results forˆ X are given in Table V. We first consider the MCAR case. Bias towards the null increases with increasing values of X , Z and . It is worst for T2, being up to 20 per cent of the true value of . Precision is not compared because of the presence of bias. Model-based standard errors are up to 17 per cent too high, with the discrepancy increasing with X . Despite these problems, coverage was adequate (94-97 per cent) for all methods (results not shown). Power was greatest with T, NA, NA-INT and COX* methods, and worst for LOGT and T2. The NA-INT method performed very similarly to the NA method. Results for the MAR case show increased bias inˆ X , increased error in the model-based standard errors and decreased power, but the comparisons between methods are similar to the MCAR case. Results forˆ Z are given in Table VI. There was small bias away from the null when X >0 and =0.5 because the small bias inˆ X seen previously leads to residual confounding. Model-based standard errors were all accurate to within 10 per cent. Coverages ranged from 94 to 97 per cent (results not shown). Power was similar for all MI methods, but was substantially greater for MI than for CC. RESULTS FOR THE RENAL CANCER DATA As stated in Section 2, the analysis model of interest for these data is a proportional hazards model including covariates esr, haem, who, trt, (wcc)ˆ3 and log(t_mt+1). 
For ease of comparison, we scale the quantitative covariates esr, haem, (wcc)ˆ3 and log(t_mt+1) by their CC standard deviations. Before imputing the missing values, the skewed variables wcc and t_mt were transformed to an approximate Normal distribution using the lnskew0 program in Stata, which replaces a variable X with log (±X −k) where k and the sign are chosen so that log (±X −k) has zero skewness. Although esr was non-Normally distributed, it was not transformed because exploratory linear Bold cells indicate estimates that differ from the NA estimate by more than 20 per cent of the NA standard error, or standard errors that differ from the NA standard error by more than 20 per cent of the NA standard error. Monte Carlo error in parameter estimates is no more than 0.003 in all cases. regression on the other covariates suggested that its conditional distribution was approximately Normal. Imputation was performed on the transformed variables using the ice routine in Stata [2, 3] and including the outcome variables appropriate to each method. Transformed values of wcc and t_mt were converted back to the original scale and then formed into the terms wccˆ3 and log(t_mt+1) for the analysis model. The COX and COX* methods were implemented by additional programming within ice. We used m = 1000 imputations so that Monte Carlo error did not disguise any differences between methods. We first look at differences between the CC method and all imputation methods (Table VII). One would expect standard errors for the coefficient of a variable X to be smaller by MI than by CC when there are a substantial number of observations with observed X , but missing data in other variables. In the present data, this would suggest that, compared with CC standard errors, MI standard errors would be similar for esr and smaller for all other variables. The expected pattern is observed for the other variables. However, the standard errors for esr are somewhat increased. This may reflect other features of the current data or may be a chance finding. There are substantial differences in point estimates. Turning to comparisons between imputation methods, the main differences are seen for esr, with T2 and NO-T giving point estimates less than half those for other methods. Smaller differences are seen for other methods. The only other variable whose coefficients show substantial differences between imputation methods is haem. This is the variable with the strongest correlation (−0.61) with esr, and the differences between the methods reflect residual confounding as a consequence of the attenuated estimates of the coefficient of esr. DISCUSSION We have developed an approximate theoretical rationale for imputing missing covariates in a Cox model using new methods based on the cumulative baseline or marginal hazard (NA, NA-INT, COX and COX*). These methods have the appealing property that they are invariant to monotonic transformation of the time axis, like the Cox proportional hazards model itself, but unlike more commonly used methods (LOGT and T). Our simulation study allows us to choose between these methods. The NA method performed at least as well as the more complex NA-INT, COX and COX* methods, appearing to have the lowest bias and highest power in most simulations. We therefore consider this to be the best method in general. The NA method is simple to implement in standard software. 
For example, using ice in Stata [2,3], after the data have been stset, the Nelson-Aalen estimator is produced by sts gen HT=na and then the MICE algorithm is implemented by ice HT _d X* with appropriate options. However, all methods were somewhat biased towards the null when covariates were strongly predictive of outcome. This is because the imputation models were not entirely correct. The MI procedure might be improved by using predictive mean matching [19], which aims to draw from the empirical distribution rather than the fitted conditional distribution. Our explorations of this approach in the context of our simulation studies suggest that it can perform very poorly: in particular, when there is no true association between covariate and outcome, predictive mean matching gives implausible distributions of imputed values and very variable estimated coefficients. This appears to be a consequence of small imputation models; strengths and limitations of predictive mean matching are a topic for further research. This paper did not aim to compare MI with complete cases analysis. The standard view is that MI is more efficient than complete cases for estimating the coefficient of a variable whenever some of the other model covariates are incomplete [11]. Our results support this view, since MI procedures had greater power than complete cases in Table VI but not in Tables III, IV and V. Indeed, MI had worse power in Tables III, IV and V because we used too few imputations: had our aim been a fair comparison of MI with complete cases, we would probably have needed to use m = 50 or more imputations. We assumed a proportional hazards survival model. Other non-parametric survival models, such as the accelerated life model and the proportional odds model, do not yield simple imputation models for the covariates. For the proportional odds model, it can be shown that a Taylor series approximation with X ≈ 0 suggests a logistic model for X on S 0 (T ), D and their interaction. Thus in principle, different methods are required for these models. We suggest that the NA method might be a reasonable first choice, but that more flexible imputation models should be carefully considered. We have explored and compared methods in the setting of a single incomplete covariate, but our finding that D and H 0 (T ) should be included in imputation models for incomplete covariates is equally relevant for any form of MI that is based on regression models for incomplete covariates. These include imputing from a multivariate normal distribution [13], imputing using monotone missing methods and imputing via chained equations [5]. We therefore recommend that, instead of the logarithm of survival time, imputation should be based on the Nelson-Aalen estimate of the cumulative hazard to the survival time. APPENDIX A Under the PH model h(t|X, Z ) = h 0 (t) exp ( X X + Z Z ), the log-likelihood for the outcomes, given complete data, is = + X D + X H 0 (T ) (A3) so the correct model for imputing missing X is a logistic regression on D and H 0 (T ). If in addition X is small, the model further simplifies to logit p(X = 1|T, D) = + X (D − H 0 (T )), but this simplification is unlikely to be useful in practice, and we do not pursue it further. More generally, in the presence of a single categorical Z , model (A2) is exactly a logistic regression on D, Z , H 0 (T ) and the interaction between Z and H 0 (T ). In other cases, we have no exact results. 
However, if we assume an exposure model logit p(X = 1|Z) = α0 + α1 Z and approximate exp(βZ Z) ≈ exp(βZ Z̄) (where Z̄ is the sample mean of Z) in (A2) for small var(βZ Z), we get
6,059.2
2009-05-19T00:00:00.000
[ "Mathematics" ]
Large-scale intermittency and rare events boosted at dimensional crossover in anisotropic turbulence Understanding rare events in turbulence provides a basis for the science of extreme weather, for which the atmosphere is modeled by Navier-Stokes equations (NSEs). In solutions of NSEs for isotropic fluids, various quantities, such as fluid velocities, roughly follow Gaussian distributions, where extreme events are prominent only in small-scale quantities associated with the dissipation-dominating length scale or anomalous scaling regime. Using numerical simulations, this study reveals another universal promotion mechanism at much larger scales if three-dimensional fluids accompany strong two-dimensional anisotropies, as is the case in the atmosphere. The dimensional crossover between two and three dimensions generates prominent fat-tailed non-Gaussian distributions with intermittency accompanied by colossal chain-like structures with densely populated self-organized vortices (serpentinely organized vortices (SOV)). The promotion is caused by a sudden increase of the available phase space at the crossover length scale. Since the discovered intermittency can involve much larger energies than those in the conventional intermittency in small spatial scales, it governs extreme events and chaotic unpredictability in the synoptic weather system. Introduction Since fluid turbulence is chaotic, probability distributions in addition to averaged physical quantities are imperative in order to understand the turbulence deeply [1]. For the average energy flux, Kolmogorov showed that the energy injected at large (long) spatial scales cascades into smaller (shorter) scales and energy injection balances dissipation at small scales, forming an inertial range between the spatial scale of the energy injection and the dissipation range. Kolmogorov's self-similarity assumption asserts that the mean energy spectra ( ), where is the wavenumber, follow universal power laws, ( ) ∝ ' with ~− 5/3 for three-dimensional (3D) turbulence [2] and ~− 3 for 2D turbulence [3] within the inertial range. These power laws have been supported by experiments and numerical simulations, at least approximately [1]. The probability density function (PDF) ( ) of a macroscopic quantity A is critical for understanding the chaotic and intermittent behavior of turbulence. The Gaussian distribution predicts a very low chance of extreme events. Deviations from it can be probed by the n-th moment 〈 1 〉 = ∫ 1 ( ) , such as the flatness defined by ( ) = 〈( − 〈 〉) 7 〉/ 〈( − 〈 〉) 8 〉 8 ; the intermittency associated with promoted rare-event occurrences is typically signaled by exceeding the Gaussian value 3. Kolmogorov's claim with the assumption of self-similarity was challenged by Landau's remark [4] on the intermittent nature of the dissipation, which invalidates the self-similarity. Subsequent studies on intermittency, inspired by concepts such as anomalous scaling and multi-fractals [4][5][6][7][8][9], have proven fruitful and shown the existence of intermittency even in the inertial range. However, it should be pointed out that the intermittency (non-Gaussianity) expected in these studies emerges at small length scales, much smaller than the system size. In such small length scales or large wavenumbers k, the contained energy is small as well because of the Kolmogorov law ( ) ∝ ' with z<0. 
Therefore, the intermittency arising from these mechanisms (anomalous scaling and dissipation) have only a limited impact on the global phenomena at least for simulations with Reynolds number available in the present computer power (in comparison to the finding under the condition of the present work as we see later including the atmospheric simulation). In fact, non-Gaussian distributions are empirically visible only for high-order spatial derivatives associated with the anomalous scaling expected for asymptotically small scales or small scales connected to the dissipation range in isotropic fluids [5][6][7][8][10][11][12][13][14]. The present study describes a case in which qualitatively new and strong intermittency emerges in the inertial range potentially at a much larger length scale than the scale of the dissipation and the anomalous scaling when the fluid is in a flattened 3D space, namely flat 3D fluid (F3DF), where the vertical dimension (thickness) is much smaller than the two horizontal dimensions. F3DF behaves 2D-like at scales larger than the crossover length scale (CLS), determined by the thickness, and 3D-like at smaller length scales. Unlike the large-scale intermittency associated with shear or wakes behind obstacles, the present intermittency in F3DF is not associated with large-scale forcing or boundaries. For instance, the atmosphere is a typical F3DF. Recent observations [15,16] and numerical simulations [17,18] have revealed the atmosphere's average statistics: the energy spectra of winds exhibit the 2D-like exponent ~− 3 for the scale of O(100~10,000 km) and the 3D-like exponent ~− 5/3 for scales smaller than the CLS, around O(100 km). The present study demonstrates that the dimensional crossover at the CLS generates unexplored strong intermittency, further spreading to the 3D inertial range. This spatial intermittency is also corroborated by the temporal intermittency generated at the crossover time scale derived from the CLS with the aid of the spatio-temporal correspondence hypothesized by Tennekes [19]. The CLS can be much larger than the length scales of the dissipation and the asymptotic anomalous scaling regimes, thereby the intermittency induced at the CLS may involve much larger energies than the intermittency induced at those small length scales. This is indeed the case for the atmosphere, where the CLS is around 100 km and the dissipation range is below 1 km. Our study further reveals that the CLS intermittency generates sinuous chain-like structures, which we call serpentinely organized vortices (SOV). The width and depth of the SOV structure are comparable to the CLS, while the length is much longer than the CLS as schematically illustrated in Fig.1. Inside the SOV structures, extreme events are highly promoted, which is surrounded by relatively calm areas. A SOV structure contains a mass of coherently assembled elementary vortices. Sizes of these element vortices are comparable or smaller than the CLS. The SOV structure looks like densely distributed many peas (vortices) contained in a pod (chain structure). The finite-time Lyapunov exponent (FTLE) is a measure of chaos, in which infinitesimally small difference in the initial condition grows exponentially (see Appendix A). We find that region of large FTLE forms large-scale chain structures in the same regions with the SOV observed at the CLS intermittency, indicating that the same origin as the CLS intermittency enhances the chaos as well. 
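A toy, one-direction estimate of the FTLE mentioned above (Appendix A, not reproduced here, uses the maximal stretching direction via the Cauchy–Green deformation tensor; the sketch below simply advects two nearby tracers and assumes a generic `velocity(x, t)` callable standing in for the simulated flow field).

```python
import numpy as np

def ftle_estimate(velocity, x0, t0, horizon, dt=1e-3, eps=1e-8):
    """Crude finite-time Lyapunov exponent at x0: advect two tracers that start
    a distance eps apart and measure their exponential separation rate.
    `velocity(x, t) -> np.ndarray` is a placeholder for the flow field."""
    x = np.asarray(x0, dtype=float)
    y = x + eps * np.array([1.0, 0.0, 0.0])   # perturbation along one direction only
    t = t0
    while t < t0 + horizon:
        x = x + dt * velocity(x, t)           # forward-Euler advection of both tracers
        y = y + dt * velocity(y, t)
        t += dt
    return np.log(np.linalg.norm(y - x) / eps) / horizon
```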
Here, the origin of the intermittency is ascribed to a strong violation of the self-similarity intrinsic at the CLS. Furthermore, a global atmospheric simulation reveals similar strong intermittency and chaos at the CLS, namely at the mesoscale O(100 km), implying the universality of our finding. This finding is important because the CLS intermittency involves much larger energies than those in the asymptotically small scales responsible for the conventional intermittency in isotropic fluids and has much stronger spill-over effects, for instance those on the origin of synoptic events such as cyclogenesis. Direct numerical simulation of F3DF We performed a direct numerical simulation (DNS) for a F3DF. The continuity equation (∇ ⋅ = 0) and the following incompressible Navier-Stokes-type equation are solved for a flat cuboid with periodic boundary conditions for all three directions. Table 1 summarizes the computational settings. The computational domain has the spatial size of 2 × 2 × (2 / ), where λ is the aspect ratio. The present study mainly discusses the case with λ = 64 (i.e., N4096-λ64); two other cases (i.e., N4096-λ128 and N1024-λ32) are used to investigate the effect of the aspect ratio. The grid resolution kmaxη, where kmax (= Nx/3 in the present simulation with the two-thirds dealiasing method) is the maximum effective wavenumber and η (= ( ] • 〈 〉) ID/7 , where ε is the energy dissipation rate) is the Kolmogorov scale, is larger than 4, and is thus fine enough for investigating dissipation-scale motions. The last source term ( , ) represents the divergence-free random force used to maintain turbulence [20,21], through which the energy is supplied at an input rate of `a = 4 at input wavenumbers centered at `a = 4 with a range of ±2 throughout this paper. The second-last term represents the 8-th order super drag with H = 4 [22]. This term is added simply to absorb the energy inversely transferred from the energy source. It should be noted that the inverse-cascade process at <`a is not the main subject of this study and details of the super drag term do not alter the findings. The pseudospectral method based on the Fourier-Galerkin method was used to solve the governing equations [23]. After having confirmed that the flow reached a statistically steady state, the simulation was continued for a sufficiently long time, compared to the energy input time scale (`a`a 8 ) ID/] , and statistics were collected. with earlier studies [2,21,24]. The 2D-to-3D crossover from = −3 to -5/3 occurs at d~0 .5 pq , corresponding to roughly half of the inverse of the vertical system size Q , namely pq = 2 / Q = Q st1 (= ) [2,24]. The observation that the crossover from = −3 to -5/3 occurs always at a somewhat smaller wavenumber than pq has been previously reported [21,25]. A comparison between Figs. 2(a) and (b) clearly shows that the crossover from = −3 to -5/3 at ~0.5 pq is a universal phenomenon, irrespective of the aspect ratio of the system. Figure 3 shows that the horizontal velocity P (and equivalently O ) follows essentially a Gaussian distribution, as expected, whereas the horizontal velocity increment at half-height separation P ( Q /2, 0) = P ( , , = Q /2) − P ( , , = 0) clearly deviates, exhibiting an intermittency (note that the x and y dependences are omitted in the notation). 
We demonstrate later that this non-Gaussianity of Δ P emerging at the CLS increment differs from the non-Gaussianity at small increments related to the dissipation scale and the anomalous scaling regime, which is restricted to asymptotically small scale in presently available simulations [5][6][7][8]. It should also be emphasized that such non-Gaussianity of P is not clearly visible in 3D homogeneous fluid even when one simulates by using presently available fastest supercomputers because the Reynolds number cannot be taken large enough (see such an example in Fig. B1(a)). Figure 4 shows snapshots of the magnitude of the horizontal velocity | d |M= w O 8 + P 8 R and the magnitude of the horizontal velocity increment The intermittency associated with the non-Gaussianity is manifested in the snapshot for |Δ d | as sparsely distributed areas that conspicuously exhibit promoted extreme events (as an assembled needle-like structure) (typically |Δ d |/ y z > 4) in Fig. 4(b), in contrast to the gentle structure in Fig. 4(a). Figure 5 shows the flatness of the horizontal component of the velocity | d } obtained after filtering by the spherical-shell filter (see Appendix C1), which picks up only the contribution within the wavenumber range − Δ /2 < < + Δ /2 with a window width of Δ = 1. The flatness of | d } shows a sharp peak at a filter size equal to the CLS, e.g., = 64 for N4096-λ64, irrespective of the aspect ratio. The peak at = pq clearly demonstrates that the present intermittency is distinct from the conventional asymptotically small-scale intermittency. Such a prominent intermittency is not visible in the inertial range in 3D homogeneous isotropic flow, as shown in Fig. B1(b). Figure 6 shows snapshots of the -plane cross section for the horizontal velocity increment, enstrophy, and FTLEs (see Appendix A), which measure unpredictability (chaos), at the same simulation time as that in Fig. 4. A remarkable feature that persisted during the simulations (see for example Figs. 6(a) and (b)) is that an intermittency consisting of prominent eddies of CLS size is assembled and forms SOV structures. Note that the raw enstrophy, i.e., even without any filter operations, exhibits the intermittency. This is due to the derivative operations applied to the velocity in the derivation of the enstrophy that give more weight to small scales, making the intermittency visible at the CLS. Figure 6(c) indicates that in regions with such SOV structures, the Lyapunov exponent is remarkably large. Later in the discussion, we will discuss about the origin of the chaotic behavior at the SOV structure. Since the SOV structure has much longer length than the CLS, one might argue that the structure could be induced simply by the external artificial forcing. However, this is not the case. Figures 7(a-e) show the low-pass filtered horizontally oriented vorticity at the same xy-plane cross section as that in Fig. 6(a,b) (see Appendix C2 for the low-pass filtering). The SOV structure appears only in Figs. 7(d) and (e), where the modes with = pq are included. This indicates the necessity of the CLS contribution for the SOV structure formation. It should be emphasized that the conventional asymptotically small-scale intermittency does not generate such a large coherent structure. Figure 8 shows an example of a 3D snapshot of horizontally oriented vorticity and enstrophy. It reveals that the long SOV structures consist of a mass of assembled small spots or specks, each of which has a scale comparable to the CLS or smaller. 
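A hedged numpy sketch of the spherical-shell filter described in Appendix C1 and of the flatness diagnostic plotted in Figure 5, for a scalar field on a triply periodic grid; the domain lengths and normalisation below are illustrative assumptions, not the simulation's actual settings.

```python
import numpy as np

def flatness(a):
    """F(A) = <(A - <A>)^4> / <(A - <A>)^2>^2  (Gaussian value: 3)."""
    a = a - a.mean()
    return (a**4).mean() / (a**2).mean() ** 2

def shell_filter(field, K, dK=1.0, lengths=(2*np.pi, 2*np.pi, 2*np.pi)):
    """Keep only Fourier modes with K - dK/2 < |k| < K + dK/2 for a 3-D field
    on a periodic box with side lengths `lengths`."""
    ks = [2*np.pi*np.fft.fftfreq(n, d=L/n) for n, L in zip(field.shape, lengths)]
    KX, KY, KZ = np.meshgrid(*ks, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
    mask = (kmag > K - dK/2) & (kmag < K + dK/2)
    return np.real(np.fft.ifftn(np.fft.fftn(field) * mask))

# Scanning the filter wavenumber, as in Figure 5 (u_h is the horizontal velocity field):
# F_of_K = [flatness(shell_filter(u_h, K)) for K in range(2, 129)]
```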
The temporal evolution of the large SOV structures is shown in Supplementary Movie S2, which shows that the large SOV structures move on a slow time scale, such as the eddy turnover time L_h/u_rms (where L_h is the horizontal size of the system and u_rms is the root mean square of the horizontal velocity). In order to further clarify the CLS intermittency, Gaussian filtering was applied, in which the weight given by a Gaussian distribution centered at k = 0 with width 1/r is imposed as the filter (see Appendix C3). This filter can also be regarded as a real-space filter that measures the local distribution with width r, because the Gaussian distribution is Fourier-transformed into another Gaussian in real space with the inverse width. Therefore, one can gain insight into the intermittency in real space. Figure D1 shows a peak of the flatness of the Gaussian-filtered velocity at around 0.5 k_cr. It was also confirmed that an isotropic 3D fluid does not show such intermittency, except at the dissipative small scales, when the Reynolds number is comparable to that of the F3DF simulation performed in the present work (see Fig. B1). All additional simulations support that the intermittency is triggered exclusively at the CLS.

The present DNS adopts periodic boundary conditions and spatial Fourier analysis, which are not applicable to usual flows in the real world. Instead, time-series data at a measurement point are commonly analyzed. When the energy-containing large scale and the scale of interest are well separated, the Tennekes sweep hypothesis [19], which states that large-scale eddies advect small-scale ones, can map temporal data with time t into spatial data with length scale ℓ via the relation ℓ = Ut, where U is the representative velocity of the system. Figure 8 shows that strong intermittency starts rising at k* ≡ 2π/ℓ ~ k_cr. This supports that the strong intermittency at the CLS is separated from the conventional small-scale intermittency [1,[5][6][7][8][10][11][12][13][14]. The enhanced intermittency continues rising above k_cr and plateaus at larger k*. This spill-over effect may be a consequence of the influence of the higher-harmonic peaks in Fig. 5(a), which are broadened due to the approximate nature of the Tennekes hypothesis.

Atmospheric simulation

The results of a high-resolution simulation using the global atmospheric model MSSG-A [26] are described here. MSSG-A is one of the three models that participated in the global 7-km-mesh nonhydrostatic-model intercomparison project for typhoon predictions [27]. Its dynamical core is based on nonhydrostatic equations, and it predicts the three wind components, air density, and pressure. A six-category bulk cloud microphysics model is used for the equation of state for water; that is, MSSG-A is a cloud-resolving model. In contrast to the DNS for F3DF, this global simulation accounts for the effects of Earth's rotation, gravity, fluid compression/expansion, topography, moisture, and heat radiation. The five-day time integration from 00:00 UTC on 13 September 2013, performed for Typhoon Man-yi, was analyzed. In the energy spectrum in Fig. 10(a), the crossover of the slope from −3 to −5/3 occurs at the horizontal wavenumber corresponding to O(100 km), as in earlier observations [15,16] and atmospheric simulations [17,18]. In contrast to the equality between the CLS and the domain depth in the DNS, the atmospheric CLS is larger than the atmosphere depth of O(10 km) because of complexities such as nonzero fluid compressibility, gravitation, Earth's rotation, and boundary conditions.
Similarly to the DNS, strong intermittency is observed at the CLS wavenumber, in addition to at the dissipation scales (Fig. 10(b)). Large structures are also observed in the fields of the wind increment and the Lyapunov exponents (Figs. 11(a) and (b)), in accord with the DNS of F3DF. Here, the large structures are synoptic-scale structures, e.g., tropical cyclones. Despite the various complexities of the atmospheric system, the structure of the intermittency shows remarkable similarities to the DNS results.

Discussion

The conclusion that the chain-like colossal structures of assembled vortices, namely the SOV, are induced at a relatively small CLS appears to break causality, because energy cascades from large to small scales. Our reasoning is as follows. A fraction of the vertically oriented vortices governing the 2D turbulence at scales larger than the CLS are transformed in the energy-cascade process into horizontally oriented vortices around the CLS. There, the modes with (k_h, k_z) = (k_cr, 0) are scattered via mode coupling into same-energy modes with k_h ~ 0 and k_z = k_cr (Fig. 12), which necessarily generates a large horizontal structure because k_h ~ 0. The resultant colossal SOV structure develops a dominant rigid backbone, even at small scales, because the calm areas generated and left behind at the CLS remain intact throughout the later cascade process.

We now discuss the mode-coupling process in more detail. Vortices in the turbulence are generated in the cascade from large-scale structures to smaller ones through the dynamics of vortex breakup and energy flow from low to high wavenumbers. In the cascade down to the CLS, the vortices are only 2D-like because the vertical wavenumber k_z is always zero. The 3D vortex structure (horizontally oriented vortices) emerges only when a mode whose wavenumber is larger than the CLS is involved in the breakup process. Note that a horizontally oriented vortex contains fluid flow in opposite directions at two points with the same x, y coordinates but different z coordinates. In the Fourier analysis, this means that a mode with nonzero k_z must be involved. For instance, the vortex with a size of L_z (the largest 3D vortex) mainly contains the mode with k_z = k_cr, which is the smallest nonzero wavenumber in the z direction. Therefore, it is clear that the 3D vortices are generated from the breakup of 2D vortices in the cascade beyond the CLS. During the generation of horizontally oriented 3D vortices, momentum conservation must be satisfied by the mode coupling in the Navier-Stokes equations, such as in the scattering process k1 = k2 + k3. Here, k1 = (k_h1, k_z1) satisfies |k_h1| ~ k_cr and k_z1 = 0, contributing in real space to the mode of vertically oriented vortices with a radius on the scale of 1/k_cr. The scattered modes k2 = (k_h2, k_z2) and k3 = (k_h3, k_z3) must satisfy horizontal momentum conservation, k_h1 = k_h2 + k_h3, whereas k_z2 = −k_z3 = k_cr (see Fig. 12). The modes k2 and k3 both contribute to horizontally oriented vortices. The process with |k_h3| ≪ k_cr (and, equivalently, |k_h2| ~ k_cr) contributes to the colossal structure formation with length scale 1/|k_h3|. These processes generate the horizontally oriented vortices via mode coupling in the phase space expanded into the z direction. These mode-coupling processes are clearly a consequence of the nonlinear coupling of the momenta in the Navier-Stokes equations. Further discussion on mode coupling is given in Appendix E. The interscale mode transfer from the CLS to larger length scales, k_h ~ k_z ~ 0, also exists.
However, the energy transfer is expected to be small. It is intriguing to clarify its role in the formation of the SOV structure in the future. Since the forcing wavenumber k_in introduces another characteristic large length scale, k_in may be involved in determining the SOV structure through the mode coupling, in addition to the horizontal system size itself. The involvement of k_in in the structure and dynamics of the SOV is an interesting subject left for future study.

The self-similarity assumed by Kolmogorov [2] asserts that space is active everywhere, filled with vortices via the self-similar energy cascade. If the fluid is isotropic and 2D, the approximate Kolmogorov law (and self-similarity) results from the nature of the vortex breakup, where after every breakup the resultant broken-up vortices fill almost the whole 2D space at each wavenumber (Fig. 13). As a result, real space is active everywhere, and thus there is no prominent intermittency. The same is true for an isotropic 3D fluid. Intermittency is generated by the breakdown of the self-similar cascade in the dissipative range and in the anomalous-scaling range, where the active eddies break up into small eddies that fill only a fractal dimension smaller than the real spatial dimension [1,9]. This generates contrasting active (extreme) and calm (inactive) regions. However, the effect of this intermittency, restricted to the dissipative range or the anomalous-scaling range for isotropic fluids, is small in presently available simulations because the involved energy is limited. In contrast, when the space is suddenly expanded into the third dimension at the CLS of the F3DF by the inclusion of nonzero k_z, vortices whose size is smaller than the CLS are unable to fill the whole real space anymore, because some modes escape into the 3D vortices, with the result that part of the space remains calm, without small 2D vortices (Fig. 13). Such an expansion completely destroys the approximate 2D self-similarity [2,9,28]. The intermittent structure emerges from the contrast between the region filled with vortices (active region) and the empty region (calm region). It is important that this mechanism of intermittency does not require the presence of dissipation and can be effective deep inside the inertial range. The CLS can also be much larger than the length scale of prominent anomalous scaling, and the CLS intermittency can involve much larger energies, as in the present simulation. This is a novel route to the breakdown of the self-similarity assumed by Kolmogorov. This mechanism is supported by the above-mentioned intermittency in 3D-like quantities, such as the horizontal vorticity and the vertical velocity, and by its absence in essentially 2D-like quantities.

Here we discuss the origin of the chaotic behavior at the SOV. Since the motion of the SOV structure is slow and continuous, as observed in Supplementary Movie S1, the global motion of the SOV may not be the origin of the chaos. We speculate that the enhanced Lyapunov exponent is caused by the internal dynamics of the SOV. Inside the SOV, vortices are actively created and annihilated through their interactions and collisions. Dynamical processes of nonlinear excitations such as vortices and solitons are known to cause chaos [29]. The present study provides new insight into intermittency, which had mainly been explored before in connection with dissipation and anomalous scaling.
A new route to strong intermittency is opened by the suddenly expanded phase space at the dimensional crossover. The mechanism of the intermittency generated at the CLS and the resultant SOV, together with the enhanced Lyapunov exponent, may have a deep impact on, and connection to, synoptic structure formation and extreme weather. This opens important future research areas concerning the origin and mechanism of climate and weather dynamics, as well as of atmospheric phenomena that cause disasters on the Earth. Simulations with restricted grid resolutions that do not fully cover both the 2D-like and 3D-like regions would fail to capture this dominant crossover-scale intermittency. For instance, climate models that assess extreme weather are required to resolve the present CLS intermittency well; that is, they need to use grid spacings of a few tens of kilometers or finer to resolve the O(100 km) CLS.

Fig. 3 PDF of the horizontal velocity u_x (red curve with red squares) and of Δu_x(L_z/2, 0) (blue curve with blue circles) for N4096-λ64, with the Gaussian distribution shown for reference (gray). Here and in the following figures, the abscissae of the PDFs and the color contours are normalized by each standard deviation (σ). Δu_x(L_z/2, 0) shows a wider tail, meaning stronger intermittency.

Fig. 8 Distributions of the horizontally oriented vorticity (color contour) at three vertical walls whose top-view coordinates are indicated by black dashed lines in Fig. 6(a), together with the color contour at the bottom plane at z = 0 for N4096-λ64. The enstrophy is also plotted as a white 3D iso-surface. The threshold of the white iso-surface was set at m + 3σ, where m and σ are the mean and the standard deviation, respectively. The vertical real-space scale is doubled. For the vorticity, calm regions are colored green in the planes, and active regions are colored red or blue (as in the color scale bar). The large structures (SOV) consist of a mass of assembled tiny active spots and specks with sizes comparable to or smaller than the CLS. The bottom-plane cross section corresponds to the cross section illustrated in Fig. 6(b), and the active SOV structure has a one-to-one correspondence. See also Movie S2 for the temporal evolution of the structures.

Fig. 12 Schematic illustration of the formation mechanism of the colossal structure. Vector k3, whose horizontal wavenumber is small, is generated via mode coupling that satisfies momentum conservation.

Fig. 13 Schematic illustration of the emergence of the intermittency by phase-space expansion at the 2D-3D crossover. In the 2D cascade, the self-similarity is approximately satisfied and the broken-up smaller vortices fill the space again. However, at the 2D-3D crossover, the active area is unable to fill the whole phase space anymore because of the expansion of the phase space in the z direction. This generates active and calm regions with intermittency.

APPENDIX B Three-dimensional homogeneous isotropic turbulence

For comparison, the data from a DNS of 3D homogeneous isotropic turbulence (HIT) simulated with 512 × 512 × 512 grid points [30] were analyzed. The Taylor-microscale-based Reynolds number was 210.

Flatness of Gaussian-filtered velocity

Here, the flatness of the velocity obtained by the Gaussian filter with width 1/r centered at k = 0 in momentum space is shown.

Role of mode coupling in large SOV structure generation with intermittency

Velocities band-pass-filtered at k_cr exhibit strong intermittency, as shown in Fig. 5(a). There are multiple modes with |k| = k_cr, e.g., k = (0, 0, ±k_cr) and (±k_cr, 0, 0).
Figure E1 shows the role of mode coupling in the intermittency at |k| = k_cr. The PDFs of the velocities that contain a single mode with |k| = k_cr exhibit Gaussian-like distributions (Figs. E1(a) and E1(b)), while those containing contributions from multiple modes clearly exhibit non-Gaussian distributions (Fig. E1(c)). If the different modes followed independent Gaussian distributions, however, their linear combinations would also follow a Gaussian distribution. Therefore, the non-Gaussianity supports the existence of correlations between the different modes. Mode coupling is essential for the strong intermittency at the CLS, as discussed in the main text.
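To make this Appendix E analysis concrete, the following hedged NumPy sketch shows one way to retain either a single Fourier mode or a pair of coupled modes with |k| = k_cr and to quantify the departure from Gaussianity of the resulting real-space field. The grid, the value of k_cr, and the random stand-in field are placeholders, not the simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)
u_x = rng.normal(size=(64, 64, 16))     # stand-in horizontal velocity field
U_hat = np.fft.fftn(u_x)
k_cr = 4                                # illustrative crossover wavenumber index

def mode_velocity(U_hat, modes):
    """Keep only the listed integer Fourier modes (and their conjugates)."""
    out = np.zeros_like(U_hat)
    for kx, ky, kz in modes:
        out[kx, ky, kz] = U_hat[kx, ky, kz]
        out[-kx, -ky, -kz] = U_hat[-kx, -ky, -kz]   # keep the field real
    return np.real(np.fft.ifftn(out))

def excess_kurtosis(f):
    f = f - f.mean()
    return np.mean(f**4) / np.mean(f**2) ** 2 - 3.0   # 0 for a Gaussian field

single = mode_velocity(U_hat, [(0, 0, k_cr)])                  # one |k| = k_cr mode
multi = mode_velocity(U_hat, [(0, 0, k_cr), (k_cr, 0, 0)])     # coupled modes
print(excess_kurtosis(single), excess_kurtosis(multi))
```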
Fault Diagnosis Method of Box-Type Substation Based on Improved Conditional Tabular Generative Adversarial Network and AlexNet : To solve the problem of low diagnostic accuracy caused by the scarcity of fault samples and class imbalance in the fault diagnosis task of box-type substations, a fault diagnosis method based on self-attention improvement of conditional tabular generative adversarial network (CTGAN) and AlexNet was proposed. The self-attention mechanism is introduced into the generator of CTGAN to maintain the correlation between the indicators of the input data, and a large amounts of high-quality data are generated according to the small number of fault samples. The generated data are input into the AlexNet model for fault diagnosis. The experimental results demonstrate that compared with the SMOTE and CTGAN methods, the dataset generated by the self-attention-conditional tabular generative adversarial network (SA-CTGAN) model has better data relevance. The accuracy of fault diagnosis by the proposed method reaches 94.81%, which is improved by about 11% compared with the model trained on the original data. Introduction As a crucial piece of equipment in the power system's transmission and distribution chain, the box-type substation performs a vital role in voltage regulation and electricity distribution, with widespread applications in urban and rural areas, industrial and mining enterprises, and public buildings.Because the box-type substation is mainly installed outdoors, its operating environment is complex and changeable, making it very susceptible to damage from natural factors and external forces.Therefore, combined with its own internal equipment diversity, there will be a variety of fault problems in the operation process, leading to various challenges in maintenance and management.This undoubtedly poses a severe challenge to the stability and reliability of the power supply system, directly affecting the safety of daily electricity use and the production efficiency of enterprises.Therefore, timely, effective, and reliable health monitoring of box-type substations is of great significance for the safe operation of the power system. At present, the traditional manual inspection mode requires that the inspection personnel have certain prior knowledge and experience.Furthermore, the box-type substation structure is complex, and there are numerous components, which makes the inspection task extremely challenging.In addition, traditional regular inspections have inherent lags, which not only seriously reduce work efficiency and increase unnecessary costs but also make it difficult to detect and troubleshoot hidden faults in time.Therefore, the research of fault diagnosis technology has gradually become a research hotspot. 
Fault diagnosis technology aims to identify both the normal and abnormal conditions of equipment, whether globally or locally, by monitoring and analyzing its operational status.In the case of malfunction, the technology can also classify the fault and pinpoint the faulty component accurately.Currently, the mainstream fault diagnosis techniques primarily include methods based on physical models, statistical models, and artificial intelligence.Among them, fault diagnosis technology based on deep learning has received widespread attention due to its high diagnostic accuracy and the popularity of data acquisition technology, without needing a deep understanding of the physical model of the diagnostic object.However, deep learning-based fault diagnosis methods face a challenge in practical applications: they rely on massive data accumulation.Since box-type power distribution equipment spends most of its time in normal operating conditions, fault samples are scarce, resulting in an imbalance between healthy samples and fault samples, which can affect diagnostic performance.To address this issue, current research tends to adopt generative adversarial networks (GANs).This approach directly addresses the problems of small sample sizes and class imbalance from the input source layer, simplifying complex data sampling and processing procedures while avoiding the tedious task of building specialized diagnostic models for different diagnostic objects. Therefore, for highly integrated and complex equipment, such as box-type substations, this article utilizes collected historical data of box-type substations to construct a data derivation model based on the improved CTGAN.Replacing the two fully connected layers in the CTGAN generator with self-attention layers transforms the static weights generated in the CTGAN generator into dynamic weights that are free from positional dependencies during data input, enabling better preservation of the correlation between different features.By learning the relationship matrix between input features through the self-attention mechanism, the correlation between different features is maintained, thereby improving the drawback of CTGAN's failure to model the dependency relationships between each feature.This approach generates more high-quality data from a limited number of faulty samples.By employing the data derivation method, the sample data are enriched, effectively addressing the problem of small samples in the fault diagnosis of box-type substations and thus enabling precise prediction of the equipment status of box-type substations. 
The remainder of this paper is structured as follows.Section 2 presents an overview of the current research status on fault diagnosis and deep learning-based fault diagnosis methods, both domestically and internationally.It further identifies the key research areas and existing shortcomings.In Section 3, the primary faults associated with the research subject, the box-type substation, are analyzed and categorized, laying the foundation for subsequent data analysis.Section 4 elaborates on the fundamental principles of generative adversarial networks and self-attention mechanisms and establishes an SA-CTGAN data derivation model based on these principles, along with a corresponding structural diagram.To assess the model's derived data performance in future applications, Section 5 introduces the AlexNet fault diagnosis model, detailing its network architecture.Subsequently, Section 6 designs a comparative experiment through case studies to evaluate the model's performance.Finally, Section 7 offers concluding remarks on the overall research. Current Research Status of Fault Diagnosis Methods Fault diagnosis technology is a technique that monitors and analyzes the operational status of equipment to determine whether it is functioning normally or abnormally in its entirety or specific parts.It categorizes the abnormalities and faults that occur in the equipment and pinpoints the faulty components.Currently, the mainstream fault diagnosis technologies are mainly divided into physical model-based diagnosis methods [1], statisticsbased diagnosis methods [2], and artificial intelligence-based diagnosis methods. (1) Physical models The diagnosis method based on a physical model usually has high diagnostic accuracy but lacks universality.It requires that the mathematical model of the object system be known, and as the structures of various equipment become increasingly complex and inte-grated, it is difficult to establish accurate mechanistic models.Therefore, the development and promotion of fault diagnosis methods based on physical models have been limited to a certain extent, and there is also less related research. (2) Statistical models The statistical model-based fault diagnosis method necessitates neither a profound comprehension of the equipment or the system's structure and principles nor the establishment of intricate mechanisms or mathematical models, thus exhibiting high universality.However, it lacks clarity in the physical significance of diagnosed faults, offers limited interpretation, and possesses slightly lower diagnostic accuracy compared to methods rooted in physical models.The diagnosis method based on the artificial network can excavate the fault knowledge contained in the data by analyzing massive amounts of data and self-learning to realize fault diagnosis, which has stronger explanatory properties than the first two diagnostic methods. 
(3) Artificial intelligence Fault diagnosis methods based on artificial intelligence can be divided into fault diagnosis methods based on expert systems [3], diagnosis methods based on shallow machine learning [4,5], and fault diagnosis methods based on deep learning [6,7].The diagnosis method based on the expert system uses expert knowledge and experience to form a knowledge base, so the diagnostic model has the judgment ability similar to that of experts and can take into account the uncertain factors in the future and the special situation of the diagnostic object, but it requires a large amount of knowledge accumulation and revision, and it is difficult to establish a perfect diagnostic knowledge base.Both shallow machine learning and deep learning-based fault diagnosis methods rely on their feature extraction capabilities to mine the hidden information from the data to complete the fault diagnosis work, but with the increasing amount and dimension of data, the deep learning method has better performance than the shallow machine learning method [8].Benefiting from the massive device state detection data and the rapid development of artificial neural networks, deep learning has been widely used in the field of fault diagnosis due to its excellent feature learning ability. The above research indicates that due to the lack of the need for a deep understanding of the precise physical model of the diagnostic object or system and the widespread application of data acquisition technology, fault diagnosis techniques based on deep learning have garnered the most widespread attention in related fields due to their high accuracy.Therefore, this article takes the box-type substation, a key piece of equipment in the distribution network, as an example to conduct research on fault diagnosis methods for distribution network equipment based on deep learning methods. Research Status of Small Sample Issues in Fault Diagnosis Deep learning-based fault diagnosis methods rely heavily on vast amounts of data accumulation.However, in practical application scenarios, equipment often operates normally under most conditions, resulting in a scarcity of fault samples.This imbalance between fault samples and healthy samples leads to a decline in the performance of deep learning-based fault diagnosis methods.In response to this issue, numerous scholars have proposed various solutions. 
(1) Research on methods based on data preprocessing and model structure Some scholars adopted sampling technology to solve the problems of sparse input data and class imbalance in diagnostic models, which effectively improves the diagnostic performance of the model [9,10].However, sampling technology has the potential to alter the distribution of the original data-set, resulting in distortion of the model, which will reduce the accuracy of fault diagnosis.Jia [11] designed a new learning mechanism to train the deep neural network by improving the loss function so that the deep neural network can maintain the accurate feature representation driven by the consistency of trend features and ensure the accurate fault classification driven by the consistency of the fault direction.The accuracy of this method can reach about 90% with only 100 samples polluted by strong noise.Zhang [12] proposed a compact convolutional neural network fault diagnosis model based on multi-scale feature extraction.This model utilized the multi-scale feature extraction unit to extract fault features of different time scales and comprehensively analyze them through the compact neural network, allowing for the extraction of more sensitive features with relatively shallow structures.This improvement led to enhanced diagnostic accuracy under conditions of small samples.Zhao [13] added a classification branch to the Siamese network, replaced the Euclidean distance measurement with a network measurement, and constructed an improved fault diagnosis model based on the Siamese neural network, consisting of a feature extraction network, a relationship measurement network, and a fault classification network.The similarity of the extracted features is measured by the relationship measurement network, which effectively guarantees the accuracy of fault diagnosis in the case of small samples.Xu [14] introduced a vision transformer model that incorporates multi-information fusion and leverages a time frequency representation graph.This model first decomposes the original vibration signal into various sub-signals of different scales through a discrete wavelet transform.Subsequently, it converts these sub-signals into time-frequency representation graphs using a continuous wavelet transform.Finally, the model serially inputs these graphs into its framework for accurate fault diagnosis.The experimental results show that this method can diagnose the fault of small sample bearing and has strong universality and robustness.Chen [15] combined wavelet and depthwise separable convolutional neural networks to design a few-parameter branch for time-frequency feature extraction.This branch captured fault features from a limited number of samples to realize fault diagnosis under small samples together with regular convolution. 
(2) Research on methods based on transfer learning The process of dealing with small samples and class imbalanced problems by data preprocessing and improved neural networks is often complex and less versatile.The rise of transfer learning [16] provides a new direction for solving this problem.Liu [17] introduced a generalized transfer framework equipped with evolutionary capabilities, aimed at tackling the challenge of limited fault samples in industrial process fault diagnosis.The framework employs a transfer learning strategy combined with the adaptive mixup method to adaptively expand the fault samples to ensure the number and diversity of extended samples and uses the transformation matrix as the evolutionary channel to reduce the diagnostic error with the increase in fault samples without retraining the framework.Based on simulation data, Dong [18] proposed a fault diagnosis method combining convolutional neural networks and parameter transfer strategies, which avoids the problem of diagnosis accuracy caused by insufficient model training under small samples.Fu Song [19] constructed an engine fault diagnosis framework combining deep auto-encoders with transfer learning.The framework uses a deep auto-encoder to establish an engine fault feature extraction model with sufficient samples and transfer learning to extract features in small samples, using a support vector machine as a classifier to complete fault classification of small samples.Zhang [20] used a global average pooling layer instead of the fully connected layer to reduce the number of parameters to be trained in the convolutional neural network.Based on the improved transfer learning method of pre-training and fine tuning, it avoids the problem of overfitting in the case of small samples and the fault diagnosis task in the same scenario.The classification accuracy of the method was 92.25% when fine tuning was performed with 1% of the training set data in the target domain.Xiao [21], based on the transfer learning framework, added a large amount of source data with different distributions as training data to the target data and used the convolutional neural network as the base learner to update the weights of the training samples by employing the improved Tr AdaBoost algorithm.This formed a high-performance diagnostic model, improving the diagnostic accuracy in case of insufficient data in the target domain. 
(3) Research on methods based on generative adversarial learning Transfer learning has a significant effect on the fault diagnosis of small samples, but it is difficult to find a suitable adaptive source domain for fault diagnosis knowledge transfer in equipment with complex structures (for example, the lack of fault data is common in the boxtype substation targeted in this paper).With the emergence of GANs [22], more and more scholars have been focusing on the input source layer to solve the problem of fault diagnosis of small samples.Some scholars [23] expanded the bearing vibration signal of small samples using WGAN with a gradient penalty as the data generation model and used the expanded samples as the input of the self-attention convolutional neural network for fault diagnosis.This effectively improved the accuracy of bearing fault diagnosis under small samples.The scholars in [24] proposed a fault diagnosis method combining a generative adversarial network with transfer learning, which used a generative adversarial network to generate dummy samples with similar fault characteristics to actual engineering monitoring data and then introduced domain adaptation regular term constraints in the residual network training process to form a deep transfer fault diagnosis model.This effectively addressed the problem of low accuracy of the fault diagnosis model caused by insufficient available data of mechanical equipment and large data distribution differences under multiple working conditions in practical applications.Huang [25] introduced a dropout layer into the auxiliary classifier generative adversarial network (AC-GAN) to prevent the model from generating duplicate samples and added a convolutional layer to the AC-GAN discriminant to improve the anti-noise ability of the discriminator.This was performed to enhance the performance of the auxiliary classification generative adversarial network and generate a large number of high-quality fault samples.This approach solves the problem of a low fault recognition rate in the case of small samples.XU [26] introduced conditional constraints to the semi-supervised generative adversarial networks and optimized the loss function to enhance its guidance for the generator and discriminator, thereby improving the generative adversarial network.The generative model and semi-supervised learning ability of the model were utilized to solve the problem of insufficient data samples and sample labeling in fault diagnosis.Zhang [27] proposed a multi-module generative adversarial network augmented with an adaptive decoupling strategy.This strategy uses an adaptive learning method to update the initialized random noise of the generator, enabling it to obtain a better combination for generating samples.Additionally, a reconstruction module provides stronger constraints for the generator, which greatly improves the quality of the generated samples. 
Based on the above research, it can be seen that the solution utilizing generative adversarial networks can directly address the issues of small sample size and class imbalance in fault diagnosis from the input source layer, reducing the complexity of data sampling and processing procedures. It also avoids the complicated process of building specific diagnostic models for different diagnostic objects. Consequently, focusing on complex integrated equipment such as box-type substations, this project constructs a data derivation model based on generative adversarial networks, aiming to solve the problem of small sample size in the fault diagnosis of a box-type substation.

To address the challenge of training a high-performance fault diagnosis model with small samples, this paper proposes a fault diagnosis method for box-type substations based on an improved CTGAN and the AlexNet network. In this method, the self-attention mechanism is added to the generator of CTGAN. The SA-CTGAN data derivation model is constructed, and the data are enriched and enhanced based on the original samples, particularly those classes with fewer samples. This, in turn, addresses the imbalance between health status data and fault status data categories, as well as the scarcity of fault data, all at the input source level. Finally, the expanded data are used as the input for the AlexNet fault diagnosis model to complete the fault diagnosis task of the box-type substation.

Main Fault Analysis of the Box-Type Substation

The box-type substation, also known as a pre-installed substation, is a kind of distribution transformer. It is a factory-prefabricated indoor and outdoor compact distribution device arranged according to a certain wiring scheme, and it is an organic combination of transformer step-down, low-voltage distribution, and other functions. It is especially suitable for the construction and transformation of urban power grids and has a series of advantages, such as strong completeness, small size, minimal land occupation, deep penetration into the load center, improved power supply quality, reduced loss, a short power transmission cycle, flexible site selection, strong adaptability to the environment, and convenient installation.

The box-type substation is composed of three parts: a high-voltage room, a transformer room, and a low-voltage room. There are two combinations, as shown in Figure 1. The high-voltage room consists of a high-voltage incoming cabinet, a high-voltage meter, and a high-voltage feeder cabinet. The dry-type transformers are generally placed in transformer rooms. The low-voltage room is composed of a low-voltage incoming cabinet, a capacitor compensation device, and a low-voltage outgoing cabinet. The layout and structure are shown in Figures 1 and 2, respectively.

To facilitate the timely location of fault components in the box-type substation, based on the overall structure and common faults of the box-type substation, the health state type can be divided into seven categories: normal operation F1, high-voltage circuit breaker fault F2, high-voltage arrester fault F3, dry-type transformer fault F4, low-voltage incoming circuit breaker fault F5, low-voltage outgoing circuit breaker fault F6, and capacitor arrester fault F7. On this basis, fault diagnosis research is carried out, and 24 indicators, as shown in Table 1, are collected as data support for data mining.
Research on the Data Derivation Method Based on CTGAN

GANs are typical data generation methods used to address issues such as small sample sizes or unbalanced data categories. They generate high-quality samples through adversarial competition between their generative network and discriminative network, but they are currently mainly applied to image-based data. CTGAN is a variant of GAN that can model and sample the distribution of tabular data. CTGAN overcomes long-tail and multi-mode distributions by taking advantage of normalization across patterns and by designing a condition generator that is trained by sampling to deal with unbalanced discrete columns and to generate high-quality tabular data. The box-type substation monitoring data collected in this paper have the same properties and characteristics as the tabular data handled by CTGAN, so this paper establishes a data-derived model based on CTGAN. Because CTGAN does not sufficiently model the relationships between the features of high-dimensional samples, the correlation between the dimensions of the generated samples cannot be maintained; therefore, this paper introduces the self-attention mechanism into the generator of CTGAN to maintain the coupling relationship between features and establishes an SA-CTGAN data-derived model to enhance the original data and improve the accuracy of fault diagnosis.

Principle of CTGAN

To complete the task of generating tabular data, CTGAN enhances the training process through normalization for patterns and framework changes for patterns, and it solves the problem of data imbalance using conditional generators and sampling training. By combining Gaussian mixture models with VAE, CTGAN is capable of learning the latent representations of data and generating new tabular data samples. This combined approach helps solve the problems of data encoding and generation and improves the sample efficiency and quality of the model.

CTGAN consists of two models that present a competitive game relationship: the generative model G, which captures the distribution of the data, and the discriminative model D, which estimates the probability of a sample coming from the original data. The G network generates fault samples by transmitting random noise through a multi-layer perceptron, and the D network is also composed of a multi-layer perceptron, learning and judging whether the samples come from the model distribution or the original data distribution. Under the definition of G and D by the multi-layer perceptron, the whole system can be trained by the backpropagation mechanism, and the two achieve an antagonistic game balance. In addition, there is an encoder that models the raw data, and a classifier trained on the raw data to better interpret the semantic integrity of the data. The CTGAN structure is shown in Figure 3.
The CTGAN training process is as follows:

Step 1: Random noise z and conditional vectors are input into the generator to generate data G(z) in the specified format;

Step 2: The original data sample x is modeled through the encoder and input into the discriminator together with the generated data G(z) and the conditional vector;

Step 3: The discriminator distinguishes the original data sample x and the generated data sample G(z), respectively, and then updates the weights of the discriminator D through the backpropagation of the loss function; that is, the discriminator continuously improves its ability to discriminate generated data samples;

Step 4: According to the output of the discriminator, the parameters of the generator G are constantly adjusted; that is, the ability of the generator to generate data is improved, making the generated data as consistent as possible with the original data so that the discriminator cannot correctly discriminate them;

Step 5: Repeat Step 1~Step 4 until the loss function of the discriminator converges within a certain number of iterations, and then stop training.
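For readers who want to reproduce the baseline workflow of Steps 1-5 before the self-attention modification, the open-source `ctgan` package offers an off-the-shelf CTGAN implementation. The sketch below is only illustrative: the toy monitoring table, the column names, and the epoch count are assumptions and not the data or settings used in this study.

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN  # pip install ctgan

# Toy stand-in for the monitoring table: 24 numeric indicators plus a fault label.
rng = np.random.default_rng(0)
raw = pd.DataFrame(rng.normal(size=(700, 24)),
                   columns=[f"indicator_{i + 1}" for i in range(24)])
raw["fault_type"] = rng.choice([f"F{i}" for i in range(1, 8)], size=700)

model = CTGAN(epochs=10)                       # a few epochs just for illustration
model.fit(raw, discrete_columns=["fault_type"])

synthetic = model.sample(1400)                 # augmented table in the same schema
print(synthetic.head())
```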
Principle of the Self-Attention Mechanism

In high-dimensional data, there is often a certain correlation between different dimensions. When mining key features, the influence of other features on this correlation cannot be ignored, so the self-attention mechanism needs to be used. Self-attention allows each unit to capture the overall information, while different units can be calculated or processed in parallel; that is, self-attention finds the relationship between features and considers whether one feature has an impact on another. The basic principle is shown in Figure 4, and the workflow for self-attention is shown in Figure 5. The thought steps are as follows:

Step 1: Transform the input X through the linear transformation matrices W_q, W_k, and W_v into Q, K, and V, where Q is the query vector, K is the key vector, and V is the value vector.

Step 2: Calculate the similarity α_{i,j} by the dot product operation of Q and K.

Step 3: Apply SoftMax normalization to the similarity obtained in Step 2: α̂_{i,j} = exp(α_{i,j}) / Σ_j exp(α_{i,j}).

Step 4: Calculate the comprehensive output B of each unit after self-attention.
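The four steps above can be condensed into a few lines of NumPy, as in the sketch below. The 1/sqrt(d_k) scaling of the similarity is the common convention and is an addition here, and the matrix sizes (24 indicator units, width 8) are illustrative rather than the paper's settings.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the feature units in X.
    X: (n_units, d_in); Wq/Wk/Wv: (d_in, d_k) learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # Step 1: project to Q, K, V
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # Step 2: similarity alpha_{i,j}
    alpha = softmax(scores, axis=-1)          # Step 3: SoftMax normalization
    return alpha @ V                          # Step 4: combined output B

# Toy example: 24 indicator embeddings of width 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(24, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
B = self_attention(X, Wq, Wk, Wv)   # shape (24, 8)
```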
SA-CTGAN Data-Derived Model

Although CTGAN can generate data based on conditional vectors through the classifier and can capture the general distribution of each variable well through the encoder, it does not model the dependency relationships among the features. It only captures possible connections between the features through two fully connected hidden layers in the generator, which is ineffective because there is a strong correlation among the indicators in the monitoring dataset of box-type substations. Using CTGAN to generate fault samples for box-type substations may therefore produce suboptimal results. The weights of the fully connected layers in the CTGAN generator are determined based on position, meaning that the weight generation is static. In contrast, the weight generation of self-attention is dynamic, which frees it from positional dependency during data input and better maintains the correlation among different features. This paper introduces self-attention into the generator G to construct the SA-CTGAN model. Specifically, it replaces the two fully connected layers in the generator of CTGAN with self-attention layers. The model can learn the relationship matrix between the input features through the self-attention mechanism to maintain the correlation between different features and make the data generated by CTGAN closer to the real data. The structure of the SA-CTGAN data-derived model is shown in Figure 6.
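A minimal Keras sketch of this idea is given below: the two fully connected hidden layers of the generator are replaced by self-attention blocks acting over per-indicator tokens. The layer sizes, token layout, and output head are assumptions for illustration only; this is not the paper's exact architecture, and the real CTGAN generator additionally uses residual connections and mode-specific output activations.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_sa_generator(noise_dim=128, cond_dim=7, n_features=24, d_model=16):
    z = layers.Input(shape=(noise_dim,), name="noise")
    c = layers.Input(shape=(cond_dim,), name="condition")
    h = layers.Concatenate()([z, c])
    # Embed the latent vector as n_features "tokens" of width d_model.
    h = layers.Dense(n_features * d_model)(h)
    h = layers.Reshape((n_features, d_model))(h)
    for _ in range(2):  # two self-attention blocks replacing the two FC layers
        attn = layers.MultiHeadAttention(num_heads=4, key_dim=d_model)(h, h)
        h = layers.LayerNormalization()(h + attn)
    out = layers.Dense(1)(h)        # one generated value per indicator token
    out = layers.Flatten()(out)
    return tf.keras.Model([z, c], out, name="sa_generator")

gen = build_sa_generator()
gen.summary()
```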
AlexNet Fault Diagnosis Model

AlexNet is a classical convolutional neural network model that can extract and classify deep features, and it is widely used in the field of fault diagnosis. AlexNet uses the ReLU activation function instead of Tanh and Sigmoid to speed up training, solving the gradient-vanishing problem of deep networks. At the same time, AlexNet uses overlapping maximum pooling operations to avoid the fuzzy effect of average pooling, with a step size smaller than the size of the pooling kernel so that features can be extracted in more detail. In addition, AlexNet uses Local Response Normalization (LRN) to create a competition mechanism for the activity of local neurons, making neurons with larger responses more active and inhibiting those with less feedback, thereby enhancing the generalization ability of the model.

In this paper, the box-type substation fault diagnosis model is established based on the AlexNet network model, which comprises a total of eight layers: five convolutional layers and three fully connected layers. Finally, the samples are classified by the SoftMax classifier, as shown in Figure 7. One-dimensional convolution and pooling layers are used to build the AlexNet network, and TensorFlow 2.0 is used to build the network model for completing the fault diagnosis task. The model structure is shown in Table 2. The model is then trained using the Adaptive Gradient Algorithm (Adagrad) as the optimizer and categorical cross entropy as the loss function.

Evaluation Indicators of the Data-Derived Effect

The model was evaluated by calculating the similarity between the generated dataset and the original dataset, and the performance of the model was evaluated from two perspectives: similarity of the data distribution and correlation of the different dimensions.

(1) KL divergence

Kullback-Leibler (KL) divergence, also known as relative entropy, is a metric used to measure the similarity of two probability distributions; it can be used to express the difference or similarity between two distributions and is calculated as follows: D_KL(P‖Q) = Σ_x P(x) log[P(x)/Q(x)]. The smaller the KL divergence, the higher the similarity between P and Q.
(2) Mean Cosine Similarity

Cosine similarity (CS) is the cosine of the angle between two n-dimensional vectors in n-dimensional space, which is equal to the dot product (inner product) of the two vectors divided by the product of their lengths (magnitudes). The cosine similarity between n-dimensional vectors A and B is calculated as follows: CS(A, B) = A·B / (‖A‖ ‖B‖) = Σ_i A_i B_i / (√(Σ_i A_i²) √(Σ_i B_i²)). In this paper, the cosine similarity between the original dataset and the generated dataset of the same category is calculated for each indicator, the cosine similarities are accumulated and averaged, and the data similarity is evaluated by the mean cosine similarity, which is calculated as follows: Similarity = (1/n) Σ_i CS(RAW_i, GEN_i), where RAW_i is the ith indicator vector of the original data and GEN_i is the ith indicator vector of the generated data. The value of Similarity ranges over [−1, 1], with −1 being completely different and 1 being completely similar.

(3) Cumulative deviation of the correlation coefficient

The correlation coefficient is a statistic proposed by the statistician Pearson to measure the degree of linear correlation between two random variables. It is defined as the covariance of the two variables divided by the product of their standard deviations: ρ_{X,Y} = cov(X, Y) / (σ_X σ_Y). The correlation coefficient matrices of the original dataset and the generated dataset are calculated, and then the cumulative deviation of the correlation coefficients of the generated dataset relative to the original dataset is calculated by Equation (8): Dev = Σ_i Σ_j |ρ_{X_i,X_j,RAW} − ρ_{X_i,X_j,GEN}|, where ρ_{X_i,X_j,RAW} is the correlation coefficient between dimensions X_i and X_j in the original data and ρ_{X_i,X_j,GEN} is the correlation coefficient between dimensions X_i and X_j in the generated data. The smaller the cumulative deviation of the correlation coefficient, the more similar the correlations between the different dimensions of the generated dataset and the original dataset.

(4) Heatmap SSIM metric

A heatmap is a way to express the correlation of the different dimensions in a dataset in the form of an image, in which the magnitude of the correlation is described by the values of the different RGBA components. Therefore, the dimensional correlation between the generated dataset and the original dataset can be evaluated by comparing the heatmaps of the original dataset and the generated dataset. The SSIM (Structural Similarity Index Measure) is an index used to measure the similarity of images, and it consists of three parts: luminance, contrast, and structure. The SSIM of images x and y can be defined as follows: SSIM(x, y) = [(2µ_x µ_y + C_1)(2σ_xy + C_2)] / [(µ_x² + µ_y² + C_1)(σ_x² + σ_y² + C_2)], where µ_x and µ_y are the average gray levels of the images x and y, σ_x and σ_y are the standard deviations of the gray levels of x and y, σ_xy is their covariance, C_1 = (K_1 L)², C_2 = (K_2 L)², and C_3 = C_2/2; by experience, K_1 = 0.01 and K_2 = 0.03, and L is the dynamic range of the pixel values. The range of the SSIM is [0, 1], and the larger the value, the higher the similarity between the two images; that is, the closer the correlation between the original dataset and the generated dataset in different dimensions. A brief computational sketch of these four indicators is given below.

Comparative Analysis of Data-Derived Models

To verify the performance of SA-CTGAN, a box-type substation acquisition system was established for enterprise A, and a total of 700 records of different fault data and normal operation data were randomly selected from the database to form a small-sample unbalanced dataset. The SMOTE model, the CTGAN model, and the SA-CTGAN model were used for data derivation experiments.
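The following hedged sketch shows how the four indicators of Section 6.1 might be computed between an original table and a generated table. The histogram binning for the KL divergence and the SSIM data range are assumptions, and `scikit-image` is used only as a convenient SSIM implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def kl_divergence(raw_col, gen_col, bins=30):
    """KL divergence between histograms of one indicator column."""
    lo = min(raw_col.min(), gen_col.min())
    hi = max(raw_col.max(), gen_col.max())
    p, _ = np.histogram(raw_col, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(gen_col, bins=bins, range=(lo, hi), density=True)
    p, q = p + 1e-12, q + 1e-12          # avoid log(0)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def mean_cosine_similarity(raw, gen):
    """Average cosine similarity over indicator columns; assumes the two
    arrays have the same number of rows (e.g., sample one down to match)."""
    sims = [np.dot(raw[:, i], gen[:, i])
            / (np.linalg.norm(raw[:, i]) * np.linalg.norm(gen[:, i]))
            for i in range(raw.shape[1])]
    return float(np.mean(sims))

def corr_cumulative_deviation(raw, gen):
    """Sum of absolute differences between the two correlation matrices."""
    return float(np.abs(np.corrcoef(raw, rowvar=False)
                        - np.corrcoef(gen, rowvar=False)).sum())

def heatmap_ssim(raw, gen):
    """SSIM between the correlation matrices treated as gray-scale images."""
    a = np.corrcoef(raw, rowvar=False)
    b = np.corrcoef(gen, rowvar=False)
    return float(structural_similarity(a, b, data_range=2.0))  # values in [-1, 1]
```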
For the above three datasets, the number of nearest neighbors of the SMOTE model is set to five. The DE optimization algorithm was used to optimize the number of iterations, the training batch size, and the learning rate of the CTGAN model and the SA-CTGAN model. The amount of expanded data in all models is set to 1400.

The original data and generated data of the high-voltage circuit breaker fault type were selected, and the heatmaps of the raw data and the generated data were drawn, as shown in Figure 8, where (a), (b), (c), and (d) are the original data heatmap, the SMOTE-generated data heatmap, the CTGAN-generated data heatmap, and the SA-CTGAN-generated data heatmap, respectively. Based on the correlation coefficients shown in Figure 8, the correlation coefficient matrix of the data generated by SA-CTGAN is the closest to that of the original data. To quantitatively compare the data generation effects of the three derivation models, the four evaluation metrics constructed in Section 6.1 were used to evaluate the derived data, and the evaluation results are shown in Table 3. Comparing the KL divergence and the mean cosine similarity shows that the CTGAN model and the SA-CTGAN model are better than the SMOTE model in terms of similarity of the data distribution. Comparing the cumulative deviation of the correlation coefficient and the heatmap SSIM metric, the cumulative deviation of the SA-CTGAN model is significantly lower than that of SMOTE and CTGAN, and its heatmap SSIM metric is significantly higher. It follows that the SMOTE model and the CTGAN model are similar in how well they maintain data correlation, while the data generated by the SA-CTGAN model, which introduces the self-attention mechanism, is significantly better than the other two models in terms of correlation similarity; that is, the self-attention mechanism can effectively maintain the correlation between the different indicators.

Furthermore, distribution histograms were drawn to compare the derived data through the distributions of the 24 indicators, as shown in Figure 9, where (a), (b), (c), and (d) are the distributions of the original data, the SMOTE-generated data, the CTGAN-generated data, and the SA-CTGAN-generated data, respectively.
The distribution of the data generated by the three models is roughly similar to that of the original data. Among them, the SMOTE model has the worst generation effect, while the CTGAN model and the SA-CTGAN model perform better. This is because, when positive samples are distributed at the edge of the data space, the sampling-based SMOTE model can only generate data at the edge and therefore cannot solve the problem of distribution marginalization.

Fault Diagnosis Case Analysis of Different Datasets

This section compares the performance of AlexNet using the original data and the data enhanced by the SMOTE, CTGAN, and SA-CTGAN models as inputs. The specific scheme is as follows: the original dataset is divided into a training set, a validation set, and a testing set in a ratio of 7:2:1, which are input into the AlexNet model for training and testing. The datasets enhanced by the three data-derivation models are divided into a training set and a validation set in a ratio of 7:3, while the original data are used as the testing set. An early stopping mechanism based on the validation-set accuracy is applied during training; if the improvement is less than 0.05% over 20 iterations, training is stopped in advance. A minimal sketch of the network and this training setup is given below.
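The text specifies TensorFlow 2.0, Adagrad, categorical cross-entropy, an AlexNet-style network with five one-dimensional convolutional layers and three fully connected layers, and early stopping when validation accuracy improves by less than 0.05% over 20 iterations, but it does not reproduce Table 2 here; the filter counts, kernel sizes, input length of 24 indicators, and number of classes in the sketch below are therefore placeholders, not the authors' exact configuration.

```python
import tensorflow as tf

NUM_INDICATORS = 24   # monitoring indicators per sample (assumed from Figure 9)
NUM_CLASSES = 8       # fault/normal classes (placeholder)

def build_alexnet_1d():
    """1D AlexNet-style network: five Conv1D layers, overlapping max pooling,
    three fully connected layers, SoftMax output."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 3, activation="relu", padding="same",
                               input_shape=(NUM_INDICATORS, 1)),
        tf.keras.layers.MaxPooling1D(pool_size=3, strides=2),   # overlapping pooling
        tf.keras.layers.Conv1D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(pool_size=3, strides=2),
        tf.keras.layers.Conv1D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adagrad(),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping: halt when validation accuracy improves by less than 0.05%
# (0.0005 absolute) over 20 iterations, as stated in the text.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy",
                                              min_delta=5e-4, patience=20)

# Typical use (x/y arrays come from the 7:2:1 or 7:3 splits, labels one-hot encoded):
# model = build_alexnet_1d()
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=300, callbacks=[early_stop])
```

The pool size of 3 with stride 2 mirrors the overlapping max pooling described for AlexNet; the optimizer and loss follow the training setup stated earlier, while the layer widths are illustrative only.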
The specific iteration processes for the four datasets are shown in Figure 10. From the perspective of the iterative process, due to the small amount of data in the original dataset without data enhancement, the early stopping mechanism was triggered after 121 iterations, the loss value of the validation set was always greater than that of the training set throughout the iterations, and there was an obvious oscillation phenomenon. The dataset enhanced using SMOTE exhibited the highest number of iterations, albeit with evident oscillation throughout the training process. In contrast, the dataset enhanced with the CTGAN model terminated training after 147 iterations, displaying significantly less oscillation compared to the first two datasets. The data enhanced by the SA-CTGAN model established in this paper underwent 182 iterations; there are obvious oscillations in the first 50 iterations, but the oscillations gradually disappear afterwards.

Table 4 shows the accuracy for the four datasets. For the original dataset without enhancement, the accuracy of the validation set is about 5% lower than that of the training set, and the accuracy of the testing set is about 6.5% lower than that of the training set. This is due to the lack of training data: the model over-extracts features unrelated to the diagnosis task and only learns patterns specific to the training data, and these patterns are wrong or irrelevant for new data (the validation and testing sets), so the trained model does not generalize. The accuracies of the data-augmented datasets are relatively close across the training, validation, and test sets. Among them, the SA-CTGAN model constructed in this paper achieves the highest accuracy and the smallest gap between the training set and the test set, which indicates that the model trained with the dataset generated by SA-CTGAN generalizes better.

Conclusions

This article takes the fault diagnosis of box-type substations as an example to study and improve the fault diagnosis model under conditions of scarce samples and unbalanced classes, aiming to enhance its prediction accuracy. An improved CTGAN data derivation method based on a self-attention mechanism is proposed, which can take into account the strong correlation between the monitoring data features of box-type substations while deriving and enhancing the samples. It solves the problem of CTGAN being unable to model the dependency relationships between features. The established SA-CTGAN data derivation model can generate enough samples similar to the original data, based on a small amount of data, to support the training of the fault diagnosis model. Furthermore, a box-type substation fault diagnosis model based on AlexNet is established to verify the proposed SA-CTGAN data derivation model. Experimental results show that, compared with the SMOTE and CTGAN data derivation models, the model trained with the dataset generated by the SA-CTGAN model has the best performance. The proposed method can effectively improve the fault diagnosis accuracy, which reaches 94.81%. Compared with the model trained with the original data, the accuracy is improved by about 11%, effectively solving the problem that a high-performance diagnosis model cannot be trained due to the scarcity of box-type substation fault data.

The box-type substation is composed of three parts: a high-voltage room, a transformer room, and a low-voltage room, and there are two combinations of these parts. The high-voltage room consists of a high-voltage incoming cabinet, a high-voltage metering cabinet, and a high-voltage feeder cabinet. The dry-type transformers are generally placed in the transformer room. The low-voltage room is composed of a low-voltage incoming cabinet, a capacitor compensation device, and a low-voltage outgoing cabinet. The layout and structure are shown in Figures 1 and 2, respectively.

Figure 1. The internal layout of the box-type substation.
Figure 2. Overall structure of the box-type substation.
Figure 4. The principle of the self-attention mechanism.
Figure 8. Comparison of raw data and generated data heatmaps: (a) original data distribution; (b) the generated data distribution of SMOTE; (c) the generated data distribution of CTGAN; (d) the generated data distribution of SA-CTGAN.
Figure 9. Distribution comparison of the original data with the generated data: (a) original data distribution; (b) the generated data distribution of SMOTE; (c) the generated data distribution of CTGAN; (d) the generated data distribution of SA-CTGAN.
Figure 10. The iterative process of different datasets: (a) original dataset; (b) dataset enhanced by SMOTE; (c) dataset enhanced by CTGAN; (d) dataset enhanced by SA-CTGAN.
Table 1. Fault indicators of box-type substations.
Table 3. Comparison of model results.
Table 4. Performances of different input datasets.
11,622.8
2024-04-08T00:00:00.000
[ "Engineering", "Computer Science" ]
Preparation of 2-diazo-2-oxopiperidin-3-yl-3-oxopropanoates. Useful reagents for Rh(II)-catalyzed cyclization-cycloaddition chemistry

2-Diazo-2-oxopiperidin-3-yl-3-oxopropanoates containing a tethered indolyl group have been identified as useful intermediates for the Rh(II)-catalyzed cyclization-cycloaddition cascade for the synthesis of the core skeleton of various aspidosperma alkaloids. Several synthetic methods were developed to rapidly construct these important diazo imide substrates using cheap and readily available reagents.

Introduction

In recent years, a widespread upsurge of activity in the stereoselective preparation of highly substituted nitrogen heterocycles, especially structurally complex alkaloids, has occurred. 1 In particular, members of the Aspidosperma alkaloid family have occupied a central place in natural product chemistry because of their diverse biological activity. 2 This family of indole alkaloids contains over 250 members that share in their molecular structure a common pentacyclic ABCDE framework, with the C-ring being of critical importance because all six stereocenters and most of the functionalities are located in this ring. 3 Individual members differ mainly in functionality and stereochemistry. Over the years, efficient and elegant routes to this molecular framework have been developed. 4,5 Our approach to the Aspidosperma skeleton was guided by a long-standing interest in developing new applications of the Rh(II) cyclization/cycloaddition cascade for the synthesis of complex natural products. 6 The generation of onium ylides by a transition-metal promoted cyclization reaction has emerged in recent years as an important and efficient method for the assembly of ring systems that are difficult to prepare by other means. 7 In earlier studies we described the formation of cyclic carbonyl ylide dipoles by a process involving cyclization of an electrophilic metallo-carbenoid onto an adjacent carbonyl group. 8 The general reaction investigated is illustrated in Scheme 1; variations in chain length (n = 0, 1, 2) and nature of the activating group (G) were explored. 9 With limited exceptions, 10 alkyl and aryl ketones were employed and dipole 5 was generated by the rhodium(II)-catalyzed decomposition of the diazoalkanedione in benzene at 80 ºC. 11

Scheme 1

More recently, we became interested in the formation of push-pull dipoles from the Rh(II)-catalyzed reaction of α-diazo imides 12 and noted that a smooth intramolecular 1,3-dipolar cycloaddition occurred across both alkenyl and heteroaromatic π-bonds to provide novel pentacyclic compounds in good yield and in a stereocontrolled fashion. 13,14 Our recent total synthesis of (±)-aspidophytine nicely demonstrates the utility of this cascade methodology for the construction of complex aspidosperma alkaloids. 15 Thus, the Rh(II)-catalyzed reaction of diazo imido indole 7 produced cycloadduct 9 in 97% yield via the intermediacy of the carbonyl ylide dipole 8. The acid lability of cycloadduct 9 was exploited to provide the complete skeleton of aspidophytine in several additional steps (Scheme 2).
Results and Discussion

Several methods for preparing the diazo imides necessary for dipole formation have been explored. One option that we have used involves treating the commercially available 3-carboethoxy-2-piperidone (11) with n-BuLi at -78 ºC followed by the addition of an indole acid chloride such as 12. This results in the joining of the two fragments to give imide 13 in 45% yield. A subsequent reaction of 13 with n-butylmagnesium chloride in THF at 0 ºC followed by the addition of ethyl 2-diazomalonyl chloride 16 afforded the indolyl substituted diazo imide 14 in 59% yield (Scheme 3).

Since the overall yield of diazo imide 14 obtained by this method was somewhat low, we opted to study some alternate procedures to prepare the starting diazo substrates. With this in mind, diazo imide 19 was synthesized in the manner outlined in Scheme 4. 3-Ethyl-2-oxopiperidine-3-carboxylic acid 16 was first prepared in three steps from diethyl ethylmalonate (15). Treatment of 16 with 1,1-carbonyldiimidazole followed by reaction with the dianion of mono-methyl malonate furnished β-ketoester 17 in 60% yield. This compound was then converted to the indolyl-N-acylamide 18 (65%) by reaction with acid chloride 12 using 4 Å molecular sieves as a neutral acid scavenger. Finally, the requisite α-diazo imide 19 was easily obtained from 18 using standard Regitz diazo transfer conditions 17 and was isolated in 90% yield.

Several other 3-substituted diazo imides related to 19 could be prepared according to the reaction sequence outlined in Scheme 5. Deprotonation of the piperidone 11 with 1.1 equiv of n-butyllithium followed by reaction with 2-iodoethyl benzyl ether afforded lactam 20 in 70% yield. The ethyl ester portion of 20 was converted into the methyl 3-oxopropanoate group using a modified Masamune procedure, 18 which furnished β-keto ester 21 in 82% yield. A related sequence of reactions was also used to prepare lactams 23 and 24. Thus, the anion derived from the piperidone 11 was allowed to react with t-butyl bromoacetate together with a catalytic amount of t-butyl ammonium iodide, which led to the formation of lactam 22 in 80% yield. Treatment of the resulting t-butyl ester 23, derived from heating 22 with (MeO)3CH/MeOH in the presence of p-TsOH, gave the corresponding methyl ester 24 in almost quantitative yield (Scheme 5). When these lactams were allowed to react with the acid chloride derived from 2-(N-tosyl-1H-indol-3-yl)acetic acid, the expected imides were readily formed in high yield and were easily converted into the corresponding diazo substrates 25 and 26 using the Regitz diazotization procedure. 17

Scheme 5

Still another method that was used to prepare the key diazo imide substrates needed for the Rh(II) cascade involved the initial preparation of a methyl 2-diazo-3-(3-alkyl-2-oxopiperidin-3-yl)-3-oxopropanoate (i.e., 27 or 28) and then coupling it with an appropriate acid chloride (Scheme 6). By carrying out the synthesis of the indolyl substituted diazo imides in this manner, the Regitz diazo transfer reaction 17 can be avoided in the final step, thereby simplifying the synthesis. Thus, piperidinones 21 and 23 were easily converted to the corresponding diazo lactams 27 and 28 in excellent yield. These compounds, in turn, were treated with indolyl acid chloride 12, which resulted in the formation of the desired diazo imides 29 and 30 in 82% and 73% yield, respectively.
Scheme 6

In conclusion, several synthetic methods have been developed to rapidly prepare various indolyl substituted 2-diazo-2-oxopiperidin-3-yl 3-oxopropanoates in high yield. Treatment of these substrates with Rh2(OAc)4 generates push-pull 1,3-dipoles that undergo ready intramolecular dipolar cycloaddition across the indolyl π-bond. We are currently investigating the scope and limitations of the Rh(II) cyclization-cycloaddition cascade as a method for the synthesis of various aspidosperma alkaloids, the results of which will be disclosed in due course.

A general procedure for the synthesis of indolyl diazo imides 25 and 26

In a 200 mL round-bottomed flask, a sample of 2-(1-tosyl-1H-indol-3-yl)acetic acid (1.5 equiv) was taken up in CH2Cl2. After stirring for 5 min, (COCl)2 (4.0 equiv) was added dropwise together with 2 drops of DMF. The solution was stirred at RT for 4 h and was then concentrated under reduced pressure. The resulting solid was dissolved in CH2Cl2 and the solution was added dropwise to a solution of the appropriate lactam 24 (1.0 mmol) containing an excess of 4 Å mesh molecular sieves in CH2Cl2. The reaction mixture was allowed to stir at RT for 12 h, filtered through a pad of Celite, and concentrated under reduced pressure. The residue was subjected to flash silica gel chromatography. To the above keto ester (1.0 equiv) in 140 mL of CH3CN at 0 ºC was added 2.3 mL (1.0 equiv) of Et3N. The solution was allowed to stir for 20 min, and then 1.9 g (2.0 equiv) of mesyl azide was added and the reaction mixture was allowed to stir for 1.5 h. The solution was concentrated under reduced pressure and the residue was subjected to flash chromatography on silica gel to give 25 as a pale yellow oil in 88%
1,743
2006-11-13T00:00:00.000
[ "Chemistry", "Biology" ]
The expediency of identifying strategic alternatives to the development of the state's economic security system through SWOT analysis

▪ Anatolii Loishyn, PhD student, The National Defence University of Ukraine named after Ivan Cherniakhovskyi, Ukraine, e-mail: <EMAIL_ADDRESS>
▪ Mykola Tkach, PhD, Chief of Department of Defense Management, The National Defence University of Ukraine named after Ivan Cherniakhovskyi, Ukraine, e-mail: <EMAIL_ADDRESS>
▪ Sergey Levchenko, PhD student, Ministry of Defense of Ukraine, Ukraine, e-mail: <EMAIL_ADDRESS>
▪ Vitalii Getmanskii, PhD student, The National Military Medical Clinical Center “Main Military Clinical Hospital”, Ukraine, e-mail: <EMAIL_ADDRESS>
▪ Pavlo Parkhomenko, PhD student, Senior lecturer of Department, The National Defence University of Ukraine named after Ivan Cherniakhovskyi, Ukraine, e-mail: <EMAIL_ADDRESS>

Introduction

As of today, the problem of ensuring the economic security of Ukraine, which is one of the most important national priorities, the guarantor of Ukraine's independence, and the condition of its progressive, peaceful economic development and the growth of the well-being of its citizens, has never been more urgent. The problem of the economic security of the state is of fundamental importance not only within the sphere of national security, but also in the context of the general level of development of the country. In today's environment, where competition in certain sectors of the economy is exacerbated as a result of reduced demand and declining production, the state leadership needs to define a clear plan of action that makes effective use of the strengths and opportunities of its activities.

Seeking to impede the will of the Ukrainian people for a European future, Russia has occupied part of the territory of Ukraine, the Autonomous Republic of Crimea and the city of Sevastopol, has unleashed military aggression in the east of Ukraine, and is trying to destroy the unity of the democratic world, to revise the post-war world order, and to undermine the foundations of international security and international law, allowing force to be used with impunity in the international arena. All of the above directly affects the level of economic security.

In the macroeconomic aspect, Ukraine has structural deficiencies that give rise to three types of macroeconomic threats that impede the rapid and sustainable growth of the economy: too big a government (measured by the share of GDP that is redistributed by the state), high inflation, and permanent risks of breaches of fiscal and currency stability, which periodically lead to corresponding crises. Political, economic and institutional factors make Ukraine vulnerable to fiscal crises (and, accordingly, debt accumulation), while the structure of the economy, especially of exports, leads to significant fluctuations in currency and budget revenues depending on the global environment. Added to this is a long-standing distrust of the national currency, the banking system and its regulator, and a high risk of lending, which in turn is associated with weak and unequal protection of property rights (2019, April 22).
The main challenges of globalization today, which pose a threat to the country's economic security, are: instability of the world financial system, accompanied by an imbalance in world trade and investment flows between the world's largest economic centers; expansion of world markets for certain types of products, goods and services; the spread of crises; the expansion of the world's leading countries; and scarcity of resources for advanced development (Levchuk O. V., 2017).

As of today, Ukraine has been identified as the poorest country in Europe. According to the International Monetary Fund, Ukraine ranked below Moldova as the poorest country in Europe by gross domestic product per capita in 2018, at 2,963 US dollars, which is 8% less than the same indicator for the Republic of Moldova. In addition, Ukraine has a stable net capital outflow of 4% of gross domestic product per year (Aslund A., 2019). However, according to the forecast macroeconomic indicators of economic and social development determined by the Government of Ukraine for 2020-2022 (Decree of the Cabinet of Ministers of Ukraine No. 883/2019, 2019), the economic situation should stabilize and develop along a positive vector (Table 1). At the same time, both the threats identified in the existing conceptual documents and the possibility of new ones should be considered. Such rational planning will significantly reduce the risk of negative consequences when making certain managerial decisions. One of the main strategic management tools for evaluating the internal and external factors that influence development is SWOT analysis.

Material and Method

The purpose of the article is to explore the possibility of using SWOT analysis as a tool for identifying strategic alternatives. To ensure the achievement of the article's purpose, the objective of the scientific research is decomposed and each stage of the research is described: first, consider the key concepts and nature of economic security; second, analyse national security threats; third, explore the possibility of using SWOT analysis as a tool to identify strategic alternatives. During the study, the following methods were used: analysis, synthesis, induction, deduction, and generalization.

Results and discussion

3.1. Despite the large number of definitions of the concept of security, there is currently no established and generally accepted interpretation of the term. On the one hand, "security" is seen as a state and a tendency in the development of the protection of the vital interests of society and its structures from external and internal threats, and as social activity to ensure the protection of man, society and the state. Security emerges as a social phenomenon in the course of resolving the contradictions between such an objective reality as danger and the need of a person or social groups to prevent, localize and eliminate the consequences of danger (Kuzmenko YE. S., 2010). But under any circumstances, safety cannot be considered a state in which there is no danger; there is no historical experience of such a situation. With this in mind, V. L. Manilov (Manilov VA, 2010) proposes to consider a certain state of harm prevention as a basis for revealing the meaning of the concept of «safety». On the other hand, as V.
Sadovnichy notes, security in the broad sense is a system of conditions and factors in which a country functions and develops according to its domestic laws, delegating to management the right to stimulate positive trends and shifts and to correct negative deviations, while protecting the country from threats arising in the external environment (Sadovnychyy VA, 2009).

Economic security is an integral part of national security and is the state of the economy that provides fairly high and sustainable economic growth; effective satisfaction of economic needs; state control over the movement and use of national resources; and protection of the economic interests of the country at the national and international levels. It is an integral part of national security, its foundation and material base. The object of economic security is both the economic system as a whole and its constituent elements: natural resources, productive and non-productive funds, real estate, financial resources, human resources, economic structures, the family, and the individual. Economic security can also be understood as the ability of individuals, households or communities to cover their essential needs sustainably and with dignity. This can vary according to an individual's physical needs, the environment and prevailing cultural standards. Food, basic shelter, clothing and hygiene qualify as essential needs, as does the related expenditure; the essential assets needed to earn a living, and the costs associated with health care and education, also qualify (2015, June 18).

Economic security indicators are the most significant parameters that give an overall picture of the state of the economic system as a whole, its sustainability and mobility: GDP growth, the standard of living and quality of life of the population, inflation, unemployment, the structure of the economy, property stratification, criminalization of the economy, the state of the technical base of the economy, expenditure on research, competitiveness, import dependence, openness of the economy, and the internal and external debt of the state.

When investigating economic security issues, one should bear in mind its systemic nature. The economic security of the state, on the one hand, stands above a system consisting of subsystems covering the different directions of the state's economic security (each of which also has a certain internal structure of elements), and on the other hand, is itself a subsystem of international economic security, which, in turn, is an integral part of the system of international security. This describes the vertical structure of significance; in the horizontal structure of significance, economic security is a component of the national security of the state (Muntiyan V. I., 2004). Economic security, reflecting on the spheres of influence of national security, penetrating into it and interacting with it, in turn accumulates its effects, while remaining the basis of national security. Economic security has its own object of study, the economic system of the country, as well as objects that intersect with other spheres of activity of the state: military, social, political, informational, etc. However, not only the state, its economic system and all its natural resources, but also society with its institutions, as well as every individual, belong to the objects of economic security. The object of economic security is the state of the economy that society wishes to maintain or to develop on a progressive scale.
The objects of economic security include not only the state, its economic system and all natural resources, but also society with its institutions and branches, as well as every individual. The subjects of economic security are the functional and branch ministries and other public authorities, tax and customs services, banks, exchanges, funds and insurance companies, as well as manufacturers, sellers of products and domestic consumers. The subject of the study of economic security is the activity of the individual, society and state to protect their interests from internal and external threats, both in the economic sphere as a whole and in certain sectors of the economy, and its components: the conceptual foundations, general laws, principles and basic directions of ensuring economic security (Vlasyuk O. S., 2008).

It should be noted that the system of economic security must be stable, self-sufficient and tend towards continuous development (Fig. 1). Figure 1. Elements of national economic security. It is the development of any system that will be the driving force that effectively responds to modern threats. The development of the economic security system will be possible under the conditions of continuous improvement of the scientific and technical market, constant modernization of production, development of educational institutions, creation of a favorable climate for investment, substantial state support of innovative activity, etc.

3.2. It should be noted that a number of threats to economic security have been identified which could be an obstacle to its development and effective functioning. Threats to economic security are phenomena and processes that adversely affect the economy of the country and suppress the economic interests of the individual, society and the state. According to the Decree of the President of Ukraine On the decision of the National Security and Defense Council of May 6, 2015 «On the National Security Strategy of Ukraine» (2015), one of the urgent threats to the national security of Ukraine is the economic crisis, the depletion of the financial resources of the state and the decline in the standard of living of the population, including: a monopoly-oligarchic, low-tech, resource-consuming economic model; the lack of clearly defined strategic goals, priority directions and tasks of the socio-economic, military-economic and scientific-technical development of Ukraine, as well as of effective mechanisms for concentrating resources to achieve such goals; a high level of «shadowing» and criminalization of the national economy and a criminal-clan system of distribution of public resources; deformed state regulation and corruption pressure on business; excessive dependence of the national economy on external markets; ineffective management of public debt; a decrease in household well-being and rising unemployment; the activation of migration processes as a result of hostilities; and the destruction of the economy and life-support systems in the temporarily occupied territories, the loss of their human potential, and the illegal export of production assets to the territory of Russia. The main content of economic reforms is to create conditions for overcoming poverty and excessive property stratification in society, bringing social standards closer to the level of the EU member states of Central and Eastern Europe, and achieving the economic criteria necessary for Ukraine to become a member of the EU.
The key to a new quality of economic growth is to ensure economic security by: de-monopolization and deregulation of the economy, protection of economic competition, simplification and optimization of the tax system, and the formation of a favorable business climate and conditions for accelerated innovative development; effective application of the mechanism of special economic and other restrictive measures (sanctions), making it impossible for the capital of the aggressor state to control strategic industries; creating the best possible conditions for investors in Central and Eastern Europe, attracting foreign investment in key sectors of the economy, in particular in the energy and transport sectors, as a tool of national security; ensuring the economy's readiness to support the repulsion of armed aggression against Ukraine; development of the defense-industrial complex as a powerful high-tech sector of the economy, capable of playing a key role in its accelerated innovative modernization; legal protection in international institutions of the property interests of individuals and legal entities of Ukraine and of the Ukrainian state violated by Russia; improving the resilience of the national economy to negative external influences and diversifying foreign markets, trade and financial flows; ensuring the integrity and protection of infrastructure in times of crisis that threaten national security and in a special period; effective use of international economic assistance and of the resources of international organizations to sustain these efforts; a stable financial system, an open and transparent monetary system, and the restoration of confidence in various domestic institutions; and systematic counteraction to organized economic crime and the "shadowing" of the economy, based on creating advantages for legal economic activity and, at the same time, consolidating the institutional capacity of financial, tax, customs and law enforcement agencies, and identifying and confiscating the assets of organized criminal groups.

3.3. In view of the above, it is possible to identify strategic alternatives for the development of the state's economic security system. This can be achieved by analyzing the state's economic security system for the strengths and weaknesses of the economic security environment. The identified weaknesses must then be taken into account in order to develop appropriate management decisions to implement the development of the system. SWOT analysis can be used to put this into practice.

In 1963, at the Harvard Business Policy Conference, Professor Andrews publicly voiced the acronym SWOT. This acronym was presented visually as a SWOT matrix. Initially, SWOT analysis was based on articulating and structuring knowledge about the current situation and trends; later, it was used more broadly, to construct strategies. That is, with the advent of the SWOT model, analysts were given a tool for their intellectual work. SWOT analysis is an analysis of the external and internal environment of an organization. The Strengths and Weaknesses of the internal environment and the Opportunities and Threats of the external environment of the organization are subject to analysis. The components of SWOT analysis are shown in the figure (Fig. 2).
The SWOT methodology involves first identifying strengths, weaknesses, opportunities, and threats, and then establishing links between them that can be used to formulate an organization's strategy (Nyemtsov VD & Dovhan L. YE., 2002) (Table 2). Table 2. Building an assessment using SWOT analysis.

The main purpose of the SWOT analysis is to obtain reliable data on the capabilities of the company and the threats to its promotion in the market of goods and services. Therefore, in order to achieve this goal, the following tasks are assigned to SWOT analysis: identifying marketing opportunities that match the firm's resources; identifying marketing threats and developing measures to mitigate their impact; identifying the strengths of the firm and comparing them with market opportunities; identifying the weaknesses of the firm and developing strategic directions for overcoming them; and identifying the competitive advantages of the firm and forming its strategic priorities (Shershn'ova Z. YE., 2004).

The SWOT analysis process is conducted on the basis of an analysis of the organization's activities with the help of the following blocks of questions. The general characteristics of the object of study cover a number of issues: the history of the organization's development; the organizational and legal form of the organization; the organization's infrastructure; and the activity of the organization. Factors of the internal environment are grouped according to the functions of the enterprise: production activity; marketing; enterprise management; finances; personnel; supply; the nature of customer interaction; organizational capabilities, own resources and infrastructure; and innovative activity. Factors of the external environment are grouped as follows: political and administrative factors; legislative and regulatory factors; economic factors; the social environment; competition; scientific and technical factors; and natural factors.

When conducting a SWOT analysis, one must carefully define the scope of each SWOT analysis, understand the differences between its elements, be objective, use varied input information, and avoid broad and ambiguous statements. SWOT analysis should be conducted with the participation of all major members of the organization and can be performed using the brainstorming method. The quality of the analysis can be improved by engaging people from outside the organization in its conduct. Such individuals may act as impartial arbitrators who are able to evaluate the proposals and, by asking specific questions, provoke the organization to rethink its positions and actions. When conducting a SWOT analysis, and especially the opportunity and threat analysis, previously conducted public opinion surveys should be used.

The SWOT analysis methodology involves the following steps:

1. Identify the enterprise's own strengths and weaknesses. The first step is to identify the strengths and weaknesses of the enterprise. For this purpose it is necessary to make a list of parameters by which economic security will be evaluated and to determine, for each parameter, what constitutes a strength and what constitutes a weakness. Strengths can include: competitive advantages (uniqueness); an offensive strategy or another important strategy; protection against competitors; a strong position in specific market segments; being a well-known leader; higher-than-average awareness of the market; product differentiation; reasonable diversification; and sufficient financial resources.

2. Identification of market opportunities and threats.
This is a kind of "exploration", an assessment of the market. This step allows you to evaluate the situation outside your business and to understand what opportunities you have, as well as what threats you should fear.

SWOT analysis has advantages and disadvantages compared to other methods. Advantages of the method: it is applied in various fields of economics and management; it is adaptable to the object of study at any level; it allows a free choice of the analyzed elements depending on the set goals; and it can be used both for the operational control of the organization and for strategic planning over a long period. Disadvantages of SWOT analysis: it shows only general goals, and specific actions to achieve them must be developed separately; the results are presented as a qualitative description, which complicates their use in the monitoring process; the results are subjective, and their research significance depends heavily on the level of competence and professionalism of the analyst; and conducting a high-quality SWOT analysis requires the involvement of a large number of specialists in the respective fields, which increases its cost. SWOT analysis also requires the involvement of large amounts of information, which demands considerable effort and expense. These shortcomings mean that the use of SWOT analysis requires the simultaneous use of other modern research methods (scenario planning). Certainly, in any case, to obtain a complete picture of the enterprise and ultimately of its competitiveness, the traditional methods of financial analysis must be used, which provide information on the dynamics of generalized indicators. Nevertheless, SWOT analysis allows one to identify the existing or probable problems of the enterprise, to develop a tree of goals for crisis management, and to formulate a scenario of enterprise development for the planned period in order to prevent a crisis or to lead the organization out of one.

Conclusions

SWOT analysis is a kind of tool; it does not by itself provide definitive information for managerial decision-making, but it allows the process of considering all available information to be streamlined using one's own opinions and evaluations. The widespread use and development of SWOT analysis is explained by the fact that strategic management involves a large amount of information that must be collected, processed, analyzed and used, and thus there is a need to find, develop and apply methods of organizing such work. Given that the analysis of the economic security system can identify strengths, weaknesses, opportunities and threats, SWOT analysis can be applied to this system. Thus, the conducted research makes it possible to draw a reasonable conclusion about the expediency of using SWOT analysis in the process of strategic management, not only at the level of an institution or organization, but also at the state level as a whole.
4,806.2
2019-12-31T00:00:00.000
[ "Economics", "Political Science" ]
Targeting RORα in macrophages to boost diabetic bone regeneration

Abstract Diabetes mellitus (DM) has become a serious threat to human health. Bone regeneration deficiency and nonunion caused by DM are perceived as a worldwide epidemic, with a very high socioeconomic impact on public health. Here, we find that targeted activation of retinoic acid‐related orphan receptor α (RORα) by SR1078 in the early stage of bone defect repair can significantly promote in situ bone regeneration in DM rats. Bone regeneration relies on the activation of macrophage RORα in early bone repair, but RORα in DM rats fails to be upregulated because the hyperglycemic inflammatory microenvironment induces an IGF1‐AMPK signalling deficiency. Mechanistic investigations suggest that RORα is vital for macrophage-induced migration and proliferation of bone mesenchymal stem cells (BMSCs) in a CCL3/IL‐6-dependent manner. In summary, our study identifies RORα expressed in macrophages during the early stage of bone defect repair as crucial for in situ bone regeneration, and offers a novel strategy for bone regeneration therapy and fracture repair in DM patients.

Supplementary Information The online version contains supplementary material available at 10.1007/s00436-022-07773-4.

| INTRODUCTION

Diabetes mellitus (DM) is one of the most common chronic metabolic diseases, and the global prevalence of DM in 2019 is estimated to be 9.3% (463 million people), rising to 10.2% (578 million) by 2030 worldwide. 1 [8][9] Unfortunately, bone regeneration in DM remains a clinical challenge, with the deficiency of stem cells in a high-glucose microenvironment being the primary obstacle. 10 Hence, it is imperative to develop an effective strategy to recruit autologous stem cells to improve osteogenesis in DM patients.

Yufeng Shen, Qingming Tang and Jiajia Wang contributed equally to this work.

[14][15][16] What causes macrophage hypofunction and stem cell deficiency in diabetic bone defects during the acute phase of healing remains unknown, prompting us to revisit this issue. Retinoic acid-related orphan receptor α (RORα) is a multi-faceted nuclear receptor in tissue regeneration, beyond its ability to regulate immune signalling. 17 [19][20][21] Therefore, we speculated that RORα may be a vital factor in regulating the inflammatory microenvironment in the early stage of bone defects and the inflammatory imbalance in DM, providing a novel target for treating diabetic bone regeneration deficiency. In this study, we found that RORα expressed in macrophages is crucial for in situ bone regeneration during the early stage of bone defect repair. Overall, our study thus provides new fundamental insights into osteogenesis under DM conditions and offers a novel strategy for bone regeneration therapy in diabetic patients.
| Activation of RORα by SR1078 boosts in situ bone regeneration of DM rats

To test whether activation of RORα could promote DM bone regeneration, we established a calvarial defect model in type 2 DM rats, and SR1078, a selective agonist of RORα, was administered to activate RORα-driven transcription (Figure 1A). Bmal1 and Clock are the main target genes of RORα, and qRT-PCR assays first indicated that the mRNA transcripts of Bmal1 and Clock in the calvarial bone were increased 2 h after SR1078 injection, and the increase was more pronounced after 8 h, suggesting that SR1078 was present in the calvarial defect (Figure 1B). Micro-CT analysis showed limited bone healing in the DM rats, with less than 30% new bone in the defect area after 28 days (Figure 1C,D). Significantly, the amount of new bone in the defect area at day 14 in the SR1078 group was comparable to that in the vehicle group at day 28, indicating accelerated osseous regeneration by SR1078, which was evidenced by bone volume per tissue volume (BV/TV) and trabecular thickness (Tb.Th) measurements (Figure 1C,D). Masson staining showed that the newly formed bone, marked in red, was much more abundant in the SR1078 group (Figure 1E). To further assess osteogenesis at the molecular level, we conducted alkaline phosphatase (ALP) and type I collagen (COL1A1) IHC staining, which are markers of early and late osteogenesis, respectively. The staining data showed remarkably higher osteogenic activity in the SR1078 group during the whole healing period (Figure 1F-H). qRT-PCR data showed that the mRNA levels of the osteogenesis indicators Osx, Alp, Bone morphogenetic protein 2 (Bmp2), Runt-related transcription factor 2 (Runx2) and Osteocalcin (Ocn) were obviously up-regulated in the SR1078 group compared with the vehicle group (Figure 1I). Taken together, these results suggested that functional activation of RORα by SR1078 can significantly promote in situ bone regeneration in DM rats.

| RORα expressed in macrophages of DM rats is deficient in early bone repair

To reveal the underlying pro-regenerative effect of SR1078, we examined the change in RORα expression in the cranial defect tissue of normal rats and DM rats at 3, 7, 14 and 28 days post-operatively. IHC staining showed low expression of RORα in the normal control rats, and positive expression of RORα could be seen as early as 3 days after the calvarial defect (Figure 2A,B). The marked increase of RORα continued to day 7 and decreased afterwards (Figure 2A,B). As the morphology of RORα-positive cells was biased towards macrophages, we surmised that RORα in calvarial tissue is mainly derived from macrophages. To test this, IF double staining for CD68, a pan-macrophage marker, and RORα was carried out. We found that the overlap of the two fluorescence signals was high and that CD68-positive cells showed a distinctly high RORα level in contrast to the stromal cells (Figure 2C,D). RORα staining intensity peaked at day 7 post-modelling in the normal group, consistent with the IHC results, and the percentage of double-positive cells within RORα-positive cells showed the same tendency (Figure 2C,D). In DM rats, the proportion of CD68-positive cells in the bone defect area was not significantly decreased compared with normal rats. However, RORα expressed in CD68-positive cells in the DM group was lower than that in the normal group at all time points and lacked the early tendency to increase (Figure 2E,F), suggesting that RORα in macrophages is inhibited by the DM microenvironment.
Together, we speculated that RORα fails to increase physiologically in the early stage of bone defect repair in DM rats, which may be a vital cause of diabetic regeneration deficiency.

| Inhibition of RORα by SR3335 impedes physiological in situ bone regeneration

SR3335, a selective inhibitor of RORα, was administered during the early stage of bone healing in the normal rats (Figure 3A). qRT-PCR results for Bmal1 and Clock in the calvarial bone confirmed the efficacy of SR3335 (Figure 3B). Micro-CT analysis showed that the amount of new bone in the defect area of the rats in the vehicle group increased significantly, while no notable rise was observed in the SR3335 group from day 14 to 28, suggesting an impeded bone repair process (Figure 3C,D). Masson staining indicated that the newly formed bone, marked in red, in the SR3335 group was less than that in the vehicle group at day 28 (Figure 3E). We speculated that the difference between the groups may be due to the impact of RORα intervention on osteogenesis in the bone defects. To confirm this hypothesis, IHC staining of RUNX2 was carried out, and the results showed that RUNX2 expression in the SR3335 group was lower than that in the vehicle group (Figure 3F). qRT-PCR assays showed that the osteogenesis markers Osx, Alp, Bmp2, Runx2 and Ocn were remarkably down-regulated after SR3335 administration (Figure 3G), suggesting attenuated osteoblast function after pharmacological inhibition of RORα. In summary, these results indicated that RORα is an essential player in physiological in situ bone regeneration.

| Insufficient IGF1-AMPK signalling of DM rats blocks upregulation of RORα

Deficiency of insulin-like growth factor 1 (IGF1) is one hallmark of the diabetic microenvironment, and its expression is sharply upregulated in early bone repair in normal individuals. 22,23 Hence, we hypothesized that the inhibition of RORα in DM may be due to an IGF1 abnormality. We first detected the level of IGF1 in the serum of normal and DM rats by ELISA, and the results showed that IGF1 was significantly reduced in the serum of DM rats (Figure 4A). Further, we investigated the expression of IGF1 in the calvarial defect region in rats. qRT-PCR analysis illustrated that IGF1 expression was significantly lower throughout the whole bone healing process in the DM group compared with the normal group (Figure 4B). The most significant difference was observed at day 14, with a nearly 50% decrease (Figure 4B). IF staining results showed the change more visually (Figure 4C). We next explored whether IGF1 could regulate RORα in macrophages. THP-1 cells, a human monocytic cell line, were treated with IGF1, or with IGF1 combined with the IGF1R inhibitor PPP, for 12 or 24 h, and the mRNA level of RORA was detected. qRT-PCR data indicated that IGF1 remarkably upregulated RORA transcription, which could be eliminated by PPP administration, suggesting a positive regulatory role of IGF1 on RORα (Figure 4D). Moreover, we explored the regulation of RORα by IGF1 in vivo. Diabetic rats received calvarial surgery, and IGF1 loaded in gelatin methacryloyl (GelMA) was applied topically (Figure 4E). We could clearly see that Rorα expression in the newly formed tissue was significantly increased by IGF1 (Figure 4F). Consistently, Micro-CT analysis showed that more newly formed bone could be seen in the GelMA+IGF1 group at day 14 and day 28, compared with the GelMA group (Figure 4G,H). These findings suggested that IGF1 is a vital activator of RORα in early bone repair.
It is well known that the adenosine monophosphate-activated protein kinase (AMPK) and mitogen-activated protein kinase (MAPK) pathways are the classical downstream intracellular signalling pathways of IGF1. 24,25

| RORα actuates macrophage-induced migration and proliferation of BMSCs

After the appearance of a bone defect, macrophages can rapidly recruit BMSCs by secreting chemokines, and the BMSCs undergo osteogenic differentiation and exert bone regeneration effects. 6 Therefore, we tested whether RORα is involved in the regulation of BMSCs by macrophages. Primary bone marrow-derived macrophages (BMDMs) were isolated from SD rats and identified by flow cytometry for CD68 (Figure 5A,B). We overexpressed or knocked down Rorα in BMDMs, respectively, and the efficiencies were verified by qRT-PCR (Figure 5C). The cellular supernatant of Rorα-overexpressing or Rorα-knockdown BMDMs was used as conditioned medium to incubate BMSCs (Figure 5D). Using a transwell co-culture model (Figure 5E), we found that BMDM-conditioned medium promoted the vertical migration of BMSCs (Figure 5F,G). This migration-promoting effect was dramatically enhanced by RORα overexpression and abolished by RORα knockdown (Figure 5F,G). A scratch assay was also performed in specially designed 6-well plates (Figure 5H). In line with the results of the transwell test, images and quantitative analysis of the scratch assay showed that overexpression of RORα strengthened BMDM-mediated horizontal migration of BMSCs, whereas knockdown of RORα inhibited this process (Figure 5I,J). We also investigated the effect of RORα in BMDMs on BMSC proliferation. The CCK8 test demonstrated that after 48 h or 72 h of incubation, the proliferation capacity of BMSCs treated with Rorα-overexpressing conditioned medium was remarkably upregulated, while Rorα-knockdown conditioned medium impaired BMSC proliferation (Figure 5K). This result was further intuitively confirmed by EDU assays (Figure 5L,M). In summary, these results showed that RORα is vital for BMDMs to induce the migration and proliferation of BMSCs.

(I) THP-1-derived macrophages cultured in medium containing 25 mM glucose were pretreated with IGF1 (100 ng/mL) for 1 h followed by administration of an AMPK activator (AICAR, 0.5 mM) or an AMPK inhibitor (Dorsomorphin, 2.0 μM) for 24 h. The relative protein levels of p-AMPKα1, AMPKα1 and RORα were detected by Western blot. (J) Quantitative analysis of AMPKα1 phosphorylation and RORα levels. (K) THP-1-derived macrophages cultured in medium containing 25 mM glucose were pretreated with IGF1 (100 ng/mL) for 1 h followed by administration of a MAPK activator (C16-PAF, 1.0 μM) or a MAPK inhibitor (PD98059, 10.0 μM) for 24 h. The relative protein levels of p-MEK, MEK and RORα were detected by Western blot. (L) Quantitative analysis of MEK phosphorylation and RORα levels. (M) IF staining of RORα in THP-1-derived macrophages with different treatments and quantitative analysis (N). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.

Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment was then performed to confirm the function of the differentially expressed genes (DEGs). The KEGG pathway 'Cytokine-cytokine receptor interaction' was significantly down-regulated in RORα-deficient mice (Figure 6A). We then constructed a protein-protein interaction (PPI) network to display the DEGs of the "Cytokine-cytokine receptor interaction" pathway and found that Ccl3 and Il-6 were among the most highly connected genes (Figure 6B).
Based on this result, we speculated that Ccl3 and Il-6 may be underlying target genes responsible for the biological function of RORα in BMDMs. qRT-PCR analysis indicated that SR1078 remarkably increased Ccl3 and Il-6 mRNA transcription in THP-1 cells, while SR3335 downregulated the transcription of these two genes (Figure 6C), suggesting positive transcriptional regulation of Ccl3 and Il-6 by RORα. In the DM calvarial defect model, Ccl3 and Il-6 mRNA levels were lower than those in normal individuals in early bone repair, in line with RORα expression (Figure 6D). Then, we performed JASPAR analysis, identified RORE sites of RORα (Figure 6E), and predicted possible RORα binding sites in the promoter region of Ccl3 (Figure 6F). Further ChIP-qPCR assays confirmed the RORα-binding sites on Ccl3 (Figure 6F). The transcriptional regulation of Il-6 by RORα was explored in a previous report, 26 and we verified the binding by ChIP-qPCR analysis (Figure 6G). These results suggested that RORα may alter the transcriptional activity of Ccl3 and Il-6 by direct binding. Next, we tested whether CCL3 and IL-6 are essential for RORα-mediated recruitment of BMSCs. Conditioned medium collected from SR1078-treated macrophages was used in a transwell assay of BMSCs, and inhibitors of CCL3 and IL-6 signalling, BX471 and Tocilizumab, respectively, were also administered in the transwell system. Crystal violet staining and quantitative analysis illustrated that BX471 and Tocilizumab decreased the vertical migration of BMSCs induced by macrophages (Figure 6H,I). Additionally, the results of the scratch assay were consistent with those of the transwell assay (Figure 6J,K). Together, these results demonstrate that RORα promotes the migration of BMSCs in a CCL3/IL-6-dependent manner.

| DISCUSSION

In this study, we outlined the role of RORα in in situ bone healing. Under physiological conditions, significant upregulation of RORα in macrophages was observed in the early stage of bone repair after defects. Macrophage RORα promoted BMSC recruitment through transcriptional activation of the chemokines CCL3 and IL-6. In diabetes mellitus, RORα was not upregulated after bone defect due to deficient upstream IGF1-AMPK signalling, resulting in impaired bone regeneration. Based on these results, we explored the potential of treating diabetic bone regeneration deficiency by targeting RORα and found that the small-molecule drug SR1078 can promote diabetic bone regeneration.

Numerous studies have proved the significant role of RORα in regulating the physiological activities of tissues and organs. As a constitutive transcription factor, RORα is widely expressed in various tissues such as liver, kidney, skin, and adipose tissue. [28][29][30] Staggerer mice, a mutant strain lacking functional RORα, usually die 3-4 weeks after birth due to impeded generation of Purkinje cells, 31 reflecting the indispensability of RORα in the maintenance of homeostasis. In adipose tissue, RORα rhythmically inhibits the thermogenic program of white adipose tissue (WAT). 32 Lau et al. reported that RORα is a key factor in fat accumulation; staggerer mice had reduced levels of serum triglycerides and exhibited resistance to diet-induced obesity. 33 Clinical studies also showed that RORα modulates adipose tissue inflammation in obese patients. 34 In the context of the liver, RORα is an essential regulator of bile acid and cholesterol homeostasis and mediates the reprogramming of glucose metabolism in glutamine-deficient hepatoma cells.
35,36 Consistent with phenotypes observed at the whole-animal level, researchers observed abnormal thymus and spleen sizes and impaired cellularity of lymphoid tissue in staggerer mice,37 so it is reasonable to assume that RORα is critical in lymphocyte development. Widely expressed in myeloid and lymphoid cells, RORα promotes T and B cell development by providing an appropriate microenvironment and controls the immune response by regulating cytokines.19 The balance between Th17 and Treg cell generation is pivotal for immune homeostasis, and RORα was reported to act as an elaborate molecular switch in this balance.38 Another study illustrated that RORα regulates the migration and activation of neutrophils, contributing to host defense against microbial infection.20 With the progressive exploration of the biological effects of RORα, its role in bone metabolism has gradually been revealed. Meyer et al. demonstrated that RORα is strongly upregulated during the differentiation of BMSCs into osteoblasts. Staggerer mice, which carry a deletion within RORα, were osteopenic with thin long bones and remarkably decreased total mineral content.39 Several in vitro studies have shown that RORα regulates the metabolism of human and mouse osteoblasts and promotes osteogenic differentiation through upregulation of osteogenic mediators such as ALP, OCN, and RUNX2.40,41 In the current study, RORα was inhibited in the calvarial tissue of diabetic rats after bone defects (Figure 2). Restoration of RORα function by SR1078 promoted the expression of Col1a1, Alp, Bmp2, Runx2 and Ocn, leading to an increased bone formation rate (Figure 1). This study illustrates that manipulating RORα to promote bone repair is a viable therapeutic strategy.

Several studies have suggested roles of RORα in mesenchymal generation and differentiation. RORα, but not RORβ, is expressed in mesenchymal stem cells derived from bone marrow, and RORα acts in bone biology by direct modulation of bone matrix components.39 Similarly, in human mesenchymal stem cells, RORα was reported to act as a regulatory molecule essential for osteogenic differentiation; genetic intervention targeting RORα down-regulated the expression of bone sialoprotein and dentin matrix protein 1 and led to failed bone matrix formation and mineralization.42 Cho et al. studied RORα in cardiac function and found that RORα was vital in mesenchymal stem cell-mediated tissue repair.43 RORα is increased by IL-1β and binds to angiopoietin-like 4, blunting the conversion of macrophages to the proinflammatory phenotype and ultimately facilitating regeneration under pathological conditions. Interestingly, in our study we found that RORα expressed in macrophages promotes the recruitment of BMSCs (Figure 5). Taken together, these findings suggest that RORα may be a key node in the crosstalk among different cells and may directly or indirectly modulate the tissue regeneration microenvironment.

The molecular mechanism by which RORα exerts its biological effects has been explored in various models.45,46 The regulatory role of RORα in the LPS response has been intensively studied. Staggerer mice showed elevated levels of IL-1β, IL-6 and MIP-2 in alveolar lavage fluid and were more sensitive to LPS-induced lethality.
47 In another LPS-induced septic shock model, mice exhibited reduced susceptibility in the absence of RORα,48 which was attributed to passivated macrophages. Treatment with a selective RORα inhibitor also reduced the severity of LPS-induced endotoxemia. These seemingly contradictory results demonstrate the indispensability of RORα in sensing inflammatory stimuli and regulating immune cell function. Specialized pro-resolving mediators (SPMs) are essential for inflammation resolution, host defense, and tissue regeneration.49 RORα was reported to recognize maresin-1, a classical SPM, activate monocyte phagocytosis and form a positive feedback loop that promotes maresin-1 expression, thereby consolidating its anti-inflammatory effect.50 These investigations indicate that RORα can not only sense inflammatory stimuli and activate the immune response in the early stage, but also promote the resolution of inflammation in the late stage. Melatonin is widely distributed in the organism and has multiple effects, such as rhythm regulation and protection against oxidative stress, which are mediated mainly by interaction with specific receptors. Although it is still controversial whether melatonin binds it directly, RORα is a recognized melatonin receptor and mediates the biological functions of melatonin.51 Choi et al. revealed a link between cholesterol metabolism and osteoarthritis mediated by RORα: RORα in chondrocytes responded to locally elevated cholesterol by upregulating the matrix degradation factors MMPs and downregulating the anabolic factor SOX9, promoting bone abnormalities.52 In the current study, we demonstrated that RORα in macrophages receives upstream IGF1-AMPK signalling (Figure 4) and transfers the signal to BMSCs by manipulating CCL3/IL-6 secretion (Figures 5 and 6), ultimately promoting bone regeneration after defect. Under DM conditions, insufficient IGF1-AMPK signalling impairs the function of RORα. Corroboratively, a recent study found that high glucose deactivates AMPK signalling through the production of ROS,53 and this is consistent with our findings. Our study, along with these existing investigations, suggests that RORα is a key signalling switch that senses microenvironmental cues and drives downstream pathways to modulate cell behaviours.
RORα is a deeply shared molecule in a number of interlinked diseases; thus, the exploration of therapeutic strategies targeting RORα has significant potential for clinical use. Nowadays, small-molecule drugs are the mainstream direction of drug development; among the new drugs approved by the FDA in 2021, small molecules account for more than half. RORα is highly responsive to small-molecule drugs and has potential as a drug target for the treatment of different diseases. In this study, a selective agonist of RORα, SR1078, was systemically administered to diabetic rats, and we did not observe unexpected abnormalities in the animals, indicating predictable biosafety of the drug. By examining the transcription levels of well-recognized downstream genes of RORα in calvarial tissue, we verified the efficiency of SR1078 (Figure 1). Modulation of RORα-targeted genes was sustained even 8 h after a single injection, suggesting a consistent long-term effect of SR1078. Finally, through molecular biology, histology and morphology tests, we confirmed that SR1078 promotes diabetic bone repair. Overall, we made a preliminary attempt to boost bone regeneration by targeting RORα; further studies of pharmacodynamics and pharmacokinetics are needed to develop a refined application strategy and broaden the scope of clinical applications.

RORα is essential for in situ bone regeneration. Targeted activation of RORα by SR1078 in the early stage of bone defect repair boosts bone regeneration in DM rats. Macrophage RORα fails to upregulate because the hyperglycaemic inflammatory microenvironment induces insulin-like growth factor 1 (IGF1) scarcity and 5′-AMP-activated protein kinase (AMPK) signalling inactivation in the early stage of bone defect repair in DM rats, which severely impairs regeneration. RORα is vital for macrophage-induced migration and proliferation of BMSCs in a C-C motif chemokine 3 (CCL3)/interleukin-6 (IL-6) dependent manner.

To further test the function of RORα in the physiological bone regeneration process, SR3335, an inverse agonist of RORα, was administered to suppress the constitutive transactivation activity of RORα during physiological bone repair.

[FIGURE 1 Activation of RORα by SR1078 boosts in situ bone regeneration of DM rats. (A) Schematic diagram of the experiment. (B) qRT-PCR analysis of Bmal1 and Clock mRNA in calvarial bone tissues of DM rats at 0, 2 and 8 h post SR1078 injection. (C) Micro-CT scanning of calvarial defects on days 7, 14 and 28 post surgery. The 4 mm-diameter defect area (white dashed lines) was selected as the region of interest (ROI). Scale bar = 1 mm. (D) BV/TV and Tb.Th analysis of the selected ROI. (E) Masson staining of calvarial defects on days 7, 14 and 28 post surgery. Scale bar = 100 μm. (F-H) IHC staining of ALP (F) and Collagen I (G) in calvarial defects and corresponding quantitative analysis (H). Scale bar = 50 μm. (I) qRT-PCR analysis of Osx, Alp, Bmp2, Runx2 and Ocn in calvarial bone tissues on days 7, 14 and 28 post surgery. *p < 0.05, **p < 0.01, ****p < 0.0001.]
[FIGURE 3 Inhibition of RORα by SR3335 impedes physiological in situ bone regeneration. (A) Schematic diagram of the experiment. (B) qRT-PCR analysis of Bmal1 and Clock mRNA in calvarial bone tissues of normal rats at 0, 2 and 8 h post SR3335 injection. (C) Micro-CT scanning of calvarial defects on days 7, 14 and 28 post surgery. The 4 mm-diameter defect area (white dashed lines) was selected as the region of interest (ROI). Scale bar = 1 mm. (D) BV/TV and Tb.Th analysis of the selected ROI. (E) Masson staining of calvarial defects on days 7, 14 and 28 post surgery. Scale bar = 100 μm. (F) IHC staining and analysis of RUNX2 in calvarial defects. Scale bar = 50 μm. (G) qRT-PCR analysis of Osx, Alp, Bmp2, Runx2 and Ocn in calvarial bone tissues on days 7, 14 and 28 post surgery. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.]

We therefore tested whether IGF1 regulates the expression of RORα through these two pathways. The AMPK activator AICAR promoted the phosphorylation of AMPK and RORα expression in THP-1 cells (Figure 4I,J). Administration of the AMPK inhibitor Dorsomorphin after IGF1 restrained the upward trend of AMPK phosphorylation and markedly inhibited RORα expression (Figure 4I,J). Similarly, the MAPK activator C16-PAF and the inhibitor PD98059 were applied to examine the effect of MAPK on RORα. However, no significant difference was observed in the expression of RORα upon either activation or inhibition of MAPK signalling (Figure 4K,L), suggesting that the regulation of RORα by IGF1 was independent of the MAPK pathway. Moreover, IF staining reconfirmed the IGF1-AMPK-RORα axis (Figure 4M,N). These results indicated that IGF1 may regulate the expression of RORα through AMPK rather than MAPK.

[FIGURE 4 Deficient IGF1-AMPK signalling of DM rats blocks upregulation of RORα. (A) IGF1 content in the serum of normal and DM rats was detected by ELISA. (B) qRT-PCR analysis of Igf1 mRNA levels in calvarial tissues on days 7, 14 and 28 post surgery. (C) IF staining and quantitative analysis of IGF1 in calvarial defects from normal and DM rats on days 7, 14 and 28 post surgery. Low-magnification scale bar = 100 μm and high-magnification scale bar = 25 μm. (D) THP-1-derived macrophages cultured in 25 mM glucose-containing medium were treated with IGF1 (100 ng/mL) or the IGF1R inhibitor PPP (5 μM) for 12 or 24 h, and RORA mRNA levels were examined by qRT-PCR. (E) Schematic illustration of topical administration of IGF1 in calvarial defects of DM rats. (F) qRT-PCR analysis of Rorα mRNA in calvarial bone tissues at 24, 48 and 96 h post IGF1 administration. (G) Micro-CT scanning of calvarial defects on days 14 and 28 post IGF1 administration. The 4 mm-diameter defect area (white dashed lines) was selected as the region of interest (ROI). Scale bar = 1 mm. (H) BV/TV and Tb.Th analysis of the selected ROI.]
| CCL3/IL-6 secreted by BMDMs transfers RORα signalling to BMSCs

To investigate the mechanism underlying RORα-induced BMSC recruitment, we searched and obtained gene expression data for wild-type (WT) and Rorα-deficient mice fed a high-fat diet (GSE23736). After identifying differentially expressed genes (DEGs), we performed gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses.

[FIGURE 5 RORα actuates macrophage-induced migration and proliferation of BMSCs. (A) Flow cytometry was used to identify the primary cultured rat BMDMs with anti-CD68. (B) Representative images of rat BMDMs at the P1 generation under light microscopy. (C) Rorα in BMDMs was overexpressed via lentivirus or knocked down via the CRISPR/Cas9 system, and the efficiencies were examined by qRT-PCR. (D) Operation diagram of the co-culture system. Rorα-overexpressing or Rorα-knockdown BMDMs were cultured for 48 h and the supernatant was saved as conditioned medium to culture BMSCs. (E) Schematic diagram of the transwell system. (F) BMSCs were incubated in conditioned medium from Rorα-overexpressing or Rorα-knockdown BMDMs, and the vertically migrated BMSCs were stained with crystal violet. Scale bar = 200 μm. (G) Quantitative analysis of the transwell assay. (H) Schematic diagram of the scratch assay. (I) Horizontal migration of BMSCs in different conditioned media was determined by scratch assay. Scratch borders are indicated by green dashed lines. Scale bar = 500 μm. (J) Quantitative analysis of the scratch assay. (K) BMSCs were cultured in different conditioned media for 48 or 72 h and the rates of cell growth were examined by CCK8 assay. (L) EdU staining of BMSCs cultured in different conditioned media and quantitative analysis (M). Scale bar = 100 μm. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.]
6,087.2
2023-04-13T00:00:00.000
[ "Biology", "Medicine" ]
The Grid Independence of an Electric Vehicle Charging Station with Solar and Storage The UK government has set a ban on the sale of new petrol and diesel cars and vans by 2030. This will create a shift to electric vehicles. which will present a substantial impact on the grid. Therefore, methods to reduce the charging station’s impact on the grid have to be developed. This paper’s objective is to evaluate how integrating solar and storage affects a charging station’s de‐ pendence on the grid. A photovoltaic electric vehicle charging station (PVEVCS) is first designed, and then four charging profiles are selected to assess the station through a simulation using MATLAB. The array produces 3257 MWh/yr which, on average, offsets 40% of the electric vehicle (EV) load experienced by the station. Furthermore, with the integration of storage, the dependence is further reduced by 10% on average. The system also exported energy to the grid, offsetting close to all the energy imported. Introduction The world is currently in the midst of a climate crisis, with global powers working toward mitigating climate change by limiting the average temperature increase by 1.5° C by 2030 [1,2]. With the signing of the Paris Accord in 2015, many countries have committed to this pledge. In particular, the UK government published the "Ten Point Plan", a pathway toward a green industrial revolution, in 2020 [3]. Point 4, "accelerating the shift to zero-emission vehicles", details the ban on the sale of new petrol and diesel cars and vans by 2030. Electric vehicles (EVs) are currently the most popular zero-emission vehicle [4] propulsion type, due to their low carbon emissions when powered by renewable sources and their potential for vehicle-to-grid (V2G) applications [5]. However, there are several foreseeable challenges that need to be addressed as EV uptake rises [6][7][8], one being their refueling or charging [9] and preventing overloading of the grid [10]. With a current vehicle stock of 31,695,988 [11] vehicles in the UK, a number that is set to increase yearly, once it becomes predominately EV, there is a concern with how the grid will cope with the additional demand [12] placed upon it, both for at-home fast charging and charging station usage. It is expected that between 6 GW to 18 GW [13] of additional power is going to be needed to power the vehicles, and so the UK must be mindful of increasing the production capacity but also developing charging stations that are optimized for grid independence. There are currently 8380 forecourts in the UK, and their eventual conversion to electric forecourts forecasts a substantial load on the grid, so proper management and mitigation of their impact is vital [14]. Improving their grid independence makes them self-sufficient and present a minimized impact upon the grid, reducing the risk that millions of additional EVs are going to have upon it. One such method to improve the grid independence is the integration of solar [15,16] and storage [17] in the EV charging station (EVCS), creating a photovoltaic-powered electric vehicle charging station (PVEVCS). Solar photovoltaic (PV) systems are used to supply and offset the demand of the EVCS. The storage then improves the forecourt's energy utilization by storing excess electricity during periods of low demand and then discharging it when the grid is experiencing a high level of demand. 
These methods, when employed together, reduce the amount of energy imported during critical periods and, as such, the EVCS's impact on the grid. Forecasting charging load demands and developing charging profiles for EVCS are vital steps in their development process [18]. Regarding PVEVCS, Minh et al. conducted technical economic analysis on a theoretical station in Vietnam. They found that when the cost of energy (COE) was larger than the feed-in tariff (FIT), further capital needed to be mobilized to have a sustainable system [19]. A feasibility study on a PVEVCS was conducted in Shenzhen City, China [20], which found that the net present cost of an EVCS with a demand of 4500 kWh was USD 3,579,236, whereas the COE of the PVEVCS was USD 0.098/kWh, making it economically feasible. The station also had pollutant reduction potentials of 99.7% and above for CO2, SO2, and NO. Other work by Ul-Haq et al. modeled a PVEVCS with V2G using SIMULINK and found that a PV-powered charging station is a promising method for managing the substantial load EVs will present in the future [21]. A PV array on a university campus in Dhaka, Bangladesh was used as a power source for charging two electric buses [22]. Chowdhury et al. found that only 21% of the production was needed, and the rest could be exported, making it feasible. It was also noted that an energy storage system would maximize the power flow from PV to EV. They also found that it reduced CO2 emissions by 52,944 kg/year, as their energy mix was predominately thermal, being fueled by coal and natural gas. Another PVEVCS was designed and simulated in Romania using hybrid optimization by genetic algorithms to optimize the PV system's configuration in [23]. Savio et al. also developed a PVEVCS as a microgrid in India and modeled 11 energy management strategies using MATLAB and Simulink [10]. Following the integration of a battery energy storage system (BESS), Nishimwe et al. proposed an optimization framework to maximize the profit from a PVEVCS with a BESS [24]. The simulation found that when the EVCS load was similar to the PV output, the BESS was not needed, thus providing useful information to factor into real-world PVEVCS design decisions. Robinson et al. continued with these developing business models for PVEVCS to simplify investment for large entities in the US [25], while Liu et al. evaluated the effectiveness of a commercialized PVEVCS [26]. Using actual statistical data, the paper evaluated a theoretical PVEVCS in China. It was found that PVEVCSs have the potential to produce satisfactory environmental and economic benefits while reducing the impact and dependence on the grid. Since more PVEVCSs with storage have been developed worldwide, Liu et al. proposed a portfolio optimization model with a sustainability perspective and then verified this using a case study of 10 feasible projects in South China [27]. The existing literature indicates that PVEVCSs are a commercially feasible and effective measure to both manage EV load and reduce CO2 emissions. Research has also been conducted in modeling and developing the EV load profiles of EVCSs. Schuabe et al. used empirical data of three EV fleets in southwest Germany to simulate EV load profiles to develop a simulation model for allowing realistic representations of EV demand [28]. Shepero et al. completed a review in 2018 on PV EV charging, finding that more variation was needed in modeling to include EV's various modes of charging, like at-home and destination charging [29]. 
However, by considering the entire city of San Francisco, USA, Ko et al. revealed associations between population density, vehicle travel, and on-site PV potential for reducing greenhouse gas emissions. Moving on, Godde et al. used a Gaussian mixture to model the charging probability of EVs, and Bae et al. used a spatial and temporal model to characterize the demand [30,31]. A stochastic model was also used to simulate the EV load profiles by Soares et al., due to the inherent uncertainties [32]. Similarly, Farkas et al. stochastically modeled EV charging at charging stations and traffic queue theory, finding that the station parameters did not seriously affect the system parameters, but the charging time did [33]. Further work has been completed on simulating EV fast-charging station usage and the power requirements by Bryden et al. This work differs, as they used a petrol vehicle's driving data from the northwest United States and considered at-home and destination charging to develop a daily profile of the frequency of fast charges [34]. Brady et al. also used GPS data but from EVs collected in Waterloo, Canada [35]. The authors again used a stochastic simulation methodology to simulate daily driving schedules and, as such, their charging profiles. Dixon et al. used a Monte Carlo-based method to simulate the likely demand of EVCSs in the UK [36]. The paper found similar results, as did all of them, in the daily EV load profile patterns. Typically, the focus of the literature is on the power management method, optimizing the PV system configuration, the economic feasibility of a PVEVCS, or simulating the EVCS's demand. This paper uniquely provides focused insight into how solar PV and storage integration reduces the grid dependence of an EVCS, building upon the existing literature and providing new information and insight. The analysis is compounded by using four different charging scenarios from the UK, America, and Canada with a variety of methodologies, either via simulation or from real-life datasets. This paper also proposes further methods to improve the grid independence of the EVCS based on the results of the simulation, whereas other work only provides the results of the simulation. This paper consists of four sections. Section 2 details the design of the PVEVCS and its components. It also details the four charging profiles that have been used in the simulations and the novel metrics used to analyze the EVCS variations. Section 3 contains the results and discussion of the simulation results and meaningful conclusions of the analysis, together with proposed methods to further reduce grid dependence, before closing with the conclusion in Section 4. Materials and Methods To investigate how solar and storage affect a charging station's grid independence, first, a PV array was developed using MATLAB software. Then, using publicly available data and scholarly articles, four charging profiles were developed that characterized the demand placed upon the charging stations in four different scenarios. Using MATLAB, the different PVEVCS configurations were assessed against each of the charging profiles as a batch simulation, whose results were then evaluated using MATLAB. This allowed for analysis of the effect that the integration of solar and storage had on an EVCS's grid independence in a variety of different charging scenarios. Figure 1 shows the main phases: design the charging station, define the charging profiles, batch simulation, and analysis methods, which are explained below. 
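Since every PVEVCS configuration is evaluated against every charging profile in the same way, the batch simulation step can be summarised by a short driver script. The sketch below is illustrative only and written in Python rather than the MATLAB used in this study; simulate_year, the configuration names and the profile names are placeholders standing in for the actual models and datasets.

```python
# Illustrative outline of the batch simulation: run every PVEVCS configuration
# against every charging profile and collect the annual energy totals per run.
# simulate_year() stands in for the MATLAB simulation model used in the paper.

from itertools import product

configurations = ["off-grid", "grid", "grid + 0.5 MWh", "grid + 1 MWh"]
profiles = ["petrol station", "road usage", "Drive-4-Data", "fast charge"]

def simulate_year(config: str, profile: str) -> dict:
    """Placeholder: return annual totals E_user, E_avail, E_imp, E_exp in kWh."""
    return {"E_user": 0.0, "E_avail": 0.0, "E_imp": 0.0, "E_exp": 0.0}

results = []
for config, profile in product(configurations, profiles):
    totals = simulate_year(config, profile)
    if config == "off-grid":
        # No grid exchange is recorded for the non-grid-connected case.
        del totals["E_imp"], totals["E_exp"]
    results.append({"config": config, "profile": profile, **totals})

# The collected rows can then be written to CSV for the analysis stage.
```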
Table 1 details the important information relative to the site. There was negligible horizon shading minimizing the losses, making it an ideal location for a solar farm. It also had immediate road access, as can be seen in Figure 2, making it suitable for a charging station and allowing access for construction and maintenance. A multi-criteria decision-making methodology was used to select all components [37]. They all had to meet the following criteria: being produced within the last 5 years and by a reputable manufacturer. Due to the constraints on this project, these criteria ensured that the components selected were high-quality and reliable, since conducting a manufacturing site inspection was not possible. The NeoSun NS-410M-144 panels with 23.2% efficiency were selected [38]. They were set at a tilt angle of 38° to optimize yield and an azimuth of 0° [39]. The Huawei SUN2000-105KTL-H1 string inverter was selected [40]. The solar array used 22 inverters in the string in total, with a Pnom of 1.24, minimizing clipping while optimizing for performance all year round [41]. Lithium-ion batteries were chosen, as they are used in EVs and are widely employed [42]. The LG Chem rack JH4 SR19 4P 296Ah was selected, as it provided a continuous high-power supply over periods longer than 3 h. The batteries were connected in series to increase the capacity [43]. LG Chem was selected as they are prominent within the industry, investing in the R&D of Liion and have experience with many utility-scale storage projects across the globe. The use of 22-kW chargers, however, was omitted due to size constraints at the site, and as it was assumed all EVs arriving were fast charging, slow chargers were not needed. Therefore, one Tesla supercharger (350 kW) and three additional fast chargers (250 kW) were selected for the charging station. This provided a total output of 1100 kW for charging at any one time or an average of 275 kW. The weighted average of EV energy efficiency of the most popular EVs in 2021 was 4.15 miles/kWh, meaning a charger on average provided 1141 miles of charge per hour. Furthermore, all four chargers provided 4566 miles of charge an hour, or 109,599 miles a day, at max capacity. Tables 2 and 3 include all the relevant important information concerning the PVEVCS, such as the results and components of the system. Figure 3 shows the annual PV production, which was typical for an array in the southeast part of the UK, with peak production during July and the least production in December. Charging Profile Selection and Development This article utilized four charging profiles for the simulation that were either selected from well-established scholarly articles or derived from real-time datasets. They were the petrol station charging profile, road usage charging profile, Drive-4-Data charging profile, and the fast-charge charging profile. Figure 4 displays all four charging profiles, while Table 4 lists all the essential details. The petrol station charging profile provided an understanding of how a charging station survived when it experienced the same level of demand as a petrol station. This illuminates what would happen if the petrol stations were replaced by EVCSs at a one-to-one ratio. The road usage charging profile was developed using the site's local road's trip counts. The results of this analysis provide insight into whether the PVEVCS is sufficient to satisfy local demand. 
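As a quick check of the charging-capacity arithmetic given earlier in this section (one 350 kW charger, three 250 kW chargers and a weighted EV efficiency of 4.15 miles/kWh), the short calculation below reproduces the quoted miles-of-charge figures. It is a back-of-the-envelope illustration only.

```python
# Back-of-the-envelope check of the charging capacity reported for the station.
charger_powers_kw = [350, 250, 250, 250]     # one supercharger plus three fast chargers
ev_efficiency_mi_per_kwh = 4.15              # weighted average EV efficiency (2021)

total_power_kw = sum(charger_powers_kw)                       # 1100 kW
average_power_kw = total_power_kw / len(charger_powers_kw)    # 275 kW

miles_per_hour_average = average_power_kw * ev_efficiency_mi_per_kwh  # ~1141 mi/h
miles_per_hour_total = total_power_kw * ev_efficiency_mi_per_kwh      # ~4565 mi/h
miles_per_day_total = miles_per_hour_total * 24                       # ~109,600 mi/day

print(miles_per_hour_average, miles_per_hour_total, miles_per_day_total)
```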
The Drive-4-Data charging profile was from Waterloo, Canada and was developed using GPS data of actual EVs and their driving behaviors. Using actual EV driving behaviors improved the accuracy of the simulation results, as it factored in current EV driving behaviors like range anxiety. Finally, the fast-charge charging profile was from the US and was the only dataset that factored in at-home and at-work charging. It also factored in long journey times, which are crucial aspects of the usability of EVs, therefore making it a strong dataset to apply in the simulation. All charging profiles were selected due to their variety of methodologies and locations. Petrol station data was used from [36], who used the Google Maps "Popular Times" feature that collected positional data from smartphone users to estimate the average popularity of a petrol station in Edinburgh, UK on a Saturday. The data underwent a state sampling simulation deriving the arrival rate and a time-sequential simulation to characterize the demand on a forecourt. The fast-charge charging profile data were from [34], who employed GPS data that recorded long journeys of ICE vehicles. This paper assumed long journeys were split into two segments, where the driver fast-charged their EV and rested in between. It was assumed that for extended stops (>5 h), the car was at-home or destination charging. By this method, the number of fast charges as a function of the driving ranges could be estimated. By the same method as the road usage charging profile, it was possible to estimate the total number of EVs in the southeast. It was possible to use the average number of vehicles per household (1.41) [44] and the total number of households in the southeast (3,801,000) [45] and multiply them together to offer the total number of vehicles in the southeast. By converting the fast charges per million vehicles to the fast charges per vehicle and then multiplying it by the total number of EVs, this would offer an estimated EV arrival rate per hour in the southeast. The road usage charging profile dataset was derived from the road usage statistics of the main roads within a 10-mile radius of the charging station as an estimate for the local demand for EV charging. The roads were the M25, M23, A22, and A264. The data are publicly available from the Department for Transport Road Traffic Statistics website [46]. This data only shows the number of vehicles by class and not by propulsion type. Therefore, a method to determine the number of EVs was derived. First, there were 193,992 EVs out of the 31,695,988 licensed vehicles in the UK, meaning EVs represented 0.61% of the total vehicle stock in the UK [11]. This number was then assumed to be the national EV penetration rate which, when multiplied by the number of vehicles counted, estimated the number of EVs traveling on the local roads. After finding the daily total number of EVs, it was multiplied by the fast charge per million EVs daily variation (which was divided by 1 million to convert it to per EV) from Bryden et al. to define the daily variation in demand [34]. The Drive-4-Data charging profile used data from an article from the Department of Electrical and Computer Engineering at the University of Waterloo in Canada [47]. Hefez et al. investigated the optimal design of an electric vehicle charging station considering various energy resources. The load experienced by the charging station was obtained from Drive-4-Data, a publicly available real-world dataset of EVs maintained in Waterloo, Canada. 
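To make the profile-construction steps above concrete, the sketch below walks through the same arithmetic in Python: the national EV penetration rate is applied to local traffic counts, and the resulting EV count is combined with a fast-charges-per-million-EVs daily variation to give an hourly arrival estimate. The hourly traffic counts and the fast-charge frequency curve used here are placeholders, not the Department for Transport or Bryden et al. datasets.

```python
# Sketch of the road-usage charging profile construction described above.
# The traffic counts and fast-charge frequency curve are placeholder values.

uk_evs = 193_992
uk_vehicles = 31_695_988
ev_penetration = uk_evs / uk_vehicles            # ~0.61% of the national fleet

# Hypothetical hourly vehicle counts on the local roads (M25, M23, A22, A264).
hourly_vehicle_counts = [1_200] * 6 + [4_800] * 12 + [2_400] * 6     # 24 values

# Hypothetical "fast charges per million EVs" daily variation (per hour).
fast_charges_per_million_evs = [50] * 6 + [400] * 12 + [150] * 6

hourly_ev_arrivals = [
    count * ev_penetration * rate / 1_000_000
    for count, rate in zip(hourly_vehicle_counts, fast_charges_per_million_evs)
]

# The fast-charge profile instead scales up from households in the southeast:
southeast_vehicles = 1.41 * 3_801_000            # ~5.36 million vehicles

print(f"EV penetration: {ev_penetration:.2%}")
print(f"Estimated daily arrivals (road usage profile): {sum(hourly_ev_arrivals):.2f}")
```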
The NHTS 2009 data for light-duty vehicles were used to further distribute the PEV charging demand over the day by assuming the EVs had the same arrival pattern as petrol stations. Once the system and charging profiles were defined, multiple simulations were conducted for each variation of the system, assessing their performance against each of the charging profiles. For each iteration, the energy supplied to the user (Euser), energy generated by the array (Eavail), energy imported from the grid (Eimp), and energy exported to the grid (Eexp) were collected, where applicable, in CSV format to allow for further analysis using Excel. For the non-grid-connected system's simulation, there were no values for Eimp and Eexp, and so only Euser and Eavail were recorded.

Novel Metrics Method

To analyze the performance of the systems, multiple novel metrics were defined. These were the success ratio and health rating for the grid-connected systems (GCSs), and the energy difference and success rate for the non-grid-connected systems. The novel metrics were defined because, within the context of this project, they provided a clear perspective on the system's grid independence. Similar methods were used by Brenna et al. [48] to analyze the performance of proposed systems. The success ratio for the non-grid-connected system was calculated with Equation (1), where Eavail is the total energy available from the sun and Euser is the total energy supplied to the PVEVCS. A value close to 1 meant that the system generated sufficient energy to power the charging station. A value of 0 indicated that the system needed to import the majority of its energy to satisfy the load from the charging station. The success rate for the non-grid-connected system was calculated with Equation (2). A value closer to 100% indicates the system can satisfy demand as experienced by the charging station, and vice versa when closer to 0%. Therefore, the larger the success rate, the more effective the system is. The success ratio for the GCS was calculated with Equation (3), where Eimp is the energy imported from the grid. This follows the same logic as the non-grid-connected system success ratio. The health rating was calculated with Equation (4), where Eexp is the energy exported to the grid. If the health rating is negative, this indicates an overall energy deficit generated by the PVEVCS, and vice versa should it be positive. The magnitude demonstrates the scale of the effect.

Results

The results of the simulation are displayed, analyzed, and discussed here. A non-grid-connected system and three grid-connected systems with 0 MWh, 0.5 MWh, and 1 MWh of storage were analyzed. This provided insight into grid connection and how adding capacity affected the station using each charging profile. Figure 5 displays the success ratio of the non-grid-connected system, which had a direct connection with the PV array. The graph indicates that from March to October, for three out of four charging profiles, the system produced enough energy to offset the demand. Outside of these months, however, all profiles except the road usage charging profile did not generate enough energy, resulting in the station shutting down due to a lack of supply. For the petrol station charging profile, in January the system was unable to produce enough energy to satisfy the demand, as shown by its 0.17 rating.
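The metric equations themselves are not reproduced in the text above, so the small helper functions below only illustrate one plausible reading of the grid-connected definitions (the share of delivered energy that did not have to be imported for the success ratio, and net exports minus imports for the health rating). They are inferred from the variable descriptions and from how the metrics are interpreted in the Results, and should be treated as assumptions rather than the authors' actual formulas.

```python
# Hedged sketch of the grid-connected metrics; the formulas are inferred from
# the surrounding descriptions and should be treated as assumptions.

def success_ratio_gcs(e_user_kwh: float, e_imp_kwh: float) -> float:
    """Assumed: share of energy delivered to EVs that was not imported."""
    return (e_user_kwh - e_imp_kwh) / e_user_kwh

def health_rating(e_exp_kwh: float, e_imp_kwh: float) -> float:
    """Assumed: net energy balance with the grid; negative = overall deficit."""
    return e_exp_kwh - e_imp_kwh

# Purely hypothetical numbers for illustration.
print(success_ratio_gcs(e_user_kwh=100_000, e_imp_kwh=40_000))   # 0.6
print(health_rating(e_exp_kwh=20_000, e_imp_kwh=40_000))         # -20000
```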
In order for the PVEVCS to satisfy demand all year round, including in January, the nominal power of the array would need to increase by 588%, meaning a PV plant with a nominal power of approximately 17 MW would always generate enough power to satisfy demand. Figure 6 shows that in the petrol station charging profile, it was only able to meet demand 12% of the time throughout the year, which indicates that 88% of the time, vehicles were unable to charge. The road usage charging profile was able to supply electricity only 45% of the time, and for the Drive-4-Data charging profile and fast-charge charging profile, this was 25% and 33%, respectively. Even though the system produced sufficient energy to offset demand in the road usage charging profile, Drive-4-Data charging profile, and fast-charge charging profile, the system only supplied the demand less than half of the time. This shows that the array did not meet demand as and when it was needed, requiring other methods to be employed to better utilize the energy. This demonstrates that a grid connection is necessary for a PVEVCS to function. Figure 7 shows the difference between the power demand and supply of the nongrid-connected system with no storage. There were substantial differences between the energy supplied and the energy demand in the petrol station charging profile, indicating the system was not generating enough energy. The road usage charging profile was the only one with a substantial difference between supply and demand. Drive-4-Data and fast charge's demand were also less than the supply, indicating the PV array was sufficient, but an increase in nominal power would have benefits. Figure 8 shows that the petrol station charging profile had the lowest success ratio (SR). This was due to the magnitude of the demand that was placed upon the system. Table 4 shows that the petrol station charging profile represented a yearly load of 9.03 GWh/yr, whereas the system only produced 3.26 GWh/yr, which was 36% of the petrol station charging profile's demand. Therefore, an SR of 23% was expected. This also indicates that the EVCS was importing 5.77 GWh/yr, representing a substantial impact on the grid. The road usage charging profile had the highest SR value, indicating that it was able to supply itself 54% of the time without importing energy while being above the average across the charging profiles. However, considering that the road usage charging profile defined a daily arrival rate of 11 cars, and considering the overall production of the system, the SR would be expected to be higher. This was due to cars arriving once the sun had set, and so the PVEVCS was forced to import energy to meet demand. This suggests that there is a limit on how much integrating solar affects the EVCS's grid independence. This indicates that other measures such as on-site storage would be an effective measure to reduce the proportion of imported energy once the sun has set. The Drive-4-Data charging profile had a 51% success ratio, indicating again that the system was able to cope with the demand placed upon it while the sun was shining, while outside of these hours, energy had to be imported. Furthermore, since there was a 3% difference between the Drive-4-Data charging profile and road usage charging profile SR ratings, this again points toward there being a limit to how much solar integration affects grid independence. 
The fast-charge charging profile performed worse than the Drive-4-Data charging profile, despite having a lower arrival rate of 167 cars, 17% fewer. This was due to the fast-charge charging profile having the majority of its vehicles arriving in the evening between 4:00 p.m. and 7:00 p.m., typically when the sun was setting. Overall, this suggests that the SR of a system depends heavily on when the cars arrive and, therefore, on the pattern of the EV load profile. An EV load profile pattern similar to the PV output maximizes the SR and, as such, the system's grid independence. Figure 9 displays the health rating of the grid-connected system with no storage. The petrol station charging profile represented a large negative value of −6.7 GWh, indicating a significant energy deficit caused by the PVEVCS. This, combined with the low SR in Figure 8, suggests that the PV array was not sufficient to match demand. Therefore, the size of the array must be increased to account for the energy needed. The road usage charging profile had a high health rating of 2.15 GWh, indicating a large energy surplus that offset the energy imports. However, considering the system's 53% success ratio and its health rating together, improving the management of its surplus energy could further increase its success ratio. The Drive-4-Data charging profile and fast-charge charging profile both had low negative health ratings of −0.559 GWh and −0.213 GWh, respectively. This indicates a small impact on the grid as a result of importing slightly more energy to meet demand than was exported. Figure 10 shows that the petrol station charging profile demanded a substantial amount of energy. Since the supply was 36% of the demand, the system had to import the remainder which, in turn, produced a large negative health rating. The road usage charging profile exported a large amount of surplus energy to the grid, whereas the Drive-4-Data charging profile and fast-charge charging profile had similar levels, although the Drive-4-Data charging profile did import nearly twice the amount it exported. The systems in the Drive-4-Data charging profile and fast-charge charging profile both exported significant amounts of electricity to the grid, reducing their impact and acting as prosumers to the grid.

Grid-Connected Systems with 0 MWh, 0.5 MWh, and 1 MWh of Storage

In Figure 11, for both the petrol station charging profile and the Drive-4-Data charging profile, the addition of storage had no effect. For the petrol station charging profile, this was because the production was approximately one third of the annual demand; therefore, the addition of 1 MWh of storage was ineffective, as the true issue was a lack of supply. For the Drive-4-Data charging profile, since a large proportion of the arrivals were during periods of low production, the batteries had a limited opportunity to charge and were therefore ineffective. A solution to this would be to introduce alternative BESS power management strategies. The road usage charging profile improved but plateaued as the storage increased. This reinforces the concept of a limit on how much solar integration can improve the charging station's grid independence, since solar power is entirely dependent on solar radiation, which is low in the winter and uncontrollable.
The fast-charge charging profile had a significant improvement of 12% through the addition of storage, although the scale of improvement reduced as the storage increased, reinforcing the concept of a limit as discussed for the road usage charging profile. The fast-charge charging profile improved the most, since the majority of vehicle arrivals were during the evening hours. The low demand in the morning allowed excess electricity generated during days of good sun coverage to be stored and then used during peak demand between 3:00 p.m. and 6:00 p.m. This ultimately reduced the quantity of energy imported by the EVCS, further reducing its grid dependence. It also reduced the impact on the grid during peak demand, helping manage local grid loading. This indicates that if a charging station's demand is characterized by a peak during the evening, a BESS is an effective method to reduce the PVEVCS's impact on the grid when there is good sun coverage.

Figure 11. The success ratio of the grid-connected systems with incrementing storage.

Figure 12 demonstrates that the introduction of storage had a negligible effect on the health rating, and the energy imports and exports of the systems remained largely the same. This suggests that storage is not always an effective method to employ in order to utilize generated electricity more efficiently; its effectiveness depends on the array's production and the charging station's demand. Only in the fast-charge charging profile was there a reduction in energy imports. This was due to the pattern of vehicle arrivals beginning in the morning and increasing as the day continued. The low demand and high PV output in the morning allowed excess energy to be stored by the batteries and then used later on, reducing the energy imported and exported. Overall, this reduced the PVEVCS's impact on the grid. In Figure 13a, the success ratio of the grid-connected system increased but experienced a tapering effect, indicating that the integration of storage improved the PVEVCS's grid independence but with a limited effect. This shows, however, that adding 1 MWh of storage improved the success ratio by 10% on average across all four charging profiles. It also reduced the health rating, suggesting that the increase in storage increased the overall energy deficit of the system, although the decrease was only 1%, so there was a small trade-off for the increase in grid independence. Figure 13b shows that increasing the BESS capacity reduced the quantity of energy imported and exported. This was because the energy that would have been exported was stored in the BESS and then used at a later date when production did not meet demand, therefore reducing the energy imports. One can see, however, that on average the introduction of 1 MWh of storage had a small impact, suggesting that a greater increase in BESS capacity is needed, together with an increase in the nominal power of the PV array. Overall, the data show that the integration of solar and storage improved the grid independence of the PVEVCS, albeit with a limited effect. This was due to solar power's dependence on the weather and sun coverage. Therefore, if the EV load profile presents a similar pattern to the PV output, then the integration of solar and storage is an effective method for reducing the EVCS's dependence on the grid. Dissimilar patterns suggest it will be less effective and that alternative methods should be used to provide a direct power source to the EVCS.
Further Methods Proposed to Improve the Grid-Connected System

Concerning grid independence, based on the results, there are several methods that will improve the grid-connected system's grid independence and overall performance. Some measures, such as new BESS power management strategies, can better utilize excess electricity to reduce the system's dependence on the grid. Figure 5 shows that the PV production did not offset the demand from the EVCS. A method to improve this involves increasing the nominal power of the PV array by installing more modules. If there are site constraints, constructing an additional array to offset the energy imports of the EVCS is effective, or similarly, a sleeved PPA can be employed. Although this does offset more energy, the EVCS still impacts the local grid. Therefore, using modules with higher peak power or bifacial modules may be more effective. Further forecasting would be needed to evaluate its cost-effectiveness, factoring in capital expenditure and future electricity market prices. The additional production from the larger array provides another revenue stream and better prepares the EVCS as EV uptake increases. Increasing the capacity is only effective at offsetting the energy imported from the grid. It only reduces the impact during peak grid demand when the load and PV output have a similar pattern. When this is not the case, other methods are necessary.

Increasing the BESS Capacity

In Figure 11, the success ratio increases with the storage capacity, indicating that storage improved the grid independence of the system. The fast-charge charging profile experienced the most improvement, as there was low demand during the morning, allowing the batteries to charge for later use during peak periods. Therefore, it follows that if the storage is increased further, the PVEVCS's grid independence will also increase. Since the BESS depends entirely on the solar output of the array, it also has a limited effect on grid independence during days with low solar irradiation. For the other charging profiles, such as the petrol station charging profile and the Drive-4-Data charging profile, there was no improvement. This indicates that the effectiveness of integrating a solar storage system is heavily dependent on the demand profile of the EVCS. The fast-charge charging profile had the greatest improvement in its success ratio, indicating that it benefited most from the addition of storage. Figure 14a,b demonstrates why: demand was low during peak power production at 12:00 p.m., enabling the system to export and store the excess electricity. As production reduced with the setting sun and demand increased, the stored energy was discharged to meet the demand. At 7:00 p.m., the energy imported matched the load, indicating that the BESS was depleted, as there was no power being supplied via the array and no other power source. This, together with the quantity of exported energy in the morning, suggests that increasing the capacity of the BESS would enable the PVEVCS to use that stored electricity past 7:00 p.m., improving its grid independence and reducing its impact on the grid, most significantly during peak grid demand. Figure 14b also indicates that with an increased capacity, excess electricity can be better utilized during periods of low demand even during the winter, since at 3:00 p.m. energy was still being exported, which could instead be stored within the BESS for later use.
In Figure 15a,b, the high demand during the morning and late-night hours resulted in the system importing much of its energy throughout the day. Figure 15a shows that by 5:00 p.m., the stored energy on average had been depleted, and again the system had to import electricity to make up the difference. Figure 15b illustrates that by 12:00 p.m., the system was importing energy to meet the demand. This suggests that increasing the capacity for the Drive-4-Data charging profile will have a negligible effect. Further emphasizing this, Figure 16a,b shows no change in its performance even with 1 MWh of storage, indicating that other methods are necessary to reduce the EVCS's dependence on the grid.

Employing Alternative BESS Power Management Methods

Employing alternative BESS power management methods could improve the BESS's effectiveness. As demonstrated by Figure 15a,b, when the demand was high and there was little or no PV production, the BESS had little to no chance to charge and store energy. Therefore, by charging the BESS during periods of low demand and low cost and discharging between 8:00 a.m. and 7:00 p.m. when demand is high, the impact on the grid is reduced, as the PVEVCS requires less or no imported energy to meet demand. The system would then import similar quantities of energy during the night as it would during the day, when grid demand is lower and electricity is cheaper, thus minimizing its impact during peak grid demand. Financial forecasting would be necessary to identify whether the cost of implementing the power management method is cheaper than importing the energy during peak grid demand. However, if the goal is to minimize dependence on the grid, then it is effective. Another method is to use trend-based prediction modeling to predict periods of high demand for the PVEVCS and for the grid, and to discharge during these periods in order to reduce the cost and grid strain. The limitation here is that a large dataset of real-world usage statistics of the PVEVCS has to be recorded first, after construction of the PVEVCS.

Limitations

The EV penetration rate was calculated assuming an even distribution across the UK. However, this may vary due to the variation in total charging points by region. An improved method would be to estimate the number of EVs using the total number of charging points and their usage. This would improve the accuracy of the charging profiles and, as such, the results of the simulation. The EV battery capacity is the weighted average of the top three most popular newly licensed vehicles in the UK in 2021. However, since the arrival of new vehicles at the charging station has uncertainties and fluctuations, selecting the battery capacity of each arriving vehicle randomly from a weighted list of newly licensed EVs' capacities would account for this. The SoC of each vehicle was fixed at 20% on arrival and 90% on departure. Having the SoC follow a Gaussian or stochastic distribution would improve the accuracy of the simulation. The length of stay is currently a function of the battery capacity, SoC, and charger output. Hence, once a car is charged to 90%, another instantly begins charging, which is unrealistic. By having the time after charging follow either a Gaussian or stochastic distribution limited between two set lengths of time, the accuracy would be improved. Having access to a UK EV driving dataset would allow the PVEVCS to be assessed using data from the actual users of the charging station.
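As a sketch of the stochastic refinement suggested in the limitations above, the snippet below samples the arrival and departure SoC from truncated normal distributions and adds a bounded random idle time after charging, instead of fixed 20%/90% values and instant turnover. The distribution parameters are illustrative assumptions, not fitted to any dataset.

```python
# Sketch of a stochastic charging session, replacing the fixed 20% arrival /
# 90% departure SoC and the instant bay turnover. Parameters are illustrative.

import random

def sample_session(battery_kwh: float, charger_kw: float) -> dict:
    soc_arrival = min(max(random.gauss(0.20, 0.08), 0.05), 0.50)
    soc_departure = min(max(random.gauss(0.90, 0.05), 0.70), 1.00)
    charge_time_h = battery_kwh * (soc_departure - soc_arrival) / charger_kw
    idle_time_h = random.uniform(0.0, 0.25)      # bay stays occupied after charging
    return {
        "soc_arrival": soc_arrival,
        "soc_departure": soc_departure,
        "occupancy_h": charge_time_h + idle_time_h,
    }

print(sample_session(battery_kwh=64.0, charger_kw=250.0))
```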
It would improve the realism of the simulation relative to EV charging in the UK, thus providing valuable insight. The current resolution of the simulation is limited to hourly steps. Increasing the resolution to 5-min steps would increase the resolution by a factor of 12 and allow for more detailed daily and hourly analysis. This affords a better understanding of the PVEVCS's performance seasonally during the morning, afternoon, evening, and night, where the conditions are different. All four charging profiles are currently averaged over the year to account for seasonal modulation. However, since the production is not, modulating the charging profiles improves the accuracy of the results, as it factors in seasonal variation. Several factors such as driving behaviour, trip count, and distance alter throughout the year, which affects the results. Previous studies have considered at-home and destination charging in characterizing the demand of an EVCS. This study only accounts for this by using the fast-charge charging profile, while the other three charging profiles only consider a singular location. Selecting or developing further charging profiles that do factor in the multiple modes of charging would increase the accuracy of the simulation. Conclusions EVCSs remain a critical component for the widespread adoption of EVs. However, the substantial load they will place on the grid needs to be managed. One method is to integrate both solar and energy storage in order to reduce the EVCS's dependence on the grid. This paper investigated how integrating solar and storage affects the grid independence of an EVCS. Multiple simulations were run to analyze the performance of four variations of a PVEVCS using four different charging profiles: the petrol station charging profile, the local road usage charging profile, the Drive-4-Data charging profile, and the fastcharge charging profile. The different PVEVCS configurations were non-grid-connected, grid-connected, grid-connected with 0.5 MWh of storage, and grid-connected with 1 MWh of storage. It was found that the addition of solar had a significant effect in offsetting the demand and supplying electricity when the daily EV load profiles and PV outputs had similar patterns. The addition of a BESS further improved the grid independence for only two charging profiles: the road usage and fast-charge charging profiles. This was due to the pattern of the EV load profile peaking in the evening allowing the BESS to recharge during the morning and discharge during the evening peak. The petrol station and Drive-4-Data had no change in grid independence. The petrol station charging profile placed a considerable amount of demand on the system, and hence, storage had no effect, whereas the Drive-4-Data charging profile had its demand spread throughout the 24-h period, limiting the BESS's ability to store a charge. This indicates that a BESS is only effective if there is enough of an opportunity for it to recharge, so the EV load profile's pattern factors heavily into whether a BESS is effective. Alternative BESS power management methods have been proposed in order to improve its effectiveness but are untested. Further research should be conducted on testing the alternative BESS power management methods to build upon this research. 
Work on optimizing the BESS capacity as a function of the EV load profile and characterizing the EV load profile while considering a greater number of factors, such as local population statistics, local road trip count, and local area EV penetration rates, would improve the accuracy of the results.
9,443
2021-11-26T00:00:00.000
[ "Engineering", "Environmental Science" ]
1-(2-Hydroxyethyl)-3-(3-methoxyphenyl)thiourea In the title compound, C10H14N2O3S, the 3-methoxyphenyl unit is almost planar, with an r.m.s. deviation of 0.013 Å. The dihedral angle between the benzene ring and the plane of the thiourea unit is 62.57 (4)°. In the crystal, N—H⋯O and O—H⋯S hydrogen bonds link the molecules into a three-dimensional network.

Comment

Melanin is the pigment responsible for the color of human skin, and it is formed through a series of oxidative reactions in the presence of the key enzyme tyrosinase (Ha et al., 2007), which converts tyrosine into melanin. It is secreted by melanocyte cells distributed in the basal layer of the dermis. Its role is to protect the skin from ultraviolet (UV) damage by absorbing UV sunlight and removing reactive oxygen species. Therefore, its inhibitors are target molecules for developing anti-pigmentation agents. Numerous potential tyrosinase inhibitors have been discovered from natural and synthetic sources, such as ascorbic acid (Kojima et al., 1995), kojic acid (Cabanes et al., 1994), arbutin (Casanola-Martin et al., 2006) and tropolone (Son et al., 2000; Iida et al., 1995). Some thiourea derivatives, such as phenylthiourea (Thanigaimalai et al., 2010; Klabunde et al., 1998; Criton, 2006), alkylthiourea (Daniel, 2006), thiosemicarbazone and thiosemicarbazide (Liu et al., 2009), have also been described. However, only a few of the reported compounds are used in medicinal and cosmetic products because of their lower activities, poor skin penetration, or serious side effects. Consequently, there is still a need to search for and develop novel tyrosinase inhibitors with better activities together with lower side effects. To complement the inadequacy of current whitening agents and maximize the inhibition of melanin creation, we have synthesized the title compound, (I), from the reaction of 3-methoxyphenyl isothiocyanate and ethanolamine under ambient conditions. Here, the crystal structure of (I) is described (Fig. 1). The 3-methoxyphenyl unit is essentially planar, with an r.m.s. deviation of 0.013 Å from the corresponding least-squares plane defined by the eight constituent atoms. The dihedral angle between the benzene ring and the plane of the thiourea moiety is 62.57 (4)°. In the crystal, N—H⋯O and O—H⋯S hydrogen bonds link the molecules into a three-dimensional network (Fig. 2, Table 1). The H atoms of the NH groups of thiourea are positioned anti to each other.

Experimental

Ethanolamine and 3-methoxyphenyl isothiocyanate were purchased from Sigma Chemical Co. Solvents used for organic synthesis were distilled before use. All other chemicals and solvents were of analytical grade and were used without further purification. The title compound (I) was prepared from the reaction of 3-methoxyphenyl isothiocyanate (0.4 ml, 1 mmol) with ethanolamine (0.2 ml, 1.2 mmol) in acetonitrile (6 ml). The reaction was completed within 30 min at room temperature. The reaction mixture was filtered and washed with dry n-hexane. Removal of the solvent under vacuum gave a white solid (80%, m.p. 398 K). Single crystals were obtained by slow evaporation of an ethanol solution held at room temperature.
763
2010-09-04T00:00:00.000
[ "Chemistry" ]
Classification of Agriculture Farm Machinery Using Machine Learning and Internet of Things : In this paper, we apply the multi-class supervised machine learning techniques for classifying the agriculture farm machinery. The classification of farm machinery is important when performing the automatic authentication of field activity in a remote setup. In the absence of a sound machine recognition system, there is every possibility of a fraudulent activity taking place. To address this need, we classify the machinery using five machine learning techniques—K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF) and Gradient Boosting (GB). For training of the model, we use the vibration and tilt of machinery. The vibration and tilt of machinery are recorded using the accelerometer and gyroscope sensors, respec-tively. The machinery included the leveler, rotavator and cultivator. The preliminary analysis on the collected data revealed that the farm machinery (when in operation) showed big variations in vibration and tilt, but observed similar means. Additionally, the accuracies of vibration-based and tilt-based classifications of farm machinery show good accuracy when used alone (with vibration showing slightly better numbers than the tilt). However, the accuracies improve further when both (the tilt and vibration) are used together. Furthermore, all five machine learning algorithms used for classification have an accuracy of more than 82%, but random forest was the best performing. The gradient boosting and random forest show slight over-fitting (about 9%), but both algorithms produce high testing accuracy. In terms of execution time, the decision tree takes the least time to train, while the gradient boosting takes the most time. Introduction As the world is progressing towards the fourth Industrial Revolution [1], the use of state-of-the-art technologies, including the Internet of Things (IoT) [2], Machine Learning [3] and Cloud Computing [4], are becoming mainstream. Agriculture contributes a big chunk to the world economy [5] and modernizing it can drastically increase the production of farms, thus causing an overall growth in the world economy. Agriculture modernization requires the use of agriculture machinery to plant, cultivate and harvest the crops. The use of the right machinery at the right time is important for getting increased productivity. Typically, the on-field machinery usage is manually supervised and verified as the usage process and remote sensing is prone to fraudulent activities. There are various ways a fraudulent activity could take place. For example, a reservation was made, but the machinery was never used or only a partial activity was performed. Similarly, a reservation was made for a specific machinery type, but a wrong machinery was used. There is also a possibility that no machinery was used at all. This is possible as the machines don't come with a remote recognition feature (or mounted devices). To mitigate this and remotely authenticate the use of agriculture machinery, a two-pronged solution is required. First, determine the location and work area of the machinery. Second, determine the farm machinery type used for activity. The first problem, which requires the determination of the work area, was solved using the IoT (GPS), convex hull and AI algorithms. The methods for this are already published in our previous work [6]. 
The second problem, which is the determination of machinery type used, is being addressed in this paper. The solution is sought using the IoT-backed remote sensing and supervised machine learning algorithms. Machine learning, together with big data, is revolutionizing agriculture and producing better results than before. The classification is performed using neural networks, K-Nearest Neighbor (KNN) and Naïve Bayes classifier [7,8]. It is used in agriculture for machinery, crop, soil and livestock management [7]. The Support Vector Machine (SVM) algorithm has been used for soil texture classification and was found to be helpful in the choosing of crops [9]. Similarly, decision tree was used in performing the classification for predictive analysis [10]. These methods are also largely used for the classification of agriculture activities, including harvesting, bed-making, transplantation, walking and standstill. Moreover, there are studies on the classification of faults in rotating machinery using the vibration signals [11][12][13]. Furthermore, IoT and deep learning techniques have been used for sensing soil temperature, nutrients and humidity, controlling and analyzing water consumption [14]. To our knowledge, the machine learning algorithms have not been applied to the recognition of agricultural farm machinery. If applied, we believe that the modeling work will be useful for improving agricultural field operations and its remote monitoring. We are interested in developing a robust classification model that can be used to automatically classify the farm machinery. Historically, vibration and tilt have been endorsed as important characteristics of machines [15,16]. This endorsement was further strengthened by our analysis, where we collected data about the vibration and tilt of machinery from leveler, rotavator, and cultivator, and observed that all three types of machinery are distinguishable in terms mean and standard deviation. We discovered that the deviation between the data is because of the different vibrations and tilt caused by the machinery during usage. More information on this analysis is given in Section 4. We apply five supervised machine learning techniques-K-Nearest Neighbor, Support Vector Machine, Decision Tree, Random Forest and Gradient Boosting. Our experimental results reveal that all five machine learning techniques have shown good accuracy when classifying the machinery type on the vibration and tilt alone. The accuracy improves further when the vibration and tilt are used together for the training of the models. Out of the five techniques, the Random Forest showed more than 90% accuracy. The Gradient boosting and random forest show slight over-fitting (about 9%), but both algorithms produce the highest testing accuracy. It is also important to compare the models in terms of execution time. We found the decision tree as the fastest taking the least time to train, while the gradient boosting was the slowest (taking the most time to train). Since, the models need continuous re-training to keep up with accuracy and recognize new patterns, we outline a methodology for re-training of the models. The rest of the paper is structured as follows-Section 2 describes the related work, followed by Data Collection and Processing Framework in Section 3. Section 4 identifies the features required for classification. Section 5 discusses the classification algorithms in detail. 
Section 6 prepares the data for model fitting, applies the machine learning algorithms and evaluates the models and features in terms of accuracy. Section 7 outlines a mechanism for the deployment of models on the cloud, and its retraining. Discussion and concluding remarks are presented in Section 8. Related Work IoT and machine learning algorithms have been extensively used in agriculture for automation and analysis [17,18]. Several recent reviews can be found on the use of classification in agriculture statistical analysis [19]. Sharma et al. used a smart phone carried by a farmer and recorded accelerometer data to perform classification of agriculture activities, including harvesting, bed-making, transplantation, walking and standstill. The classification was performed using neural networks, KNN and Naïve Bayes classifiers, where neural networks was found to be the most accurate [8]. In the same way, different machine learning methods have been deployed for soil type classification including naïve bayes, SVM and deep learning [20][21][22]. Alternatively, Barman et al. used the support vector machine learning algorithm for soil texture classification. The classification of soil texture could be helpful when decide to cultivate the crops [9]. The predictive analysis was performed using the decision tree [10]. Dan et al. used big data analytics to perform data analysis on agriculture machinery, saving a lot of considerable time and cost [23]. In another study, the machine learning algorithms were applied towards the classification of faults in rotating machinery using the vibration signals [11][12][13]. To predict the growth of plants and crops, Singh et al. have developed a classification system using differential evolution algorithms [24]. In another work, Zhao et al. used the vision-based classification approach to navigate the agriculture machinery for efficiency [25]. The deep learning models are extensively applied towards the precision agriculture on data collected using the internet of things [26]. While Eleni et al. have developed a cloud middleware approach to support precision agriculture with massive data generated from IoT [27]. The recent trends in sensors and IoT were discussed in detail in the review work of Laura and Lorena [28]. Several IoT architectures are developed for smart agriculture [29][30][31]. Artificial Intelligence (AI) and cloud computing together with big data is revolutionizing the agriculture sector and producing better results than before. AI in agriculture is applied on sensors data for machinery, crop, soil, and livestock management [7,32]. Bhavani et al. proposed an IoT-based agriculture system to overcome the economic losses by predicting and preventing the harmful diseases affecting the farm [10]. In another study, Waleed et al. showed the application of IoT in calculation of field activity. They found that GPS could accurately record the field location and the operation time of agriculture machinery [6]. In a similar study, a yield monitoring system was proposed to efficiently collect the GPS data and perform operational analysis of the farm machinery [33]. To analyse the agriculture machinery operational cost, Sopegno et al. have developed a smart web and mobile platform called AMACA (Agricultural Machine App Cost Analysis). The application could be used on own machinery or to hire an agriculture machinery service [34]. Gard et al. 
described the use of IoT and deep learning for sensing soil temperature, nutrients and humidity, controlling and analyzing water consumption [14]. Prathibha et al. used the temperature and humidity sensors to monitor the agriculture field. The data from sensors was transferred to farmers mobile for analysis and storage [35]. To fulfill the demand of high-quality agriculture machinery, Zhang et al. have devised a design of agriculture machinery service management system that is based on latest IoT and cloud technology. It consists of a mobile application and a web server for effective monitoring of the system [36]. Data Collection, Storage and Processing In order to perform the model fitting, we need to collect data on the vibration and tilt of agricultural machinery. The platform that we have developed for the collection of data and its processing consists of an IoT module, a smartphone application and cloud modules as shown in Figure 1. The processing of data happens on the cloud supported by the event-handlers, data-storage units and machine learning modules. We use Amazon Web Services for cloud services. Details of each of the components are given below. Iot Module and Mobile Application We use the IoT Module TI CC2650 Sensor Tag developed by Texas Instruments as shown in Figure 2. The Sensor Tag has a Bluetooth low energy MCU and several sensors are mounted on it (IR Temperature, Movement 9 axis (accelerometer, gyroscope, magnetometer, and humidity, etc.). The main features of the IoT module are lower power consumption with longer battery life from the lithium battery cell, high performance ARM Cortex M3, complete development system and extreme flexibility for IoT based applications. Data from movement sensors (accelerometer and gyroscope) from the sensor tag are used to generate data for the machine learning algorithms (responsible for machinery classification). The data consist of 16 bits or 2 bytes for each axis (total 12 bytes). The data are read using a Bluetooth connection with the service provider smartphone app. The service provider application on the driver's smart phone integrates the GPS data (longitude, latitude), date and time with sensors data coming from the IoT module. After integration, these grouped data in CSV format are sent to the cloud via 3G or 4G/LTE interface in the smartphone. The Cloud Data Store and Processing For all the cloud operations (storage, processing and communication), we use the Amazon Web Services [37]. The smartphone app records the data from agriculture machinery and sends it to the cloud. The API gateway that receives the data is implemented using the lambda event-driven functions. The cloud has two storage classes-a simple storage and relational database [38]. The data from sensors are archived in non-relational storage also known as S3. The relational database (implemented using the RDS) stores the output of machine learning algorithms [39]. Machine Learning Modules The machine learning modules are hosted on the cloud. It implements several classification models. The algorithms are run over the IoT data collected using the IoT module. The machine learning models process the received data and classify it. The classification results are outputted to the relational storage. More details about the classification models are given in Section 5. Features Extraction Before applying any machine learning techniques, it is important to identify the distinguishing features of the targeted machines. 
Our analysis shows that the tilt and vibration of machinery are distinguishing features that can help train our classification models. We record the vibration and tilt using the IoT device placed on the machinery. The IoT device contains two sensors that we use to train our classification model: (1) a 3-axis accelerometer and (2) a 3-axis gyroscope. The accelerometer is used to measure the vibrations, and the gyroscope tells us about the tilt. We are using three types of agriculture machinery - leveler, rotavator, and cultivator. All three machines are tractor-drawn implements, with different uses and operations. The system could be scaled up and additional machinery could also be used; for now, we experiment with the above three machinery types only. We place the IoT module on these three implements and record the data. The data are recorded at a rate of one sample per second. After recording sufficient data, a subset is used to analyze and visualize the readings. The sample is visualized using the matplotlib package [40] of Python. In Figure 3, it can be observed that the data for all three types of machinery are distinguishable in terms of vibration along the three axes. The cultivator and leveler have low vibrations, while the rotavator shows greater vibrations. For the accelerometer, a value of 1 corresponds to 9.8 m/s² and a value of −1 to −9.8 m/s² (the negative sign represents the opposite direction), where 9.8 m/s² is Earth's gravity. Each reading is taken in terms of Earth's gravity. It can be observed that the rotavator readings are mostly greater than 1 or less than −1, while the cultivator and leveler readings remain within that range. The x-axis readings for the cultivator are observed to be low; however, the y-axis and z-axis readings are lower for the leveler compared to the cultivator. We now take a look at the mean and standard deviation for the given sample of data, as shown in Table 1. The mean is the average of the readings, while the standard deviation tells us about the spread of the data. It can be observed that there is not much of a difference in the means for all types of machinery. However, there are variations in the standard deviation: it is much higher for the rotavator compared to the other machines, and the difference between the standard deviations for the cultivator and leveler is also significant. This deviation between the data is because of the different vibrations caused by the machinery. Figure 4 visualizes these data points on a scatter plot and further establishes this distinguishable relation. It can be observed that the data points of each machinery type are distinguishable from each other. The rotavator data show more spread, while the leveler and cultivator data points are more compact; still, the points can be separated from each other. The analysis shows a clear opportunity for machine learning algorithms to classify the farm machinery based on the vibration and tilt. In Section 5, we discuss the classification algorithms that we use for the classification of agriculture farm machinery.
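The per-machine summary statistics (Table 1) and scatter plot (Figure 4) described above can be reproduced with a short pandas/matplotlib script. The following is a minimal sketch only: the file name readings.csv and the column names (ax, ay, az, gx, gy, gz, machinery) are assumptions for illustration, not the field names actually used in the study.

```python
# Illustrative sketch of the exploratory analysis (cf. Table 1, Figures 3-4).
# Assumed CSV columns: ax, ay, az (accelerometer, in g), gx, gy, gz (gyroscope),
# and a 'machinery' label in {'leveler', 'rotavator', 'cultivator'}.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("readings.csv")  # hypothetical file of 1 Hz sensor samples

# Per-machine mean and standard deviation of each axis (cf. Table 1).
summary = df.groupby("machinery")[["ax", "ay", "az", "gx", "gy", "gz"]].agg(["mean", "std"])
print(summary.round(3))

# Scatter plot of two accelerometer axes, coloured by machinery type (cf. Figure 4).
fig, ax = plt.subplots()
for name, group in df.groupby("machinery"):
    ax.scatter(group["ax"], group["ay"], s=8, alpha=0.5, label=name)
ax.set_xlabel("accelerometer x (g)")
ax.set_ylabel("accelerometer y (g)")
ax.legend()
plt.show()
```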
Classification Algorithms In this paper, we focus on five multi-class supervised classification algorithms - K-Nearest Neighbor, Support Vector Machine, Decision Tree, Random Forest and Gradient Boosting. These types of techniques can also be called algorithm adaptation techniques. Finer details about each algorithm are given as follows. K-Nearest Neighbor (KNN) K-Nearest Neighbors is a supervised machine learning algorithm [41]. The data points are trained corresponding to their class label. For the point whose class is to be predicted, the distance to the nearest 'K' points is calculated, where K could be any number. The distance is calculated using the Euclidean distance formula. The label of the majority of the K nearest points is returned as the predicted class. In agriculture, the KNN algorithm was found to be very effective for the classification of different grains/grain cultivars. Support Vector Machine (SVM) Support Vector Machine is a supervised learning algorithm that classifies data based on separators [42]. The separators are hyperplanes that distinguish the data based on the training class labels. For example, in a two-dimensional space where two data variables are present, the hyperplane could be a line dividing the plane into two parts, with each class lying on either side of the line. A similar analogy is followed for higher dimensions, where data points are separated by hyperplanes. Essentially, SVM is about finding the hyperplanes that best separate the data classes. The predicted class in SVM is determined by the side of the hyperplane on which the data point falls. Support Vector Machine is a structural risk minimization based learning algorithm. As a popular machine learning algorithm, SVM has been widely used in many fields, such as information retrieval and agriculture, for crop and soil classification [22]. Decision Tree (DT) Decision Tree is a supervised learning algorithm primarily used for classification [43]. The basic intuition behind a decision tree is to map out all possible decision paths in the form of a tree. It forms a flow-chart-like tree structure, where each node denotes a test on an attribute, and each branch represents the outcome of the test. The end node (terminal node) holds the class label. Every time a class is to be predicted, the data features pass through a certain decision path and the class label is predicted at the terminal node. This technique is capable of dealing with both complete and incomplete data and is applied to classification problems for all kinds of agriculture datasets. Random Forest (RF) Random Forest is a supervised learning algorithm that uses ensemble learning techniques to make a strong classifier out of weak classifiers [44]. This bagging method is used to train the models, and it is responsible for the increased performance. As the name suggests, a random forest is a forest of decision trees: random forest builds multiple decision trees, which work as weak classifiers, and the results of each weak classifier (decision tree) are merged together, in parallel, to make a strong classifier. Random Forest is widely used in crop classification and has the ability to predict crop yield corresponding to current climate and biophysical change [45]. Gradient Boosting (GB) Unlike Random Forest (which is a bagging method), Gradient Boosting is a supervised learning ensemble method that uses boosting for training [46]. It trains weak classifiers in a sequential manner rather than in parallel. The output of the first weak classifier is given to the second weak classifier; this way, the data points misclassified by the first are improved upon by the second. The practice is continued until the specified number of weak classifiers has been built in sequence. The weak classifiers are then merged together to form a strong classifier. In gradient boosting, the loss function of the weak classifier is optimized using a learning rate, and the next classifier has a decreased loss. It is widely used in agriculture for area estimation and irrigation planning [47].
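As the study implements these five classifiers with scikit-learn (Section 6), a minimal sketch of how they can be instantiated is given below. The default settings shown here are placeholders; the hyperparameters actually used are tuned by grid search in the next section.

```python
# Illustrative set-up of the five multi-class classifiers in scikit-learn.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

classifiers = {
    "KNN": KNeighborsClassifier(),        # distance-based voting among K neighbours
    "SVM": SVC(kernel="rbf"),             # separating hyperplanes in kernel space
    "DT": DecisionTreeClassifier(),       # single flow-chart-like tree
    "RF": RandomForestClassifier(),       # bagging ensemble of trees (parallel)
    "GB": GradientBoostingClassifier(),   # boosting ensemble of trees (sequential)
}
```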
Dataset Initially, we train and test our models on a dataset of a total of 14,488 data points. The data are recorded at a frequency of one sample per second. The provided sample dataset includes 3-axis accelerometer and gyroscope data along with the label for machinery type, as shown in Figure 5. Initially, we train our model for three machines only (cultivator, leveler and rotavator); however, this system could be expanded and more machines could be added. Data Pre-Processing and Normalization The data require pre-processing to make them suitable for model fitting. As the data are numerical and continuous, we need to normalize each feature. This way, each feature will be equally represented (having an equal weight). The goal of normalization is to bring all features to a common scale without affecting their unique patterns [48]. The data are split into two parts - a training set and a testing set. The training set is used for training our machine learning models, while the testing set is used for evaluation of the trained models. As a known procedure, we split the data with a 70:30 ratio, with 70% of the data used for training and 30% used for testing. Implementation and Tuning of Models To apply the classification algorithms to our data, we used an open-source package available for Python called scikit-learn [49]. Scikit-learn is an all-in-one package for machine learning, which helps us pre-process our data, implement various machine learning algorithms and evaluate them. We implement five machine learning models and compare them in terms of accuracy and training time. Tuning of Models Hyperparameters are properties that govern the entire training process. Manually tuning hyperparameters is a time-consuming process and very hard to keep track of: it is important to remember which hyperparameters have been tried and which have not. Grid Search is an algorithm used to automate the process of finding the optimal hyperparameters of a given machine learning algorithm. Grid Search is given a list of parameters, which are tested one by one until the optimal parameters are found, and it uses a cross-validation technique to validate the models in terms of accuracy. In Table 2, we have applied Grid Search using the GridSearchCV function available in scikit-learn. The process involves tuning the parameters of the chosen ML algorithms. Starting with KNN, the number of nearest neighbors K was tuned, and the best value was found to be 6. For SVM, the best-performing kernel (RBF) was selected and the hyperparameters C and Gamma were tuned; both were found to have an optimal value of 1. For the decision tree, we used Grid Search to specify the depth of the tree, and the optimal accuracy was obtained at a depth of 15. For Random Forest, the number of estimators (weak learners) was tuned, and 50 estimators showed a promising result. The gradient boosted trees had an optimal depth of 8, and the optimal learning rate was found to be 0.2. After finding the best parameters, we evaluated each model in terms of accuracy.
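A compact sketch of the normalization, 70:30 split, and grid search described in this section is shown below. It reuses the hypothetical readings.csv and column names assumed earlier, and the parameter grids are assumptions built around the reported optima (K = 6; RBF SVM with C = 1 and Gamma = 1; tree depth 15; 50 forest estimators; boosted trees of depth 8 with a learning rate of 0.2); the exact grids searched in the study are not stated.

```python
# Illustrative pre-processing, 70:30 split, and GridSearchCV tuning.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

df = pd.read_csv("readings.csv")                        # hypothetical dataset (14,488 rows)
X = df[["ax", "ay", "az", "gx", "gy", "gz"]]            # accelerometer + gyroscope features
y = df["machinery"]                                     # leveler / rotavator / cultivator

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)  # 70:30 split

# Candidate grids; the reported optima (e.g. K = 6, depth = 15) are included in each grid.
search_space = {
    "KNN": (KNeighborsClassifier(), {"clf__n_neighbors": [4, 6, 8, 10]}),
    "SVM": (SVC(kernel="rbf"), {"clf__C": [0.1, 1, 10], "clf__gamma": [0.1, 1, 10]}),
    "DT": (DecisionTreeClassifier(), {"clf__max_depth": [5, 10, 15, 20]}),
    "RF": (RandomForestClassifier(), {"clf__n_estimators": [25, 50, 100]}),
    "GB": (GradientBoostingClassifier(), {"clf__max_depth": [4, 8], "clf__learning_rate": [0.1, 0.2]}),
}

for name, (estimator, grid) in search_space.items():
    pipe = Pipeline([("scale", MinMaxScaler()), ("clf", estimator)])  # normalization + model
    search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    print(name, search.best_params_, f"test accuracy = {search.score(X_test, y_test):.3f}")
```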
Choosing Best Classification Model The five classification machine learning algorithms are compared in terms of accuracy and execution time when trained on the dataset mentioned in Section 6.1. Table 3 and Figure 6 provide the results, including the training and testing accuracy. The figure also shows the training time for each algorithm. It can be observed that all the algorithms have a training accuracy of more than 90%. The testing accuracy of Decision Tree, SVM and KNN is less than 90%. The decision tree model is highly overfitted, as the difference between its training and testing accuracy is the largest. The SVM and KNN models show less overfitting compared to the others, but their testing accuracy is not the best we obtain. The ensemble learning models, including Gradient Boosting and Random Forest, have a very high training accuracy. However, the results are slightly overfitted, as the gap between training and testing accuracy is about 9%. Even though there is slight overfitting, both Gradient Boosting and Random Forest produce the highest testing accuracy. In Table 3, the training time taken for each algorithm is also given. The Decision Tree has the lowest training time, while Gradient Boosting has the highest. The reason for the high training time of gradient boosting is understandable, as it uses a sequential approach to train weak classifiers and then build a strong classifier. SVM has the second highest training time, and its accuracy compared to the other algorithms is not very good either. KNN and Random Forest both have a reasonable training time. Keeping in view the accuracy as well as the training time, we conclude that Random Forest is the best-performing algorithm in our case. It is important to note that the algorithms' performances are case dependent; we may get different results with these algorithms trained on another dataset. Therefore, our classification model may require constant retraining, with more data in hand, to keep the classification generalized over time. Evaluating Important Features The proposed research work analyzes the features of the gyroscope and accelerometer and examines which feature has the most impact on the accuracy of our models. For this, we trained the algorithms on gyroscope and accelerometer data separately and then compared the results. This way, we figured out which data feature is the most important. Tables 3 and 4 provide an overview of how much the accelerometer and gyroscope contribute to the machinery classification accuracy. Each sensor's data are used to train the five algorithms separately. It can be observed in Figure 6 that the testing accuracy for the accelerometer on all algorithms is between 82% and 85%. For the gyroscope, shown in Figure 7, it is significantly lower (between 68% and 72%). It is clearly visible that the 3-axis accelerometer is a more important feature for classification than the gyroscope, though the training time for the accelerometer is slightly higher compared to the gyroscope. Gradient Boosting and Random Forest are both overfitted; for the accelerometer, the overfitting is smaller compared to the gyroscope. The Decision Tree, SVM, and KNN do not show overfitting for either feature. Table 5 and Figure 8 show that the testing accuracy increased when the gyroscope and accelerometer are used in combination for classification. We obtain a testing accuracy between 85% and 92%, which is very good compared to the gyroscope or accelerometer alone (Figure 8: comparing algorithms in terms of training and testing accuracy when trained on 3-axis accelerometer and gyroscope data). To further highlight the strength of our classification model, we tested data collected in a different agriculture field using another set of machines (leveler, rotavator and cultivator). It can be observed in Table 6 that our classification model made correct predictions for all six datasets. The predictions were right regardless of the number of data points. A low number of data points also showed good results. However, our assumption is that a very small batch, on the order of 100 samples or fewer, may be misclassified, as it might not contain enough information for the classification model. When the model starts misclassifying occasionally, we might also need to update the model by retraining it on the variations in the new datasets.
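The external check described above returns a single machinery label per dataset. One plausible way to obtain such a per-batch prediction, sketched below, is to classify every sample with the trained model and take a majority vote; the helper predict_machinery and the column names are hypothetical and are not taken from the study.

```python
# Illustrative per-batch prediction: classify every sample, then take a majority vote.
from collections import Counter
import pandas as pd

def predict_machinery(model, batch: pd.DataFrame) -> str:
    """Return the majority machinery label for one batch of sensor samples."""
    features = batch[["ax", "ay", "az", "gx", "gy", "gz"]]  # assumed column names
    per_sample = model.predict(features)                    # one label per 1 Hz sample
    return Counter(per_sample).most_common(1)[0][0]

# Usage (model = the tuned Random Forest pipeline from the previous sketch):
# new_batch = pd.read_csv("field2_rotavator.csv")           # hypothetical new field data
# print(predict_machinery(model, new_batch))                 # e.g. 'rotavator'
```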
Deploying and Retraining Classification Model After evaluating the classification models on a sample dataset, we determined that Random Forest is the best performing. Now, the selected model has to be deployed on the server to make it available as a service. For this, the trained model is stored in a file. Every time a classification is required, the stored model is used, thus saving the time of training the model repeatedly. We invoke the model using an API gateway specifically implemented for this service. The gateway is backed by a Lambda function of Amazon Web Services (AWS). With time, the deployed model may become outdated and its accuracy may start decreasing. This may be because it has not been exposed to the new kinds of data being given to it. For this, we need to retrain our model on some new data. The process of retraining is elaborated in Figure 9.
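A minimal sketch of the store-once, reuse-many-times deployment and the retraining step described above is given below. The file name, the use of joblib, and the retraining trigger are assumptions for illustration; in the study the model is served behind an AWS API gateway backed by a Lambda function.

```python
# Illustrative persistence and retraining of the selected model (cf. Figure 9).
import joblib
import pandas as pd

MODEL_PATH = "machinery_rf.joblib"          # hypothetical file name
FEATURES = ["ax", "ay", "az", "gx", "gy", "gz"]

def save_model(model) -> None:
    joblib.dump(model, MODEL_PATH)          # store the trained pipeline once

def classify(batch: pd.DataFrame):
    model = joblib.load(MODEL_PATH)         # reuse the stored model; no retraining needed
    return model.predict(batch[FEATURES])

def retrain(old_data: pd.DataFrame, new_data: pd.DataFrame):
    """Refit the stored model on old + newly labelled data when accuracy drifts."""
    data = pd.concat([old_data, new_data], ignore_index=True)
    model = joblib.load(MODEL_PATH)
    model.fit(data[FEATURES], data["machinery"])
    save_model(model)
    return model
```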
Discussion and Conclusions Many of the agriculture field operations are automated to meet the food demands of the rising global population. Automated field operations are good for increasing the overall field productivity; however, they require supervision and automated work estimation. There is a potential risk of fraudulent activities taking place in remote field supervision. It has been observed that, taking advantage of the remote setup or in the hope of obtaining an increased subsidy from the government or a donor organization, the wrong (or cheaper) machinery is often used. In some cases, no machinery is used at all. Given the scope of this work, the focus of the discussion is on the classification of agriculture farm machinery. The classification of farm machinery was performed using machine learning algorithms. It was observed that the vibration and tilt of machinery are good indicators for recognizing a machine. The 3-axis accelerometer was used for recording the vibration, and the tilt was recorded using the 3-axis gyroscope. The sensors were connected to the IoT device mounted on the farm machinery. We applied five machine learning techniques (K-Nearest Neighbor, Support Vector Machine, Decision Tree, Random Forest, and Gradient Boosting) to the vibration and tilt of farm machinery. As was anticipated, the vibration and tilt both showed significant results in identifying the farm machinery. However, the vibration-based classification showed better accuracy (82-85%) compared to the tilt-based classification (which showed an accuracy of 68-71%). Interestingly, the accuracy improves further when the vibration and tilt are used together, showing a maximum accuracy of 91%. We found that, out of the five machine learning techniques that we applied towards the model fitting, Random Forest showed the highest accuracy. Gradient Boosting and Random Forest showed slight over-fitting, but both algorithms produced the highest testing accuracies. RF merges multiple weak classifiers to build a strong classifier, and showed great promise in classification. It is widely used in crop classification and has the ability to predict crop yield corresponding to the current climate and biophysical changes. In terms of the execution time taken in training the model, we found that the Decision Tree was the quickest, taking the least time to train. In contrast, Gradient Boosting was the slowest, taking the most time to train. In order to keep up the accuracy and enable the recognition of new patterns, we have developed a system for retraining. The retraining involves the inclusion of new data when redoing the model fitting. The model is useful for the remote supervision of agricultural farm machinery and its authentication. The model alone will not be enough to avoid fraudulent activities: there is a possibility that the right machinery is used but is operated at a location that was not intended. To solve this problem, our model needs to be coupled with field recognition and area estimation algorithms. As part of future work, we would like to apply unsupervised machine learning and deep learning techniques and explore different implementations of them. Furthermore, it will be interesting to train our models on a larger set of machines. Additionally, the exploration of features other than vibration and tilt will be worth reviewing. More data could be collected, and the models could be retrained to further generalize the classification process.
7,147
2021-01-01T00:00:00.000
[ "Computer Science", "Agricultural and Food Sciences" ]
Harnessing ICT Resources to Enhance Community Disaster Resilience: A Case Study of Employing Social Media to Zhengzhou 7.20 Rainstorm, China : This study aimed to explore how community disaster resilience can be enhanced via the utilization of ICT resources. Three social media applications were selected. Taking the 2021 Zhengzhou 7.20 rainstorm as an example, questionnaire responses were collected and analyzed, and a linear regression model was constructed to explore the impact of the relationships between responses. The findings showed that the use of WeChat, TikTok, and Weibo had positive effects on community disaster resilience. Specifically, the use of social media (WeChat, TikTok, and Weibo) by the general public during this rainstorm disaster was positively related to convenience and trust, creation and dissemination, emotion and communication, cooperation and collective action, and relief and release. We also analyzed the differences in the use of the three social media platforms during the rainstorm disaster and found that the number of people who used TikTok was the highest, but the variable scores for TikTok were not the highest. WeChat had the highest variable scores, and both the number of users and variable scores for Weibo were in the middle. Introduction Intense and widespread natural disasters are increasing at an unprecedented rate.Climate change driven by human behavior is accelerating the occurrence of natural disasters and exacerbating the risk of extreme weather disasters.As climate change and natural disasters intensify, these extreme challenges may occur in areas where they have never been encountered before [1].Natural disasters can happen anywhere in the world, but their effects depend on how vulnerable human communities are to these catastrophes and how severe the natural phenomena are [2].Natural disasters tend to be more devastating in developing countries due to economic, political, social, and cultural factors that increase vulnerability [3].Henan Province in China experienced an excessively strong rainstorm from 17 July to 23 July 2021, which triggered severe flooding.According to the "Investigation Report of the '7 20' Extraordinary Rainstorm Disaster in Zhengzhou, Henan Province," the 7.20 rainstorm was a natural disaster that resulted in significant property damage and casualties, severe flooding in cities and rivers, and numerous other disasters, including building collapses, landslides, and subway accidents [4].Verified sources have claimed that 14,786,000 individuals were impacted, and as of 30 September, there had been direct economic damage of 120.6 billion RMB.Three hundred and ninety-eight people perished or went missing as a result of the tragedy [5].The local government and grassroots district governments, counties, departments, and units displayed a serious lack of risk awareness social media platforms to be used to respond to and manage urban storm flooding.At the same time, this event offers important lessons for other cities and regions aiming to prevent such a tragedy from happening.This study explored ICT use cases and aimed to investigate the ways in which social media can be utilized to enhance or supplement the resilience of communities and to reduce the risk of disasters.Specifically, it examined how citizens in Zhengzhou used social media to enhance their disaster resilience during the 7.20 rainstorm, highlighting the importance of active citizen participation in disaster management as well as community-level measures to increase 
resilience to natural disasters. Theoretical Background 2.1. ICT Resources and Disaster Resilience Disasters are sometimes confused with crises, but crises are typically organizationbased, whereas disasters affect the community as a whole [27].Given the emergent and complex nature of disasters, the mitigation of their impact is critical [28].During disasters, more and more individuals are turning to new technologies to acquire accurate, trustworthy, and timely information.Information and communication technologies (ICTs) have become an effective tool in promoting disaster response [29].As stated by Tamilselvan et al. [30], ICT (information and communication technology) is a term that is often used interchangeably with information technology (IT), but it is a broader concept that highlights the importance of unified communications in addition to telecommunications (such as telephone lines and wireless signals): computers, middleware integration, storage, and audiovisual systems that allow users to create, access, store, transmit, and manipulate information.The use of ICT resources has been shown to be an effective method of disseminating information in disaster response scenarios [31].ICT resources provide real-time communications for lifesaving applications such as search and rescue actions, confirming the safety and security of family, friends, and assets, and providing disaster recovery services [32].ICT resources use aids in information generation and support better decision making for effective disaster management systems, and ICT resources are considered necessary to enhance adaptive capacity and support feedback, ensure access to information, promote active participation, and reduce vulnerability [33]. Today, ICT resources are utilized across various fields to ease and facilitate multiple aspects of human life.Existing ICT resources are already being used by the public, private, and civil sectors, where they offer the potential to reach a wide range of people, especially through mobile devices that allow unrestricted access to the internet from anywhere.ICT resources have advanced significantly and can make use of a variety of technologies to produce information during disasters.In addition to this, significant improvements in computing power have enabled the management and analysis of large datasets during disasters [34].Specifically, the rapid development of artificial intelligence, smart cities, social media, etc., has enabled researchers to collect and analyze detailed information, and the new generation of communication technologies provides high-speed voice, image, and data transmission, which was previously unimaginable [35][36][37]. 
The value of a technology lies in its application rather than in the technology itself, which could clarify why ICT resources that previously had no significant role in an organization may become the fundamental component of its technology infrastructure after a crisis.In practice, technology "focuses on emerging technological structures formulated in practice rather than specific structures fixed in technology" [38].Thus, people create and reconfigure their communication and technology systems in response to the environmental changes brought about by disasters in order to access the connections and resources required for recovery [19,39].Researchers are increasingly focusing on studying the role of ICT resources in disasters and disaster response [40][41][42].In particular, the increased usage of social media made possible by ICT has contributed to disaster resilience and management [31]. Social Media and Community Disaster Resilience Similarly to the mass media, social media has made it possible to easily disseminate enormous amounts of information to vast audiences.Recent years have seen a considerable rise in the use of social media in post-disaster environments, and popular services like Twitter and Facebook are now being used to meet various disaster data-gathering demands [43].Social media offers value in terms of information sharing through web-based platforms and services that can be accessible through information and communication technologies such as desktop computers, laptops, cell phones, and tablets [44]."Social media" and "resilience" are two terms that now appear frequently in the emergency and disaster management literature [45].The popular use of social media in disasters has increased its potential as a new source of data for understanding disaster resilience [46].Several studies have attempted to investigate social media activity during disasters.For example, one study found that people posted situation updates and losses on social media platforms during disasters [47].Katz and Rice [48] found that during a crisis, people used various social media platforms to develop temporary solutions in order to stay connected to their networks.Social media enables users to share, publish, manage, collaborate, and interact with members of the public in virtual communities at the click of a button [49].In general, social media has been described as facilitating an online community containing up-to-date crisis information in which members seek and share information and guidance in unfamiliar situations during times of crisis [50].Some important big-data technologies, such as social media data, are frequently used in different stages of disaster management and to enhance disaster resilience [51].Social media has enabled citizens to provide valuable help to those professional organizations affected without having to expend a great deal of time and energy. Currently, there are over 75 social media platforms, with the most popular ones being Facebook, YouTube, WhatsApp, Messenger, WeChat, Instagram, TikTok, Tencent QQ, QZone, and Sina Weibo, according to the number of users [52].Social media platforms such as Facebook, Twitter, and TikTok have become important disaster response technologies in recent years due to their instant connectivity and open platforms, which allow the dissemination of real-time information [53,54].Social media preferences vary from country to country: the Chinese prefer WeChat, and Brazilians and Indians prefer Orkut, in contrast to Facebook and Twitter [31]. 
Studies have demonstrated that social media fosters collective intelligence, which involves large and distributed groups of individuals working together to solve intricate problems.People in a community will step up to assist those who are in danger or distress [55].Social media enhances resilience by enlarging the community of impacted people, i.e., those who cooperate to solve problems, exchange information about the situation, offer assistance, and otherwise respond to the situation [56].Earle and others explored Twitter's role in reporting earthquakes and assessing their impact, and their results suggest that Twitter activity can help to identify affected areas more quickly than traditional monitoring methods [57].Using Hurricane Florence as a case study, Yuan et al. [58] explored the use of social media to analyze the ways in which citizens with different demographic characteristics exhibit different responses and behaviors during the same disaster.During disasters, social media is used for C2C communication.For example, Typhon Meranti in Xiamen, China showed that during catastrophic disasters, people depend on credible information from the government, even if they access it through official government sources on social media [59].Social media platforms are important for creating the situational awareness needed to coordinate actions among affected communities during natural disasters [60].Previous studies have focused on the use of social media to analyze citizens' capacity to cope during disasters, and have focused on social media platforms such as Facebook and Twitter.However, these studies seem to have ignored the impact of other social media platforms' use on disaster resilience in developing countries and the differences in people's choices and preferences for social media platforms.In the present study, we analyzed the use of different ICT-based social media tools to enhance community disaster resilience. Data Sources This research applied a questionnaire to collect relevant data on each index variable.The questionnaire was divided into two parts: a basic personal information section and the main part of the questionnaire.In the first part of the questionnaire, identification questions were used to differentiate the respondents.The question "Have you experienced the "7.20" rainstorm in Zhengzhou?" was designed to identify the respondents who experienced the 7.20 rainstorm in Zhengzhou to ensure a valid sample.The first part of the questionnaire also included the question "What kind of social media software did you mainly use to obtain and release information about the "7.20" rainstorm?"The questions were categorized to prepare the main part of the questionnaire, which was related to the use of WeChat, TikTok, and Weibo.The main part of the questionnaire was designed to measure the corresponding indicator variables of convenience and trust, creation and dissemination, emotion and communication, cooperation and collective action, relief and release, and usage behavior and willingness.The questionnaire was administered using a uniform Likert scale (1-5), and respondents were asked to select "strongly disagree", "disagree", "undecided", "agree", or "strongly agree" after reading each question. 
This study distributed the questionnaire as an online survey. The channels through which the questionnaires were distributed mainly included citizens living in Zhengzhou. The questionnaires were distributed via social media platforms such as WeChat groups, Moments, and QQ groups and through interpersonal relationships such as classmates and friends. Respondents directly opened the questionnaire link to fill in the questionnaire themselves. To enhance the breadth and diversity of participants, online questionnaires were distributed to individuals of varying age groups, both male and female, who filled out the questionnaire and then passed it to their acquaintances. After collecting the relevant data, we used SPSS 26 software to input and process the data, including reliability and validity tests, one-way ANOVA, t-tests, correlation analysis, and regression analysis, and obtain the final results. The specific steps and process are shown in Figure 1.
With reference to educational level, 31 people (14.7%) had completed junior high school education, 36 people (17.06%) had completed high school education, 52 people (24.4%) had completed a college education, 59 people (28%) had a bachelor's degree, and 33 people (15.64%) had a master's degree or above. In terms of occupation, students accounted for 22.75%, government and institution staff accounted for 8.53%, company employees accounted for 25.12%, self-employed people accounted for 19.43%, and other occupations accounted for 24.17%. Further, 53.6% of respondents had a monthly income of less than 5000 RMB, 21.3% earned 5000-10,000 RMB, 13.74% earned 10,000-15,000 RMB, 10% had a monthly income of 15,000-20,000 RMB, and 1.4% earned more than 20,000 RMB. Measurement of the Variables In order to assess the questionnaire's reliability, a reliability analysis was performed. We chose Cronbach's alpha coefficient for this purpose. In general, the stability of a dataset increases with the increase of the alpha value. According to the results of the reliability analysis (Table 2), it can be seen that the reliability coefficients of convenience and trust, creation and dissemination, emotion and communication, cooperation and collective action, relief and release, and usage behavior and willingness were 0.831, 0.858, 0.845, 0.851, 0.845, and 0.835, respectively. The reliability coefficient of the overall questionnaire was 0.968. The reliability coefficient has a range of 0 to 1, with a higher value indicating greater reliability. Therefore, the survey and its results were reliable. Data Analysis First, descriptive analysis was used to analyze people's choice of social media in the event of the rainstorm disaster. Secondly, the overall score and satisfaction of people's choice to use social media were assessed, while specific differences in the use of each of the three social media platforms were analyzed separately. Finally, linear regression was used to explore whether the use of ICT had an impact on community disaster resilience, focusing on the relationships between the use of social media and convenience and trust, creation and dissemination, emotion and communication, cooperation and collective action, and relief and release.
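The reliability analysis and the regression described above were carried out in SPSS 26. As an illustration of what these steps compute, the sketch below implements Cronbach's alpha, α = k/(k − 1) · (1 − Σσ²_item / σ²_total), and a simple ordinary least squares regression in Python; the survey file and column names are hypothetical.

```python
# Illustrative versions of two analysis steps (run in SPSS in the study):
# (1) Cronbach's alpha for one scale, (2) a linear regression of a resilience
# variable on social media usage. All file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one Likert item (1-5) per column, one respondent per row."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# responses = pd.read_csv("survey.csv")                       # hypothetical survey data
# print(cronbach_alpha(responses[["ct1", "ct2", "ct3"]]))      # e.g. ~0.83 in the study

# Regression of 'convenience and trust' on WeChat usage behaviour and willingness:
# X = sm.add_constant(responses["wechat_usage"])
# model = sm.OLS(responses["convenience_trust"], X).fit()
# print(model.params, model.pvalues)                           # cf. Table 3 (beta, p)
```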
Choice of Social Media Platform According to the analysis results, in terms of social media choices when encountering heavy rainfall, the highest number of respondents (86) used TikTok to obtain disaster information, followed by WeChat and finally Weibo (Figure 2). Domestic internet applications in China have been growing significantly in recent years, with steady growth across various social media services and news apps. Chinese citizens seldom access relevant information through traditional TV and news broadcasts, but more often use smartphones and computers to access and publish relevant information rapidly and in large volumes through social media platforms such as WeChat, Weibo, and TikTok. From the results, it appears that the largest number of users chose to use TikTok. Between these three social media options, the differences lie mainly in the fact that TikTok presents short videos with strong visual impact, representing an easy-to-understand and accessible way to communicate to people the events of the day. Compared with WeChat and Weibo, the more novel and interesting TikTok has objectively enriched the methods and means of releasing information and obtaining relevant disaster information. At the same time, some official TikTok accounts, such as that of the government, use the video release style of TikTok to present themselves differently from a previously serious image, showing their affinity with the public. In this way, government media is revitalized and fully integrates both voice and video, meaning that the public is more willing to engage with it.
Overall Satisfaction of People Using the Three Social Media Platforms Figure 3 shows the overall satisfaction levels of people using the three social media platforms. The survey results showed that 39.7% of people reported the highest possible satisfaction when using WeChat (with a Likert scale score of 5), followed by 38.3% of people who were satisfied with using Weibo, and 32.6% of people who expressed satisfaction with using TikTok. Compared with TikTok, WeChat is mainly socially oriented, facilitating communication and contact between people beyond the limits of time and space and making communication between people convenient, fast, and free. At the same time, as the number of users has grown, WeChat has become more than just a communication tool: the Moments and WeChat official accounts accessed through the WeChat platform have become involved in every aspect of life. The group chat function of WeChat acts as a "meeting room," where there is a trend toward value homogeneity among different individuals based on one or more different connections and values. More than 10% of people reported the lowest possible satisfaction when using Weibo (10.6%), and people were less satisfied with Weibo compared to WeChat and TikTok in the context of the heavy rainstorm. In terms of communication content, Weibo is more like an open cultural square. Due to the openness and inclusiveness of Weibo, a non-homogeneous set of values is presented, and different values appear to collide during communication [61]. As an open platform, Weibo features frequent negative comments and controversial statements. As a result, users may encounter offensive comments or arguments with different viewpoints, which may negatively impact their experience.
Social Media Scores for Each Variable Figure 4 shows the scores for each variable, broken down by the three different social media platforms: WeChat, TikTok, and Weibo. WeChat scored the highest on each variable, followed by TikTok and finally Weibo, which indicates that, compared with Weibo and TikTok, the stronger interpersonal social relationships on WeChat produce stronger interaction and intimacy between people. With higher similarity and relevance between groups, the frequency of interaction is also relatively high. Particularly considering the highly time-sensitive nature of information about sudden natural disasters, the information release behavior of WeChat users greatly affects the dissemination of relevant and important information, especially through the WeChat platform in the form of Moments, and the WeChat official account has become involved in all aspects of life.

Differences between WeChat, TikTok, and Weibo According to Sociodemographic Characteristics In order to determine whether there was a difference between the three types of social media in terms of sociodemographic characteristics, this study included a t-test and a one-way ANOVA.
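A minimal sketch of how such group comparisons could be run is given below, assuming the responses sit in a table with one row per respondent and hypothetical gender, age_group, and platform_score columns; the column names and the data file are illustrative placeholders, not the study's actual variable names.

```python
# Hedged sketch: independent-samples t-test and one-way ANOVA on survey scores.
# Column names ("platform_score", "gender", "age_group") and the CSV file are
# hypothetical placeholders, not the variables used in the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # one row per respondent

# t-test: do male and female respondents differ in their platform scores?
male = df.loc[df["gender"] == "male", "platform_score"]
female = df.loc[df["gender"] == "female", "platform_score"]
t_stat, t_p = stats.ttest_ind(male, female, equal_var=False)
print(f"t = {t_stat:.3f}, p = {t_p:.4f}")

# One-way ANOVA: do platform scores differ across age groups?
groups = [g["platform_score"].values for _, g in df.groupby("age_group")]
f_stat, f_p = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {f_p:.4f}")
```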
Figure 5 shows the differences between WeChat, TikTok, and Weibo according to sociodemographic variables. We found that there were more women than men among the citizens who used WeChat, more men than women who used TikTok, and more women than men who used Weibo. In general, male citizens used TikTok more, and female citizens used WeChat more. Among the age categories, citizens under 20 and aged 20-29 and 50-59 used TikTok most often, citizens aged 30-39 used WeChat and TikTok most often, and citizens aged 40-49 and over 60 years old used TikTok most often. In the education category, people with a university education and those with master's degrees or higher used TikTok the most, while people with primary school and senior high school education favored WeChat. Further, citizens who worked for companies used WeChat the most, while students used the most. Finally, people earning less than 5000 RMB often used TikTok, and those earning 15,000-20,000 RMB and above often used WeChat.

Regression Analysis In order to observe the relationships between the use of social media and community resilience, the data were analyzed using linear regression. The outcomes are displayed in Table 3. The results show that the use of WeChat had a positive effect on community resilience during rainstorms, particularly related to the variables of convenience and trust (β = 0.819, p < 0.001), creation and dissemination (β = 0.815, p < 0.001), emotion and communication, cooperation and collective action, and relief and release (see Table 3 for the full coefficients).

Implications and Suggestions The results of all three social media analyses showed that the use of ICT had a positive impact on resilience. First, in terms of convenience and trust, social media may be a more reliable form of media than traditional media in a disaster situation [18]. In addition to reliability, social media might potentially offer a more rapid and efficient means of disseminating accurate disaster information [62]. In the face of natural disasters, governments and communities must be the first to respond to the public's needs and expectations during disaster risk communication efforts, which means using social media platforms to continuously track the public's information needs during natural disasters, such as the need for information about the disaster situation, disaster relief, casualties, and disaster supplies. This can facilitate the public's access to this information while satisfying the public's information needs, ensuring the public's trust.

Second, in terms of creation and dissemination, social media tools are usually more reliable than others in disaster situations, so they can be used to ask for help after a disaster [63,64]. Social media provides a new way for individuals and organizations to communicate during disasters, allowing for the rapid spread of information and the mobilization of resources [65]. Relief information is posted through social media in order to access relief and help more quickly. Policymakers need to formulate relevant emergency and disaster response plans based on the different stages of disasters and, more importantly, based on the characteristics and functions of social media. In this way, plans can target different modes of information production and dissemination to ensure that the public can quickly and accurately access relevant information about the disaster.
Next, we examined emotion and communication. Individuals often need a place to communicate and share information with others if the level of damage caused by a disaster is significant. Social media can help with these processes [21,66,67]. In the midst of and following a disaster, individuals will be seeking reassurance that their loved ones who may be in the impacted region are safe.

Cooperation and collective action are mainly reflected in the provision and receipt of preparedness information through social media in the event of a storm disaster. During a disaster, populations that are well informed and prepared are likely to be more resilient and flexible [68,69]; thus, individuals and organizations strive to learn how to prepare for disasters, and the dissemination of preparedness resources via organizations and governments benefits communities [18]. In terms of social impact, the government should not only ensure the effective use of online content and interaction but also make efforts to connect with the community offline, e.g., by reaching out to the community, so that the official social media accounts operated by the government will be trusted by the public and become an important channel of information for them. When a disaster strikes, these social media accounts can play a role in improving the effectiveness of communication and prompting collaboration across multiple platforms.

Finally, we investigated relief and release. Social media may encourage positive attitudes and emotions that enhance behavioral and mental health. Social media may provide people with the opportunity to share their feelings about the incident, voice their concerns for those it has touched, express gratitude for their blessings, and mourn and remember those who died as a result of the event [64,66,70-72]. Therefore, the dissemination of emergency and disaster information must be based on ensuring the accuracy of the information, and it must be possible for the public to access disaster information very easily, especially in disaster situations where people are also in a state of fear. It is extremely important for disaster information to be timely, accurate, authoritative, and easily accessible so as to effectively alleviate the anxiety and panic of the affected members of the public. Different emergency management departments and other relevant organizations need to work together efficiently to complete a rapid assessment of events, improve the speed and efficiency of disaster responses, and combine information related to early warnings, developments, response measures, and public psychological care.
Limitations While this study does provide insight into how the use of social media can enhance disaster resilience, it is worth noting its limitations. First, the results showed that social media (WeChat, TikTok, and Weibo) usage had a positive impact on community disaster resilience in relation to the variables of convenience and trust, creation and dissemination, emotion and communication, cooperation and collective action, relief and release, and usage behavior and willingness during a rainstorm disaster. However, it cannot be said that these variables encompass all factors relevant to the impact of social media usage on disaster resilience. Second, during the 7.20 rainstorm disaster in Henan Province in 2021, people did not use only these three social media platforms to obtain and disseminate disaster-related information. There may be other relevant social media tools, such as QQ, depending on the population, but only three social media platforms were selected for this study. Finally, the subject of this study was specific and limited to a rainstorm disaster. Whether the findings can be generalized to all natural disasters is not yet known, and further validation of the above findings based on increased data volumes may be carried out in future studies. These limitations also provide ideas and prospects for subsequent research.

Conclusions This study examined how community resilience was enhanced via the use of ICT resources during the Zhengzhou 7.20 rainstorm. Based on the collected and analyzed questionnaire responses, the major conclusions are summarized as follows. (1) The use of WeChat, TikTok, and Weibo had positive effects on community disaster resilience. Specifically, the use of social media by the general public (WeChat, TikTok, and Weibo) during this rainstorm disaster was positively related to convenience and trust, creation and dissemination, emotion and communication, cooperation and collective action, and relief and release. (2) From the results of a comparative analysis of the specific differences in the use of these three social media platforms, it appears that TikTok was used by the largest number of people during the storm disaster. The highest level of user satisfaction was found among those who used WeChat, while the variable scores for TikTok were not the highest. Instead, WeChat had the highest variable scores, and the number of users and variable scores for Weibo were both in the middle. There were also sociodemographic differences between the users of the three types of social media.

Figure 1. A model for using social media to influence community disaster resilience.

The largest questionnaire company in China (Wenjuan Xing) was commissioned to carry out the online distribution of the questionnaire survey used in this study. The questionnaire was distributed from 1 November to 5 November 2022, and 229 responses were collected. After eliminating invalid data, the final number of valid responses was 211.
Cooperation and collective action (Cronbach's alpha = 0.851): I can use (WeChat/Weibo/TikTok) to help people in need. I can use (WeChat/Weibo/TikTok) to win the appreciation and recognition of others. I can use (WeChat/Weibo/TikTok) to post relevant information to establish or maintain a social relationship with others.
Relief and release (Cronbach's alpha = 0.845): I can use (WeChat/Weibo/TikTok) to listen to others or talk to others about the rainstorm. I can use (WeChat/Weibo/TikTok) to (re)post information, pictures, videos, etc. to relieve the fear and tension caused by rainstorm disasters. I can use (WeChat/Weibo/TikTok) to (re)send relevant information to commemorate the victims of the accident.
Behavior and willingness to use (Cronbach's alpha = 0.835): Overall, I am satisfied with the experience of using (WeChat/Weibo/TikTok) to get disaster information. I will use (WeChat/Weibo/TikTok) to get disaster information when I face a disaster again. I would suggest my friends and relatives use (WeChat/Weibo/TikTok) to get or post disaster information in case of disasters.

Figure 2. Choice of social media platform.
Figure 3. The overall satisfaction of people using the three social media platforms.
Figure 4. Social media platform scores for each variable.
Table 3. Results of regression analysis.
7,852.6
2023-10-09T00:00:00.000
[ "Computer Science", "Environmental Science", "Sociology" ]
Generation of Reverse Meniscus Flow by Applying An Electromagnetic Brake A numerical study is presented that deals with the flow in the mold of a continuous slab caster under the influence of a DC magnetic field (electromagnetic brakes (EMBrs)). The arrangement and geometry investigated here are based on a series of previous experimental studies carried out at the mini-LIMMCAST facility at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). The magnetic field models a ruler-type EMBr and is installed in the region of the ports of the submerged entry nozzle (SEN). The current article considers magnetic field strengths up to 441 mT, corresponding to a Hartmann number of about 600, and takes the electrical conductivity of the solidified shell into account. The numerical model of the turbulent flow under the applied magnetic field is implemented using the open-source CFD package OpenFOAM®. Our numerical results reveal that a growing magnitude of the applied magnetic field may cause a reversal of the flow direction at the meniscus surface, which is related to the formation of a “multiroll” flow pattern in the mold. This phenomenon can be explained as a classical magnetohydrodynamics (MHD) effect: (1) the closure of the induced electric current results not primarily in a braking Lorentz force inside the jet but in an acceleration in regions of previously weak velocities, which initiates the formation of an opposite vortex (OV) close to the mean jet; (2) this vortex develops in size at the expense of the main vortex until it reaches the meniscus surface, where it becomes clearly visible. We also show that an acceleration of the meniscus flow must be expected when the applied magnetic field is smaller than a critical value. This acceleration is due to the transfer of kinetic energy from smaller turbulent structures into the mean flow. A further increase in the EMBr intensity leads to the expected damping of the mean flow and, consequently, to a reduction in the size of the upper roll. These investigations show that the Lorentz force cannot be reduced to a simple damping effect; depending on the field strength, its action is found to be topologically complex.
I. INTRODUCTION ENSURING the quality of continuous cast (CC) products is becoming increasingly important in view of growing production rates. Uncontrolled fluid flow in the continuous casting mold is suspected of being responsible for various casting defects. Turbulent jet flow is an important phenomenon during the continuous casting process, as the mold flow is mainly driven by the submerged jet emanating from the submerged entry nozzle (SEN). It influences the free surface stability, promotes superheat transport to the solidified shell as well as to the slag band, or poses the risk of introducing impurities and inclusions into the bulk of the slab. Electromagnetic brakes (EMBrs) are considered a powerful tool to provide effective flow control. The striking influence of uniform transverse magnetic fields on liquid metal flows in ducts with various wall conductance ratios was observed in the early studies of Cuevas et al. [1,2] The ability of DC magnetic fields to dampen fluctuations in highly turbulent flows is supposed to have attractive application potential for flow control in continuous casting, for example, to prevent undesired remelting of the solidified shell. [3][4][5] However, the magnetic field effect is anisotropic and, thus, could become rather complex. Studies on mixed convection, which represents a superposition of buoyancy and forced convection, found both a stabilizing and a redistributing impact. [6] There are already a couple of published studies which show, for the flow in the continuous casting mold, that the use of an EMBr does not exclusively result in a damping of the flow but also affects the flow structure in a way that can possibly lead to an acceleration and destabilization of the flow by large-scale fluctuations or to disturbances at the meniscus. [6][7][8][9][10] Systematic model experiments on the laboratory scale in low-melting-point metal alloys were performed at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) to study the effect of various externally applied magnetic fields on the flow inside a mockup of a conventional CC mold. [8,[10][11][12][13] The experimental setup was equipped with suitable measurement techniques to obtain quantitative data with high temporal and spatial resolution. In particular, new insights have been gained by varying the location of the EMBr system, as reported by Schurmann et al.
[10] A wide range of the complex phenomena in the metallurgical field can be investigated a priori by means of numerical modeling, as reviewed by Thomas. [14] Nowadays, a powerful strategy has been established that combines experimental work with extended numerical studies by different research groups. [9,12,[15][16][17] The numerical simulations effectively complement the parameter space that can be covered by the experiments and provide data at a high density and resolution. That is especially valuable for the industrial applications where the flow observations are limited to the meniscus region due to the harsh environment and high temperatures. On the other hand, the numerical simulations need to be validated by robust experimental data. The application of electromagnetic fields for flow control in continuous casting requires a comprehensive understanding of the complex interactions. Improper application can also lead to unintended deterioration of the flow structure. For example, as recently reported by Schurmann et al., [13] an electromagnetic stirrer has the potential to induce a desired flow structure, but under certain circumstances, it can also, on the contrary, lead to problems such as destabilization of the free surface. This study demonstrates that when choosing the magnetic field settings, the other process parameters, such as the different SEN types, must be considered carefully to achieve a beneficial result. In a recent review, Cho and Thomas [18] classified the influence of the applied magnetic field on the formation of different casting defects and suggested corresponding guidance for practical use of EMBrs. Complementary to the studies mentioned here, the authors of this work have recently presented a very detailed numerical study of the induced electric current distribution during the EMBr process, focusing on the interaction with the turbulent flow and considering the effects of the presence of the solid shell, which is very important during real solidification in the CC process. [19] A new freestanding adjustable combination EMBr type (FAC-EMBr) was numerically investigated in Li et al. [20] by varying the magnetic induction intensity, the SEN immersion depth, and the port angle for different casting speeds. Garcia-Hernandez et al. observed perturbations of the meniscus level in thin slab castings due to periodic flow alterations and refer to this turbulent behavior of the flow under the term dynamic distortions (DDs). [21] The authors investigated whether and how the horizontal and vertical EMBrs can be applied to prevent the occurrence of DDs at the meniscus. While there seems to be some success in the case of the horizontal EMBr, the authors surprisingly failed to show a way to prevent or control the DD phenomenon using the vertical EMBr. Recently, Vakhrushev et al. [22] took both the viscoplastic behavior of the solidified shell and the magnetohydrodynamics (MHD) effects of the EMBr into account to simulate the turbulent flow and the shell thickness during the thin slab casting with and without the DC magnetic field. The mini-LIMMCAST experimental setup at the HZDR is based on the geometries of industrial plants and uses conventional SEN types that are typical for practice in most casting mills. [10,11] In these model experiments, it was found that the application of a horizontal magnetic field at Hartmann numbers of about 400 can also unintentionally accelerate and destabilize the meniscus flow in comparison to the situation without EMBr. 
Meanwhile, such a behavior was also reproduced by numerical simulations. [9,12,15,17,19] However, in view of a couple of unanswered questions in this context, the authors are not yet aware of any further work specifically devoted to this phenomenon. Therefore, this study is devoted to numerical simulations considering the application of a horizontal ruler-type EMBr in a wide range of Hartmann numbers up to 600 (B_0 = 0 … 441 mT). Our results reveal that the flow pattern dramatically changes with growing magnetic field strength. At a certain threshold value of the magnetic field, the formation of a "multiroll" structure is triggered, which is accompanied by an opposite flow direction at the meniscus. As the magnetic field grows, this flow pattern is consolidated and finally occupies the entire upper part of the CC mold. Our parametric study based on wide and highly resolved magnetic field variations addresses the following main questions: (1) How does the flow structure change with the growing magnetic field strength? (2) What is the origin of the initial meniscus acceleration and its later deceleration? (3) Which conditions and mechanisms are responsible for the formation of the opposite meniscus flow?

II. NUMERICAL MODEL In this section, a summary of the numerical model of the turbulent flow under the applied magnetic field is presented. The details of its in-house implementation using the open-source CFD package OpenFOAM® [23] are described elsewhere. [19] By including the MHD (Lorentz) force F_L acting in the conducting melt under the applied constant magnetic field B_0, the set of the incompressible Navier-Stokes equations becomes

∇ · u = 0,   [1]
∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ∇ · (τ_lam + τ_SGS) + F_L/ρ,   [2]

where u is the melt velocity; ρ is the liquid density; p is the pressure field; and τ_lam and τ_SGS are the laminar and the subgrid scale (SGS) Reynolds stress tensors, respectively. Chaudhary et al. [9] showed that the wall-adapting local eddy-viscosity (WALE) turbulence model [24] gives a better prediction of the turbulent flow than the standard Smagorinsky (SM) model [25] based on the measurements in the mini-LIMMCAST experiment. The better performance of the WALE SGS model was recently confirmed by a further numerical study of the same experimental setup including the EMBr. [26] Thus, the WALE turbulence model is used in the present work to simulate τ_SGS: it is robust for complex geometries with strong mesh refinements, and it is capable of predicting the formation of coherent structures that can exist under the influence of the applied magnetic field. [9,24,27] Since the magnetic Reynolds number is low for CC applications (Rm ≪ 1), [19] Maxwell's equations are reduced using the electric potential method. [28] The induced current density j is given by Ohm's law as follows:

j = σ(−∇φ + u × B_0),   [3]

where φ is the electric potential and σ is the electrical conductivity of the solid or liquid steel, respectively. From the charge conservation law (∇ · j = 0), a Poisson equation is constructed for the electric potential,

∇ · (σ∇φ) = ∇ · (σ (u × B_0)),   [4]

and the corresponding Lorentz force is calculated as

F_L = j × B_0.   [5]

The computational domain contains both liquid and solid regions: the outer boundaries of the liquid domain are electrically insulated; a thin layer of the highly conductive solid, attached to the mold walls, mimics the presence of the shell in the real continuous casting. The analysis of the solid conductance ratio, as well as the description and verification of the coupling algorithm, are presented elsewhere. [19]
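A rough, self-contained illustration of this electric potential method is sketched below: for a prescribed two-dimensional velocity field and a uniform out-of-plane field B_0, it solves a Poisson equation for φ with a simple Jacobi iteration and then evaluates j and F_L following Eqs. [3] through [5]. The grid, the conductivity value, the jet-like velocity profile, and the simplified homogeneous-Neumann wall condition are illustrative assumptions and not the discretization actually used in the OpenFOAM model.

```python
# Hedged sketch of the low-Rm electric potential method on a 2-D grid:
# solve a Poisson equation for phi, then j = sigma(-grad(phi) + u x B0)
# and F_L = j x B0.  All parameter values and the velocity profile are
# illustrative placeholders, not the settings of the actual simulations.
import numpy as np

n, h = 64, 1.0 / 64          # grid points and spacing (assumed)
sigma = 3.2e6                # electrical conductivity, S/m (assumed)
B0 = 0.312                   # applied field magnitude, T (out of plane)

y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
ux = np.exp(-((y - 0.5) / 0.1) ** 2)          # simple horizontal "jet" profile
uy = np.zeros_like(ux)

# Source term u x B0 with B0 = (0, 0, B0): components (uy*B0, -ux*B0)
sx, sy = uy * B0, -ux * B0

def divergence(fx, fy):
    return np.gradient(fx, h, axis=1) + np.gradient(fy, h, axis=0)

rhs = divergence(sx, sy)

# Jacobi iteration for laplace(phi) = rhs (uniform sigma cancels);
# homogeneous Neumann walls as a simplification of the insulating condition.
phi = np.zeros((n, n))
for _ in range(5000):
    phi_new = phi.copy()
    phi_new[1:-1, 1:-1] = 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:] +
                                  phi[:-2, 1:-1] + phi[2:, 1:-1]
                                  - h * h * rhs[1:-1, 1:-1])
    phi_new[:, 0], phi_new[:, -1] = phi_new[:, 1], phi_new[:, -2]
    phi_new[0, :], phi_new[-1, :] = phi_new[1, :], phi_new[-2, :]
    phi = phi_new

# Induced current density and Lorentz force (in-plane components)
dphidy, dphidx = np.gradient(phi, h)
jx = sigma * (-dphidx + sx)
jy = sigma * (-dphidy + sy)
fx, fy = jy * B0, -jx * B0            # F_L = j x B0 with B0 along z
print("max |F_L| per unit volume:", np.hypot(fx, fy).max(), "N/m^3")
```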
III. MODEL APPLICATION The present numerical model is applied to the mini-LIMMCAST setup equipped with a CC mold of a cross section of 140 × 35 mm² (Figure 1) and a ruler-type electromagnetic brake. [8,11,12] The liquid Ga68In20Sn12 alloy was used in the experiment. The thermophysical melt properties were reported by Plevachuk et al. [29] Initial simulations and the analysis of the meniscus velocity growth are done for the EMBr positioned at the SEN bottom, 92 mm below the meniscus level; the peak value of the magnetic flux density is 312 mT. All studies are performed for the casting speed u_pull = 1.35 m/min, which relates to an SEN inlet velocity of 1.4 m/s. This specific configuration corresponds to the one reported in Thomas et al. [12] The simulated geometry and the distribution of the applied magnetic field B_0 are presented in Figures 1(a) and (b). The mold and the SEN walls are electrically insulating. It must be considered that the induced electric current can close in the solidified shell, which has a higher electrical conductivity than the liquid steel. In the experiments, 0.5-mm-thick brass plates are attached to the wide faces inside the mold to reflect the presence of the solid shell by matching the corresponding wall conductance ratio. [8] According to the experimental setup, [11] a CAD model and the numerical grid were constructed using the open-source package SALOME [30] and the snappyHexMesh utility of OpenFOAM®. The details of the hex-dominant mesh can be seen in Figure 1(c). The mesh refinement close to the side walls is necessary to resolve the viscous and electromagnetic boundary layers. MHD boundary layers can be defined based on the Hartmann number Ha:

Ha = B_0 L_0 (σ_liq/(ρ ν))^(1/2),   [6]

where L_0 stands for the domain's length scale, which corresponds to the half-size of the mold along the magnetic field lines. [31] The electrical conductivity and the kinematic viscosity of the fluid are expressed by the symbols σ_liq and ν, respectively. The Hartmann magnetic boundary layer with the thickness Δ_Ha exists on the walls perpendicular to the magnetic field. The Shercliff layer of size Δ_Sh is formed at the parallel walls. [32] The thicknesses of these layers can be estimated as follows: [31]

Δ_Ha ≈ L_0 · Ha⁻¹,   [7]
Δ_Sh ≈ L_0 · Ha^(−1/2).   [8]

In the current study, the Hartmann number is Ha ≈ 417 for the reference magnetic field value of 312 mT. Thus, the Hartmann layer becomes Δ_Ha ≈ 50 μm and the Shercliff layer is, correspondingly, Δ_Sh ≈ 1 mm. The wall conductance ratio of the attached brass plates with the thickness d_wall = 0.5 mm is sufficiently high (c_wall = 0.134) that a noteworthy part of the induced current closes in the solid wall. [8,12,19] Additionally, the transport of the induced current occurs in the Shercliff boundary layer. [31] No massive mesh refinement is required in the liquid bulk region, as discussed by the authors previously. [19] Based on the casting speed, the simulation results are averaged over a time interval of 39 seconds. [19] Second-order space integration of the gradient and advective schemes is used. The second-order backward time integration scheme was performed with an integration step of 5 × 10⁻⁵ s to achieve a Courant number of Co ≈ 0.15. Hereafter, the time-averaged velocity fields are presented and analyzed. For the induced current density distribution and for the interaction with the turbulent structures, the instantaneous results are used. Before proceeding to the main studies of the present work, the numerical results were verified based on the published experimental and simulated data. [8,9,11,12]
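For orientation, the boundary-layer estimates above can be repeated for the range of field strengths studied later. The minimal sketch below takes the Ha ≈ 417 reference value at 312 mT quoted above, uses L_0 = 17.5 mm (half of the 35 mm mold thickness) as an assumption, and rescales Ha linearly with B_0 via Eq. [6]; the printed values land in the same range as the ≈50 μm and ≈1 mm estimates given in the text rather than reproducing them exactly.

```python
# Hedged sketch: Hartmann and Shercliff layer estimates for several field
# strengths.  Ha_ref = 417 at 312 mT is taken from the text; L0 and the
# linear rescaling of Ha with B0 are assumptions for illustration.
L0 = 0.0175                      # half mold thickness along field lines, m (assumed)
Ha_ref, B0_ref = 417.0, 0.312    # reference values quoted in the text

for B0 in (0.039, 0.156, 0.312, 0.441):      # selected field strengths, T
    Ha = Ha_ref * B0 / B0_ref                # Ha scales linearly with B0, Eq. [6]
    d_Ha = L0 / Ha                           # Hartmann layer, Eq. [7]
    d_Sh = L0 / Ha ** 0.5                    # Shercliff layer, Eq. [8]
    print(f"B0 = {B0 * 1e3:5.0f} mT  Ha = {Ha:5.0f}  "
          f"Hartmann layer = {d_Ha * 1e6:5.1f} um  "
          f"Shercliff layer = {d_Sh * 1e3:4.2f} mm")
```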
In Figure 2, the comparison is shown for the flow without EMBr (Figure 2(a)) and with the applied magnetic field (Figure 2(b)). The distribution of the horizontal velocity component u_x at the corresponding locations between the mold narrow face (NF) and the SEN showed good agreement both with the UDV measurements and with the modeled results. [8,9,11,12] To start, the features of the simulated melt flow and the induced current behavior for the standard experimental setup performed at the HZDR GaInSn experiment are discussed. The mean velocity fields for the case without magnetic field and with the default value of 312 mT are compared, as shown in Figure 3. Due to the strong turbulence in the flow, the mean field is quite smeared for the no-EMBr case (Figure 3(a)). However, the jet region is clearly defined after the melt exits the SEN ports. In the presence of the EMBr, both strong upward and downward flows develop along the narrow (insulated) walls and along the meniscus surface, as shown in Figure 3(b). The following observations are made in Figure 3: In the EMBr case (Figure 3(b)), the flow at the meniscus accelerates and the flow structure, shown by white arrows, significantly changes in the bulk. The flow in the lower mold region transforms from a strong recirculation zone to a plug-type downward flow due to the Lorentz force action. Furthermore, there is a tendency for the upper roll in Figure 3(a) to be split by a newly formed countervortex. The details of the velocity field governed by the EMBr are presented in Figure 4. The upward bending of the jets is typically observed when the magnetic field is applied. [8,10] Recently, it was shown by Schurmann et al. [10] that the EMBr position could have even more impact on the shape of the jets. Two reverse flow zones are detected above and below the main jet, which are seen both in the midplane section (Figure 4(b)) and at the cross section B-B (Figure 4(c)). This phenomenon can be explained by the fact that the jet becomes elongated along the magnetic field lines between the wide faces, since an essential effect of the magnetic field manifests itself in the distinct reduction of any velocity gradient in the direction of the magnetic field. The continuity of mass requires melt entrainment from the surroundings of the jet. The latter leads to the formation of a reverse flow at the flattened sides of the jet. [28] The appearance of the reverse flow is confirmed in the experiments. [8] When the wide walls are electrically conductive, the high velocity flow moves along the insulated (narrow) wall. [19] Likewise, a strong upward flow develops along the nonconductive SEN walls under the action of the Lorentz force, where an OV is observed. The details of the SEN and narrow wall flow, showing a velocity vector field under the applied 312 mT magnetic field, are seen in Figure 4(b). The velocity field of the meniscus and the reverse flow zone above the jet are presented in the cross sections A-A and B-B (Figure 4(c)), respectively.
We investigate the origin of meniscus acceleration by studying its evolution during variations of the Ha number in 12 precisely selected steps from 0 to 600. Furthermore, we explore the occurrence of reverse meniscus flow, which is associated with the manifestation of a multiroll flow pattern. The simulated parameters, which follow a gradual increase of the magnetic field strength and, thus, the characteristic Hartmann number, are summarized in Table I. Case A corresponds to the flow without EMBr, while the other cases reflect the continuous increase of the magnetic field, which is expressed by means of the absolute value of the field strength and its relative change compared to the experiment. [12]

Fig. 2. Verification of the simulation results based on the mini-LIMMCAST experiment data [8,9,11,12]: (a) flow simulation without EMBr and (b) modeling of the flow under the applied magnetic field (312 mT). The probe lines are located 90, 100, and 110 mm below the meniscus, parallel to L1 in Fig. 1(a).

A. Influence of the Magnetic Field Magnitude: Qualitative Observation The first part of the results is shown in Figure 5 as the velocity magnitude distribution in the midplane, where the magnetic field is varied from the case without EMBr (case A) up to 221 mT (case E). Applying a weak field of 39 mT shows almost no changes to the flow pattern (Figure 5(b)). However, when the magnetic field reaches 78 mT (Figure 5(c)), the impact of the EMBr becomes verifiable: the flow along the narrow wall becomes stronger and the velocities at the meniscus are accelerated as well. This phenomenon becomes dominant at 156 mT, where the meniscus velocity reaches its maximum value (Figure 5(d)). When the EMBr of 221 mT is applied (case E), the meniscus velocity starts to slow. Case E is the point when the OV is initiated, as marked in Figure 5(e). Velocity distributions at the meniscus are presented in Figure 6 for cases A through E. It is observed that the meniscus flow is continuously accelerated toward the SEN with growing magnetic field. No significant changes occur in the flow pattern up to a magnetic field value of 156 mT. At this point, multiple vortices near the narrow wall (Figures 6(a) through (c)) transform to a single corner vortex apparently aligned with the magnetic field, as shown in Figure 6(d).

B. Effect on Turbulent Structure The action of the applied magnetic field is anisotropic; it does not change the linear momentum of the system, but a transport of vorticity and linear momentum is initiated along the field lines, as pointed out by Davidson. [28] This leads to the formation of quasi-two-dimensional (2-D) flow structures that are not directly affected by Joule dissipation. While three-dimensional (3-D) flows are effectively suppressed, dissipation of the quasi-2-D vortices takes place at sufficiently high field strength in the Hartmann layer in the form of the so-called Hartmann braking. [33,34] To analyze the change of the turbulent flow structure, it is visualized using the so-called Q-criterion, which is estimated based on the velocity gradient as [35]

Q_crit = (1/2) [ (tr(∇u))² − tr(∇u · ∇u) ].   [10]

This visualization method is common in the CFD community and defines a vortex structure based on the second invariant of the velocity gradient, representing the local balance between the shear strain rate and vorticity. [36] As shown in Figure 7, there is a clear redistribution of the turbulent structures.
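Where gridded velocity-gradient data are available, Eq. [10] can be evaluated pointwise. The sketch below does this for a single 3 × 3 velocity gradient tensor, using an arbitrary placeholder tensor rather than simulation data; it also shows the equivalent strain/rotation form, which coincides with Eq. [10] for trace-free (incompressible) gradients.

```python
# Hedged sketch: evaluate the Q-criterion, Eq. [10], for a velocity gradient
# tensor grad_u (3x3, components d u_i / d x_j).  The example tensor is an
# arbitrary placeholder, not data from the simulations.
import numpy as np

def q_criterion(grad_u: np.ndarray) -> float:
    """Q = 0.5 * [ (tr(grad_u))**2 - tr(grad_u @ grad_u) ]."""
    return 0.5 * (np.trace(grad_u) ** 2 - np.trace(grad_u @ grad_u))

def q_from_s_omega(grad_u: np.ndarray) -> float:
    """Strain/rotation form 0.5*(|Omega|^2 - |S|^2); equals Eq. [10] when tr(grad_u) = 0."""
    s = 0.5 * (grad_u + grad_u.T)       # strain-rate tensor
    omega = 0.5 * (grad_u - grad_u.T)   # rotation tensor
    return 0.5 * (np.sum(omega * omega) - np.sum(s * s))

grad_u = np.array([[0.0, 2.0, 0.0],
                   [-2.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])    # solid-body-like rotation: Q > 0 (vortex core)
print(q_criterion(grad_u), q_from_s_omega(grad_u))
```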
Small-scale 3-D vortices are strongly damped, while large-scale 2-D structures become much more prominent. Thus, the flow along the narrow walls toward the meniscus is enhanced when the EMBr is activated. In fact, part of the turbulent kinetic energy is dissipated in the form of Joule heating, while another part is transferred to the mean flow.

C. Quantitative Analysis of the Reverse Flow For the applied magnetic field of 221 mT, it is observed in Figure 8 that the initial OV develops right above the SEN port exit. It is initiated as a mass compensation for the long reverse flow zones right above the jets, which are marked in Figure 8(a). The existence of the corner vortex at the meniscus in the vicinity of the narrow wall becomes obvious in Figure 8(b). It is parallel to the narrow wall of the mold and rotates opposite to the main meniscus flow. In a next step, we analyze the evolution of the flow pattern for higher Ha numbers, starting from the experimental settings in case F. The results are shown in Figure 9: the OV, which is initiated at a field strength of 221 mT (case E), continuously expands toward the meniscus and reaches the top surface at 349 mT (Figure 9(b)). This is also clearly seen in the meniscus velocity distribution in Figure 10(b). The newly formed opposite meniscus flow (case I, 366 mT) progresses from the SEN toward the narrow wall (Figure 9(c)) and merges with the corner vortex in case J (382 mT), as shown in Figure 9(d). Finally, with the increase in the magnetic field up to 441 mT (case L), the opposite meniscus flow occupies the entire space between the SEN and the narrow wall, as shown in Figure 9(e). The same phenomenon is presented in Figure 10 from the top view. A clear emergence of the opposite meniscus roll at the top surface, its expansion toward the narrow wall with the increasing strength of the EMBr, and the final merge with the corner roll can be seen in Figures 10(b) through (d). The detailed development of the submeniscus velocity for the magnetic field in the range between 0 and 349 mT is shown in Figure 11. The time-averaged meniscus velocity immediately starts to grow when the magnetic field is applied. The corresponding range between 0 and 156 mT is displayed with gray colors in Figure 11. However, a further increase of the applied magnetic field to 221 mT and continuing to a value of 349 mT causes a reduction of the meniscus flow again; it is marked with the downward arrow in Figure 11. Furthermore, the development of a corner vortex at the narrow wall can be observed. Its intensity appears to behave in the same way as the velocity value of the dominant submeniscus flow. A qualitative change of the meniscus flow pattern occurs in case G (marked with a dashed line) for |B_0| = 349 mT: a top part of the OV occurs at the meniscus close to the SEN wall (see also Figures 9 and 10(b)) and the transition to the multiroll flow regime starts. Further details are given in Figure 12, containing the horizontal velocity component profiles at the meniscus for the EMBr in the range between 312 and 441 mT. It should be mentioned that an additional case K (EMBr of 413 mT) is used to show the asymptotic behavior of the opposite meniscus flow development. It is shown with a thin blue dashed line in Figure 12 and lies very close to the line of 441 mT (case L). Since only a negligible difference in the flow pattern is detected for the stronger EMBr (case L), it is not necessary to consider magnetic field values above 441 mT.
With the fully developed multiroll flow, the flow direction in the upper region of the mold is inverted in comparison with the initial double-roll pattern. Here, in case L, the highest speed is around 0.15 m/s near the SEN and continuously decreases to 0.07 m/s close to the narrow wall. For the typical converging meniscus flow, the highest speed (~0.18 m/s) is found to occur closer to the NFs and to decrease toward the SEN. Figure 13 focuses on a section of the velocity profiles near the narrow wall, showing in detail how the corner vortex there develops under the influence of the magnetic field. In the real casting process, such corner vortices also occur due to shell withdrawal. Vakhrushev et al. [4] showed that both types of corner vortices can combine and reinforce each other, possibly leading to an enhanced entrainment of liquid slag into the mold/shell gap. Figure 13 also makes it obvious that the higher the magnetic field, the further the corner vortex extends into the mold. A maximum velocity of about 0.08 m/s appears at a magnetic flux density of 312 mT. A further increase of B_0 up to 382 mT reduces the velocity to about 0.04 m/s before the opposite roll merges with the corner vortex and covers the entire liquid surface (case L).

D. Action of the Lorentz Force To assess the action of the Lorentz force, a special function is defined as follows:

L(F_L, u) = (F_L · u) / (|F_L| |u|),

which represents a normalized dot product of the Lorentz force and the melt velocity in the range between −1 and +1. A negative L(F_L, u) corresponds to damping, while a positive value means that, locally, the Lorentz force accelerates the flow. According to Eq. [5], for the magnetic field applied in the normal direction to the mold's wide face, the Lorentz force acts in the vertical plane only. The distribution of the Lorentz force function L(F_L, u) in the vertical center plane is investigated in Figure 14, together with the magnitude of the magnetic force and the induced current density, to distinguish where its action is important and where it is not. A colored stripe on the very left of Figure 14 indicates the EMBr location and intensity. The regime where the OV is initialized (case E, 221 mT) is analyzed in Figure 14(a): the Lorentz force brakes the flow in the jets, along the narrow walls, inside the upper rolls, as well as below the SEN. However, as shown by Schurmann et al., [10] the Lorentz force does not decelerate the jets; it flattens the exit angle of the flow, transforming it from a "banana" shape to almost parallel or even to an "S" shape (Figure 14, top row). The EMBr force accelerates the flow at 221 mT in the lower part of the mold; however, the magnitude is weak, and the effect is negligible. With the initiation of the OV, a braking zone appears near the SEN wall right above the port outlet. Despite the fact that it lies outside the EMBr effective range, the Lorentz force magnitudes are not negligible close to the top surface, since the induced currents are concentrated at the upper part of the mold and continuous flow structure reorganization happens. The braking zone develops as the OV grows toward the meniscus. Simultaneously, the flow is accelerated in the upper roll. Thereby, the OV grows to satisfy the mass conservation. For cases H and J (Figures 14(b) through (c)), pronounced acceleration zones (red) are detected above and below the main jets in the reverse flow region. The detailed formation of these recirculation zones was discussed previously and is presented in Figure 9.
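Evaluating this alignment measure on discrete data is straightforward; the sketch below applies it pointwise to vector fields stored as (..., 3) arrays, using random placeholder fields rather than the simulated Lorentz-force and velocity data.

```python
# Hedged sketch: normalized alignment L(F_L, u) = (F_L . u) / (|F_L| |u|),
# evaluated pointwise on vector fields of shape (..., 3).  The random fields
# are placeholders, not simulation output.
import numpy as np

def lorentz_alignment(f_l: np.ndarray, u: np.ndarray, eps: float = 1e-30) -> np.ndarray:
    """Return values in [-1, 1]: negative = braking, positive = acceleration."""
    dot = np.sum(f_l * u, axis=-1)
    norm = np.linalg.norm(f_l, axis=-1) * np.linalg.norm(u, axis=-1)
    return dot / np.maximum(norm, eps)        # eps guards against zero vectors

rng = np.random.default_rng(0)
f_l = rng.normal(size=(64, 64, 3))            # placeholder Lorentz-force field
u = rng.normal(size=(64, 64, 3))              # placeholder velocity field
alignment = lorentz_alignment(f_l, u)
print("braking fraction:", float(np.mean(alignment < 0.0)))
```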
For an applied magnetic field of 441 mT (Figure 14(d)), the OV is fully developed along the top surface (dashed line with an arrow). It fully occupies the upper part of the mold and is under acceleration below the meniscus. However, the main acceleration zone is now in the upper roll (solid line with an arrow). In comparison to the standard double-roll flow, the upper roll is now pushed deep under the meniscus surface due to the action of the Lorentz force. Right below the SEN, the Lorentz force dominantly acts as a braking mechanism of the flow. Its value is significant due to the high magnetic field values in the effective EMBr zone, leading to the plug-type flow at the lower part of the domain and resulting in a uniform downward motion of the melt. It should be emphasized that the action of the Lorentz force cannot be reduced to its damping effects. As seen from the results in Figure 14, the MHD force action is very complex and its topology significantly changes under the growing magnetic field.

E. Transition from Double to Multiroll Based on the Hartmann Number To summarize the studies for the EMBr positioned at the SEN bottom (92 mm below the top surface), the dependency of the maximum meniscus velocity on the Hartmann number is shown in Figure 15. It can be detected that the meniscus accelerates starting from the case without EMBr up to the case where the Hartmann number reaches 200. Then a velocity drop is observed, since internal vortices develop inside the bulk region, withdrawing kinetic energy from the meniscus. As marked in Figure 15, for case I, the upper roll and the opposite meniscus flow become equally strong. That finally leads to the fully developed opposite meniscus vortex at Ha ≈ 510. The flow at the top surface changes its direction toward the narrow walls of the mold cavity. The double-roll regime, despite momentum redistribution, is kept in the range up to Ha ≈ 300. Further, with the growth of the magnetic field, the OV is formed. Afterward, the competition starts between the Lorentz force action and the momentum conservation in the liquid flow. The sizes of the OV and the upper roll are comparable in this regime. However, with Hartmann numbers in the range between 450 and 550, the flow pattern totally changes. The bottom part represents plug flow, the jets are surrounded by recirculation zones above and below them, and the upper part of the mold is totally occupied by the reverse flow.

F. Influence of the Shell Location and Withdrawal In the real casting process, the solidifying shell is continuously growing against a water-cooled copper mold; additionally, the CC slab is being withdrawn with a corresponding casting speed. To reveal the importance of the shell distribution and its withdrawal on the flow pattern under the EMBr, two additional studies were performed (Figure 16). The initial setup in this article, with the solid shell at the wide walls, is used as a reference (Figure 16(a)). Next, an additional brass plate attachment to the narrow walls is considered (Figure 16(b), case (i)). Finally, the movement of the solid shell is included (Figure 16(b), case (ii)). The alteration of the horizontal meniscus velocity u_x is shown in Figure 16(c). When the brass plates are attached to the wide and narrow walls, enhanced braking of the meniscus is detected, displayed by a shift between the black and blue lines in Figure 16(c).
When a pulling velocity of the shell is defined, the meniscus flow slightly accelerates to compensate for the additional downward motion in the vicinity of the mold walls (blue and red lines in Figure 16(c)). However, the difference is not dramatic and the flow pattern is conserved. On the other hand, for faster casting speeds, one would expect significant changes; therefore, simplifications of the experimental and numerical models should be selected carefully. We encourage the readers to observe the supplemental materials, including the animation videos "Video S1.avi", "Video S2.avi", "Video S3.avi", and "Video S4.avi", where the most representative results of the flow alteration and the formation of the multiroll pattern under the applied magnetic field are shown.

V. CONCLUSIONS During this investigation, it was revealed that the meniscus flow undergoes different regimes during the increase of the applied magnetic field. First, it is accelerated, since the turbulent structures are damped and all linear momentum from the freshly fed melt is transformed into the upward and downward flow. Thereby, the upper rolls are supported with stronger momentum. Accordingly, at this stage, the meniscus is accelerated as well. Meanwhile, a reverse flow develops at the SEN port corner. It was generally assumed that this OV is caused by the surrounding melt entrainment due to the mass conservation. The present investigation shows that this reverse flow is of a pure MHD nature. It comes from the induced electric current closure. In convection-dominant areas, it acts in the form of braking. However, in other regions, this current accelerates the flow, giving rise to the formation of a reverse flow adjacent to the jet flow. With the growing magnetic field, the reverse flow reaches the top surface and finally occupies the entire mold width from the SEN to the narrow wall. Consequently, with this OV development, the top surface velocity is initially decreased, since the kinetic energy is consumed for the new flow structure formation. However, at some critical intensity of the magnetic field, the meniscus starts to accelerate again. The industrial effect of the fully developed opposite meniscus flow is not fully understood. It can have positive or negative consequences on the product quality. Providing more superheat to the stagnation zones at the clearance between the SEN and the wide walls is, for example, favorable for continuous casting. The consequent enhancement of the slag entrapment possibility is, nevertheless, undesirable. As revealed in the performed studies, with the variation of the EMBr magnetic field, all significant changes happen in the liquid bulk long before they are observable at the meniscus. This fact should be seriously considered, since most of the measurement techniques for real casting act close to the slag band level and can give misleading indications.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
8,856.4
2021-07-06T00:00:00.000
[ "Engineering", "Physics" ]
The Janus-Face of Ius Sanguinis: Protecting Migrant Children and Expanding Ethnic Nations
Francesca Decimo
Costica Dumbrava's proposal for abandoning ius sanguinis is timely and bold. My intuition is to reject his suggestion that children's citizenship might be disconnected from that of their parents, but to join his advocacy for a radical rethinking of the ius sanguinis principle with a view towards eliminating it once and for all. These are rather contrasting stances in relation to the same principle. Let us see if the apparent contradiction can be resolved.
To begin, let us consider the element of Costica Dumbrava's proposal that has elicited most attention and controversy among the respondents, but was picked up and expanded by Lois Harder, namely the assertion that granting citizenship at birth is unnecessary and, above all, that making children dependent on the legal status of their parents exposes them to a form of vulnerability. The idea of postponing the acquisition of citizenship until adulthood, taking into account birthplace and residence or possession of the appropriate attitudes and skills, derives from the classic opposition between ius sanguinis and ius soli, according to which the former is considered ethnic and exclusive while the latter is considered civic and inclusive. Yet Rainer Bauböck's comments on this point explain how, in the absence of parental transmission of citizenship to children, ius soli and ius domicilii can generate individual and familial conditions that are both legally paradoxical and morally unfair. I share the doubts and critiques raised by Rainer Bauböck, Scott Titshaw and Kristin Collins regarding the alleged emancipatory value of a citizenship system that disconnects children from their parents. In particular, I consider any legal system that fails to specifically protect the relationship between parents and children to be highly risky. Indeed, who should children depend on if not their parents? Dumbrava's proposal that children might instead be subject to, and protected by, a kind of international law faces the problem of subordinating the individual and familial reproductive spheres to institutional logics.
As Luc Boltanski has noted,1 the event of birth is inextricably linked to the definition of belonging and social descent - and therefore legal, political, cultural, national, etc.
descent as well. Historically, devices for legitimating the procreative event were provided by religion, ancestry, the nation-state and, in more recent times, a long-term relationship among a couple. In a scenario in which parentage and citizenship are not tightly connected from the beginning, the risk is not only that of generating stateless children but also an excess of state power. Even after World War Two, the Catholic Church in Ireland took children considered illegitimate away from their unmarried mothers. It was nationalist demographic policies, both in Europe and overseas, that shaped the reproductive choices of individuals and families during the 20th century with a view to producing children for the fatherland. We might recall these policies when interpreting some recent nationally-oriented arguments encouraging the children of immigrants to rid themselves of the burden of their cultures of origin in which their inadequately assimilated mothers and fathers remain stuck.2 With this in mind, do we really want to define children's citizenship irrespective of their parents'? Do we really want to shift the task of determining the legitimate membership of our offspring from relationships to institutions?
The considerations made thus far therefore lead me to agree with those who have argued that, as long as the system of nation-states regulates our rule of law, children's citizenship must be linked from birth to that of their parents. At the same time, it seems to me that ius sanguinis is a legal instrument which, especially in a global context of increased geographical mobility, opens the way to policies of attributing nationality that go far beyond protecting the parent-child relationship. This point relates to Dumbrava's observation that ius sanguinis is historically tainted, which was critically addressed by Jannis Panagiotidis but has not yet been decisively refuted.
As scholars have noted, ius sanguinis makes it possible to recognise a community of descendants as legitimate members of the nation regardless of its territorial limits, but that is not all. This principle has been used to grant the status of co-national to individuals dispersed not only across space but also across time, leading to the construction of virtually inexhaustible intergenerational chains.3 This principle is based on blood, identified as the essential and primordial element of descent, belonging and identification. It is true that this potential for unlimited intergenerational transmissibility is effectively defused by the fact that many countries interpret ius sanguinis narrowly, applying it generally only up to the second generation born abroad. And yet, is this limit enough to bind and delimit the potential of ius sanguinis? In national rhetoric the image of a community of descendants continues to exert a powerful appeal that goes beyond the attribution of birthright citizenship. In historical emigration countries - but also others -,4 ius sanguinis as a legal practice is used to grant preferential conditions and benefits to descendants as part of the direct transmission or 'recovery' of ancestral citizenship well beyond the second generation.5 Generational limits in the granting of citizenship to descendants can thus be bypassed because, in principle, ius sanguinis itself poses no particular restrictions in this regard.
1 See Boltanski, L. (2004), La condition foetale. Paris: Gallimard.
2 See Huntington, S. (2004), Who Are We? New York: Simon and Schuster.
The most controversial aspects of ius sanguinis emerge when this principle ends up competing with ius soli or ius domicilii, that is, when individuals born and raised elsewhere enjoy a right to citizenship in the name of lineage and an assertion of national affiliation, while immigrants who participate fully in the economic, social and cultural development of the country are denied this same right or face serious obstacles in accessing it. In such a context - Germany in the past and Italy today - the right to citizenship effectively becomes a resource which, like economic, human and social capital, is distributed in a highly unequal way, benefitting certain categories of people - 'descendants' - at the expense of others - 'foreigners'. In view of its unlimited intergenerational potential, I conclude that, if its purpose is merely to bind children's citizenship to that of their parents, ius sanguinis as a legal instrument suffers from ambiguity and disproportionality. All of these critical points seem to be implicitly overcome in Bauböck's proposal of a ius filiationis principle, which would focus entirely on linking children's citizenship to that of their parents, especially for migrants and non-biological offspring. Under a different name and with distinct content, does this move not suggest that, rather than modifying or modernising ius sanguinis as advocated by Rainer Bauböck and Scott Titshaw, it is time to abandon it once and for all, adopting in its place a principle that explicitly protects parentage and citizenship in contexts of geographical mobility instead of linking it to genealogical lineage and nationhood? Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
1,787.8
2018-01-01T00:00:00.000
[ "Political Science", "Law", "Philosophy" ]
Stanniocalcin-1 Protects Retinal Ganglion Cells by Inhibiting Apoptosis and Oxidative Damage Optic neuropathy including glaucoma is one of the leading causes of irreversible vision loss, and there are currently no effective therapies. The hallmark of the pathophysiology of optic neuropathy is oxidative stress and apoptotic death of retinal ganglion cells (RGCs), a population of neurons in the central nervous system with their soma in the inner retina and axons in the optic nerve. We here tested whether an anti-apoptotic protein, stanniocalcin-1 (STC-1), can prevent loss of RGCs in the rat retina with optic nerve transection (ONT) and in cultures of RGC-5 cells with CoCl2 injury. We found that intravitreal injection of STC-1 increased the number of RGCs in the retina at days 7 and 14 after ONT, and decreased apoptosis and oxidative damage. In cultures, treatment with STC-1 dose-dependently increased cell viability, and decreased apoptosis and levels of reactive oxygen species in RGC-5 cells that were exposed to CoCl2. The expression of HIF-1α that was up-regulated by injury was significantly suppressed in the retina and in RGC-5 cells by STC-1 treatment. The results suggested that intravitreal injection of STC-1 might be a useful therapy for optic nerve diseases in which RGCs undergo apoptosis through oxidative stress. Introduction Optic neuropathy is a disease of the axons of retinal ganglion cells (RGCs) in the optic nerve, and is one of the leading causes of irreversible visual loss [1,2]. The causes of axonal damage in the optic nerve are diverse, ranging from neurodegenerative and neuroinflammatory diseases to glaucoma, which affects more than 60 million people around the world and causes bilateral blindness in about 8 million people [3]. The final pathway of diverse forms of optic neuropathies is the death of RGCs occurring mainly through apoptosis [2], and the generation of reactive oxygen species (ROS) plays an intrinsic part in RGC apoptosis [4][5][6]. Similar to other mammalian neurons in the central nervous system, axons and RGCs are unable to regenerate, and thus no therapeutic treatment is available to date for optic neuropathies. Stanniocalcin-1 (STC-1) is a 247 amino acid protein that is secreted from cells as a glycosylated homodimer. STC-1 was originally identified as a calcium/phosphate regulatory protein in fish [7]. Although its physiological function in humans is not clear, STC-1 is physiologically active in mammals and may be involved in regulation of cellular calcium/phosphate homeostasis [8]. In addition, mammalian STC-1 has been shown to have multiple biological effects involving protection of cells against ischemia [9,10], suppression of inflammatory responses [11], or reduction of ROS and the subsequent apoptosis in alveolar epithelial cancer cells [12] and photoreceptors in the retina [13]. Also, it was found that STC-1 was secreted by mesenchymal stem cells (MSCs) in response to signals from apoptotic cells and mediated an antiapoptotic action of MSCs [14]. Here we investigated the effects of STC-1 on the apoptosis of RGCs and on ROS production in the retina of rats with intraorbital optic nerve transection (ONT), a well-established model for optic neuropathy that induces rapid and specific RGC degeneration and results in apoptotic death of more than 80% of RGCs within 2 weeks [15]. In addition, we evaluated the STC-1 effect in cultures of RGCs with CoCl2 injury, which causes RGC apoptosis by several mechanisms including ROS-driven oxidative stress [16,17].
Ethics Statement The animal study was performed in strict accordance with the Association for Research in Vision and Ophthalmology Statement for the Use of Animals in Ophthalmic and Vision Research. The experimental protocol was approved by the Institutional Animal Care and Use Committee of Samsung Medical Center (SMR112051). Animals and animal model Eight-week-old male Sprague-Dawley rats weighing 200 to 250 g were purchased from Orient Bio Inc. (Seongnam, Korea), and used in all experiments. Under anesthesia with zolazepam-tiletamine (Zoletil®, Virbac, Carros, France) and xylazine, the pupils were dilated with phenylephrine/tropicamide eyedrops, and transection of the optic nerve was performed as previously described [18,19]. Briefly, after exposing an optic nerve through a superotemporal conjunctival incision, the optic nerve sheath was incised longitudinally, and a cross-section of the optic nerve was made at 2 mm from the eyeball with a 20-gauge MVR blade. Immediately after ONT, preservation of blood supply to the optic nerve head and the retina was confirmed by fundus examination, and the rats received an intravitreal injection of either 2 µL STC-1 (1 µg) or the same volume of PBS using a Hamilton syringe with a 33-gauge needle (Hamilton, Reno, NV). Recombinant human STC-1 was purchased from BioVender (Brno, Czech Republic). According to the manufacturer's instructions, distilled water was added to a vial of STC-1 that was lyophilized in 20 mM Tris buffer, 20 mM NaCl to yield a final solution of 0.5 mg/mL, and the solution was sterilized through a filter before use. The rats were sacrificed at days 1, 7, and 14, and the retinas were subjected to analysis. Eyes with postoperative complications such as cataract or infection were excluded from analysis. Determination of RGC density For retrograde labeling of surviving RGCs, the fluorescence tracer dextran tetramethylrhodamine (DTMR; 3,000 MW, Molecular Probes Inc., Eugene, OR) was applied to the proximal surface of the transected optic nerve as previously described [18,19]. DTMR diffuses passively through the axon toward the cell soma at a rate of 2 mm/h, which subsequently labels the surviving retinofugal RGCs with a competent axon [19,20]. At days 1, 7, 14, and 28, eyeballs were enucleated and fixed in 4% paraformaldehyde for 4 h. The retinas were isolated from the eyeballs, and four cuts were made from the edges to the center of each retina. The retinas were then flattened and mounted vitreous side up on slide glasses and covered with fluorescent mounting media (Dako, Glostrup, Denmark). The whole-mounted retinas were observed under a laser confocal microscope (LSM700; Carl Zeiss Micro-Imaging GmbH, Jena, Germany), and images were acquired at 100× magnification. The density of labeled RGCs was determined by counting cells in fields 1, 2, and 3 mm from the center of the optic nerve along the centerline of each retinal quadrant. The number of labeled cells in a total of 12 photographs was divided by the area of the region and pooled to calculate the mean density of labeled cells per square millimeter for each retina. The numbers of RGCs were counted independently by two observers in a masked fashion, and averaged. Cell culture For an in vitro study, we used RGC-5 cells, a transformed rat RGC line that has been well-characterized as cells expressing ganglion cell markers and exhibiting ganglion cell-like behavior [21]. The cells were a kind gift from Dr. N. Agarwal [19].
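As a minimal sketch of the density calculation described above, the following converts counts from the 12 sampled fields per retina into cells/mm² and averages the two masked observers' estimates; the field area and count values are made-up assumptions for illustration only, not data from the study.

```python
# Minimal sketch (not the authors' code) of the RGC density calculation:
# counts from 12 sampled fields are converted to cells/mm^2, pooled per retina,
# and the two masked observers' estimates are averaged.
from statistics import mean

def rgc_density(counts_per_field, field_area_mm2):
    """Mean labeled-RGC density (cells/mm^2) over the sampled fields."""
    return mean(c / field_area_mm2 for c in counts_per_field)

# Hypothetical counts for one retina; the 0.09 mm^2 field area is assumed.
observer_a = [102, 95, 88, 110, 99, 91, 105, 97, 93, 100, 96, 89]
observer_b = [100, 97, 90, 108, 98, 92, 103, 95, 94, 101, 94, 90]
density = mean([rgc_density(observer_a, 0.09), rgc_density(observer_b, 0.09)])
print(f"Estimated RGC density: {density:.0f} cells/mm^2")
```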
Cells were cultured in Dulbecco's minimal essential medium (DMEM) containing 4500 mg/L glucose, 10% heat-inactivated fetal bovine serum, and 1% penicillin/streptomycin in a humidified incubator with 5% O2 at 37°C. When 70% confluence was reached, the cells were exposed to CoCl2 (100-800 µM; Sigma-Aldrich Co. LLC, St. Louis, MO) to induce hypoxia and apoptosis and treated with recombinant STC-1 (1-500 ng/mL; BioVender) or N-acetyl-L-cysteine (Sigma). We used N-acetylcysteine as one of the controls because a previous report showed that N-acetylcysteine protected RGC-5 cells from hypoxia-induced cell death by scavenging ROS [22]. Assays for cell viability and apoptosis Cell viability and proliferation were measured using the MTT assay following the manufacturer's protocol (Vybrant® MTT Cell Proliferation Assay Kit; Invitrogen, Carlsbad, CA). Apoptosis was measured by flow cytometry (FACSCanto flow cytometer; BD BioSciences, Mountain View, CA) after double-staining cells with propidium iodide (PI)-PE and annexin V-FITC (Molecular Probes, Inc., Leiden, The Netherlands). The populations of PI+Annexin-V+ cells were compared between groups. Western blot Clear lysates of protein from the retinas or the cells were prepared as described above and measured for concentration. A total of 50 µg protein was fractionated by SDS-PAGE on a 10% bis-tris gel (Invitrogen), transferred to a nitrocellulose membrane (Invitrogen), and then blotted with antibodies against HIF (hypoxia-inducible factor)-1α (Santa Cruz Biotechnology, Inc., Dallas, TX) or β-actin (Santa Cruz Biotechnology). Real time RT-PCR For RNA extraction, the cells or the retinas were lysed in RNA isolation reagent (RNA Bee, Tel-Test Inc., Friendswood, TX) and total RNA was then extracted using the RNeasy Mini kit (Qiagen, Valencia, CA). Double-stranded cDNA was synthesized by reverse transcription (SuperScript III, Invitrogen). Real-time amplification was performed (Taqman Universal PCR Master Mix, Applied Biosystems, Carlsbad, CA) and analyzed on an automated instrument (7500 Real-Time PCR System, Applied Biosystems). PCR probe sets were commercially purchased (Taqman Gene Expression Assay Kits, Applied Biosystems). Values were normalized to 18s RNA and expressed as fold changes relative to normal retinas or uninjured cells. Flow cytometrical analysis of mitochondrial ROS Mitochondrial ROS was measured in cultures using CellROX™ Deep Red Reagent (Invitrogen), a novel cell-permeant dye that fluoresces (near-infrared) when oxidized, and MitoTracker Green FM Dye (Invitrogen), a probe that stains mitochondrial membrane lipid regardless of mitochondrial membrane potential. The cells were treated with CellROX™ dye and MitoTracker Green dye, and analyzed by flow cytometry (FACSCanto flow cytometer). Statistical analysis The data are presented as the mean ± SEM. Comparisons of two values were made using the two-tailed Student's t test, and comparisons of more than two values using a one-way ANOVA. Intravitreal injection of STC-1 increased the survival of RGCs after ONT To evaluate the effect of STC-1 on survival of RGCs in vivo, we injected 1 µg STC-1 into the vitreous cavity of rats immediately after ONT. At days 1, 7, 14, and 28, the rats were sacrificed, and the retinas were evaluated for RGCs (Fig. 1A). The numbers of RGCs at days 7 and 14 were significantly greater in rats that received STC-1 compared to controls that received PBS (Fig.
1B, C); the numbers of RGCs were 1196 ± 30/mm² in STC-1-treated rats and 955 ± 23/mm² in PBS-treated rats (p < 0.0001) at day 7, and 419 ± 36/mm² in STC-1-treated rats and 166 ± 10/mm² in controls (p < 0.0001) at day 14. There was no difference in the numbers of surviving RGCs between the groups at day 28 after ONT. STC-1 decreased apoptosis and oxidative damage in the retina after ONT To investigate whether STC-1 improved RGC survival by decreasing apoptosis, we analyzed the retina for the level of active caspase-3. Caspase-3 is implicated in the primary and secondary waves of RGC apoptosis and is active for a long period of time and with great intensity during RGC loss [23,24]. As shown in Fig. 2A, caspase-3 activity at day 1 was significantly lower in the retinas of rats that received STC-1 compared to controls, indicating reduction of apoptosis by STC-1. Next, we assayed the retinas for nitrotyrosine and protein carbonyl, two protein derivatives of ROS that are used to measure oxidative damage in the retina [25,26]. We evaluated ROS levels because previous studies reported that bursts of ROS were generated following ONT and triggered RGC apoptosis [2,[4][5][6]. The levels of both nitrotyrosine and protein carbonyl in the retinas at day 1 were significantly lower in STC-1-treated eyes compared to PBS-injected controls (Fig. 2B, C). STC-1 decreased the expression of HIF-1α in the retina after ONT Next, we used real time RT-PCR to evaluate the expression of oxidative stress- and apoptosis-related genes that are implicated in oxidative damage, RGC apoptosis, and survival: UCP2, HIF-1α, BDNF (brain-derived neurotrophic factor), and caspase-3 [2]. Additionally, we assayed for the expression of STC-1 to check whether ONT induced up-regulation of endogenous STC-1 in the retina, because previous studies reported that the STC1 transcript was increased in the heart or brain following hypoxic signals [27,28]. The expression of all the genes tested increased at day 1 and decreased at day 7 after ONT (Supplementary Fig. 1, Fig. 2D, E). Of note, transcript levels of HIF-1α, a key regulator of hypoxia, were markedly increased in the retina at day 1, and were significantly reduced by intravitreal injection of STC-1 (Fig. 2D). Consistently, western blot analysis showed that levels of HIF-1α protein were increased in the retina at day 1 and markedly decreased in the retina treated with STC-1 (Fig. 2F). Also, levels of caspase-3 transcripts that were increased by ONT were significantly decreased by STC-1 at days 1 and 7 (Fig. 2D, E). However, the expression of UCP2, which was previously shown to be up-regulated by STC-1 [11,29], was not increased in STC-1-treated retinas either at mRNA or protein levels (Fig. 2D, E, G). Also, STC1 transcripts were not increased in the retina after ONT and not altered by exogenous STC-1 treatment (Supplementary Fig. 1, Fig. 2D, E). The level of BDNF, which exerts a potent neuroprotective effect on RGCs in vivo and in vitro [30,31], was significantly higher in the retinas of STC-1-treated eyes at day 7 compared to PBS-treated controls (Fig. 2E). STC-1 inhibited apoptosis in CoCl2-injured RGC-5 cells To evaluate the effect of STC-1 on the survival of RGCs in vitro, we exposed RGC-5 cells to different concentrations of CoCl2 (0-800 µM) for 12 or 24 h in order to induce hypoxia and apoptosis. Expectedly, CoCl2 decreased the cell viability, and STC-1 treatment significantly increased the cell viability in a dose-dependent manner as measured by MTT assay (Fig. 3A, B).
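As a purely illustrative sketch of the statistical comparisons described in the statistical analysis section above (values reported as mean ± SEM, two-group comparisons by a two-tailed Student's t test, multi-group comparisons by one-way ANOVA), the snippet below uses SciPy on made-up group values; these are not the study's data.

```python
# Illustrative only: mean ± SEM, two-tailed t test, and one-way ANOVA as described
# in the statistical analysis section. The group values below are invented.
import numpy as np
from scipy import stats

stc1_day7 = np.array([1150.0, 1220.0, 1180.0, 1235.0, 1195.0])  # hypothetical densities
pbs_day7 = np.array([940.0, 970.0, 955.0, 930.0, 980.0])

t_stat, p_two_group = stats.ttest_ind(stc1_day7, pbs_day7)            # two groups
f_stat, p_anova = stats.f_oneway(stc1_day7, pbs_day7, pbs_day7 * 1.1)  # >2 groups

sem = stats.sem(stc1_day7)
print(f"mean ± SEM = {stc1_day7.mean():.0f} ± {sem:.0f}, "
      f"t-test p = {p_two_group:.4f}, ANOVA p = {p_anova:.4f}")
```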
Also, flow cytometry showed that the numbers of PI+Annexin+ cells, indicating apoptotic cells, were increased in RGC-5 cells after CoCl2 exposure in concentration- and time-dependent manners (Fig. 3C, D). Treatment with either 100 or 500 ng/mL STC-1 significantly decreased the numbers of PI+Annexin+ cells as assayed by flow cytometry (Fig. 3E, F). STC-1 suppressed CoCl2-induced ROS production and HIF-1α expression in RGC-5 cells We next evaluated the effect of STC-1 on ROS production in RGC-5 cells that were exposed to CoCl2. The percentage of cells that were positive for both CellROX™ and MitoTracker Green, indicating production of mitochondrial ROS, was increased by CoCl2, and reduced significantly by STC-1 treatment (Fig. 4A, B). Similarly, levels of nitrotyrosine, a marker of oxidative stress, were markedly increased in the cells by CoCl2 and significantly decreased by STC-1 or N-acetylcysteine (Fig. 4C). Together, the data suggested that hypoxia induced by CoCl2 increased oxidative stress in RGC-5 cells, and STC-1 decreased oxidative stress. Also, similar to the in vivo data (Fig. 2D, E, F), the expression of HIF-1α was induced in RGC-5 cells by CoCl2 and significantly reduced by STC-1 both at transcript and protein levels (Fig. 4D, E). However, STC-1 treatment did not change the expression of UCP2 either at transcript or protein levels in RGC-5 cells, whereas N-acetylcysteine significantly increased UCP2 levels (Fig. 4D, F). Discussion Data demonstrated that intravitreal injection of STC-1 delayed RGC apoptosis in a rat model of ONT. Also, treatment with STC-1 decreased CoCl2-induced apoptosis in RGC-5 cells. Both in vivo and in vitro, the anti-apoptotic effect of STC-1 was accompanied by decreases in ROS and by down-regulation of HIF-1α. HIF-1 is a heterodimeric transcription factor that is composed of α and β subunits. HIF-1 acts as a key regulator for the cellular response to hypoxia [32]. Under normoxic conditions, HIF-1α, the active subunit, is rapidly degraded by the ubiquitin-proteasome system. However, under hypoxic conditions, HIF-1α accumulates and facilitates apoptosis by activating diverse genes for pro-apoptotic proteins such as BNIP3 as well as stabilizing p53, which in turn activates genes to initiate apoptosis [33,34]. In fact, high levels of HIF-1α were detected in the retina and optic nerve head of patients with glaucomatous optic neuropathy, indicating the involvement of hypoxia and HIF-1α in the pathogenesis of the disease [35,36]. However, HIF-1α can also inhibit apoptosis by activating anti-apoptotic genes such as VEGF and Bcl-xL [37,38]. Therefore, the role of HIF-1α in cell apoptosis is more complicated, depending on the type of tissues and injuries. In our study, HIF-1α expression was down-regulated in STC-1-treated retinas and cells. These findings might be direct effects of STC-1 or indirect results of STC-1-mediated tissue protection, reflecting that decreased damage in STC-1-treated tissues might reduce activation of HIF-1α in response to tissue damage. Therefore, HIF-1α might not be directly related to RGC damage or to the action of STC-1. Further studies are necessary to investigate the role of HIF-1α in RGC apoptosis and protection as well as the potential implication of STC-1-induced down-regulation of HIF-1α. Oxidative stress plays an intrinsic role in apoptosis of RGCs.
Previous studies showed that bursts of ROS were generated in the retina following ONT, and oxidative stress caused by an imbalance between ROS production and their elimination subsequently induced an irreversible loss of RGCs [2,[4][5][6]. Of note, this study revealed that STC-1 significantly decreased ROS levels in the retina with ONT and in RGCs with CoCl2 injury. For the mechanism of STC-1, several studies suggested that STC-1 up-regulated the expression of mitochondrial UCP-2 to uncouple oxidative phosphorylation and thereby diminished superoxide generation [11,29]. However, UCP-2 was not increased in the retina or in RGCs after STC-1 treatment in this study. Therefore, the mechanism by which STC-1 lowers ROS in RGCs remains to be clarified, although the primary effect of STC-1 was probably to decrease apoptosis by reducing oxidative stress. A one-time injection of STC-1 was not effective in decreasing apoptosis at 28 days after injury. Considering that RGCs undergo apoptosis over 2 weeks after complete transection of an optic nerve, a one-time injection of recombinant STC-1 may not be sufficient to completely block RGC apoptosis. Multiple intravitreal injections of STC-1 may be necessary for long-lasting effects and are feasible in human patients. Together, the results demonstrated that STC-1 decreased apoptosis and oxidative stress in RGCs and in the retina. These findings suggest that intravitreal injection of STC-1 may be a promising candidate for treatment of optic neuropathy including glaucoma, which is the second most common cause of blindness [3]. Glaucoma is a chronic neurodegenerative disease characterized by gradual and irreversible loss of RGCs mainly through apoptosis [2]. Strategies to treat this condition are either to prevent RGCs from apoptosis or to stimulate regeneration of axons. Moreover, multiple intravitreal injections of STC-1 are feasible in patients. Therefore, intravitreal injection of STC-1 is particularly attractive for treating chronic diseases such as glaucoma. Figure 4 (caption): (D) Real time RT-PCR analysis indicated that expression of HIF-1α was induced in RGC-5 cells by CoCl2 and was significantly down-regulated by STC-1 (100 or 500 ng/mL), whereas UCP2 transcripts were not increased by STC-1. (E) Western blot analysis showed that HIF-1α protein was increased in RGC-5 cells after CoCl2 injury and was decreased by STC-1 treatment. (F) ELISA showed that the levels of UCP2 protein were not increased in CoCl2-injured RGC-5 cells by STC-1 treatment, whereas N-acetylcysteine treatment significantly increased levels of UCP2. The values are presented as the mean ± SEM. Figure S1 Gene expression profiles in the retina at days 1 and 7 after optic nerve transection. * p < 0.05.
4,379.2
2013-05-07T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Analysis of Bulk Cement Distribution Network Considering Market Share and Operating Income after Acquiring the Competitor ― The distribution network is one of the most important strategic decision issues that need to be optimized for the efficient operation of the whole supply chain. When a company makes a business acquisition that brings more distribution facilities, the location-allocation planning of the distribution network needs to be reconsidered. The distribution network includes the links from factories to packing plants and from factories or packing plants to demand points. A linear programming model was developed to solve the optimization problem, which involves multisource, multiproduct, and multiperiod flows in a multi-echelon distribution network. We build numerical experiments from two scenarios to show the behaviour of this model. This model determines which distribution facility locations should be used and which quantities should be allocated to achieve the optimal operating income, considering the market share policy to satisfy customer demands. I. INTRODUCTION The network design is a fundamental task in supply chain management: it affects all other decisions in a supply chain and has a great influence on investment returns and overall supply chain performance; it was further conveyed that mergers and acquisitions can require a company to integrate different logistics networks [1]. The design of the supply chain network involves strategic decisions including determining the number, location and capacity of distribution facilities to meet consumer demand effectively and efficiently [2]. Decisions in supply chain design can result in a supply chain configuration that has a significant impact on logistics costs and responsiveness [3]. The supply chain network can be used to achieve the company's supply chain objectives, ranging from low supply chain operating costs to a high level of responsiveness to customer demand. So if an organization or company wants to increase its productivity and profitability, an effective and efficient supply chain network design is absolutely necessary. The benefits of managing supply chain networks come from integrating operational, design and financial decisions with the objective of determining the optimal configuration of production and distribution networks under operational constraints, including quality, production (i.e., supply restrictions related to production allocation and capacity balance) and finance (i.e., production costs, transportation costs, and other costs incurred along the network through which materials and products flow) [4]. The design of the distribution network consists of three parts: location-allocation, vehicle routing problems, and inventory control [5]. Location-allocation is defined over the set of customer locations whose demand is known and the set of available facility locations. When the facilities have been determined there will be a fixed cost, and there will also be a delivery cost between each candidate location that will be used and the location of the customer. So the facility locations and the delivery pattern between the facilities and their customers will be sought to achieve the desired objectives [6] [7]. These objectives are classified into four categories, namely minimizing costs, demand orientation, profit maximization, and environmental problems [8].
In this paper, we developed a linear programming model to solve the location-allocation problem of the distribution network after the business acquisition made by a cement company in Indonesia, by considering the market share. Therefore, this study aims to develop a location-allocation model of the distribution network to optimize the operating income by considering the market share of the cement company, which recently made a business acquisition of a similar company. This paper is divided into 5 sections. Section 1 describes the research background. Section 2 provides a literature review, especially for the proposed model. Section 3 presents the proposed model. Section 4 provides the case of the location allocation of the distribution network of an Indonesian cement company. And the last section discusses the conclusion and future research. II. LITERATURE REVIEW Pujawan describes location allocation in the supply chain network [2]. Decisions on the establishment or use of a production facility or place of storage are often made simultaneously with other decisions such as the allocation of production and delivery. And it becomes more complex when the capacity constraints of production and storage are included in the decision. This is the case when a number of distribution facilities (both factories and storage), located in several different places and each with limited capacity, must serve the entire marketing area of the company, where demand levels differ from one area to another. Therefore a linear program is needed to determine simultaneously which production facilities will serve which marketing areas and which factories will supply the inventory in the storage areas. The planning of capacity location allocation of distribution centers for distribution network design was discussed by considering the links between factories and distribution centers and between distribution centers and demand points, exploring the optimal number and locations of distribution centers in the X cement industry in Myanmar [9]. The problem was solved using mixed integer linear programming (MILP) for a network consisting of three factories, six distribution centers, and six market areas. The MILP model provides useful information for the company about which distribution centers are opened and the best distribution networks to maximize profits while still meeting customer demands. There are three scenarios, and in all scenarios the solution recommends opening only two distribution centers, in the Mandalay and Meikhtila markets, in the distribution network. Another study examined a supply chain distribution network focused on maximizing location-inventory profitability under price-sensitive demand [10]. Determination of location, allocation, and price, with large order volumes from customers, is intended to maximize the total profit that can be achieved. It used a mixed-integer nonlinear programming model solved by a Lagrangian relaxation algorithm for both capacitated and uncapacitated distribution centers. The results indicated that acceptable near-optimal solutions can be obtained with small computational time even for large problem instances.
Another work modeled the location allocation of a distribution network in a company with the aim of maximizing earnings before interest, taxes, depreciation, and amortization (EBITDA) while still considering market share in accordance with company policy [11]. The model created resulted in an increase in EBITDA of 10.54% and an increase in the allocation of market share for sales areas where the company is a market leader or market challenger, while in the follower and nicher market areas there was, on average, a decrease in market share allocation. Van Dijk writes about supply chain distribution networks in multicommodity parcel companies [12]. The objective function is to maximize profits and maintain market share. Market share itself depends on the price and time of service provided to customers. The solution approach used is to integrate processes such as determining prices, determining demand, and then minimizing costs on the distribution network, using a new metaheuristic algorithm based on local branching. There are two optimization situations: the first considers only price and routing, linearizing the objective function to approximate the original nonlinear model, so that the heuristic formulation is used with MILP to find the optimal solution. The second optimization situation covers price, routing, and the distribution network. The problem-solving approach is the same as in the first situation, adding a metaheuristic algorithm based on neighborhood search and local branching variables that are run with MILP. The results obtain the optimal solution, and the more complex the distribution network that is built, the longer the system takes to do the calculations. This study attempts to develop a location-allocation model of the distribution network in an Indonesian cement company which recently made a business acquisition, to optimize the operating income by considering the market share. A. Problem Description The supply chain distribution network is based on product flow as depicted in Figure 1. This complex supply chain network includes multiple sources, multiple echelons, multiple products, and multiple periods, while considering the market share policy for optimizing operating income. A hypothetical capacity allocation problem will be considered based on the network, where multiple products can be distributed within a time horizon of 12 months. The aim is to determine how capacities should be allocated optimally to distribute the product items in a complete supply chain, whereby the capacity constraints of supply, distribution, and market share are considered simultaneously. Here, distribution facility capacity is defined as the available supply volume in each plant and each period, and the capacity of each plant is independent of the others; the supply capacity is the maximum amount of product that can be provided by each distribution facility in each period. In addition, some other factors, such as type of product and market share policy, are considered. The problem for the proposed model is determining the allocation of product volumes to be distributed to sales areas from each plant (factory, packing plant, grinding plant) in order to satisfy the demand. The objective is maximizing the operating income, generated as income from the sales price minus cost of goods sold, cost of sales and marketing, general and administrative cost, and cost of last-mile delivery. The model is restricted by some assumptions.
The demand volume uses the demand forecast from the company. Demand fulfillment is modelled as two scenarios: delivering it in full and delivering it based on the market share policy. Both scenarios aim to obtain the maximum operating income. The boundary of this model is the use of distribution facilities only in Java, because this can already represent the entire distribution network. The product is bulk cement, using the company brand that has the highest market share. The product, transportation, and distribution facilities are assumed to be always available and unlimited. The inbound cost is already captured in the cost of goods sold. The proposed conceptual model is shown in Figure 2. B. Proposed Mathematical Model The notations that will be used to describe the problem are as follows. The objective function maximizes the total operating income of the supply chain by maximizing the operating income from each factory, packing plant, and grinding plant, generated by multiplying the total volume distributed to sales area d from factory f / packing plant p / grinding plant g for time period t and product type j by the unit operating income obtained from distributing to sales area d from factory f / packing plant p / grinding plant g for time period t and product type j. The operating income calculation for each factory f / packing plant p / grinding plant g is given by: a. Factory operating income calculation formula (2); b. Packing plant operating income calculation formula (3); c. Grinding plant operating income calculation formula (4). Subject to: 1. Volume delivered from the main factory to sales areas & packing plants ≤ main factory capacity (5). This constraint ensures that the capacity of the main factory is enough for delivering product both to the sales areas and to the packing plants. 2. Volume delivered from a factory to sales areas ≤ factory capacity (6). This constraint ensures that the capacity of the factory is enough for delivering product to the sales areas. 3. Volume delivered from a packing plant to sales areas ≤ packing plant capacity (7). This constraint ensures that the capacity of the packing plant is enough for delivering product to the sales areas. 4. Volume delivered from the grinding plant to sales areas ≤ grinding plant capacity (8). This constraint ensures that the capacity of the grinding plant is enough for delivering product to the sales areas. 5. Volume flow into a packing plant = volume flow out from the packing plant to sales areas (9). This constraint guarantees that the inflow volume from the factory to the packing plant is the same as the outflow volume from the packing plant to the sales areas. 6. Volume fulfillment of sales area demand based on the market share policy, based on equation (10). So we have an upper bound and a lower bound for the demand to be delivered: a. Volume to a sales area ≤ upper bound of demand based on the market share policy; b. Volume to a sales area ≥ lower bound of demand based on the market share policy. An Indonesian cement company has made a business acquisition of a similar company. After the acquisition, the Indonesian cement company has 5 cement factories, 3 packing plants, and 1 grinding plant as the distribution facilities to fulfill cement demand in Java. As Figure 3 shows, the company's distribution facilities cover all provinces in Java.
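To make the allocation structure described above concrete, the following is a minimal sketch using the open-source PuLP library on a small made-up instance. The plant names, capacities, demands, unit operating incomes, and the market-share band are illustrative assumptions, not the company's data, and only the capacity and market-share-bound constraints outlined above are included (the packing-plant flow balance is omitted for brevity).

```python
# Minimal sketch (illustrative only) of the location-allocation LP described above,
# written with PuLP. All numbers are made-up assumptions, not company data.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus, value

plants = {"Tuban": 900, "Narogong": 600, "Gresik": 400}   # capacity (kilotons/period)
areas = {"AreaA": 500, "AreaB": 450, "AreaC": 350}        # forecast demand (kilotons)
income = {                                                 # unit operating income (per ton)
    ("Tuban", "AreaA"): 9, ("Tuban", "AreaB"): 7, ("Tuban", "AreaC"): 4,
    ("Narogong", "AreaA"): 5, ("Narogong", "AreaB"): 8, ("Narogong", "AreaC"): 6,
    ("Gresik", "AreaA"): 6, ("Gresik", "AreaB"): 5, ("Gresik", "AreaC"): 9,
}
share_lb, share_ub = 0.8, 1.0   # assumed market-share band on demand fulfillment

prob = LpProblem("bulk_cement_allocation", LpMaximize)
x = {(p, a): LpVariable(f"x_{p}_{a}", lowBound=0) for p in plants for a in areas}

# Objective: total operating income = unit income times allocated volume
prob += lpSum(income[p, a] * x[p, a] for p in plants for a in areas)

# Capacity constraints: shipments out of each plant cannot exceed its capacity
for p, cap in plants.items():
    prob += lpSum(x[p, a] for a in areas) <= cap

# Market-share policy: delivered volume to each area stays within the demand band
for a, dem in areas.items():
    prob += lpSum(x[p, a] for p in plants) >= share_lb * dem
    prob += lpSum(x[p, a] for p in plants) <= share_ub * dem

prob.solve()
print(LpStatus[prob.status], "objective =", value(prob.objective))
```

In the same spirit, additional rows per product type and time period, and the packing-plant flow-balance equalities, can be added as further constraints without changing the overall structure.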
For bulk cement distribution facilities, the company has 3 factories (Tuban, Rembang, and Narogong), 3 packing plants (Banyuwangi, Priok, and Ciwandan), and 1 grinding plant (Gresik) for fulfilling bulk cement demand in Java. And it has two types of bulk cement, OPC and PCC. There are 103 sales areas in 5 provinces in Java with demand volumes estimated at about 6.5 million tons a year. Therefore, the company needs to develop a new location allocation of the distribution network to satisfy the market demand and also to strengthen the market share for achieving the maximum operating income. C. Data The data parameters taken from the company are shown in Table 1 to Table 5. D. Result and Discussion The model was run using OpenSolver, a Microsoft Excel 2013 add-in, on an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz (4 CPUs) with 8.192 GB RAM. The experiment first calculates the operating income for each plant-destination pair (Table 6) and sets up the lower and upper boundaries of the market share policy (Table 7). After that we set up the constraints in the OpenSolver software, ran OpenSolver, and obtained the objective function results presented in Table 8. Table 8 shows that the operating income tends to be higher when the model is applied. In the scenario where all demand is fulfilled, the operating income was 32.51% higher than the expected value with the same market share, and 34.20% higher than the expected value when the model was run with the scenario using the market share policy. However, volume and market share declined when the model was run with the market share policy scenario, but they are still within the boundary of the market share policy. This indicates that the model tends to increase the volume that has higher operating income and decrease the volume that has lower or negative operating income. This shows that the model can effectively conduct location allocation on the distribution network and generate optimal operating income while considering the market share policy. The distribution facility utilization is also measured to check whether the distribution facilities are utilized properly or not. Table 9 shows that no distribution facility is over-utilized or over capacity. The new distribution facility (Narogong Factory) becomes crucial for the Indonesian cement company, with 99.35% utilization, to support the company in distributing the product to the customer and to help the company generate higher operating income. The location allocation of the distribution network plays a crucial role in supply chain management. Because of the business acquisition of a similar company, a supply chain manager must be able to develop a new location allocation model of the distribution network to optimize the distribution allocation of the product, achieve maximum operating income for the company, and also satisfy the demand to maintain the market share according to company market share policies. The model uses linear programming with two scenarios to show the model behaviour towards demand fulfillment based on the market share policy; the location allocation model of the distribution network can be constructed with respect to demand fulfillment at each node of the distribution network, distribution facility capacity, and the market share policy. The optimization reached an optimal solution and shows that all demand is satisfied from the distribution facilities at maximum operating income and acceptable market share values based on the market share policy.
The higher the operating income, the more volume will be allocated; conversely, the lower the operating income, the smaller the volume allocated. However, the model can still be developed in future research by considering whether the company will keep or release the distribution facilities, other financial measurements, or other company policies besides the market share policy.
3,837.4
2020-11-03T00:00:00.000
[ "Economics" ]
A Novel Encoder-Decoder Model for Multivariate Time Series Forecasting The time series is a kind of complex structure data, which contains some special characteristics such as high dimension, dynamics, and high noise. Moreover, multivariate time series (MTS) has become a crucial study in data mining. The MTS utilizes the historical data to forecast its variation trend and has turned into one of the hotspots. In the era of rapid information development and big data, accurate prediction of MTS has attracted much attention. In this paper, a novel deep learning architecture based on the encoder-decoder framework is proposed for MTS forecasting. In this architecture, firstly, the gated recurrent unit (GRU) is taken as the main unit structure of both the encoding and decoding procedures to extract the useful successive feature information. Then, different from the existing models, the attention mechanism (AM) is introduced to exploit the importance of different historical data for reconstruction at the decoding stage. Meanwhile, feature reuse is realized by skip connections based on the residual network for alleviating the influence of previous features on data reconstruction. Finally, in order to enhance the performance and the discriminative ability of the new MTS, the convolutional structure and fully connected module are established. Furthermore, to better validate the effectiveness of MTS forecasting, extensive experiments are executed on two different types of MTS, namely stock data and shared bicycle data, respectively. The experimental results adequately demonstrate the effectiveness and the feasibility of the proposed method. Introduction Time series is the sequence of numbers arranged according to occurrence time, which is also called a dynamic series. The time span can be years, quarters, months, hours, or other units [1]. In recent years, time series have been widely applied in various fields, such as economics, medicine, transportation, and environmental science, which has attracted much attention [2]. According to the number of observed variables, time series data can be divided into univariate time series data and multivariate time series data [2]. Therefore, how to mine useful information from these time series data becomes a very important task in data mining, machine learning, artificial intelligence, and other fields [3]. As a key and crucial branch of time series data analysis, time series prediction aims to accurately predict or estimate future events by exploring the past and current data of a single variable or several correlated variables [4]. The former is called univariate time series forecasting; the latter is called multivariate time series forecasting. For example, economists utilized the historical data of stock prices to forecast stock prices or trends [5], medical scientists made use of biological time data to predict diseases [6], transportation departments explored the historical data of traffic flow to predict congestion [7], and environmentalists employed atmospheric timing data to estimate environmental climate changes [8], etc. Nevertheless, time series data not only contain abundant information but also exhibit some complex characteristics such as high dimension, nonlinearity, fluctuation, and spatiotemporal dependence, which make accurate time series prediction a challenging research hotspot [9]. In the past few decades, time series prediction has received wide attention and many methods have been proposed [10].
For instance, traditional statistics-based methods focused on relevant domain knowledge, while learning-based methods are introduced to learn temporal dynamics in a purely data-driven strategy. As a popular learning-based method, deep learning can learn the deep latent features from the input data comprehensively and has become a cutting-edge approach [11]. The traditional statistics-based methods include autoregressive (AR) [12], autoregressive moving average (ARMA) [13], autoregressive integrated moving average, and exponential smoothing models (ARIMA) [14]. Although the above methods can utilize statistical inference to describe and evaluate the relationship between variables, they assume that the input data follow a linear model structure with constant variance [15]. Therefore, there are some limitations in dealing with complex time series data containing nonlinear and nonstationary structures, so they cannot effectively obtain accurate predictions. In order to address the shortcomings mentioned above, many learning-based methods, including support vector machine (SVM) [16], genetic algorithm (GA) [17], AdaBoost [18], and artificial neural network (ANN) [19], which can simulate the complex structures of time series data, have been widely applied to time series prediction tasks. For example, Dong et al. [16] discussed utilizing SVM for predicting building energy consumption in tropical regions, and they considered that it was superior to other neural networks from the views of performance and parameter selection. Yadav et al. [17] proposed a neuron model based on polynomial structure and used Internet traffic and financial time series data to conduct forecast experiments, which showed that the neural network (NN) model not only achieved better performance but also greatly reduced the computational complexity and running time compared with the existing multilayer neural networks. However, building an effective learning-based model needs a large amount of specialized data, and the training process requires a high level of computer hardware. Therefore, the application of traditional machine learning models is largely limited. In recent years, with the improvement of data acquisition and computing power, a novel learning-based method called deep learning has attracted much attention. Deep learning [20] can obtain a higher-level representation of the original input via designing simple and nonlinear modules, which is conducive to learning feature representations. Convolutional neural network (CNN) [21], recurrent neural network (RNN) [22], and variant models have been successfully applied to time series prediction. Zhang et al. [23] proposed a deep spatiotemporal residual network model to predict the flow of people throughout the city. Jagannatha and Yu [24] developed a bidirectional recurrent neural network (BRNN) for medical event detection in electronic medical records. Nevertheless, RNN and BRNN are prone to the gradient vanishing and gradient exploding problems. To overcome the drawbacks, the long short-term memory network (LSTM) [25] and the gated recurrent unit (GRU) [26] were developed. Since both LSTM and GRU can keep the historical information for a longer time step, they are widely used in time series data analysis, prediction, and classification tasks. Compared with LSTM, the GRU has a simpler structure and fewer parameters, which can reduce the overfitting risk. For example, Shu et al.
[27] presented a new neural network model based on improved GRU to predict short-term traffic flow. As an unsupervised method, the Autoencoder (AE) is also widely applied to feature representation learning [28]. In order to extract better features, the RNN is frequently combined with AE. Xu and Yoneda [29] first used a stacked autoencoder (SAE) to encode the key evolution patterns of urban weather systems and then adopted the LSTM network to predict the PM2.5 time series of multiple locations in the city. Zhang et al. [30] proposed an encoder-decoder model for real-time air pollutant prediction, in which LSTM was the main network. The experimental results indicated that the model can fully extract the data correlations and obtain higher prediction accuracy. In addition, the attention mechanism (AM) [31] has attracted extensive attention in time series data analysis and prediction. Han et al. [32] combined LSTM with AM to predict time series, in which the AM can capture time correlation by calculating weights between nodes and neighboring nodes, so that it achieved better performance and provided enlightenment for multivariate time series prediction simultaneously. Although abundant methods have been developed, their performances are limited owing to the high nonlinearity and nonstationarity of multivariate time series (MTS) data. To improve the prediction performance, a novel encoder-decoder prediction model is presented, and the contributions are as follows: (1) The proposed model can sufficiently extract significant temporal features of MTS data. (2) As a unit structure, the GRU is adopted to describe sequential characteristics, which can reduce model parameters in the procedures of encoding and decoding. (3) The AM is introduced into the decoding process to better acquire the reconstructed MTS data. (4) To strengthen the prediction performance, a 1D convolution operation and AM are further performed on the reconstructed new MTS data, which possess discriminant and significant characteristics. The outline of this paper is as follows. Section 2 reviews the related works, and time series data preprocessing is introduced in Section 3. Section 4 describes the proposed network structure in detail. Section 5 illustrates extensive experiments to verify the effectiveness and feasibility of the proposed model. Section 6 provides some conclusions and future works. Related Works Recently, researchers have proposed extensive time series (TS) and multivariate time series (MTS) prediction methods, which are classified into two categories including machine learning and deep learning methods [9]. Machine Learning Methods. The basic assumption of the statistical methods is that TS and MTS with simple structures are linear and stationary. However, in real applications, the TS and MTS data are collected with complex structures, which have high nonlinearity and nonstationarity, and they make TS and MTS forecasting very difficult. Meanwhile, the machine learning algorithms are usually helpful to improve the prediction accuracy [33]; they can analyze the behavior of data over time and are independent of statistical distribution assumptions when extracting complex nonlinear patterns. Specifically, Li et al. [34] firstly proposed a chaotic cloud simulated annealing genetic algorithm (CcatCSAGA), which was used to optimize the robust support vector regression (RSVR) parameters for improving the performance of ship traffic flow prediction. Sahoo et al.
[35] designed a novel online multiple kernel regression (OMKR), which sequentially learned kernel-based regression in an extensible manner. Moreover, its effectiveness was demonstrated on real data regression and time series prediction tasks. Ahmed et al. [33] adopted multilayer perceptron (MLP), Bayesian neural networks (BNN), radial basis function (RBF), general regression neural network (GRNN), k-nearest neighbors regression (KNNR), classification and regression tree (CART), support vector regression (SVR), and Gaussian process regression (GPR), respectively, to perform experiments. This study revealed significant differences between various methods in TS and MTS prediction, and the MLP and GPR methods were the best. Besides, in order to improve the performance, Domingos et al. [36] combined ARIMA with MLP and with SVR, respectively, to predict time series. It showed that the hybrid model was better than the single model. Rojas et al. [37] presented a hybrid method integrating an artificial neural network and an ARMA model, which achieved outstanding results. Deep Learning Methods. The deep neural network can learn complex data representations remarkably well [38] and is widely utilized in many tasks, such as image classification, image segmentation, and natural language processing. A convolutional neural network (CNN) was originally designed for static image analysis and can obtain invariant local relations across spatial dimensions [39]. Recently, CNN and its variant methods were also developed for time series data prediction [40], classification [41], anomaly detection [42], clustering [43], and so on. For example, Ding et al. [44] applied the CNN model to stock market prediction. Wang et al. [45] introduced deep learning to develop a probabilistic wind power generation prediction model. In this model, a wavelet transform was used to decompose the raw wind power data into different frequencies. Then, a CNN model was used to learn nonlinear features in each frequency for improving prediction accuracy. Finally, the probability distribution of wind power generation was predicted. Different from the above methods, Oord et al. [46] proposed a new network model called WaveNet, which used dilated convolution to address the long-term dependence requirement of time series. Moreover, the size of the receptive field increased exponentially with the depth of layers. Afterward, Borovykh et al. [47] adopted the WaveNet for multivariate financial time series forecasting. A recurrent neural network (RNN) is also widely exploited for time series prediction [22]. Since RNNs must capture long-term dependence during training, they suffer from gradient explosion and gradient vanishing. Therefore, introducing the gating mechanism into RNN has drawn much attention to overcome these limitations and preserve the long-term information of time series data, as in the long short-term memory (LSTM) [25] and gated recurrent unit (GRU) [26]. The gated variants of RNN essentially preserve the internal state memory through their recurrent feedback mechanism, which makes them very suitable for modeling time series data. Moreover, their ability to capture complex nonlinear dependence can be extended from short-term to long-term and across different variables in multivariate systems. Therefore, the performance of these models is excellent in the time series prediction task. Li et al. [48] built a model combining ARIMA and LSTM to improve the prediction accuracy of high-frequency financial time series. Pan et al.
[49] applied a model based on the LSTM network to predict urban traffic flow and greatly improved the prediction effect via the spatial correlation. Filonov et al. [50] proposed a model based on the LSTM network to monitor and detect faults in industrial multivariate time series data. Zhao et al. [51] established a two-layer LSTM model to learn gait patterns present in neurodegenerative diseases for diagnostic prediction. Jia et al. [52] developed a spatiotemporal learning framework with a dual memory structure based on LSTM to predict land cover. Huang et al. [53] proposed a sequence-to-sequence framework based on GRU to predict different types of abnormal events. Fu et al. [54] used LSTM and GRU to predict short-term traffic flow, which indicated that the RNN-based methods (such as LSTM and GRU) performed better than ARIMA. Zhang et al. [55] utilized four different neural networks, namely MLP, WNN, LSTM, and GRU, to monitor small watercourse overflow. Furthermore, models combining CNN with LSTM or GRU have been frequently applied to time series prediction. Wu et al. [56] explored the GRU network to encode the time mode of each sequence with a low-dimensional representation and then combined it with a convolutional network for modeling behavioral time series. Shi et al. [57] presented a ConvLSTM network to predict nearby precipitation, which can capture spatiotemporal correlations well. The Autoencoder (AE) has also been successfully applied in time series prediction and is generally combined with other deep learning methods [58]. Considering the inherent temporal and spatial correlation of traffic flow, Lv et al. [59] used AE as one of the modules to construct a deep learning model. Yang et al. [60] proposed a new host load prediction method, which utilized AE as the precyclic feature layer of the echo state network. Gensler et al. [61] combined AE with LSTM for renewable energy power prediction, which was superior to the artificial neural network and physical prediction models. Recently, Prenkaj et al. [62] combined AE and GRU to propose a new strategy for predicting student dropout in e-courses. Time Series Data Preprocessing Generally, time series data are collected manually or automatically; it is difficult to avoid data redundancy, data missing, data errors, and other unknown problems in the process of collection and transmission. Therefore, data preprocessing becomes a crucial and necessary procedure for time series data analysis. It mainly includes four stages: data cleaning, data normalization, data sliding window, and data split [63]. The details are illustrated in Figure 1. (1) Data Cleaning. The purpose of data cleaning is to deal with missing values, outlier values, and redundant attributes in time series data. There are many ways to handle missing and outlier values. One way is to delete the data with missing and outlier values directly. However, when many attributes of the data have missing and outlier values, it is very hard to retain adequate useful attributes, and this results in incomplete time series data, which will affect the learning and generalization ability of models. The other way considers outlier values as missing values, and then a data filling technique is applied to solve the above problems. Data filling includes statistics-based and learning-based methods. The former generally adopts mean filling, while the latter adopts simple linear regression or a complex learning model (such as deep learning).
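As a minimal illustration of the statistics-based mean filling mentioned above, the short pandas sketch below replaces missing values with each column's mean; the column names and values are made up for demonstration and are not from the datasets used in the paper.

```python
# Minimal sketch of mean filling for missing values, using pandas.
# Column names and values are illustrative only.
import numpy as np
import pandas as pd

df = pd.DataFrame({"open": [10.0, np.nan, 10.4, 10.2],
                   "volume": [500.0, 530.0, np.nan, 510.0]})
df_clean = df.fillna(df.mean())   # replace each missing entry with its column mean
print(df_clean)
```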
In our work, mean filling is utilized to process missing values and outlier values. Moreover, feature selection or feature extraction methods are generally adopted to deal with redundant attributes. In particular, the proposed model in our work is based on a deep learning framework, which has a strong feature representation ability; therefore, it is robust when dealing with data containing redundant attributes. (2) Data Normalization. Since the different attributes of the data often have different measurement scales, the collected values may vary widely. To eliminate the influence of measurement scale and value range among different attributes, it is necessary to perform normalization, which scales the data in a certain proportion, such as mapping data values to [−1, 1] or [0, 1]. Popular data normalization methods include minimum-maximum normalization and zero-mean normalization. Minimum-maximum normalization, also called deviation standardization, maps the values of the original data to [0, 1] via the linear transformation x' = (x − min)/(max − min), where max and min represent the maximum and minimum values of the data, respectively. This method preserves the relationships that exist in the original data. Zero-mean normalization is known as standard deviation standardization; after processing, the mean and the standard deviation of the normalized data are 0 and 1, respectively. It is defined as x' = (x − x̄)/σ, where x̄ and σ are the mean and standard deviation of the original data, respectively. (3) Data Sliding Window. This operation creates training samples from the original time series using a predefined sliding window size and step; in other words, it generates the data used to predict the next moment from historical data within a given interval. The specific operation of the data sliding window is shown in Figure 2 [64]. Given any time series of length N, such as {1, 2, 3, 4, 5, . . ., N − 1, N}, when the sliding window size is set to L and the sliding step is 1, N − L data sets of length L + 1 are formed. In particular, the first L values of each set are regarded as the input data and the (L + 1)-th value is the target value. (4) Data Split. This stage divides the time series dataset into training data and test data. For example, the first 60% are used for training and the remaining 40% are used for testing in the experiments. The Proposed Method. In this work, a novel time series prediction model based on the encoding-decoding framework is designed, which integrates a recurrent neural module, a convolutional module, an attention mechanism, and a fully connected module into a unified framework. As shown in Figure 3, the proposed model consists of three parts: the encoding, decoding, and prediction modules. In the encoding module, the gated recurrent unit (GRU) is taken as the main unit structure for extracting more effective time series features. In the decoding module, the attention mechanism (AM) is introduced to explore the importance of historical data collected at different times, so that better new time series data can be obtained. In addition, taking the influence of previous features on data reconstruction into account, feature reuse is realized by skip connections based on the residual network. In the prediction module, a convolution layer is adopted to extract effective features from the reconstructed time series.
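A hedged Keras sketch of a model in this general encoding-decoding spirit follows (the attention and dense steps described next are included for completeness). The layer sizes, the RepeatVector-based decoder, the squeeze-and-excitation-style channel attention, and the omission of the paper's residual skip connections and decoder-side attention are all simplifying assumptions, not the authors' exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

window, n_features = 20, 7                       # assumed input shape: 20 steps, 7 attributes

def channel_attention(x, ratio=4):
    # squeeze-and-excitation-style weighting of feature channels
    channels = x.shape[-1]
    w = layers.GlobalAveragePooling1D()(x)
    w = layers.Dense(channels // ratio, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)
    return layers.Multiply()([x, layers.Reshape((1, channels))(w)])

inp = layers.Input(shape=(window, n_features))
code = layers.GRU(64)(inp)                                           # encoding module
x = layers.RepeatVector(window)(code)                                # hand the code to the decoder
recon = layers.GRU(64, return_sequences=True)(x)                     # decoding module (reconstructed series)
y = layers.Conv1D(32, 3, padding="same", activation="relu")(recon)   # prediction module: 1D convolution
y = channel_attention(y)                                             # attention over feature channels
y = layers.Flatten()(y)
y = layers.Dense(32, activation="relu")(y)
out = layers.Dense(1)(y)                                             # next-step prediction

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
```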
Then, the AM is further applied to the convolutional feature maps, since important information has a strong influence on prediction performance. Finally, a multilayer fully connected network is established for prediction. Deep Autoencoder (DAE). The autoencoder (AE) is an unsupervised deep learning method which is frequently used in feature representation, data compression, image denoising, and other tasks [28]. The structure of an AE includes an encoder and a decoder, each containing only a fully connected hidden layer. To better extract features and reconstruct the original data, the deep autoencoder (DAE) [65] is designed with multiple hidden layers, as shown in Figure 4. LSTM and GRU. In general, the DAE is a multilayer feedforward neural network, but it does not consider the importance of the historical information of time series data for the prediction or classification of unknown data. As a specific network structure, the recurrent neural network (RNN) [22] can make good use of the historical information of time series data and adopts the backpropagation through time (BPTT) algorithm to train and learn its parameters. However, RNNs produce vanishing or exploding gradients when handling time series with long time intervals [25]. In particular, the longer the time interval, the more likely severe gradient vanishing or exploding becomes, which makes it difficult to train effective RNN models for long-interval sequences. To solve these problems, RNN variants such as LSTM [25] and GRU [26] make it easier to capture the long-term dependence of time series data. LSTM uses a gate mechanism to control the speed of information accumulation and can selectively update information and forget accumulated information. LSTM includes an input gate, a forget gate, and an output gate, which are displayed in Figure 5. The forget gate f_t controls which information derived from the internal state of the previous moment needs to be forgotten. The input gate i_t controls which information from the current candidate state needs to be retained. The output gate o_t controls which information of the current internal state needs to be output. Different from LSTM, GRU is a simplified version of LSTM. It merges the forget gate and input gate into an update gate and retains the original reset gate, as shown in Figure 6. It can be observed that no additional memory units are needed in GRU, because the update gate alone controls how much information the current state retains from the historical state and how much it receives from the candidate state. The calculation formulas of GRU are z_t = σ(W_z x_t + U_z h_{t−1} + b_z), r_t = σ(W_r x_t + U_r h_{t−1} + b_r), h̃_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t−1}) + b_h), and h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t, where z_t and r_t represent the update gate and reset gate, respectively, h_t is the state at the current moment t, and h̃_t indicates the candidate state. σ is the sigmoid activation function, which maps results to [0, 1], and tanh stands for the hyperbolic tangent activation function. The symbol ⊙ is the element-wise product of corresponding elements. x_t represents the input of the neural network at time t. W_z, W_r, W_h and U_z, U_r, U_h are the input and recurrent weight matrices of the model, and b_z, b_r, and b_h are the bias vectors. Compared with LSTM, GRU has a simpler structure and fewer parameters because it has fewer gates. Therefore, GRU not only reduces the model training time and helps avoid overfitting but also achieves results comparable to, and sometimes better than, LSTM. In addition, BiGRU is a bidirectional variant of GRU.
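Before turning to BiGRU, a minimal NumPy sketch of one GRU step implementing the gate equations above (the weight shapes and random initialisation are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU update; W, U, b are dicts keyed by 'z', 'r', 'h'."""
    z = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])             # update gate
    r = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])             # reset gate
    h_cand = np.tanh(W["h"] @ x_t + U["h"] @ (r * h_prev) + b["h"])  # candidate state
    return (1.0 - z) * h_prev + z * h_cand                           # new hidden state

d, h = 7, 16                                                         # input and hidden sizes (assumed)
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(h, d)) * 0.1 for k in "zrh"}
U = {k: rng.normal(size=(h, h)) * 0.1 for k in "zrh"}
b = {k: np.zeros(h) for k in "zrh"}
h_t = gru_step(rng.normal(size=d), np.zeros(h), W, U, b)
```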
Although BiGRU performs better than GRU in some cases, its parameter size is larger than that of GRU. In order to limit the risk of overfitting, GRU is adopted as the main unit structure of the autoencoder. Attention Mechanism. The attention mechanism (AM) has been widely applied in natural language processing, computer vision, and other fields [66]. It is a resource allocation scheme that uses limited computing resources to process the more important information, addressing the information-overload problem. Like artificial neural networks themselves, the AM originated from studies of human vision and borrows from the human visual attention mechanism. The core idea of AM is to select, from a large amount of information, the information most critical to the current task and to ignore unimportant or irrelevant information [66]. At present, plenty of attention mechanisms have been built to solve related tasks, such as spatial attention, channel attention, and mixed attention mechanisms [67]. In image understanding tasks, including image segmentation and target detection, the channel attention (CA) [68] module is mainly adopted to explore relationships between the feature maps of different channels; its structure is shown in Figure 7. In this module, the feature map of each channel is taken as a feature detector that determines which part of the features should receive more attention. It is well known that the time attribute is very important and also affects the prediction results; therefore, we view each time attribute as a channel and integrate the channel attention (CA) mechanism to mine the significance of the time attributes in the proposed method. Prediction Module. In the prediction module, a 1D convolution is first used to extract features from the time series data reconstructed by the DAE. Then, in order to explore the different contributions of historical data to forecasting, the CA mechanism is applied to the feature maps produced by the previous layer. Finally, a multilayer dense network structure is constructed for prediction. The details are displayed in Figure 8. Experiments and Results Analysis. To verify the effectiveness of the proposed method, two series of experiments are conducted on public stock and shared bicycle datasets, respectively, and compared with some related methods. The experimental results validate the effectiveness of our model. Evaluation Metrics and Experimental Environment. In order to quantitatively analyze the accuracy and superiority, the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are adopted to evaluate the performance of the proposed model [69]. The calculation formulas are MSE = (1/n) Σ_{t=1}^{n} (X_t − X'_t)², RMSE = √MSE, MAE = (1/n) Σ_{t=1}^{n} |X_t − X'_t|, and MAPE = (100%/n) Σ_{t=1}^{n} |(X_t − X'_t)/X_t|, where X_t and X'_t represent the actual and predicted values of the data and n is the number of samples. The smaller these values, the more accurate the prediction result. The source code of the proposed method and of the compared methods is implemented in Python with TensorFlow. The corresponding versions of the development software and the configuration of the hardware platform are listed in Table 1. Moreover, the settings of the key parameters used during training are shown in Table 2. Stock Data Description. The stock data used in the experiments are the Shanghai Composite Index 50 (SCI-50), CSI-300, and Shenzhen Component Index (SZCI). Each stock dataset records multiple attributes, such as the closing price, the highest price, the lowest price, the opening price, the previous day's closing price, the change, and the ups and downs.
The closing price, the highest price, the lowest price, and the opening price represent the final, highest, lowest, and starting prices of each trading day, respectively. The influence of the time step on the prediction performance is analyzed, with the results shown in Tables 7-9. Obviously, in most cases, when the step increases, the value of each evaluation indicator decreases, which indicates that the performance of the proposed model improves with increasing step. This is because longer-interval data provide more useful information for prediction. However, as the step continues to increase, the values of the evaluation indicators rise again, indicating that the performance of the proposed model eventually decreases with the time step. The possible reason is that time series data with too long an interval contain redundant information and high volatility, which makes it difficult to capture effective information for predicting future data. Convergence Analysis. In order to verify the convergence of our proposed method, we plot the curves of the loss values (MSE) on the training set and validation set for each dataset. From Figure 9, we can see that our model converges very quickly on the training set. For the validation set, the loss values (MSE) of the proposed model fluctuate but remain basically stable once the number of iterations (epochs) exceeds 400. Performance Analysis. In order to further test the performance of the proposed method, we compare it with GRU, BiGRU, GRU-AE, BiGRU-AE, GRU-AE-AM, and BiGRU-AE-AM. Tables 10-12 show the results of the different methods on the three stock datasets. The following conclusions can be drawn from the experimental results: (1) The performances of the traditional GRU and BiGRU models are lower than those of the other comparison methods. Furthermore, BiGRU not only makes use of the useful information of historical data in the forward direction but also mines the dependence of the current data on historical data in the reverse direction; therefore, BiGRU performs better than GRU. Introducing the attention mechanism into the recurrent neural network can mine significant information in time series data. (4) The proposed model is based on the idea of integrating encoding-decoding and attention mechanisms simultaneously into the recurrent neural network. Different from GRU-AE-AM and BiGRU-AE-AM, the proposed method applies the attention mechanism in the decoding stage to capture the degree of importance of different intervals. Therefore, compared with the other methods, the presented method shows clear advantages on the different evaluation indicators. Shared Bicycle Data Description. The datasets for this experiment are derived from the shared bicycle demand of three areas in Shenzhen, China: Longgang Central City, Pingshan Street, and Zhaoshang Street. Each dataset contains the historical travel data of shared bicycles, time attribute data (such as the hour and whether the day is a working day), and weather data (such as temperature, rainfall, wind speed, and humidity). The details are listed in Table 13. Parameters Analysis. In this experiment, the influence of the time step on the prediction performance is also analyzed. The step size settings are consistent with the stock price prediction experiments, and the experimental results are shown in Tables 14-16. We can see that the effect of the time step differs between Tables 7-9 and Tables 14-16 on the Longgang Central City dataset: when the step is set to the minimum (L = 5), the proposed method obtains the optimal results. The likely cause is that these time series data have strong dependence and a complex data structure. Performance Analysis.
Similarly, the proposed method is compared with the other well-known methods, and the results are shown in Tables 17-19. On the whole, the experimental results are consistent with those of the stock experiments, except for the Longgang data. In particular, the proposed method achieves better performance with a step value of 20. This indicates that the data structure is relatively simple, so complex models are prone to overfitting; therefore, the evaluation metrics (errors) of the bidirectional recurrent neural network models (BiGRU, BiGRU-AE, and BiGRU-AE-AM) are higher than those of the recurrent neural network models with a unidirectional structure (GRU, GRU-AE, and GRU-AE-AM). Conclusions and Future Works. In this paper, to improve the accuracy of time series data prediction, an autoencoder, a recurrent neural network, an attention mechanism, a convolution module, and a fully connected module are integrated to establish a novel prediction model based on an encoding-decoding framework. The prediction performance is evaluated for stock prices and for shared bicycle demand on three stock datasets and three shared bicycle datasets, respectively. In addition, we compare the model with many other related methods, and the comparisons demonstrate that the proposed model has higher prediction accuracy in terms of multiple quantitative indicators (such as MSE, RMSE, MAE, and MAPE). Future work mainly includes the following points. (1) We will try to apply the proposed model to prediction tasks on time series data in other fields (such as medical, energy, environmental, and other industrial data). (2) Using the same core idea, we will further extend the model to the anomaly detection task for time series data. (3) We will study how to combine traditional multivariate time series methods with deep learning to further improve prediction performance in real applications. Data Availability. The network code and data are available from the corresponding author upon request. Conflicts of Interest. All authors declare that there are no conflicts of interest regarding the publication of this paper.
7,256.2
2022-04-14T00:00:00.000
[ "Computer Science" ]
Geographic Association of Rickettsia felis-Infected Opossums with Human Murine Typhus, Texas Application of molecular diagnostic technology in the past 10 years has resulted in the discovery of several new species of pathogenic rickettsiae, including Rickettsia felis. As more sequence information for rickettsial genes has become available, the data have been used to reclassify rickettsial species and to develop new diagnostic tools for analysis of mixed rickettsial pathogens. R. felis has been associated with opossums and their fleas in Texas and California. Because R. felis can cause human illness, we investigated the distribution dynamics in the murine typhus-endemic areas of these two states. The geographic distribution of R. felis-infected opossum populations in two well-established endemic foci overlaps with that of the reported human cases of murine typhus. Descriptive epidemiologic analysis of 1998 human cases in Corpus Christi, Texas, identified disease patterns consistent with studies done in the 1980s. A close geographic association of seropositive opossums (22% R. felis; 8% R. typhi) with human murine typhus cases was also observed. Murine typhus is a common infectious disease in south Texas. Often the disease is mild and unrecognized; however, it can be severe and even fatal. The severity of murine typhus infection has been associated with old age, delayed diagnosis, hepatic and renal dysfunction, central nervous system abnormalities, and pulmonary compromise. Up to 4% of hospitalized patients die (1)(2)(3). Murine typhus, which is endemic in many coastal areas and ports throughout the world, is one of the most widely distributed arthropod-borne infections. Sporadic outbreaks of murine typhus have been reported in Australia and more recently in China, Greece, Israel, Kuwait, and Thailand (4)(5)(6). Recent serosurveys have demonstrated a high prevalence of antibodies to typhus group rickettsiae in humans living in Asia and southern Europe. In the United States, thousands of human cases were reported annually in the 1940s (1,2). A major public health measure consisting of a combination of environmental modification and rat- and vector-control programs greatly reduced human cases in the United States to <100 reported cases of murine typhus/year. As a result, most states no longer report murine typhus. However, murine typhus has been a reportable disease in Texas for the past 40 years. Interest in this disease has been rekindled because of the resurgence of human cases of murine typhus in south Texas from 1980 through 1984, when 200 cases were reported to the Texas Department of Health. Twenty-eight percent of the patients resided in Nueces County, where the highest annual incidence rate, 4.2 patients/100,000 residents, was reported. Although onset of symptoms occurred throughout the year, 40% of cases were reported in April, May, and June. These studies (Boostrom et al., unpub. data; 7,8) also showed that the maintenance and transmission of Rickettsia typhi, the etiologic agent of murine typhus, did not occur by the classic cycle involving rats (Rattus rattus and R. norvegicus) and the rat flea, Xenopsylla cheopis. Detailed investigations of murine typhus in the Nueces County/Corpus Christi area have shown a cardinal role for the opossum (Didelphis virginiana) and the cat flea (Ctenocephalides felis) in the R. typhi life cycle (7,8). In addition to R. typhi, sampled opossums and their fleas were also infected with R. felis (formerly known as the ELB agent; 9-12).
Furthermore, in 1994 R. felis was detected by polymerase chain reaction (PCR) in a blood sample from a patient diagnosed with murine typhus. The presence of R. felis, clinically masquerading as dengue fever, was documented recently in patients from Yucatan, Mexico, and in four patients with fever and rash in France and Brazil (13)(14)(15). Our published data and these recent reports not only support the pathogenic role of R. felis but also demonstrate its wide geographic distribution. In this study, we report the presence of R. typhi and R. felis in opossums and their fleas collected during 1998 in south Texas. Data from our 1998 studies show that the rate of seropositive opossums and infected fleas, as well as the R. typhi/R. felis ratio, are comparable with those in our 1993 studies. In addition, we analyzed the reported cases of murine typhus in Corpus Christi in relation to opossum distribution and seroprevalence. We found a positive correlation between 1998 human murine typhus cases and the geographic distribution of seropositive opossums and their fleas. Review of Human Murine Typhus Cases Historical data on cases of murine typhus are available through the Texas Department of Health and the Corpus Christi-Nueces County Health Department. Extant data fit the confirmed case definition of a fourfold rise in indirect immunofluorescence assay (IFA) titer or a single titer of >1:128 with clinical symptoms. The 1997 data were extracted from cases reported to the Texas Department of Health. In 1998, data included passive and active surveillance of Spohn Hospital System records. In addition, a board-certified infectious disease specialist contacted area physicians about a human typhus study, which was running concurrently with the opossum study. We also included murine typhus cases reported by area physicians during May through July 1998. Data were analyzed for trends in yearly case rate and incidence by age group. The 1997 and 1998 data were analyzed for sex, age, symptoms, and geographic distribution of cases. Opossum Collection The sera analyzed in this study came from opossums trapped by Corpus Christi residents during an 18-day period in mid-June 1998. A total of 149 opossums were given to animal control officers for euthanization. Opossums were removed from traps, tagged, and transported to the Vector Control facility, where they were numbered and anesthetized with a ketamine/xylazine mixture. The opossums were weighed, identified by age and sex, processed for ectoparasites, and bled by cardiac puncture. Fleas and ticks were removed with a flea comb. The ectoparasites were collected and placed in vials containing 70% ethanol. Rickettsial Seroprevalence in Opossums Over 95% of the trapped opossums were used for a seroprevalence study of rickettsial infections. Initial screening of opossum serum samples for antibodies to R. typhi, R. rickettsii, Coxiella burnetii, and Ehrlichia chaffeensis was carried out at the University of Texas at San Antonio. Rickettsial diagnosis was performed with Multi-Test INDX R3E2 Dip-S-Ticks test strips (Integrated Diagnostics, Inc., Baltimore, MD). The assay uses a four-step enzyme-linked immunoassay dot technique for detecting both immunoglobulin (Ig) G and IgM antibodies. Serum samples from uninfected persons and from murine typhus patients were used as negative and positive controls, respectively. A titer >1:32 was considered positive for R. typhi. Eighty samples with equivocal results were retested by the kit manufacturer (Integrated Diagnostics, Inc.).
In addition, opossum sera were tested by IFA for antibodies to R. typhi and R. felis. Briefly, R. felis-infected flea midguts (FleaData, Inc., Freeville, NY) were dissected and placed into individual wells of a 10-well Teflon-coated antigen slide at two midguts/well and allowed to air dry for 20 minutes. Slides were fixed in ice-cold acetone for 10 minutes, air dried, and incubated with individual opossum serum samples (diluted 1:64 and 1:128 in phosphate-buffered saline [PBS]) for 1 hour in a humidified chamber at 37°C. Serum was removed by aspiration, and wells were washed three times with PBS. Midguts were then incubated with secondary antibody (fluorescein isothiocyanate-conjugated goat anti-opossum IgG [Bethyl Lab., Montgomery, TX], diluted 1:20 in PBS/0.01% Evans blue) for 30 minutes at room temperature. After three PBS washes, slides were air dried and screened for seropositivity. R. typhi-infected Vero cells were also used for the serologic screening. Murine typhus convalescent-phase serum, R. typhi-positive opossum serum, negative control serum, and uninfected flea midguts (IFA and PCR negative) were used as positive and negative controls. The cat fleas, purchased from FleaData, Inc., were constitutively infected with R. felis (>95% [15]) and used as positive controls and antigen sources for opossum serology. The IFA slides were screened by two readers for accuracy. Although attempts to isolate rickettsiae from the serum samples during the acute phase of infection were unsuccessful, we extracted DNA from selected opossum serum samples. DNA was extracted from 200-µL serum samples by using the QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA) and used for PCR with Rickettsia-specific primers. Detection and Identification of Rickettsiae in Fleas Detection and identification of rickettsial species in fleas collected from opossums were carried out using PCR and restriction fragment-length polymorphism analysis of the PCR products. Detection of the R. felis gene encoding the 17-kDa protein antigen in fleas was done by PCR as described (9)(10)(11). Briefly, DNA from fleas was obtained by grinding the fleas in grinders containing 20 µL of sterile distilled H2O and boiling the lysate for 10 minutes. After centrifugation, 5 µL of the supernatant containing DNA was used for PCR. The DNA template was added to a solution containing 18 µL of PCR Master Mix (Roche, Mannheim, Germany) and 1 µL each of forward and reverse primers (100 µmol). In a PCR thermal cycler (Thermo Hybaid, Franklin, MA), each sample was heated to 94°C for 3 minutes, followed by 30 cycles of 94°C for 45 seconds, 55°C for 45 seconds, and 72°C for 45 seconds, with an additional incubation at 72°C for 5 minutes after the final cycle. The target PCR product was visualized by electrophoresis on a 1% agarose gel stained with ethidium bromide and excised; DNA was recovered from the gel with a StrataPrep DNA extraction kit (Stratagene, La Jolla, CA) according to the manufacturer's protocol. Enzymatic digestion of the cleaned PCR product was done by incubating 8 µL of DNA in 1X enzyme buffer (10 mM Tris-HCl [pH 7.5], 50 mM KCl, 0.1 mM EDTA, 1 mM dithiothreitol, 200 µg/mL bovine serum albumin, and 50% glycerol) with 15 U of AluI (Stratagene) for 1 hour at 37°C. Digested products were visualized on 8% TBE gels (Novex, San Diego, CA) stained with ethidium bromide.
For sequencing, the purified 17-kDa fragments were subcloned into the TOPO TA cloning vector (Invitrogen, San Diego, CA) and were sequenced by the dye terminator method on a model 373 automated fluorescence sequencing system (Applied Biosystems, Foster City, CA). Sequence analysis was performed with the MacVector software package (Accelrys, Inc., Madison, WI), and the BLAST program (National Center for Biotechnology Information, Bethesda, MD) was used for comparison. Sequencing was carried out three times, in both directions, to ensure fidelity. Human Murine Typhus Cases, Corpus Christi, Texas Since the 1970s, the number of murine typhus cases has fluctuated around 20 cases/year in south Texas. In 1997, however, a record number of cases, 72, was reported in Texas, resulting in a statewide incidence of 0.4/100,000 population. Sixty-nine of the 72 cases occurred in Region 11 of the Texas Department of Health; most cases occurred in three counties: Hidalgo, Cameron, and Nueces. These three counties consistently register the majority of murine typhus cases in Texas. Data from January 1985 through December 1997 show that Nueces County has averaged the most cases. Cases are reported year-round; however, peak incidence occurs during May and June, which leads local physicians to call murine typhus "the summer flu." Murine typhus cases from 1997 and 1998 (Figure 1), occurring in residents of Corpus Christi, were reviewed. Patients ranged from 5 to 79 years of age (mean 40 years). The 1997 and 1998 murine typhus patients were analyzed for race, ethnicity, history of fleabite, exposure to cats and opossums, and presence of symptoms. Fifty-five percent of patients were Hispanic, and 62.2% were female. Symptoms included headache (56%), fever (100%), rash (27%), nausea/vomiting (51%), malaise/fatigue (44%), arthralgia/myalgia (22%), and diarrhea (20%). Fewer than 15% of patients reported a history of fleabite, and exposure to cats or opossums at residences was associated with only 13% and 11% of cases, respectively. Nueces County/Corpus Christi had 14 of the 42 confirmed murine typhus cases reported in 1999 in Texas and 20 of the 52 reported cases in 2000. Characteristics of Opossums Trapped for Typhus Studies Opossums are a nuisance for residents of Corpus Christi, inhabiting den sites in junk heaps, storage sheds, garages, and attics. Corpus Christi's opossum population is controlled primarily by private citizens using personal traps. Fifty traps are available at nominal rental through the Corpus Christi Animal Control Program. In contrast, anecdotal information from the nearby Flour Bluff and Calallen areas suggests that residents in these areas tolerate the presence of opossums. Most opossums that cause problems for residents in these areas are destroyed privately; occasionally, they are used for food. Nevertheless, from 1996 through 1998, Corpus Christi Animal Control trapped and euthanized >18,000 opossums. The mean number of opossums trapped during this 3-year period was 6,324/year. Although data regarding opossum population size, based on the average number of trapped opossums/year, are not available for the study area, the trapped population may represent 20% to 30% of the total yearly population. If this is the case, and assuming an equal distribution of ideal opossum habitat throughout the city, the opossum population density in Corpus Christi could approach >75 opossums/square mile, or approximately 1 opossum/0.013 square mile.
Although opossums are collected continuously in the Corpus Christi area, the 1998 study focused on opossums trapped within an approximately 3-week period during the traditional peak of human murine typhus cases. The characteristics of the 149 opossums trapped during June 8-25, 1998, were as follows: 51.0% (n=76) female, 49.0% (n=73) male, and 47.7% juvenile and sexually immature (Table 1). Weight of the trapped opossums ranged from 5 oz to 8 lb (mean weight 13 oz for juveniles; 4 lb 14 oz for adults). Rickettsial Seroprevalence in Opossums In 1998, a seroprevalence study for R. typhi showed a geographic association between human cases of murine typhus and the ranges of seropositive opossums (Figures 2 and 3). Six (31.6%) of the 19 patients lived within the minimum home range, 0.02 square mile, of a seropositive opossum. Another five patients (26.3%) were within the maximum home range, 0.1 square mile, of a seropositive opossum. Initial studies on the seroprevalence of rickettsial infections in opossums, carried out by enzyme-linked immunoassay, showed no seroreactivity to C. burnetii, the agent of Q fever; E. chaffeensis, the agent of monocytic ehrlichiosis; or R. rickettsii, the agent of Rocky Mountain spotted fever. However, >25% of the 149 serum samples were seropositive (Table 2). Although R. felis-infected flea midguts were used to identify non-R. typhi seropositive opossums, these two rickettsial species could not be distinguished in some samples (n=6). Since both R. typhi- and R. felis-positive fleas were collected from opossums, the possibility of dual infection of opossums could not be ruled out, even though dual rickettsial infection in fleas has not been reported (16). PCR results for fleas are shown in Figure 4. The overall R. felis infection rates for the 1998 samples were lower than the 1993 infection rates (Table 3). Overall, 8% of the opossums had positive fleas when fleas were tested individually, compared with 21% when flea pools (50 pools; <20 fleas/pool/opossum) were used. The observed discrepancy between the results from pooled and individual flea samples reflects the variability in the DNA recoverable by the PCR procedure. Although there was a positive correlation between opossum age and the flea/opossum ratio, infected fleas came from both juvenile and adult opossums. Additionally, no correlation between infected fleas and seropositive opossums was found. Discussion Since 1946, the Annual Summary of Notifiable Diseases in Texas has included murine typhus. Historical data identify 1,127 cases of murine typhus in 1946. However, the reported cases of murine typhus dropped rapidly with the advent of successful rodent and flea control; by 1952, <100 cases/year were reported in Texas (1,2). Through 1960, the number of human cases steadily decreased, ranging from 12 to 50 and averaging 20 cases/year. The sudden increase in locally acquired cases in the 1990s presented a different reservoir-vector-rickettsia paradigm. Historically, murine typhus infection as an urban zoonosis has been maintained and transmitted in commensal rodents, in particular the Norway rat (R. norvegicus), and the oriental rat flea (X. cheopis) (1,2). However, in recent years the zoonotic cycle responsible for the documented human murine typhus cases in south Texas, as well as southern California, has been shown to involve opossums and cat fleas (7-10,17). The role of opossums and cat fleas in the transmission of R. typhi in a suburban focus of murine typhus in Los Angeles County has been well documented (9,17).
As in our study, a high proportion of the opossums collected in Orange County, California, were seropositive for rickettsiae (9,17). Opossums, as peridomestic animals, are frequent visitors to human habitations, where they search for both harborage and food and thus expose the occupants to cat fleas and consequently to rickettsial pathogens. Cat fleas are frequently found in large numbers on opossums and are avid feeders on humans and household pets. In addition to R. typhi, the cat fleas also harbor R. felis. In fact, cat flea infection with R. felis is more common than infection with R. typhi (7,8,10). Both rickettsial species are readily maintained transovarially in fleas (11,12,18), but in contrast to commercial cat flea colonies, which usually maintain >80% R. felis infection rates, only 1% to 5% of wild-caught fleas are infected with this rickettsial species (7,8,10,18). We have shown that cat fleas collected from opossums from Corpus Christi, Texas, were infected with either R. typhi or R. felis, and the infection rates remained <5% in both the 1993 and 1998 samplings. While we have found no evidence for dual infection in individual fleas, opossums fed on by infected fleas could have antibodies against both R. typhi and R. felis. Our initial opossum serosurvey results (Tables 1 and 2), obtained by enzyme-linked immunoassay, were directed against R. typhi only. However, IFA results with R. felis-infected fleas confirmed our earlier findings (7,8,10) that the cat flea/opossum cycle is responsible for the maintenance of both R. typhi and R. felis in Corpus Christi. We have previously reported the importance of R. felis as a component of murine typhus transmission cycles (14,19,20). Both R. typhi and R. felis were found in fleas and opossum tissues from the murine typhus-endemic areas of southern California and south Texas (7,8,10). Additionally, a retrospective investigation of five murine typhus patients from Texas demonstrated that four of the patients were infected with R. typhi and the fifth had been infected with R. felis (7). This documented human infection with R. felis, and its presence in opossums and their fleas and possibly in other wildlife associated with human habitations, have raised concerns about R. felis spillover into human populations. In addition, cat fleas infected with R. felis have been identified not only in the United States (19) but also in Central and South America, Europe, and Australia (T. Kilminster, unpub. data). Together, our published data and these recent reports not only support the pathogenic role of R. felis but also demonstrate its wide geographic distribution. However, we know very little regarding the natural maintenance and transmission of this organism in areas of the world other than south Texas and southern California. The cat flea, known as an indiscriminate feeder, has an extremely broad host range. While it parasitizes cats, opossums, and other animals of similar size, the flea readily switches to different hosts, and it has been found on rats and mice. Because cat fleas are commonly found on household pets, we extended our studies to determine rickettsial seroprevalence in cats. Our pilot serologic studies showed that >15% of 513 serum samples from the eastern United States were reactive with R. felis at titers >1:64, as assessed by IFA (Higgins et al., unpub. data). Sorvillo et al. (17), in their Los Angeles study of a suburban focus of murine typhus, reported that 9 of 10 domesticated cats and 3 of 26 feral cats were seropositive to R. typhi.
Thus, domesticated cats and cat fleas, as well as peridomestic animals, may play an important role in the maintenance cycle of R. felis and its transmission to humans. Our study further documents the involvement of the opossum/Rickettsia/cat flea triad in the flea-associated rickettsial transmission cycle of urban and suburban areas of south Texas and southern California. Similar host/parasite relationships may also operate in other parts of the world where recent R. felis human cases have been documented (13,15). Recent attention to R. felis, which has already resulted in reassignment of this organism to the spotted fever group rickettsiae (14,15), may further elucidate the other components involved in the maintenance of this rickettsiosis. (Figure caption: lanes 2 and 3, purified 17-kDa-fragment amplification product from R. typhi-infected Vero cells and its AluI digest; lanes 4 and 5, product from colony-raised R. felis-infected fleas and its AluI digest; lanes 6 and 7, product from R. typhi-infected fleas collected in Texas and its AluI digest; lanes 8 and 9, product from R. felis-infected fleas collected in Texas and its AluI digest. Table footnote: confirmed with PCR/restriction fragment-length polymorphism sequencing.) Dr. Boostrom is the director of Family Health Services at the Corpus Christi-Nueces County Public Health District. Her research interests include communicable disease surveillance and the epidemiology of typhus in south Texas.
5,062.4
2002-06-01T00:00:00.000
[ "Environmental Science", "Medicine", "Biology" ]
Two-loop amplitudes for t tH production: the quark-initiated N_f-part: We present numerical results for the two-loop virtual amplitude entering the NNLO corrections to Higgs boson production in association with a top quark pair at the LHC, focusing, as a proof of concept of our method, on the part of the quark-initiated channel containing loops of massless or massive quarks. Results for the UV renormalised and IR subtracted two-loop amplitude for each colour structure are given at selected phase-space points and visualised in terms of surfaces as a function of two-dimensional slices of the full phase space. Introduction Higgs production in association with a top quark pair was observed for the first time a few years ago at the Large Hadron Collider (LHC) [1][2][3] and will play an important role at the High-Luminosity (HL) LHC. The process pp → t tH is particularly interesting due to its direct sensitivity to the top-Yukawa coupling y_t, which is now being constrained with increasing accuracy, including potential CP-violating couplings [4,5]. The importance of this process was realised a long time ago [6,7], and NLO QCD corrections for on-shell t tH production have been known for many years [8][9][10][11][12]. The corrections have been matched to parton showers in Refs. [13][14][15]. NLO EW corrections were first calculated in Ref. [16], and the EW corrections have been combined with NLO QCD corrections within the narrow-width approximation (NWA) for top-quark decays in Refs. [17,18]. NLO QCD corrections to off-shell top quarks in t tH production with leptonic W-decays have been calculated in Refs. [19,20], and full off-shell effects with H → b b have been calculated in Refs. [21,22]. A combination of the NLO QCD corrections with NLO EW corrections has been presented in Ref. [23]; NLO QCD corrections combined with electroweak Sudakov logarithms and a parton shower have been studied in Ref. [24]. Given the projection that the statistical uncertainty will shrink to the order of 2-3% after 3000 fb^-1 [33], the measurement of t tH will be dominated by systematics. As the dominant systematic uncertainties currently come from modelling uncertainties of signal and backgrounds [1][2][3], there is a clear need to reduce the theory uncertainties. At NLO QCD the scale uncertainties are of the order of 10-15%; therefore NNLO QCD corrections are necessary to match the experimental precision at the HL-LHC. First steps towards this goal are already available in the literature: in Ref. [34], O(α_s^4) corrections to the flavour-off-diagonal channels have been calculated, exploiting relations from q_T-resummation [35]. In Ref. [36], the total NNLO cross section has been presented, where a soft-Higgs-boson approximation has been used for the finite part of the two-loop virtual amplitude. The coefficients of the two-loop infrared singularities for this process have been calculated in Ref. [37]. In Ref. [38], the O(y_t^2 α_s) corrections to the perturbative fragmentation functions and to the splitting functions relevant for associated top-Higgs production have been calculated. Analytic results for the master integrals entering the leading-colour two-loop amplitudes proportional to the number of light flavours for the processes gg, q q → t tH have recently been presented in Ref. [39]. Furthermore, the gg → t tH one-loop amplitude has been calculated semi-numerically up to second order in the ε-expansion in Ref.
[40]. Results for the two-loop amplitudes for both the gluon and the quark channel in the high-energy boosted limit have been provided very recently in Ref. [41]. In this work we present numerical results for the two-loop virtual amplitudes for q q → t tH which contain closed fermion loops, i.e. which are proportional to the number of light fermion flavours n_l, of heavy fermion flavours n_h, or both. Specifically, we calculate the renormalised interference of the two-loop amplitude with the tree-level amplitude, with full dependence on the top quark and Higgs masses, split into nine independent colour and fermion flavour factors. Many of the master integrals appearing in this calculation are not currently known fully analytically; we therefore choose to evaluate all integrals using the sector decomposition [42][43][44][45] approach. Our results are visualised on one- and two-dimensional slices of the five-dimensional phase space. These results can be regarded as a proof of concept for the calculation of the other colour structures and partonic channels. The paper is structured as follows. In Section 2 we describe the kinematics of the process, the structure of the q q → t tH amplitude, and the workflow of our calculation. We present our results in Section 3 and conclude in Section 4. Further details of our calculation, including the UV renormalisation, the colour decomposition, the integral families used for the integral reduction, and full numeric results at several example phase-space points, are given in Appendix A. Description of the method The calculation of the virtual two-loop amplitudes involves the channels q q → t tH and gg → t tH. Here we focus on the quark initial state. Kinematics We use "all incoming" kinematics, and the Mandelstam invariants are defined as s_ij = (p_i + p_j)^2. Ten such invariants can be built; five of them are independent due to momentum conservation. Out of these we use the dimensionless variables x_ij of eq. (2.4). There is also an independent parity-odd invariant, ϵ(1234). The square of the parity-odd invariant is equal to the Gram determinant spanned by four linearly independent external momenta and is therefore not algebraically independent of the other invariants. However, as ϵ(1234) picks up a sign under parity, while the square root of the Gram determinant does not, the sign of the parity-odd invariant must be specified to fully describe a physical phase-space point. QCD is invariant under parity; therefore, the QCD corrections to the t tH production amplitudes ultimately must not depend on the sign of this invariant. (Figure 1 caption: The phase-space parameters. The angles θ_t and φ_t are local to the t t rest frame, while θ_H is local to the t tH rest frame.) Phase space parametrisation The phase-space volume for t tH production is non-trivial when expressed in the variables given in eq. (2.4). To parametrise it in a more explicit way, we factorise it into sub-phase-space volumes for the production of a "t t state" and a Higgs boson, combined with the "decay" of the t t state into two top quarks (a standard factorisation of this type is sketched below), with the angles θ_H, θ_t, and φ_t introduced as in Figure 1 and precise definitions given in Appendix A.4.
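A hedged sketch of such a factorisation in standard conventions (the normalisation and the exact form of the authors' own equation may differ; the two-body measure dΦ₂ is quoted in the usual PDG-style convention):

```latex
\mathrm{d}\Phi_{t\bar t H}\big(P;\,p_t,p_{\bar t},p_H\big)
  \;=\; \frac{\mathrm{d}s_{t\bar t}}{2\pi}\;
        \mathrm{d}\Phi_2\big(P;\,q,\,p_H\big)\;
        \mathrm{d}\Phi_2\big(q;\,p_t,\,p_{\bar t}\big),
  \qquad q^2 = s_{t\bar t},
\\[4pt]
\mathrm{d}\Phi_2\big(Q;\,k_1,k_2\big)
  \;=\; \frac{|\vec{k}_1^{\,*}|}{16\pi^2\sqrt{Q^2}}\;\mathrm{d}\Omega^*,
```

where |k₁*| and dΩ* refer to the rest frame of Q; in this picture the angles θ_H, θ_t, and φ_t parametrise the two solid-angle integrations.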
As the production threshold of the t tH system is located at s_0 = (2m_t + m_H)^2, a convenient variable for a scan in partonic energy is β², defined such that β² → 0 at the production threshold and β² → 1 in the high-energy limit. For a compact parametrisation of the fraction of kinetic energy which enters the t t system, we define the variable frac_{s_tt}, with frac_{s_tt} = 0 corresponding to the production threshold of the t t system, with the Higgs boson carrying the remaining energy, and frac_{s_tt} = 1 corresponding to the production threshold of the Higgs boson, with the t t system carrying the remaining energy. Note that if the phase-space integration is performed in frac_{s_tt}, a Jacobian factor of ds_tt/dfrac_{s_tt} has to be included in the full phase-space density of eq. (2.10). The set of parameters {β², frac_{s_tt}, θ_H, θ_t, φ_t} provides a way to parametrise the amplitude which is equivalent to using the five invariants from eq. (2.4); the mapping between them is defined by eqs. (2.7)-(2.9) and the relations given in Appendix A.4. In these parameters the physical region of the phase space is given in eq. (2.11). Note that the probability density of eq. (2.10) will suppress the low-β region as β⁴ and enhance the high-β region as 1/(1 − β)². It will also suppress both the low- and high-frac_{s_tt} regions, as frac_{s_tt} and 1 − frac_{s_tt}, respectively. Nominally, the factors sin θ_H and sin θ_t also suppress the polar cap regions in θ_H and θ_t, but this is only an artifact of the choice to map the respective spherical regions to a hypercube. Ultraviolet renormalisation To produce an ultraviolet (UV) and infrared (IR) finite two-loop amplitude, we work in conventional dimensional regularisation, assuming that all momenta live in d = 4 − 2ε space-time dimensions, and use the expressions for the two-loop singularity structure of massive amplitudes worked out in Ref. [46], also used in Ref. [47]. We expand the bare amplitude for q q → t tH in the strong coupling as in eq. (2.12), where the dependence on kinematics is implicit. In the renormalised amplitude, Z_q (Z_Q) are the on-shell wave-function renormalisation constants for light (heavy) quarks, µ is the renormalisation scale, and the bare quantities are replaced by the respective renormalised ones. The bare mass of the heavy quark, m_0, is renormalised using Z_m in the on-shell scheme, and the bare Yukawa coupling y_t^0 is renormalised accordingly. In particular, we further expand the coefficients of the expansion of the bare amplitude in eq. (2.12), where an overall factor of m^(−1−nε) is extracted in order to have dimensionless amplitudes that depend on the dimensionless variables x_ij introduced in eq. (2.4). The bare coupling constant α_s^0 is defined in the MS-bar scheme with N_f = n_l + n_h active flavours. However, we do not consider the top quark as an active flavour contributing to the running of α_s and the parton distribution functions. Therefore we use a decoupling relation involving ζ_αs, which is given in Appendix A.1 together with the explicit expressions for the renormalisation constants. In what follows, α_s denotes this decoupled coupling.
To split two-loop amplitudes into smaller building blocks, it is in general convenient to project the amplitude onto scalar form factors multiplying the independent spinor and Lorentz structures, or onto helicity amplitudes. The latter is, however, less convenient for amplitudes involving massive fermions. For the q q channel considered here, we use the Born amplitude itself as a projector and calculate the spin- and colour-summed interference term of the renormalised and rescaled NNLO amplitude with the LO amplitude. We decompose this quantity into colour and flavour factors as in eq. (2.19), where N_C is the number of QCD colours, and the colour group factors C_F, C_A, T_F, and d_33 are given in Ref. [48] for SU(N) as well as SO(N) and Sp(N): we allow the colour group to be general here. The NLO and LO interference terms are decomposed similarly. The explicit expressions for the renormalised components A, B_i, and C_i in terms of their bare counterparts are given in eqs. (A.9) to (A.12) in Appendix A.1. Note that the colour factors of the t tH production amplitude are in principle the same as for t t production. The virtual amplitudes for top quark pair production at NNLO were investigated in Ref. [47]; see also Refs. [49][50][51] for analytic results in the quark channel. In Ref. [47], the colour factor decomposition is given after having formed the interference with the Born amplitude, just as in eq. (2.19). In contrast to [47, eq. (2.8)], however, we do not assume the colour group to be SU(N), and as a result, instead of seven independent colour and flavour structures, we identify nine. Infrared singularity structure of the virtual amplitude The origin of the IR divergences present in two-loop amplitudes with two massive coloured final-state particles has been discussed in the literature, and their form at two loops is known [46,47,52]. For the description of the IR divergences, we work in the colour-space formalism. The renormalised amplitude is expressed as a vector in colour space, |A_R(α_s, y_t, m, µ, ε)⟩, and the divergences are removed by using a multiplicative MS-bar renormalisation factor Z ≡ Z({p}, {m}, µ, ε), which is a kinematic-dependent matrix in colour space. For the colour decomposition of q_l q_k → t_i t_j H we adopt the basis elements of eq. (2.24), with full details given in Appendix A.2. The renormalisation constant Z fulfils a renormalisation-group-type differential equation, which induces the solution at two loops given in eq. (2.27). The coefficients of the anomalous dimension matrix are defined by an expansion in the coupling. The general form of the anomalous dimension matrix up to two loops is given in Ref. [46]; here we present the explicit form of the expressions for the process we study. With the colour basis of eq. (2.24), the anomalous dimension matrix takes a specific form which, expressed in terms of light-flavour and colour factors, contains terms proportional to n_l (including an n_l d_33 contribution); the parts irrelevant for the N_f-part of the amplitude are contained in the dots. The relevant Z_11 components can then be used to construct the IR structure of the interference terms; the parts not shown here have no IR poles.
Workflow of the calculation The leading order (LO) amplitude A_0^b can be represented by two Feynman diagrams (eq. (2.36)). The LO amplitude has no N_f-part itself, but it contributes to the renormalisation of the NNLO N_f-part, because the α_s beta-function contains N_f. The LO amplitude in the quark channel has both ε^0 and ε^1 parts (but no higher parts). We derive the corresponding expression using Alibrary [53], which is a Mathematica library interfacing with Qgraf [54], Feynson [55, Chapter 4], Form [56], and Color.h [48] to generate amplitudes, sum over tensor structures, construct integral families, and export the results to integration-by-parts (IBP) relation solvers and/or pySecDec [57][58][59][60]. We can use the LO result to estimate the distribution of the events over the phase space at the LHC, as done in Figure 2. These plots tell us that most of the events are expected to come from the region of moderately high β² and medium frac_{s_tt}. In particular, the region of β² ∈ [0.10, 0.95] (that is, √ŝ ∈ [500 GeV, 2.1 TeV]) is expected to contain 99% of all events. (Figure 2 caption, partly truncated: ... (left), and β² and frac_{s_tt} (right), according to the LO q q → t tH amplitude. For this plot we take the energy of the incoming quarks to be distributed according to the ABMP16 parton distribution functions [61] (which we evaluate via LHAPDF [62]), with the collision energy set to 13.6 TeV. We have also applied cuts on the top quark momenta (as we calculate with on-shell top quarks) in line with those reported in [1,3]: we enforce a minimal transverse momentum of 25 GeV, a maximal rapidity of 4.5, and a separation ∆R in rapidity and azimuthal angle between the top quarks of ∆R > 0.4. These cuts remove about 3% of the events and mostly affect the low-β region.) Amplitude generation To generate the one-loop and two-loop amplitudes (A_1^b and A_2^b, respectively) we use the following procedure: first we generate the corresponding Feynman diagrams (using Qgraf), then we insert Feynman rules, apply the projectors, and sum over the spinor and colour tensors (using Form and Color.h); all of this is done through Alibrary. This way, for each diagram, we obtain a corresponding sum of many scalar integrals. In total we find 31 non-zero one-loop diagrams and 249 two-loop diagrams. Examples of one-loop diagrams with different colour factors are depicted in Figure 3; examples of two-loop diagrams can be found in Figure 4. IBP reduction The next step is to reduce the calculation of the approximately 20000 scalar integrals that appear in the amplitudes to a much smaller number of master integrals using IBP relations [65]. To this end we first calculate the symmetries between the diagrams (using Feynson) and sort the integrals into families.
The usual next step would be to solve the resulting system of IBP equations symbolically using, e.g., the Laporta algorithm [66]. We employed Kira [67,68] together with FireFly [69,70] for this purpose and observed that, while the reductions for the one-loop families may be obtained in a rather straightforward manner, for most of the two-loop families the computation is quite challenging. A fully analytic reduction of the two-loop amplitude is rendered largely unfeasible given the large number of variables (the five ratios given in eq. (2.4), the mass ratio m_H²/m_t², and the dimensional regulator ε) along with the presence of internal masses. Instead we opt for a numerical approach and solve the IBP systems for individual phase-space points by substituting rational numbers for the kinematic scales. Note that we employ the same numerical approach for the one-loop amplitudes as well, so as to have a uniform implementation for the whole calculation. To set up the IBP reduction, we first select a basis of master integrals for each of the amplitudes. We require a total of 33 master integrals for the one-loop amplitudes and 831 master integrals for the two-loop amplitudes. The choice of master integrals significantly impacts both the amplitude reduction and the numerical evaluation of the master integrals. Ideally we prefer a basis that is 1) quasi-finite [71], 2) d-factorising [72,73], 3) fast to evaluate with pySecDec, and 4) such that it leads to simple polynomials in the denominators of the IBP reduction tables. (Figure 3 caption: Example diagrams for q q → t tH at one-loop level. Massive quarks are depicted using solid (blue) bold lines, while massless quarks are represented by lighter (grey/red) solid lines. The colour factors correspond to applying the first colour projector from eq. (2.24).) Finding a basis satisfying the first two requirements is rather straightforward by considering integrals in higher dimensions (d = 6 − 2ε or d = 8 − 2ε) and with higher propagator powers or dots, with up to 5 dots in some cases. We then apply heuristic arguments to choose integrals that also satisfy the last two requirements, in the following way. For each sector, we perform a reduction neglecting sub-sector integrals, and we analyze the denominator factors of the resulting IBP tables for different choices of master integrals. Selecting the master integral basis with the smallest denominator factors, we observe a significant improvement in the run-time of the full reduction. With this initial basis choice, we then evaluate the amplitude as discussed below and identify the integrals with a significant contribution to the evaluation time. The basis of the corresponding sectors is then further refined by repeating the above procedure, restricting the set of candidate masters to integrals showing a relatively fast convergence with pySecDec.
After we select the basis, we use Kira to generate the IBP systems for each integral family. We generate dimensional recurrence relations using Alibrary to be able to reduce the amplitudes to master integrals in shifted dimensions. The combined system of equations is then fed to Ratracer [74], which prepares and optimizes an execution trace of the solution. Then we use Ratracer to perform a series expansion on this trace in ε; this results in direct output of the ε-expansion of the IBP coefficients. Performing an expansion in ε effectively removes it from the computation which, combined with substituting rational numbers for the kinematic scales, results in a purely numerical system. This system is then solved by Ratracer through replaying the trace in a parallelized manner and using finite field methods. Note that the use of finite field methods for function reconstruction as a way of solving IBP equations is by now an established practice, pioneered in Refs. [75,76]; our usage however does not require function reconstruction, only rational number reconstruction and the Chinese remainder theorem. Our setup allows us to compute reductions in around two CPU minutes for the two-loop amplitude, and under a second for the one-loop amplitude on a desktop CPU for most points. Overall this reduction method is fast enough, in the sense that we are more constrained by the evaluation of the master integrals.

Figure 4: Example diagrams for qq̄ → tt̄H at two-loop level proportional to n_l or n_h. Massive quarks are depicted using solid (blue) bold lines, while massless quarks are represented by lighter (grey/red) solid lines. The colour factors correspond to applying the first colour projector from eq. (2.24).
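The rational-number reconstruction mentioned above (as opposed to full function reconstruction) can be illustrated compactly: compute the coefficient modulo several word-sized primes, combine the images with the Chinese remainder theorem, and lift the combined residue to a fraction using the standard extended-Euclid bound. The snippet below is a self-contained sketch of that procedure, not the Ratracer implementation; the example coefficient is invented.

```python
# Sketch of rational-number reconstruction: an IBP coefficient computed modulo several
# primes is combined via the Chinese remainder theorem and lifted back to a fraction.
from fractions import Fraction
from math import isqrt

def crt(residues, moduli):
    """Combine residues r_i mod m_i into a single residue mod prod(m_i)."""
    r, m = 0, 1
    for ri, mi in zip(residues, moduli):
        t = (ri - r) * pow(m, -1, mi) % mi
        r, m = r + m * t, m * mi
    return r % m, m

def rational_reconstruct(a, m):
    """Find p/q with |p|, |q| <= sqrt(m/2) and p = a*q (mod m), via the extended-Euclid bound."""
    bound = isqrt(m // 2)
    r0, r1, s0, s1 = m, a % m, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if s1 == 0 or abs(s1) > bound:
        raise ValueError("reconstruction failed; add more primes")
    return Fraction(r1, s1)

# Hypothetical coefficient, known only through its modular images.
target = Fraction(-123456789, 987654321)
primes = [2**31 - 1, 2**31 - 19, 2**31 - 61]
images = [target.numerator * pow(target.denominator, -1, p) % p for p in primes]

residue, modulus = crt(images, primes)
print(rational_reconstruct(residue, modulus))   # recovers the target fraction in lowest terms
```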
The sector decomposition of the 831 two-loop master integrals results in a total of around 18000 sectors, and around 28000 sector/expansion-order pairs. To make the evaluation of such a large number of expressions efficient we rely on the performance improvements in pySecDec 1.6 (see [57]) coming from the new Disteval evaluator (which was partially developed as a response to the challenges of this calculation). We perform all the evaluations of the two-loop amplitudes on NVidia A100 GPUs. The one-loop amplitudes on the other hand are much simpler (180 sectors in total) and quicker to evaluate; for them we only use CPUs.

Our target is to obtain the renormalised two-loop amplitudes with a precision better than 1%. In the bulk of the phase space this is easily achievable, and the two-loop integration time per point is around 5 minutes; for the one-loop amplitude it is around 10 seconds using 4 CPU threads. This however changes in the high-β region and in the regions near the boundaries of frac_stt̄ and θ_t: there we observe large numerical cancellations, both within and between the integrals, that require the evaluation of the master integrals to higher precisions to meet the amplitude precision goals. These cancellations cause three separate problems:

1. They drive the evaluation times upward, and in principle we expect this growth to be unbounded as β tends to 1 (i.e. ŝ → ∞). This problem could be mitigated by an asymptotic expansion in large ŝ for the high-β region. However, we will not follow this strategy here, in favour of using a single method for all regions.

2. They require increasingly large Quasi-Monte-Carlo lattice sizes, to an extent where we run into the limitations of precomputed lattices available in pySecDec: the largest such lattice has a size of around 7·10^10, but some of the integrals need up to 10^14 evaluations in the high-β region. To solve this issue we employ the new "median QMC rules" [77] lattice construction method implemented in pySecDec 1.6, which enables on-the-fly construction of lattices of unbounded size.

3. At very high β² (e.g. 0.99) the cancellations between some of the integrals become as large as 20 decimal digits, which means that even evaluating the integrals to the full precision of double-precision floating point numbers (which is 16 digits) would be insufficient to get any precision for the amplitudes.

Here we find that the integrals that cancel between each other and need high precision are mostly simpler integrals with up to five denominators, most significantly the ones of "sunset" and "ice-cone" [78] type, in various mass and external momenta configurations. Such integrals converge relatively quickly, and obtaining them with more than 20 digits of precision using sector decomposition would be well within our time budget if it was not for the double-precision floating point limitation.
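The severity of such cancellations is easy to illustrate: once two contributions agree to about 20 digits, double precision (roughly 16 significant digits) leaves no correct digits in their difference. The numbers in the snippet below are invented purely for illustration.

```python
# Illustration of catastrophic cancellation between two "integral" values that agree
# to ~20 digits. The values are invented; the point is the loss of significance.
from mpmath import mp, mpf

mp.dps = 40  # work with 40 significant digits as the reference

a_exact = mpf("0.123456789012345678901234567890123456789")
b_exact = a_exact - mpf("1e-21")               # differs from a_exact only at the 21st digit

diff_exact = a_exact - b_exact                 # exact difference: 1e-21
diff_double = float(a_exact) - float(b_exact)  # both operands rounded to ~16 digits first

print("exact difference :", diff_exact)        # 1.0e-21
print("double difference:", diff_double)       # 0.0 -- all significant digits lost
```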
As a solution we have upgraded pySecDec with the ability to dynamically switch from double floating precision to "double-double" for integrals that need it, allowing for a maximum of 32 decimal digits of precision. Our implementation of the double-double arithmetic is based on the methods described in [79,80]. We choose this approach instead of the more commonly used quadruple precision floating point numbers (float128) because NVidia GPU compilers do not come with the support for either of them, and our benchmarks show that double-double performs around 2.5 times faster than float128 on a CPU, while being simple enough to be implemented on a GPU. Still, the performance penalty of double-double integration is as high as a factor of 20 on the GPU compared to double.

To cross-check our double-double precision implementation we have also evaluated the sunset and ice-cone type integrals using the series solution of differential equations as implemented in the DiffExp package [81,82] with boundary conditions obtained using the Auxiliary Mass Flow (AMFlow) method [83]. We find agreement between our results up to the error reported by our double-double implementation.

Once the integrals are evaluated, the last step is to combine the values of the bare Born, one-loop and two-loop results to values for the renormalised virtual two-loop amplitude as described in Section 2.4, propagating the numerical uncertainties.

Checks

To double-check our calculation we have independently computed the LO and NLO amplitudes via GoSam [84,85] at a number of points, verifying agreement within the reported accuracy. Note that the comparison of the NLO amplitudes requires extra care, because GoSam produces the results in the 't Hooft-Veltman scheme [86], and these need to be converted to get agreement with the conventional dimensional regularisation that we use. Regularisation scheme independence of the NLO virtual contribution is only obtained after full IR subtraction [87]. In the particular case of interest, the scheme difference of the subtraction term can be traced to the O(ε^1) part of the Born contribution, which vanishes in the 't Hooft-Veltman scheme. Hence, we can convert the GoSam result to conventional dimensional regularisation via eq. (2.37). The factor 1 − (π²/12) ε² is necessary because the convention for MS in GoSam uses (4π)^ε instead of S_ε of eq. (2.14) as a prefactor.

We have also double-checked the IR poles of our amplitudes against Ref. [37], where the pole parts of the renormalised interference terms are given at four phase-space points. To get agreement with this paper we need to set the renormalisation scale to m_t, and use 6 fermion flavours in the running of α_s.

While some of the symmetries listed in Section 2.3 are trivially observed when deriving the variables of eq. (2.4) from the parameters of eq. (2.11), we have verified the symmetry of simultaneous exchange of q ↔ q̄ and t ↔ t̄.

Finally and most importantly, for each evaluated point we compute the predicted IR pole coefficients of the amplitude as described in Section 2.5, and compare them to the ones obtained by numerical integration. We find agreement within the reported integration accuracy, which provides us with a check on both the correctness of the renormalisation procedure, and on the correctness of the reported numerical integration precision of the two-loop results, since the IR poles only depend on the one-loop amplitudes, which are integrated separately (and to a higher precision).
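For reference, the error-free transformations that double-double arithmetic builds on can be sketched in a few lines. The following is an illustrative Python version of the classic two-sum/two-product construction (using Veltkamp splitting instead of a fused multiply-add), not the pySecDec GPU code.

```python
# Minimal double-double sketch: each value is a pair (hi, lo) of doubles whose sum carries
# roughly 32 significant decimal digits. This is an illustration only.

def two_sum(a, b):                      # exact: a + b = s + e
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def quick_two_sum(a, b):                # requires |a| >= |b|
    s = a + b
    return s, b - (s - a)

def split(a):                           # Veltkamp splitting of the 53-bit significand
    c = 134217729.0 * a                 # 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):                     # exact: a * b = p + e
    p = a * b
    ahi, alo = split(a)
    bhi, blo = split(b)
    return p, ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo

def dd_add(x, y):
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return quick_two_sum(s, e)

def dd_mul(x, y):
    p, e = two_prod(x[0], y[0])
    e += x[0] * y[1] + x[1] * y[0]
    return quick_two_sum(p, e)

# Demonstration: accumulating a tiny increment that plain doubles cannot represent.
acc_dd, acc_double = (1.0, 0.0), 1.0
for _ in range(1000):
    acc_dd = dd_add(acc_dd, (1e-20, 0.0))
    acc_double += 1e-20
print("double       :", acc_double)                  # still exactly 1.0
print("double-double:", acc_dd[0], "+", acc_dd[1])   # lo part holds roughly 1e-17
```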
Results

In this section we visualise the two-loop amplitude as a function of slices of phase-space variables. To this end we choose the following kinematic point to centre our slices on: β² = 0.8, frac_stt̄ = 0.7, cos θ_H = 0.8, cos θ_t = 0.9, cos φ_t = 0.7; (3.1) we also set m_H²/m_t² = 12/23, µ² = s_12/4, and m_t² = 1. The values of the amplitude at this kinematic point are given in Appendix A.6 (along with two other points), where we list both the bare and the renormalised values of each component of the LO, one- and two-loop amplitude (as defined in eq. (2.20), eq. (2.22), and eq. (2.23)). For brevity however we prefer not to plot individual components, but rather the combined C and B values as defined in eq. (2.19) and eq. (2.21). For this we set n_l = 5, n_h = 1, and the colour group to SU(3), i.e. C_F = 4/3, C_A = 3, and d_33 = 5/6.

In Figure 5 we have plotted one-dimensional slices of both C and B in β² and frac_stt̄. The plots illustrate the difference in behaviour of the one- and two-loop amplitudes across the parameter space; the two-loop amplitude changes more rapidly, and is on average more negative.

Both in Figure 5 and further it is convenient to use the LO amplitude A as a reference; a slice of A in β² and frac_stt̄ is presented in Figure 6. Note that once the phase-space density factor of eq. (2.10) is included to obtain the event probability density, the regions of low-β², low-frac_stt̄, and high-frac_stt̄ are all suppressed. This suppression is important because starting at the one-loop level the amplitude develops a Coulomb-type singularity in the low-frac_stt̄ region. This singularity can be seen on the slices of B and C in β² and frac_stt̄ depicted in Figure 7. The inclusion of the phase-space density however suppresses this divergence, as can be observed in Figure 8.

To further illustrate the difference in behaviour between the one- and the two-loop level results, we present the slice in θ_H and θ_t in Figure 9. A similar slice in θ_t and φ_t is presented in Figure 10. Finally, we illustrate the difference in behaviour between different components of B and C in Figure 11, with a slice in β² and frac_stt̄ for each of the individual components, aside from B_l, C_ll, which are not plotted because their ratio to A is constant.

Conclusions

We have presented numerical results entering tt̄H production at NNLO QCD, for the quark-initiated N_f-parts of the two-loop amplitude including loops of both massless and massive quarks. This calculation serves as a proof of concept that our setup is capable of calculating two-loop pentagon amplitudes with internal massive propagators and three massive particles in the final state. We have performed the UV renormalisation and subtraction of IR poles, presenting the finite part of the two-loop amplitude, split into nine different colour structures for a general colour group.
For the reduction to master integrals, we do not attempt to obtain a fully symbolic reduction and instead perform a numerical reduction for each phase-space point, leaving the dimensional regulator symbolic. The master integrals are evaluated with a recent version of pySecDec, which has been further extended to support integration over double-double precision integrands; this allows us to obtain stable results also in the high-energy and collinear limits where many digits of the master integrals cancel. The evaluation times vary substantially over the phase space, being of the order of five minutes in the bulk of the phase space, increasing substantially when approaching the β → 1 limit. We do not expect the full quark channel to present further major obstacles within our calculational framework.

Although we have demonstrated that the amplitude can be evaluated with sufficient precision at individual phase-space points, the largest remaining challenge for producing realistic phenomenological applications is to sufficiently densely sample the full 5-dimensional phase-space. One possible way of addressing this obstacle is to supplement the evaluated phase-space points with a reliable interpolation framework that allows data points at any 5-dimensional phase-space point to be provided with sufficient accuracy. This is a challenge for kinematic regions where the amplitude has a very steep gradient, for example in the high-energy region with quasi-collinear configurations. While an interpolation covering the whole phase space is feasible, assessing the associated uncertainties is challenging; this is work in progress.

A.5 Integral families

Generic integral families needed for the NNLO qq̄ → tt̄H:

A.6 Numerical results at example phase-space points

In this section we provide results for both the renormalised and bare amplitudes at three example points. The first point is a rationalised version of the centre point from eq. (3.1); it is given by

Figure 2: Event probability distribution in β² (left), and β² and frac_stt̄ (right), according to the LO qq̄ → tt̄H amplitude. For this plot we take the energy of incoming quarks to be distributed according to the ABMP16 parton distribution functions [61] (which we evaluate via LHAPDF [62]), with the collision energy set to 13.6 TeV. We have also applied cuts on the top quark momenta (as we calculate with on-shell top-quarks) in line with those reported in [1,3]: we enforce a minimal transverse momentum of 25 GeV, a maximal rapidity of 4.5, and a separation ∆R in rapidity and azimuthal angle between the top quarks of ∆R > 0.4. These cuts remove about 3% of the events, and mostly affect the low-β region.

Figure 5: One-dimensional slices in β² and frac_stt̄ of the one- and two-loop amplitudes B and C as defined in eq. (2.22) and eq. (2.20), normalised to the Born amplitude squared A from eq. (2.23), around the centre point of eq. (3.1). The centre point is marked with a star. Each plot is an interpolation from around 30 data points.

Figure 6: Slice of the LO amplitude around the centre point of eq. (3.1) in β² and frac_stt̄. On the right the amplitude is multiplied by the phase-space density of eq. (2.10). The centre point is marked with a star.

Figure 8: Slices of the one-loop (left) and two-loop (right) virtual amplitudes multiplied by the phase-space density of eq. (2.10), around the centre point of eq. (3.1) in β² and frac_stt̄. The centre point is marked with a star.
α_2 = (sp_13 sp_14 sp_23 sp_24)/(m^4 µ^4), α_3 = (sp_13 sp_24)/(sp_23 sp_14), β_34 = arccosh(−sp_34/(2m²)), (2.30)

and sp_ij ≡ 2 p_i · p_j + i0⁺. The anomalous dimensions γ_i are given in Appendix A.3. Since we are only interested in the interference with the LO amplitude, we only need the component Γ_11 of the anomalous dimension matrix Γ for the IR pole structure of those amplitude parts proportional to the quark flavours at NNLO. Expanding in α_s^(n_l)

Figure 7: Slices of the normalised one-loop (left) and two-loop (right) virtual amplitudes around the centre point of eq. (3.1) in β² and frac_stt̄. The centre point is marked with a star. Each plot is a linear interpolation of a grid of around 500 data points in total.

Figure 9: Slice of the normalised one-loop (left) and two-loop (right) virtual amplitudes around the centre point of eq. (3.1) in θ_H and θ_t. The centre point is marked with a star.

Figure 10: Slices of the normalised one-loop (left) and two-loop (right) virtual amplitudes around the centre point of eq. (3.1) in θ_t and φ_t. The centre point is marked with a star.

Figure 11: Contributions from the individual colour factors to the one- and two-loop amplitudes for phase-space slices around the centre point of eq. (3.1) in β² and frac_stt̄. The centre point is marked with a star.

Table 2: Results for the bare amplitudes at the example points from Appendix A.6.
New Advances in Dial-Lidar-Based Remote Sensing of the Volcanic CO2 Flux

We report here on the results of a proof-of-concept study aimed at remotely sensing the volcanic CO 2 flux using a Differential Absorption lidar (DIAL-lidar). The observations we report on were conducted in June 2014 on Stromboli volcano, where our lidar (LIght Detection And Ranging) was used to scan the volcanic plume at ∼3 km distance from the summit vents. The obtained results prove that a remotely operating lidar can resolve a volcanic CO 2 signal of a few tens of ppm (in excess of background air) over km-long optical paths. We combine these results with independent estimates of plume transport speed (from processing of UV Camera images) to derive volcanic CO 2 flux time-series of ≈16-33 min temporal resolution. Our lidar-based CO 2 fluxes range from 1.8 ± 0.5 to 32.1 ± 8.0 kg/s, and constrain the daily averaged CO 2 emissions from Stromboli at 8.3 ± 2.1 to 18.1 ± 4.5 kg/s (or 718-1565 tons/day). These inferred fluxes fall within the range of earlier observations at Stromboli. They also agree well with contemporaneous CO 2 flux determinations (8.4-20.1 kg/s) obtained using a standard approach that combines Multi-GAS-based in-plume readings of the CO 2 /SO 2 ratio (≈8) with UV-camera sensed SO 2 fluxes (1.5-3.4 kg/s). We conclude that DIAL-lidars offer new prospects for safer (remote) instrumental observations of the volcanic CO 2 flux.

INTRODUCTION

A major step forward in ground-based volcano monitoring has recently arisen from the advent of modern instrumental techniques and networks for volcanic gas observations (Galle et al., 2010; Oppenheimer et al., 2014; Saccorotti et al., 2014; Fischer and Chiodini, 2015). Such technical advances provide improved temporal resolution relative to traditional direct sampling techniques (Symonds et al., 1994; Giggenbach, 1996). As longer-term volcanic gas records increase in number and quality, full empirical evidence is finally emerging for increased CO 2 flux emissions prior to eruption of mafic to intermediate volcanoes (Aiuppa, 2015). Precursory plume CO 2 flux increases have now been detected at several volcanoes, including Etna (Aiuppa et al., 2008; Patanè et al., 2013), Kilauea (Poland et al., 2012), Redoubt (Werner et al., 2013), Turrialba (de Moor et al., 2016a), and Poas (de Moor et al., 2016b). At Stromboli (in Italy), however, CO 2 flux observations have been particularly valuable for interpreting, and eventually predicting, the volcano's behavior (Aiuppa et al., 2010a). On Stromboli, the "regular" mild strombolian activity is occasionally interrupted by larger-scale vulcanian-style explosions, locally referred to as "major explosions" or (in the most extreme events) "paroxysms" (Rosi et al., 2006, 2013; Andronico and Pistolesi, 2010; Pistolesi et al., 2011; Pioli et al., 2014). These explosions, although short-lived (tens of seconds to a few minutes), represent a real hazard for local populations, tourists and volcanologists, since they produce fallout of coarse pyroclastic materials over wide dispersal areas (Rosi et al., 2013). In addition, such events are not anticipated by any detectable anomaly in the geophysical or volcanological record, perhaps because they originate deep in the crustal roots of the volcano's plumbing system (Métrich et al., 2005, 2010; Allard, 2010).
Observational evidence suggests, however, that "major explosions" and "paroxysms" (Aiuppa et al., 2010a) are both systematically preceded by days/weeks of anomalous CO 2 -rich gas leakage from Stromboli's deep (8-10 km) magma storage zone (Aiuppa et al., 2010b). CO 2 flux emissions from the open-vent crater plume have become, therefore, a unique monitoring tool for volcanic hazard assessment and mitigation on the volcano. On Stromboli, as at other volcanoes, the volcanic gas CO 2 flux is calculated from a combination of co-measured SO 2 fluxes and plume CO 2 /SO 2 ratios Aiuppa, 2015). While the SO 2 flux can remotely be sensed by UV spectroscopy (Oppenheimer, 2010;Oppenheimer et al., 2011), measuring the CO 2 /SO 2 ratio requires in-situ direct sampling and/or measurements via Multi-GAS (Aiuppa et al., 2010a) or Fourier Transform Infra-Red Spectrometry (La Spina et al., 2013) in the vicinity of hazardous active vents. As such, implementation of novel techniques for the remote observation of the volcanic CO 2 flux, from more distal (and safer) locations, remains highly desirable. New prospects for ground-based remote detection of the volcanic CO 2 flux have recently become available from the advent of a new lidar (Light Detection and Ranging) using the DIAL (Differential Absorption lidar) technique Fiorani et al., 2015Fiorani et al., , 2016. DIAL-lidars (Weitkamp, 2005;Fiorani, 2007) use backscattering of artificial light (laser) from atmospheric back-scatterers and/or from the volcanic plume itself, and are therefore potentially ideal for remote volcanic CO 2 detection (Fiorani et al., 2013;Queißer et al., 2015Queißer et al., , 2016. In previous work, we demonstrated the ability of our lidar to remotely resolve the volcanic CO 2 flux from a relatively proximal measuring site (<200 m from the source vents) . Here, we extend this work by reporting on a successful CO 2 flux detection at Stromboli over a far longer optical path (∼3 km distance from the vents). Results of this proof-ofconcept experiment confirm lidars as promising tools for remote monitoring of the volcanic CO 2 flux Queißer et al., 2016). The Bridge Lidar Our measurements on Stromboli (Figure 1) were obtained using the same DIAL-lidar described in Aiuppa et al. (2015) and Fiorani et al. (2015Fiorani et al. ( , 2016, and realized within the context of the FP7-ERC project Bridge (www.bridge.unipa.it). Only key information is reported here, and the reader is referred to previous studies for a detailed description of the instrument. In brief, the Bridge lidar ( Figure 1D) uses a complex transmitter that integrates (i) an injection seeded Nd:YAG laser with (ii) a double grating dye laser. This transmitter is used to generate laser radiation at ∼2010 nm, a region of the electromagnetic spectrum absorbed by atmospheric CO 2 , while showing minimal cross-sensitivity to H 2 O (Fiorani et al., 2013). At the ON and OFF wavelengths selected for this experiment, the differential cross section of CO 2 is five orders of magnitude larger than that of H 2 O (Rothman et al., 2013). Considering a CO 2 mixing ratio of 400 ppm, and with the upper and lower ranges of H 2 O mixing ratios used in atmospheric models (Berk et al., 2014), i.e., from 2.59% (tropics, sea level) to 0.141% (high latitude, winter, sea level), the respective CO 2 absorption is 3 and 5 orders of magnitude larger than that of H 2 O. The 2.59% H 2 O mixing ratio is not far from the saturated water vapor pressure at standard atmospheric conditions. 
We conclude that, even in a condensing volcanic plume, H 2 O absorption is negligible compared to that of CO 2 . A piezo-electric element is used to sequentially switch the wavelength of the transferred laser beam, from λ ON (2009.537 nm: maximum CO 2 absorption) to λ OFF (2008.484 nm: no CO 2 absorption), at 10 Hz repetition rate. These closely spaced pairs of laser beams are sequentially transmitted into the atmosphere, where they are eventually scattered back by atmospheric backscatterers (aerosols, water droplets, particles) in either the volcanic plume or the background atmosphere. During their atmospheric propagation, the laser beams are also reflected by any obstacle encountered along the optical path, e.g., in our specific case, the Pizzo and Vancori walls/rims in front of or behind the volcanic plume (see Figures 1, 2). The returned signal is captured by the lidar receiver (a Newtonian telescope, diameter: 310 mm), and then detected and amplified by an InGaAs PIN photodiode module, directly connected with the analog-to-digital converter (ADC).

Field Operations

During our experiment, the DIAL-lidar operated from a small laboratory truck (Figure 1C), positioned at a fixed location at the base of the volcano in the Scari area, ∼2-2.5 km from the degassing vents on the volcano's summit (Figures 1, 2). The lidar operated during June 24-29, 2015, including an initial instrumental setup phase. Stable weather conditions

FIGURE 2 | Maps illustrating geometries of (A) Pizzo scans and (B) Vancori scans. In the horizontal Pizzo scan in (A), the Field Of View (FOV) of the lidar was sequentially rotated (at constant elevation) at heading angles ranging from 227° to 317° (the Pizzo morphological peak was intercepted at ∼245.8°). In (B), the heading angle was kept constant at 237.8°, while the plume was vertically profiled at elevations of 16 to 21°. (C,D) are pseudo-color images, from processing of UV camera data, showing distribution of SO 2 column amounts (in ppm m, see scale). The locations of Pizzo and Vancori are indicated in (C). During June 24-25, the UV camera images (see example in C) identified the plume as a nearly vertically rising band of peak SO 2 column amount, north of the Pizzo area; on June 26-29, the plume was instead transported south-southeast of Pizzo by the prevailing north-northwesterly winds (see image D).

During operations, two large motorized elliptical mirrors (major axis: 450 mm) simultaneously aimed the laser beams and the telescope, allowing the laser beam of the lidar to scan the volcanic plume either horizontally (Figure 2A) or vertically (Figure 2B). In particular, during June 24-25, the volcanic plume was mainly dispersed northwards by gentle southerly winds. From our Scari observation point (Figure 2), the plume was seen to rise nearly vertically north of the Pizzo area (Figure 2C). The Line Of Sight (LOS) of the lidar was therefore pointed north of Pizzo and the horizontal scan mode was preferred (heading angles: 227-317°; Figure 2A). Vertical scans above the Pizzo area were also performed. For simplicity, we refer below to these June 24-25 measurements as the Pizzo scans (Figure 2A). On June 26-29, the plume was instead transported south-southeast by the prevailing north-northwesterly winds (Figure 2D). Vertical scans, operated at a constant heading angle (237.8°) and at elevation angles from 16 to 21° (Figure 2B), were therefore preferred.
In such conditions, the Pizzo and Vancori peaks were intercepted at elevation angles of 16.98 • and 17.78 • , respectively, and the volcanic plume was in all cases encountered in the 2300-2700 m range. We hereafter refer to these scans as the Vancori scans ( Figure 2B). During each profile, 100 lidar returns, 50 at λ ON , 50 at λ OFF , and interlaced (OFF after ON, OFF again and so on), were emitted at a 10 Hz rate, then co-added and averaged to increase the signal-to-noise ratio, reducing the signal sampling frequency to 0.1 Hz (temporal resolution of 10 s). The spatial resolution was about 5 m (corresponding to the rise time of the detector module due to its bandwidth). Plume scans, both horizontal and vertical, were retrieved combining about 50 profiles in <10 min. Typically, 10 scans at different elevations were repeated, obtaining a three-dimensional tomography of the volcanic plume. A cell filled with standard CO 2 gas was periodically used during operations, for check of wavelength accuracy, repeatability and stability. In brief, our calibration procedure involved measuring-by photoacoustic spectroscopy-the absorption of the CO 2 gas cell as a function of wavelength. This calibration, limited to a small interval near the predicted λ ON , allowed identifying the wavelength at which cell absorption is maximum. The laser system was finally forced to transmit at this radiation. The CO 2 absorption cross-section used in our calculations was based on HITRAN data (Rothman et al., 2013). UV Camera Concurrently with our lidar observations, a dual-UV camera system (Kantzas et al., 2010;Tamburello et al., 2012;Burton et al., 2014) was used to monitor the temporal variations of the SO 2 flux and plume transport speed. A fully autonomous system, similar to that used in other recent work (D'Aleo et al., 2016), was mounted on the roof of the laboratory truck and operated every day from 6 am to 4 pm (Local Time). The UV camera system acquired sequential images of the plume at ∼0.5 Hz using two JAI CM 140 GR cameras. Both cameras had 10-bit digitization and 1392 × 1040 pixels, using an Uka Optics UV lens with a ∼37 • field of view. Distinct bandpass filters, centerd at either 310 nm (where SO 2 absorbs) or 330 nm (no SO 2 absorption), were mounted on the back on the lenses of the two cameras. Each set of co-acquired images from the two UV cameras was processed using the methodology of Kantzas et al. (2010) and integrated into the Vulcamera software (Tamburello et al., 2011(Tamburello et al., , 2012, to calculate an absorbance for each camera pixel. Absorbance was converted into an SO 2 column amount from readings of a co-exposed Ocean optics USB2000+ UV Spectrometer, as outlined in Lübcke et al. (2013). Cameras and spectrometer were both controlled by a mini-pc Jetway. To calculate SO 2 flux time-series, we used Vulcamera to derive temporal records of SO 2 integrated column amounts (ICAs) along a plume cross-section, perpendicular to the plume transport direction. The obtained ICA time-series were then combined with high-temporal resolution (∼1 Hz) records of plume transport speed. This latter was derived using an Optical Flow sub-routine using the Lukas/Kanade algorithm (Bruhn et al., 2005;Peters et al., 2015), integrated in Vulcamera. 
In our specific case, the Lucas-Kanade method was used to track movements of gas fronts (e.g., gas-rich and/or ash-free portions of the plume, having well distinct absorbance features) in consecutive UV camera frames, which allowed us to quantify plume transport speed at 0.5 Hz. We tested the performance of this method by using artificial images with known particle velocities, and obtained errors in estimated velocities of <5%. Table 1 lists daily means (±1 SD: standard deviation) of both SO 2 fluxes and plume transport speed (V p ) during our observational period.

TABLE 1 | Plume speed and SO 2 flux are obtained by processing UV camera images. For both parameters the daily average and its standard deviation (SD) are quoted (the latter is taken as representative of uncertainty). The plume volcanic gas CO 2 /SO 2 ratios are derived from in-situ Multi-GAS observations taken on the volcano's summit; each quoted ratio is the average (+1 SD) over a 30-min observational period, from 16 to 16:30 Local Time. No successful Multi-GAS plume detection was obtained in other daily observational windows (04-04:30; 10-10:30; 22-22:30). Two independent estimates of the CO 2 flux are reported, based on either (a) multiplying the SO 2 flux by the CO 2 /SO 2 ratio, or (b) processing of DIAL-lidar results. Uncertainties in the derived CO 2 fluxes are from either (c) error propagation on SO 2 fluxes and CO 2 /SO 2 ratios (taken as 1 SD), or (d) an estimate of ±25% (see appendix).

Characteristics of the DIAL-Lidar Signal

According to lidar theory (Fiorani, 2007), the optical power returned to the lidar receiver at any time t is produced by backscattering of the laser beam by an atmospheric layer at distance R (range) from the source, where R = ct/2 and c is the speed of light. As such, the lidar offers range-resolved information on atmospheric structure and properties (aerosols, particles and gas molecules) along the laser beam, in the form of an intensity (I) vs. range plot (Figure 3). Upon its atmospheric propagation, the beam intensity decreases approximately (a) exponentially, due to atmospheric extinction, according to the Lambert-Beer law; and (b) as 1/R², because the solid angle subtended by the receiver is A/R², where A is the telescope's effective area. The two processes are superimposed. As such, in order to better observe the atmospheric back-scattering, a "range corrected signal, S" is commonly used, given by S = ln(I·R²) (see below). Since the system works in DIAL mode, each intensity profile is in fact acquired at two distinct wavelengths, λ ON (absorbed by CO 2 ) and λ OFF (not absorbed by CO 2 ) (Figure 3). The two wavelengths are so close that atmospheric behavior, except for CO 2 absorption, is practically identical. The measured intensity contrast between the co-emitted λ ON and λ OFF signals allows range-resolved CO 2 concentrations in the volcanic plume to be obtained.
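As a small illustration of the range-corrected signal defined above, the sketch below builds the range axis from the ADC sampling times and evaluates S = ln(I·R²) for an ON/OFF profile pair. The sampling rate and the synthetic profiles are placeholders, not the instrument's actual acquisition parameters.

```python
# Illustrative computation of the range-corrected lidar signal S = ln(I * R^2)
# for one ON/OFF profile pair. Sampling rate and profiles are placeholders.
import numpy as np

C = 2.99792458e8            # speed of light [m/s]
F_ADC = 30e6                # assumed ADC sampling rate [Hz] (placeholder)

def range_axis(n_samples, f_sample=F_ADC):
    """Range of the atmospheric layer sampled at each ADC channel: R = c*t/2."""
    t = np.arange(n_samples) / f_sample
    return C * t / 2.0

def range_corrected(intensity, rng):
    """S = ln(I * R^2); channels with non-positive intensity or range are masked out."""
    s = np.full_like(intensity, np.nan, dtype=float)
    ok = (intensity > 0) & (rng > 0)
    s[ok] = np.log(intensity[ok] * rng[ok] ** 2)
    return s

# Placeholder profiles: in practice these are the co-added averages of 50 ON and 50 OFF returns.
n = 1024
rng = range_axis(n)
i_on = np.exp(-rng / 4000.0) / np.maximum(rng, 5.0) ** 2 + 1e-12
i_off = 1.2 * np.exp(-rng / 4000.0) / np.maximum(rng, 5.0) ** 2 + 1e-12

s_on, s_off = range_corrected(i_on, rng), range_corrected(i_off, rng)
print("range of last channel: %.0f m" % rng[-1])
```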
Each of the atmospheric profiles (e.g., Figure 3) acquired during the Vancori scan contains the following characteristic features: 1) at R ∼0, a first strong intensity peak is recorded for both λ ON and λ OFF (Figure 3A); this peak, which we refer to as I 0,ON and I 0,OFF , is due to scattering inside the laboratory truck of some photons of the transmitted laser pulse. This peak yields the pulse transmission zero-time, and its intensity is proportional to the transmitted energy (used for signal normalization); 2) for R between 0 and ∼500 m, a weak signal is observed that is returned from atmospheric back-scatterers encountered by the laser beam along the optical path (Figure 3A); this signal, as explained before, attenuates with distance and vanishes at R ∼500 m; 3) an I P,ON and I P,OFF peak at R ∼1900 m (Figures 3A,B); this is produced by reflection of the lidar beam by the southeastern margin of the Pizzo morphological peak (see Figure 2); 4) a series of weak but resolvable peaks observed in the range interval 2300-2700 m (Figure 3B); in these peaks, the λ ON signal appears strongly attenuated relative to the co-acquired λ OFF signal, a fact due to laser absorption by CO 2 molecules in the volcanic plume; 5) an I V,ON and I V,OFF peak at R ∼2800 m, which is produced by reflection of the laser beam by the Vancori peak (Figure 3B). Atmospheric profiles obtained during the Pizzo scans of June 24-25 share similar characteristics, except that the Pizzo morphological peak is intercepted by the lidar beam at R ∼2300 m, and the plume is encountered either before or after the Pizzo (Figure 4). The Vancori peak was obviously not encountered.

FIGURE 3 | (2) is the returned signal from atmospheric back-scatterers along the laser optical path; peak (3) is the returned signal produced by reflection of the lidar beam by the southeastern margin of the Pizzo morphological peak. (B) is a detail of (A), for ranges between 1500 and 3500 m. In this panel, peak (3) is as in (A); the series of peaks observed in the range interval 2300-2700 m (4) are due to back-scattering of the laser beam from the volcanic plume; peak (5) is produced by reflection of the laser beam by the Vancori peak; (C) a profile of in-plume excess CO 2 concentrations, in the 2000-2700 m range interval, calculated from processing of the lidar signal in (B). See text for the procedure used.

Data Processing and Calculation of CO 2 Concentrations

We processed each acquired atmospheric profile using a Matlab analysis routine, with the aim of calculating the CO 2 concentrations in the atmospheric background and in the volcanic plume. The data processing routine consists of the following steps, all based on the Lambert-Beer law relation: a) Initially, the CO 2 concentration in the natural background atmosphere, C 0 , is calculated as: where I P,ON (I P,OFF ) stands for the intensity of the ON (OFF) lidar signal [(3) in Figure 3A] caused by reflection of the laser beam off the surface of the Pizzo wall (R P = 2294 m); I 0,ON (I 0,OFF ) is the intensity of the ON (OFF) lidar peak caused by laboratory scattering of the laser pulse [(1) in Figure 3A]; and σ is the CO 2 differential absorption cross-section. b) Secondly, C, the average excess CO 2 concentration in the volcanic plume cross-section between Pizzo and Vancori [(3,5) in Figure 3B], is derived from: where I V,ON (I V,OFF ) is the peak intensity of the ON (OFF) lidar signal caused by reflection of the laser beam off the surface of the Vancori rock wall (at R V = 2837 m). c) Thirdly, C CO2,i , the excess CO 2 concentration corresponding to each i-th ADC channel of the lidar profile (Figure 3C), is calculated from: where ∆R is the range interval corresponding to each ADC channel, and I i,OFF and R i are the OFF lidar signal and the range of the i-th ADC channel (the OFF signal has been chosen because its signal-to-noise ratio is higher). Figure 3C shows an example of an in-plume excess CO 2 concentration profile, obtained by applying the procedure above to the lidar profile of Figure 3B (in the 2100-2700 m range interval, where the volcanic plume was detected).

In-Plume CO 2 Concentration Maps

A series of CO 2 concentration profiles (one every 10 s), all similar to those shown in Figure 3C, were obtained as the volcanic plume was sequentially scanned by our DIAL-lidar, either horizontally or vertically, during the Pizzo/Vancori scans. By interpolating all CO 2 concentration profiles obtained during a single scan, we obtained sequences of CO 2 concentration maps, examples of which are shown in Figures 4, 5. Since a full scan of the plume was completed in ∼1000-2000 s, each map is in fact obtained from the combination of ∼50 to ∼100 atmospheric profiles. The maps illustrate the 2D distribution of CO 2 concentrations as a function of azimuth angle [°] (X axis) and range [m] (Y axis) for horizontal scans (Figure 4); or as a function of range [m] (X axis) and elevation angle [°] (Y axis) for the vertical scans (Figure 5). In both plots, the color scales (from blue to red) illustrate the level of CO 2 concentrations (in [ppm]) in the investigated space.

FIGURE 4 | Each map was obtained by interpolation of all CO 2 concentration profiles (e.g., same as 3A) obtained during a given Pizzo scan. In the maps, the red colored horizontal bands identify the margin of the Pizzo peak (heading angle: 244-245°), while the volcanic plume is the band of peak CO 2 concentration (up to 60 ppm) areas at heading angles of 245-250°.

Figures 4, 5 demonstrate the ability of our DIAL-lidar to resolve in-plume volcanic CO 2 from the atmospheric background CO 2 (blue colors). In the CO 2 distribution maps, clusters of peak CO 2 concentration areas (marked by red, orange and yellow colors) identify the geometry of the plume. The lidar-based plume locations are consistent with visual and UV observations of volcanic plume dispersion (Figure 2). In the Pizzo horizontal scans, the plume was intercepted north of the Pizzo peak (heading angle: 244-245°), and is identified in the maps of Figure 4 as a cluster of peak CO 2 concentrations (up to 60 ppm above ambient air) at heading angles of 245-250°. The plume was detected over a relatively wide range interval (R = 2000-2400 m), relative to the Pizzo peak (R ∼2300 m). This is consistent with the slightly variable plume transport directions during our June 24-25 observation period that dispersed the plume either toward (Figure 4B) or away from (Figure 4A) the lidar observation point (R = 0). A few Pizzo vertical scans (not shown) confined the vertical extension of the plume to a diagonal band, extending from R = 2300 m and elevation ∼19° (the Pizzo area) to R ∼2700 m and ∼20° elevation.
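Because the display equations for steps (a) and (b) are not reproduced in the text above, the sketch below implements the standard two-wavelength hard-target DIAL relations consistent with the quantities described there (normalisation by the in-truck peaks, the path lengths to the Pizzo and Vancori returns, and the differential cross-section σ). All numerical values, including σ and the intensities, are placeholders, and the formulas should be read as the textbook form rather than the paper's exact equations.

```python
# Hard-target DIAL retrieval sketch, following the standard two-wavelength formulas
# (not the paper's exact equations, which are not reproduced in the extracted text).
# All numbers below are placeholders for illustration.
import math

SIGMA = 5.0e-27   # assumed CO2 differential absorption cross-section [m^2/molecule]
N_AIR = 2.3e25    # assumed air number density at summit altitude [molecules/m^3]

def background_co2_ppm(i0_on, i0_off, ip_on, ip_off, r_p, sigma=SIGMA, n_air=N_AIR):
    """Path-averaged background CO2 mixing ratio [ppm] from the Pizzo hard-target return,
    with each return normalised by the corresponding in-truck peak (I0)."""
    tau = math.log((ip_off / i0_off) / (ip_on / i0_on))   # two-way differential optical depth
    n_co2 = tau / (2.0 * sigma * r_p)                     # [molecules/m^3]
    return 1e6 * n_co2 / n_air

def plume_excess_co2_ppm(ip_on, ip_off, iv_on, iv_off, r_p, r_v, c0_ppm,
                         sigma=SIGMA, n_air=N_AIR):
    """Average excess CO2 [ppm] on the Pizzo-Vancori segment: total concentration between
    the two hard targets minus the background contribution C0."""
    tau = math.log((iv_off / ip_off) / (iv_on / ip_on))
    n_tot = tau / (2.0 * sigma * (r_v - r_p))
    return 1e6 * n_tot / n_air - c0_ppm

# Placeholder intensities (arbitrary units) and the ranges quoted in the text.
c0 = background_co2_ppm(i0_on=1.00, i0_off=1.00, ip_on=0.70, ip_off=0.865, r_p=2294.0)
c_exc = plume_excess_co2_ppm(ip_on=0.70, ip_off=0.865, iv_on=0.40, iv_off=0.522,
                             r_p=2294.0, r_v=2837.0, c0_ppm=c0)
print(f"background CO2 ~ {c0:.0f} ppm, plume-average excess ~ {c_exc:.0f} ppm")
```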
Figure 5 is an example of a CO 2 distribution map obtained during a vertical Vancori scan. The map exhibits a clear volcanic plume signal, as marked by a cluster of high CO 2 concentrations (up to 60 ppm) in the range interval 2200-2500 m and 17-19.5° elevation. CO 2 remained at background air levels for range distances <2000 and >2800 m.

CO 2 Flux

The CO 2 concentration maps served as the basis for calculating the CO 2 flux. To this aim, and by analogy with previous work, we integrated the background-corrected (excess) CO 2 concentrations over the entire plume cross-sectional area covered by each scan, and multiplied this integrated column amount by the plume transport speed. Mathematically, the CO 2 flux (Φ CO2 , in kg·s⁻¹) was obtained from: where v P is the plume transport speed (in [m/s]) obtained from processing of UV camera images (Table 1); N molCO2−total is the total-plume CO 2 molecular density (expressed in molecules m⁻¹); and PM CO2 and N A are, respectively, the CO 2 molecular weight and Avogadro's constant. The term N molCO2−total was obtained by integrating the effective average excess CO 2 concentrations (C exc,i [ppm]) over the entire plume cross section, according to: where N h is the atmospheric number density (molecules m⁻³) at the crater's summit height, the term 10⁻⁶ converts C exc,i into a dimensionless quantity, and A i represents the i-th effective plume area, given by: where ∆R is the spatial resolution of the lidar (1.5 m) and l i is the i-th arc of circumference (Figure 6B). In relation (9), R i is the i-th distance vector (in meters) and θ is the angular resolution of the system expressed in radians (ranging from 0.04° · π/180 = 1.75 × 10⁻⁴ rad to 0.1° · π/180 = 0.00175 rad) (Figure 6B).

Our obtained CO 2 fluxes, shown in Figure 6A, range from 1.8 to 32.1 kg/s. The lidar-based CO 2 flux time-series (Figure 6A) has a maximum temporal resolution of 16-33 min (the time required to complete a full scan of the plume for our instrumental configuration). Temporal gaps in the dataset are caused by decreases in the signal-to-noise ratio (SNR) that prevent us from accurately detecting a clear CO 2 excess. These SNR decreases are likely caused by a reduction of the backscattering coefficient of the probed air parcel, reflecting temporal variations in the condensation extent of the volcanic gas plume. Visual (and UV camera) observations confirmed that the plume was variably condensed during our measurement interval, possibly due to slight changes in atmospheric conditions. We evaluate the overall uncertainty in our derived CO 2 fluxes at ±25% at 1σ (see appendix).

DISCUSSION

The scarcity of volcanic CO 2 flux data in the geological literature (see Burton et al., 2013 for a recent review) is a direct consequence of the technical challenges in resolving the volcanic CO 2 signal from the large atmospheric background (≈400 ppmv). In contrast to SO 2 , which is present at the part per billion level in the background atmosphere, allowing the volcanic flux to be routinely measured from ground and space using UV spectroscopy (Oppenheimer, 2010), remote sensing of volcanic CO 2 has only been achieved during eruptions of mafic volcanoes. In such circumstances, magma/hot rocks can effectively be used as a light source for ground-based Fourier Transform Infra-Red (FTIR) spectrometers (Allard et al., 2005; Burton et al., 2007; Oppenheimer and Kyle, 2008).
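Referring back to the flux computation described in the CO 2 Flux subsection above, the verbal recipe (excess concentration integrated over polar pixels of area ∆R·R_i·θ, converted to a mass flow with the air number density, the CO 2 molar mass, and Avogadro's constant, then multiplied by the plume speed) can be sketched as follows. The geometry and all numerical inputs are placeholders, and the paper's own relations (up to relation (9)) are not reproduced.

```python
# Sketch of the CO2 flux integration over a scanned plume cross-section.
# Geometry and inputs are placeholders; this follows the verbal description in the text.
import numpy as np

M_CO2 = 44.01e-3        # CO2 molar mass [kg/mol]
N_A = 6.02214076e23     # Avogadro's constant [1/mol]
N_H = 2.3e25            # assumed air number density at summit height [molecules/m^3]

def co2_flux_kg_s(c_exc_ppm, ranges_m, delta_r_m, theta_rad, v_plume_m_s, n_air=N_H):
    """Excess concentrations c_exc_ppm[i] on polar pixels of area A_i = delta_r * (R_i * theta),
    integrated to molecules per metre of plume length and converted to a mass flux."""
    areas = delta_r_m * ranges_m * theta_rad                 # i-th effective pixel area [m^2]
    n_mol_per_m = np.sum(1e-6 * c_exc_ppm * n_air * areas)   # [molecules / m]
    return v_plume_m_s * n_mol_per_m * M_CO2 / N_A           # [kg / s]

# Placeholder data: a single profile strip with ~40 ppm excess between 2200 and 2500 m.
# A full scan would sum the contributions of all ~50-100 profiles in the map.
ranges = np.linspace(2200.0, 2500.0, 200)
c_exc = np.full_like(ranges, 40.0)
flux = co2_flux_kg_s(c_exc, ranges, delta_r_m=1.5, theta_rad=0.00175, v_plume_m_s=3.0)
print(f"CO2 flux of this strip ~ {flux:.2f} kg/s")
```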
In contrast, measurement of the far more common "passive" CO 2 emissions from quiescent volcanoes has required access to hazardous volcano summit craters for direct sampling of fumaroles (Fischer and Chiodini, 2015) or in-situ measurement of plumes via either Multi-GAS instruments (Aiuppa, 2015) or active-FTIR (Burton et al., 2000; La Spina et al., 2013; Conde et al., 2014). A major breakthrough has recently arisen from the possible application of lidars to remote volcanic CO 2 sensing (Fiorani et al., 2013, 2016; Aiuppa et al., 2015; Queißer et al., 2015, 2016). Aiuppa et al. (2015) were the first to report on a DIAL-lidar-based remote measurement of the volcanic CO 2 flux at Campi Flegrei volcano, but their observations were limited to short (<200 m) measurement distances. Here, we have extended this earlier work to demonstrate that DIAL-lidars can successfully detect volcanic CO 2 at tens of ppmv above the atmospheric background over optical paths up to ≈3 km (Figures 4, 5). Similar results have recently been obtained at Campi Flegrei volcano by Queißer et al. (2016), suggesting that lidar may soon become an important operational tool in volcanic-gas research.

Our results constrain the CO 2 flux at Stromboli during June 24-29, 2015 (Figure 6A).

FIGURE 6 | Our DIAL-lidar based fluxes (red circles) were obtained using the procedure detailed in the text. For comparison, independent CO 2 flux estimates, obtained by multiplying the in-plume CO 2 /SO 2 ratio (from Multi-GAS) by the SO 2 flux (from UV Cameras), are also presented. The two independent time-series are consistent (within error, see also Table 1). (B) Schematic plot defining the parameters used in the CO 2 flux calculation procedure (see text).

Averaging all successful results during each measurement day, we obtain daily averages of the CO 2 flux between 8.3 ± 2.1 (June 24) and 18.1 ± 4.5 (June 25) kg/s, which correspond to cumulative daily outputs of 718 and 1565 tons, respectively. These results fall well within the range of previous CO 2 measurements on Stromboli. Aiuppa et al. (2010a, 2011) found that the CO 2 flux exhibits large temporal oscillations on Stromboli, from as low as 60 tons/day to as high as 11,000 tons/day, the highest values being observed in the days prior to paroxysmal and/or major explosions. The time-averaged CO 2 flux from Stromboli has been evaluated at 550 tons/day and at 1040-1200 tons/day (Allard, 2010). Our lidar-based CO 2 flux for the entire (June 24-29) measurement period is reasonably close, averaging at 1050 ± 250 tons/day (mean of 80 individual measurements).

Figure 6A offers further confirmation of the robustness of our results. In the figure, we compare our lidar-based CO 2 fluxes with independent estimates, in which the CO 2 flux was derived by multiplying the CO 2 /SO 2 ratio of the plume by the SO 2 flux. This latter approach has been used at volcanoes for years (Aiuppa, 2015), and at Stromboli involves the use of two fully automated Multi-GAS instruments, operating on the volcano's summit to measure the in-plume CO 2 /SO 2 ratio (Figure 2A; Aiuppa et al., 2009, 2010a; Calvari et al., 2014). This is combined with SO 2 fluxes, delivered from either the FLAMES network of scanning UV spectrometers (Burton et al., 2009) or from UV camera observations (Tamburello et al., 2012), to obtain the CO 2 flux. Problems with this Multi-GAS + SO 2 flux approach include issues of different temporal resolutions and poor temporal alignment of the two time-series.
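The conventional estimate described above amounts to a one-line conversion: the SO 2 mass flux times the CO 2 /SO 2 ratio, with a molar-mass correction when the ratio is molar. Treating the quoted ratio of ≈8 as a molar ratio reproduces the 8.4-20.1 kg/s range given earlier; the snippet below simply carries out that arithmetic, with the molar-ratio interpretation stated as an assumption.

```python
# Conventional CO2 flux estimate: SO2 flux (UV camera) times the in-plume CO2/SO2 ratio
# (Multi-GAS). The ratio is assumed to be molar, which is consistent with the quoted
# values (CO2/SO2 ~ 8 and SO2 fluxes of 1.5-3.4 kg/s giving ~8-20 kg/s of CO2).
M_SO2, M_CO2 = 64.07, 44.01   # molar masses [g/mol]

def co2_flux_kg_s(so2_flux_kg_s, co2_so2_molar_ratio):
    return so2_flux_kg_s * co2_so2_molar_ratio * M_CO2 / M_SO2

for so2_flux in (1.5, 3.4):
    co2_flux = co2_flux_kg_s(so2_flux, co2_so2_molar_ratio=8.0)
    print(f"SO2 flux {so2_flux:.1f} kg/s  ->  CO2 flux ~ {co2_flux:.1f} kg/s")
```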
Successful Multi-GAS measurements of plume composition on Stromboli (Aiuppa et al., 2009, 2010b) are restricted to periods when the volcanic plume is dispersed by the local wind field into the Pizzo area, where the instruments are deployed (see Figure 2A). In addition, the Multi-GAS cannot operate continuously, but only during four equally spaced measurement cycles per day, each being 30 min long (Aiuppa et al., 2009). As such, the temporal resolution of the CO 2 /SO 2 ratio time-series is 6 h at best. In contrast, the temporal resolution of UV spectrometers/cameras is higher, from ∼10 to 20 min (Burton et al., 2009) to 0.5 s (Tamburello et al., 2012), but observations are intrinsically limited to daylight hours and to good meteorological conditions (no clouds). Figure 7 exemplifies the issue related to misalignment between Multi-GAS and UV observations. In the June 26th example, the only successful Multi-GAS acquisition period (from 1600 to 1630 h local time) clearly did not overlap with the SO 2 flux acquisition window (0900 to 1600 h local time).

FIGURE 7 | Temporal record of the volcanic SO 2 flux from Stromboli on June 26th, 2015, as derived from our UV camera observations. The figure exemplifies misalignment between Multi-GAS and SO 2 flux time-series; plume CO 2 /SO 2 ratios on June 26th were successfully measured only during the 1600-1630 local time Multi-GAS acquisition period, immediately after the end of the SO 2 flux acquisition window (0900-1600 local time). Poor temporal alignment is a flaw in the technique of estimating the CO 2 flux through a combination of Multi-GAS and UV camera records.

To overcome this problem, the common practice is to average out available Multi-GAS and UV spectroscopy data to obtain daily means of the CO 2 flux (Aiuppa et al., 2010a). Owing to the large inter-daily variability of the SO 2 flux (e.g., Figure 7), however, large uncertainties are associated with these derived CO 2 fluxes (see Table 1, and error bars in Figure 6A). In spite of the issues above, we find overall consistency between the lidar-based and the traditional (Multi-GAS + UV spectroscopy-based) CO 2 fluxes (Figure 6A). This provides mutual validation for both quantification approaches. Our lidar-based CO 2 flux time-series (Figure 6A) are manifestly more continuous and of better temporal resolution (16-33 min). In addition, the lidar, as with other remote sensing techniques, is intrinsically safer. We caution, however, that further development is required before the lidar can become an operative tool for volcano monitoring. Improvements will need to occur in portability (the prototype weighs ∼1100 kg), power requirements (6.5 kW), and cost (300 kUS$). In addition, the current measurement protocol is complex and thus requires great familiarity with the technique. Efforts are now being made to make the lidar simpler, more user-friendly and fully automated, including the development of an on-line remote control system and of a self-checking routine for the laser's wavelength settings. Electro-optics and laser/lidar private manufacturers need to be directly involved to transition the prototype into a more widely accessible, commercial instrument.

CONCLUSIONS

Our proof-of-concept study demonstrates the ability of DIAL-lidars to remotely (≈3 km distance) measure the volcanic CO 2 flux.
Our reported lidar-based CO 2 fluxes at Stromboli volcano (1.8 ± 0.5 to 32.1 ± 8.0 kg/s) are in the same range as those obtained using standard techniques that require in-situ observations and are intrinsically more risky for operators. Our results, with those of Queißer et al. (2016), open new prospects for the use of lidars for instrumental remote monitoring of the volcanic CO 2 flux. Further work is warranted in order to standardize and widen potential applications of lasers in volcanic gas studies.

AUTHOR CONTRIBUTIONS

AA, LF, and SS conceived the idea. AA, LF, SS, GM, and MN conducted the lidar/UV camera experiment. SP and RD processed the data, with help from AA, LF, and SS. ML provided the Multi-GAS results. AA drafted the manuscript with help from all co-authors.

FUNDING

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007/2013)/ERC grant agreement n. 305377 (PI, Aiuppa), and from the DECADE-DCO research initiative.

APPENDIX - UNCERTAINTY AND ERROR ANALYSIS

Our lidar-based CO 2 fluxes are affected by the following error sources: i. systematic error in CO 2 concentration measurement, ii. statistical error in CO 2 concentration measurement, iii. error in plume transport speed, iv. error in identifying the integration area.

i. Systematic error of the CO 2 concentration measurement - It is well known that the DIAL-lidar systematic error is dominated by imprecision in wavelength setting, leading to inaccuracy in the differential absorption cross section and thus in gas concentration. To minimize this error, we implemented a photo-acoustic cell filled with pure CO 2 at atmospheric pressure and temperature, close to the laser exit, in order to control the transmitted wavelength before each atmospheric measurement. This procedure allows us to set the ON/OFF wavelengths with better accuracy than the laser linewidth (Fiorani et al., 2016). Assuming that the error in the wavelength setting is ±0.02 cm⁻¹ (half the laser linewidth), in the wavelength region used in this study, the systematic error of the CO 2 concentration measurement is 5.5%.

ii. Statistical error of the CO 2 concentration measurement - The statistical error has been calculated by standard error propagation techniques from the standard deviation of the lidar signal at each ADC channel. As discussed in Fiorani and Durieux (2001), the statistical error of the lidar signal increases with range. As a consequence, the uncertainty associated with the derived CO 2 concentrations also increases with range. In the distance range between Pizzo and Vancori, representing a mean measurement range, and at typical atmospheric and plume conditions encountered during this study, the statistical error of the CO 2 concentration measurement was about 2%. The statistical error exceeds 5% at 4 km (well beyond our measurement range).

iii. Error in plume transport speed - The standard deviation and the average value of the wind speed have been calculated for each measurement session, and the corresponding relative error was evaluated (by error propagation) at 3%.

iv. Error in identifying the integration area - The integration area in which an excess CO 2 concentration is actually present is probably the most difficult parameter to retrieve accurately, and therefore represents the main error source in our calculated volcanic CO 2 fluxes. The following procedure has been followed.
For each CO 2 concentration map (e.g., Figure 5), we initially measured: 1) A 15 , the area where the excess CO 2 concentration was larger than 15 ppm; and 2) A 25 , the area where the excess CO 2 concentration was larger than 25 ppm. Then, the average between A 15 and A 25 was taken as the best-estimated area, and their semi-difference as the error (∼25%). The above thresholds have been chosen because below 15 ppm noise becomes significant, while above 25 ppm the plume area is reduced to its core. Use of a 15 ppm threshold likely underestimates the area (and thus the flux) by an amount of the order of the measurement error, i.e., 10-20%. Assuming that each error source is statistically independent, we can quadratically sum all the errors and obtain a cumulative error of ∼25% (dominated by the area error).
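As a quick check of the quoted cumulative uncertainty, the quadrature sum of the four contributions listed in this appendix (5.5%, 2%, 3%, and ∼25%) can be evaluated directly; the snippet below just reproduces that arithmetic.

```python
# Quadrature combination of the error sources listed in the appendix.
import math

errors = {
    "systematic (wavelength setting)": 0.055,
    "statistical (lidar signal)":      0.02,
    "plume transport speed":           0.03,
    "integration area":                0.25,
}
total = math.sqrt(sum(e**2 for e in errors.values()))
print(f"cumulative relative error ~ {total:.1%}")   # about 25.9%, dominated by the area term
```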
Exploring the Effects of Human Bone Marrow-Derived Mononuclear Cells on Angiogenesis In Vitro Cell therapies involving the administration of bone marrow-derived mononuclear cells (BM-MNCs) for patients with chronic limb-threatening ischemia (CLTI) have shown promise; however, their overall effectiveness lacks evidence, and the exact mechanism of action remains unclear. In this study, we examined the angiogenic effects of well-controlled human bone marrow cell isolates on endothelial cells. The responses of endothelial cell proliferation, migration, tube formation, and aortic ring sprouting were analyzed in vitro, considering both the direct and paracrine effects of BM cell isolates. Furthermore, we conducted these investigations under both normoxic and hypoxic conditions to simulate the ischemic environment. Interestingly, no significant effect on the angiogenic response of human umbilical vein endothelial cells (HUVECs) following treatment with BM-MNCs was observed. This study fails to provide significant evidence for angiogenic effects from human bone marrow cell isolates on human endothelial cells. These in vitro experiments suggest that the potential benefits of BM-MNC therapy for CLTI patients may not involve endothelial cell angiogenesis. Introduction Peripheral arterial disease (PAD) is a chronic condition where peripheral blood flow is restricted due to stenosis or blockage of the arteries.In an advanced state, PAD can lead to chronic limb-threatening ischemia (CLTI), resulting in patients suffering from rest pain and/or ischemic ulcers or gangrene.The current treatment for CLTI is directed at restoring the blood flow to the limb with endovascular or surgical interventions, in addition to standard drug therapy and cardiovascular risk management.Unfortunately, the success and patency rates of these interventions are around 60%. Due to the severity of the disease and shortcomings of current therapies, there is a need for new effective therapies. In the last decades, the interest in the field of cell therapy, including stem cells, is rising, since this could be a promising alternative to conventional therapy.Cell therapy came to light early this century in 2002, when the first clinical study reported that bone marrow-derived mononuclear cells (BM-MNCs) could be safe and effectively used to treat CLTI [1].Due to their potential ability to promote angiogenesis, BM-MNCs have been used in various clinical trials, showing beneficial effects for ulcer healing and limb salvage [2][3][4][5][6].However, detailed analyses of various randomized controlled trials have failed to show clinically relevant beneficial effects [7].Mononuclear cells are a mixture of different types of hematological cells, including lymphocytes, monocytes, and hematopoietic stem cells.They have been shown to have regenerative properties and the ability to promote angiogenesis [8,9].However, the composition of cell therapies is largely variable, with various preparation methods and different routes of administration. One of the promising new approaches is based on the use of REX-001, a highly standardized autologous bone marrow-derived mononuclear cell product that has shown significant blood flow recovery by increasing vascular density and functional neovascularization, which correlated with clinical benefits [10].Due to these promising results, currently a phase III clinical trial is being conducted (ClinicalTrials.govIdentifier: NCT03174522).However, the exact mechanism of action of REX-001 is still unknown. 
The neovascularization processes that lead to restoring the blood flow comprise both arteriogenesis and angiogenesis.Arteriogenesis is the recruitment of collaterals from preexisting arterioles, and is mainly inflammatory driven.Angiogenesis is the formation of new capillary blood vessels, and plays a crucial role in various physiological and pathological processes involving the sprouting and remodeling of blood vessels from the pre-existing vasculature.The cell types that contribute to neovascularization are endothelial cells, circulating monocytes, smooth muscle cells, and pericytes [11,12].Endothelial cells, which are important elements of blood vessels, play a pivotal role in angiogenesis.The proliferation and migration of endothelial cells are crucial events contributing to the formation of new vessels and the formation of a functional vascular network, and are driven by multiple growth factors and cytokines including vascular endothelial growth factor (VEGF), plateletderived growth factor, insulin-like growth factor 1, interleukin 1, interleukin 6 (IL-6), and interleukin 8 (IL-8) [13,14].In addition to proliferation, endothelial cell migration allows endothelial cells to navigate through the extracellular matrix to form new blood vessels.Endothelial cell migration is regulated by various signaling molecules, including VEGF and angiopoietins.Activated endothelial cells can release chemoattractants such as monocyte chemoattractant protein-1 (MCP-1), initiating the recruitment of monocytes to the angiogenic site [15][16][17]. Bone marrow-derived mononuclear cells (BM-MNCs) consist of a variety of cell types including lymphocytes, granulocytes, monocytes, and progenitor cells.It is hypothesized that BM-MNCs induce neovascularization, i.e., arteriogenesis and angiogenesis.In the current study the effect of BM-MNCs on angiogenesis was explored by studying endothelial cell proliferation, cell migration, angiogenic tube formation, and sprouting in different set-ups. Isolation and Quality Control of Bone Marrow-Derived Cells The bone marrow mononuclear cell isolates used in this study were obtained from healthy volunteers (Hemacare, Charles River, Wilmington, MA, USA) and isolated according to a strict protocol that met strict specifications, as defined by Rojas-Torres et al. [18].The first step was to isolate the BM-MNC cells according this protocol.As shown schematically in Figure 1, the BM-MNCs were isolated from heparinized bone marrow via Ficoll gradient separation.The characteristics of the product are defined in Table 1. Figure 1.Illustration of the manufacturing process of BM-MNC isolates, starting with bone marrow aspiration, followed by manufacturing the product via Ficoll gradient cell separation, and finally a quality assessment was performed using flow cytometry and hematology analysis.The quality of the isolated BM-MNCs was analyzed using both hematology analysis and flow cytometry to demonstrate that the manufactured cell isolate met the quality acceptance criteria of the bone marrow cells isolates, as in the REX-001 clinical trial.The cell isolates in this study were produced according to the REX-001 manufacturing protocol. 
Not all of the BM-MNC samples met the criteria of >15% leukocyte recovery; one sample only had 13.05% leukocyte recovery, and was not used in experiments (Table 1).All of the samples had >96% erythrocyte depletion and >60% thrombocyte depletion, meeting the quality criteria.Furthermore, all of the BM-MNC isolates met the following criteria: viability above 80%, containing >30% granulocytes, and the presence of CD34+/CD45+ cells (>0.1%). In addition to the required quality assessment, a more extensive flow cytometry panel was used to characterize the cell composition of the BM-MNC isolates in more detail.The CD45+ cell fraction was analyzed further to determine the percentages of B lymphocytes and T lymphocytes.Subsequently, the T lymphocytes were further characterized to CD4+ and CD8+ T cells.In addition, the percentages of monocytes in the BM-MNC isolates were determined.The flow cytometry gating strategy is shown in Supplementary Figure S1. BM-MNCs Have No Effect on Endothelial Cell Proliferation To determine whether BM-MNCs have an effect on endothelial cell proliferation, directly or indirectly, human umbilical vein endothelial cells (HUVECs) were incubated either with increasing numbers of freshly isolated BM-MNCs or increasing concentrations of BM-MNC-conditioned medium, and the proliferation was analyzed with MTT assays.Based on previous experiments, the endpoints of both assays were set at 24 h after treatment with BM-MNCs. To explore a direct effect of BM-MNCs on endothelial cell proliferation, BM-MNCs were added directly to the HUVEC cultures.None of the doses of BM-MNCs tested (625, 1250, 2500, and 5000 cells) resulted in a change in HUVEC proliferation in the MTT assay as compared to the negative control, i.e., EBM2 medium with 0.2% serum.The proliferation was significantly lower than in the positive control group that was exposed to the EMB2 medium supplemented with growth factors.If any effect could be observed, this would be that with the higher BM-MNC dose, slightly less endothelial cell proliferation occurred (Figure 2A and Supplementary Figure S2A).The data shown in Figure 2A are from one representative experiment.All of the experiments with BM isolates for different donors showed a similar pattern, with no effects on HUVEC proliferation (Figure S2).In addition to the direct effects on HUVEC proliferation by BM-MNC isolates, we studied whether proliferation could be induced by paracrine factors present in the isolate.For this investigation, we incubated HUVECs with increasing concentrations of BM-MNCconditioned media.The concentration is defined as the equivalent of BM-MNC cells secreting their paracrine factors into the conditioned medium, representative for 2500, 5000, 10,000, or 20,000 BM-MNCs (Figure 2B). 
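As a minimal illustration of how such MTT readouts are typically compared (the absorbance values below are hypothetical; the study itself analyzed these data with one-way ANOVA in GraphPad Prism, as described in the Methods), one could compute fold changes over the negative control as follows:

```python
# Hypothetical MTT absorbance values (570 nm); BM-MNC doses as in Figure 2A.
# Illustrative sketch only -- not the study's actual data or analysis script.
import numpy as np
from scipy import stats

negative_control = np.array([0.21, 0.20, 0.22, 0.21])   # EBM2 + 0.2% serum
doses = {
    625:  np.array([0.20, 0.22, 0.21, 0.20]),
    1250: np.array([0.21, 0.19, 0.20, 0.22]),
    2500: np.array([0.20, 0.21, 0.19, 0.20]),
    5000: np.array([0.18, 0.19, 0.20, 0.19]),
}

# Fold change of each dose relative to the mean of the negative control
for dose, values in doses.items():
    fold_change = float((values / negative_control.mean()).mean())
    print(dose, round(fold_change, 2))

# One-way ANOVA across all groups (negative control + doses)
f_stat, p_value = stats.f_oneway(negative_control, *doses.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```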
To evaluate whether the BM-MNCs would have an indirect effect on HUVEC proliferation in other conditions, conditioned medium was also prepared in media with less or more serum added. To evoke a potential effect, the assays were also executed in hypoxic conditions, since hypoxia induces vascular endothelial growth factor (VEGF), which is an angiogenic factor (Figure S3). Nevertheless, adding BM-MNCs in hypoxic conditions did not increase HUVEC proliferation. To evaluate whether the kind of culture medium led to BM-MNC-conditioned medium with different effects on HUVEC proliferation, these experiments were also performed using the immune cell-suitable culture media OptiMEM and AIMV to optimize the culturing conditions for the BM-MNCs when preparing BM-MNC-conditioned medium (Figure S4). HUVEC proliferation after adding BM-MNC-conditioned medium prepared in OptiMEM did not show any differences, whereas BM-MNC-conditioned medium prepared in AIMV led to a decrease in HUVEC proliferation under hypoxic circumstances. Since in none of these conditions was any difference in HUVEC proliferation observed compared to the negative control, this suggests that BM-MNCs exert no paracrine effects on endothelial cell proliferation. BM-MNCs Do Not Affect Endothelial Cell Migration Next to endothelial cell proliferation, endothelial cell migration is a key process involved in the formation of new vessels that might be stimulated by bone marrow cell isolates. To evaluate the effect of BM-MNCs on HUVEC migration, wound healing assays were performed. After culturing HUVECs for 24 h, a scratch wound was introduced into monolayers of HUVECs using the Incucyte Woundmaker Tool. Subsequently, these wounded cultures were treated with different doses of BM-MNCs. The plates were incubated in the IncuCyte S3, and pictures were taken after 12 h. The percentage of scratch-wound closure after 12 h was calculated. Figure 3A clearly shows the increasing concentration of BM-MNCs that was added at t = 0, visualized as cells over the wounded area. Quantification of the scratch-wound closure rate of HUVECs treated with BM-MNCs was performed after 12 h. The experiments were performed six times with different BM-MNC isolates, and the results did not show an increase in migration rate (Figures 3B and S5A). Figure 3B shows a non-significant decrease in migration rate at all BM-MNC dosages, whereas some graphs shown in Figure S5A also show a non-significant increase in scratch-wound coverage. However, a clear induction of endothelial cell migration after adding BM-MNC isolates could not be observed. Moreover, in one of the six experiments, a significantly lower migration rate was observed when adding 20,000 BM-MNCs.
In this scratch wound set-up, we also studied the potential paracrine effects; conditioned medium was prepared in the immune cell-suitable culture media AIMV or OptiMEM, mixed 1:1 with endothelial cell culture medium EBM2 containing 0.2% serum, and added to the wounded HUVEC cultures. After 12 h, the scratch wound cultures showed no significant difference in migration rate (Figure 3C). The conditioned media in other batches of BM-MNC isolates also did not lead to any changes in the migration rate (Figure S5B). No Effect of BM-MNCs on Endothelial Cell Tube Formation The angiogenic capacity of endothelial cells in general can be studied using a Matrigel tube formation assay. Therefore, we also studied the effect of BM-MNCs on the capacity of HUVECs to form tubes in a Matrigel tube formation assay. Here, we determined the total length of the tubes formed after 12 h of incubating HUVECs with different doses of BM-MNC isolates (Figure 4 and Supplementary Figure S6). The photos clearly show the increasing BM-MNC doses that were added at t = 0, visualizable as more cells adhering to the tubular structures. Quantification of the length, however, showed no differences between the different numbers of cells added. Quantification of the total tube length of HUVECs was performed after 8 h. The experiments were performed in triplicate with different BM-MNC isolates. The results did not show an increase in tube formation rate (Figures 4B and S6A).
Figure 4B shows no differences in endothelial cell tube formation length at any BM-MNC dosage, which is confirmed in Figure S6A. The indirect effects of BM-MNCs on endothelial cell tube formation length were studied by adding BM-MNC-conditioned medium to HUVECs. The conditioned medium was prepared in AIMV or OptiMEM medium, both suitable immune cell culture media to optimize the culturing conditions for the BM-MNCs. The results show no differences in the tube length (Figure 4C,D). The Effect of BM-MNCs on Aortic Ring Sprouting Aortic ring sprouting ex vivo is another very informative assay for the angiogenic potential of cells or factors. Explants of mouse aortas have the capacity to sprout and form branching microvessels ex vivo when embedded in gels of collagen. Angiogenesis in this system is driven by endogenous growth factors released by the aorta and its outgrowth in response to the injury of the dissection procedure [19]. The aortic ring assay offers many advantages over existing models of angiogenesis. Unlike isolated endothelial cells, the native endothelium of the aortic explants has not been modified by repeated passages in culture and retains its original properties. Angiogenic sprouting occurs in the presence of pericytes, macrophages, and fibroblasts, as seen during wound healing in vivo [20]. The incubation of murine aortic rings with increasing numbers of BM-MNC isolates (2500, 5000, 10,000, or 20,000 cells added) (Figure 5) did not result in any differences in the numbers of sprouts originating from the rings. The analysis was performed after 7 days. The experiment was repeated with three different BM-MNC isolates (Figure S7).
BM-MNCs Release Angiogenic Cytokines Thus far, no direct or paracrine effects of BM-MNCs on endothelial cells were observed. Therefore, we were interested in determining which cytokines and factors are released by BM-MNCs. To study factors excreted by BM-MNCs, the production of IL-6, IL-8, MCP-1, and MMP-9 was determined. In OptiMEM, BM-MNCs produced 1.1 ng/mL of IL-6, 27.7 ng/mL of IL-8, 22.8 ng/mL of MCP-1, and 161.3 ng/mL of MMP-9. The corresponding concentrations measured in AIMV are shown in Figure 6. Discussion The current study investigated whether bone marrow cell isolates, prepared according to a strict protocol defined by Rojas-Torres et al. [18], have angiogenesis-stimulating potential. The effects of these bone marrow cell isolates on endothelial cell proliferation and endothelial cell migration were subsequently analyzed. Under none of the conditions tested could any stimulatory effects be observed, whether with various concentrations, under normoxic or hypoxic conditions, or with direct contact or a paracrine effect via conditioned medium exposure. Neither effects on Matrigel tube formation nor on aortic ring sprouting could be observed after incubation with different doses of cell isolates. Due to the lack of effects observed in these models, the effects were not evaluated in other ex vivo angiogenesis models such as spheroid cultures. In an attempt to unravel the mechanism of action of these specified BM-MNC isolates that showed promising results in clinical trials, it seems that the effect is most likely not due to an induction of angiogenesis per se [10]. The effect of bone marrow-derived mononuclear cells in patients with critical limb ischemia has been studied for a couple of decades, but its effectiveness remains unclear [21][22][23]. Clinical trials showed varying results, although there are plenty of studies that showed promising effects in patients with CLTI. However, the randomized controlled trials that reported beneficial effects of BM-MNCs are of relatively low quality. Thus far, the induction of neovascularization after BM-MNC therapy has not been convincingly demonstrated. Currently, a high-quality phase III randomized controlled trial is being conducted (NCT03174522) after promising phase II trial (NCT00987363) results were reported [10]. Despite these positive clinical trial results, the supposed mechanism of action by which these injected bone marrow cells induce neovascularization remains unclear.
Interestingly, most studies using bone marrow-derived cells as a therapy did not define the composition of cell types in the product nor analyze the product for quality assessment. In this study, quality requirements were set for the BM-MNC isolates, and each cell isolate was examined to confirm an adequate product quality. Hence, the cell composition of the BM-MNCs is known, and consists of multiple mononuclear cell types in certain proportions. Setting quality requirements, and thus assessing the proportions of different cell types in the product, is an important step in understanding the mechanism of action. Furthermore, it can help to acquire knowledge about why some patients do not respond to cell therapy, which may be related to the composition of the product. It is shown that 63.63-86.87% of the BM-MNC isolates consist of CD45+ leukocytes. However, the identity of the remaining 13-36% (CD45−) of the cells is still unknown. Previous research has shown that besides CD45+ hematopoietic stem cells, bone marrow also contains a population of heterogeneous CD45− nonhematopoietic tissue-committed stem cells [24]. In addition, some CD45− cells in bone marrow cell fractions are of hematopoietic origin and can be erythroid and lymphoid progenitors [25]. There is a substantial portion of CD34+ cells present in our bone marrow cell isolates (Table 1), and CD34+ cell therapy has been shown to be one of the most promising approaches, most likely via the miR-126 present in the conditioned medium of the CD34+ cells [26], which was reported to induce tube formation [27]. However, we were not able to demonstrate similar effects in our tube formation experiments. Although no effects on angiogenesis were demonstrated, we showed IL-8, MCP-1, and MMP-9, which are known proangiogenic factors, to be present in the BM-MNC isolates. The role of IL-8 is widely researched in the oncologic field, where IL-8 promotes tumor angiogenesis by activating the VEGF pathway and enhancing MMP expression [13,28]. The presence of MMP-9 in the cell isolates suggests that extracellular matrix components, which are key elements of the basement membrane surrounding blood vessels, can be degraded. Degradation of the extracellular matrix allows endothelial cells to migrate into the surrounding tissue, starting new vessel formation [29]. MCP-1 is a chemokine that regulates the migration and infiltration of monocytes and macrophages to the site where it is released. Monocytes are then able to differentiate into macrophages, which are important players in angiogenesis as they release factors including VEGF, MMPs, and enzymes promoting blood vessel growth by inducing endothelial cell proliferation and migration [30][31][32]. Understanding the absence of angiogenic effects of BM-MNCs on HUVECs, despite the proangiogenic chemokines that are excreted, is of great importance. A possible explanation for the lack of angiogenic effects may be that the cell products were produced from bone marrow obtained from healthy donors. Although the BM-MNC isolates manufactured in this study fulfilled all quality criteria, BM-MNC isolates manufactured from CLTI patients (REX-001) suffering from type 2 diabetes mellitus may have a different composition and characteristics. Furthermore, the in vitro set-up only involved physiological HUVECs, whereas in the pathophysiologic situation of patients with CLTI, dysfunctional endothelial cells as well as many more cell types, chemokines, and inflammatory markers are involved [33].
In this study, we studied the effects on HUVECs in normoxic and hypoxic environments, because endothelial cells in patients with CLTI suffer from hypoxia-induced endothelial cell dysfunction [34].In addition, multiple culture media were used in the experiments to optimize the culturing conditions for both the BM-MNCs and the HUVECs together.Despite our efforts to unravel their effects on endothelial cells, one should bear in mind that other cell types, including smooth muscle cells, monocytes, and pericytes, are involved in angiogenesis and arteriogenesis.These cell types were not involved in our experiments, which is a limitation in our approach.However, REX-001 was studied in a murine model with the presence of all of the cell types involved, and improvement in revascularization and ischemic reperfusion was concluded [18].Future fundamental biological studies should focus on identifying effects on these cell types.Moreover, there is a need for strategies to identify and augment the homing, survival, and effectiveness of the injected cells. We believe that future clinical studies directed at cell therapeutic approaches to relieve CLI in patients should be based on a clear mechanism of action to avoid more disappointing clinical trial results. BM-MNC Isolates Manufacturing BM-MNCs were isolated from the heparinized bone marrow of healthy human donors (Hemacare, Charles-River) using a scaled-down density gradient.In short, the bone marrow was filtered using a 180 µm filter, and a sample of the filtered bone marrow was used for hematology analysis.The filtered bone marrow was then separated using Ficoll-Paque 1.077, and the upper layer, including the plasma and low-density cells, was isolated.These cells were washed twice with isotonic saline solution containing 2.5% human serum albumin. Finally, the BM-MNCs were resuspended in Ringer's lactate solution containing 2.5% w/v glucose and 1% w/v HSA. The filtered bone marrow and the final product were both measured in a Sysmex XP-300 hematology analyzer.The obtained amounts of leukocytes, erythrocytes, and thrombocytes were used to calculate the leukocyte recovery percentage, and the percentages of erythrocyte and thrombocyte depletion. BM-MNC Conditioned Medium To prepare the conditioned media, 1.33 × 10 6 BM-MNCs/mL were incubated for 24 h in EBM-2, OptiMEM (Gibco, Billings, MT, USA), and AIMV (Gibco) culture media.The conditioned media were stored at −80 • C, and were thawed and diluted for use in the experiments. 
MTT Assay The cell proliferation (n = 4 experimental replicates) of the HUVECs was determined using MTT assays.A volume of 100 µL of HUVECs (4000 cells/well) were plated in 96-well plates and cultured until approximately 80% confluency was reached in complete endothelial cell culture medium.The medium was then replaced by endothelial cell lowserum medium containing 0.2% FBS for 24 h.Subsequently, the medium was replaced by BM-MNC treatments consisting of low-serum media containing 2500, 5000, 10,000, or 20,000 BM-MNCs.After 24 h of incubation, 10 µL of MTT (Thiazolyl, blue tetrazolium bromide, Sigma M5655) was added per well.The cells were incubated for 4 h, after which 75 µL of each well was discarded and replaced by 75 µL of isopropanol/0.1 M hydrogen chloride.The plates were incubated at room temperature on a platform shaker until dissolution of the formazan crystals was observed.Thereafter, the absorbance was read at 570 nm on a Cytation5 spectrophotometer (BioTek, Winooski, VT, USA), and the data were obtained using BioTek Gen5 software.The obtained mitochondrial metabolic activity data were quantified as a representative measure of cell proliferation. Scratch Wound Healing Assay For the scratch wound healing assays (n = 6 experimental replicates), HUVECs were plated on IncuCyte Imagelock 96-well plates (BA-04856, Sartorius AG, Goettingen, Germany) and cultured until approximately 90% confluence was reached in complete culture medium, as previously mentioned.The medium was then replaced by EBM-2 Basal medium supplemented with 2% FBS (SingleQuotsTM Supplements, CC-4176, Lonza) and 1% GA-1000 (SingleQuotsTM Supplements, CC-4176, Lonza).After 24 h, a scratch wound was introduced using the Incucyte Woundmaker Tool (4563, Sartorius AG, Goettingen, Germany), and different amounts of BM-MNCs were added in EBM-2 Basal medium supplemented with 2% FBS and 1% GA-1000.The plates were incubated in the IncuCyte S3, and pictures were taken after 12 h.The percentage of scratch wound closure after 12 h was calculated by measuring the difference in the wound surface at baseline and the wound surface after 12 h using the Wound Healing Tool of ImageJ. Tube Formation Assay HUVECs were seeded in a 6-well plate in EBM-2 culture medium supplemented with SingleQuots until they became confluent.The medium was replaced with low-serum medium for 24 h.Then, a 96-well plate was coated with 45 µL/well of Geltrex basement membrane matrix (A1413202, ThermoFisher, Waltham, MA, USA).Suspensions of HUVECs at a concentration of 250,000 cells/mL and different concentrations of BM-MNCs or BM-MNC-conditioned medium were prepared and seeded in the coated 96-well plate.The plate was incubated in IncuCyte S3, and pictures were taken every 2 h for 24 h.The analysis was performed using ImageJ at t = 8 h. 
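A minimal sketch of the scratch-wound closure computation described in the Scratch Wound Healing Assay section above, assuming hypothetical wound-area values exported from the ImageJ Wound Healing Tool (the variable names and numbers are illustrative, not from the study):

```python
def percent_wound_closure(area_t0: float, area_t12: float) -> float:
    """Percentage of the initial wound area closed after 12 h.

    Areas are wound surfaces (e.g., in px^2) measured at baseline (t = 0)
    and after 12 h with the ImageJ Wound Healing Tool.
    """
    return 100.0 * (area_t0 - area_t12) / area_t0

# Hypothetical example: wound shrinks from 1.00e6 px^2 to 6.2e5 px^2 in 12 h.
print(f"{percent_wound_closure(1.0e6, 6.2e5):.1f}% closure")  # -> 38.0% closure
```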
Aortic Ring Assay The 8-week-old mice were sacrificed, the aortas were resected, and the surrounding fat and branching vessels were removed. The aortas were cut into <1 mm rings and incubated overnight at 37 °C in a humidified 5% CO2 environment in OptiMEM supplemented with 1% penicillin/streptomycin. A 96-well plate was coated with 75 µL of collagen matrix (Type 1 collagen (Merck Sigma-Aldrich, Millipore, Burlington, MA, USA) in DMEM (ThermoFisher, Waltham, MA, USA), pH adjusted with 5 N NaOH), and then one aortic ring was added per well. After one hour, the collagen was solid, and 150 µL of OptiMEM supplemented with 2.5% FBS, 1% penicillin-streptomycin solution (Cytiva, HyClone Laboratories, North Logan, UT, USA), 10 ng/mL of mouse VEGF (BioLegend, San Diego, CA, USA), and different amounts of BM-MNCs were added to each ring, with 20 or 30 rings per condition. After a total of 7 days of incubation at 37 °C in a humidified 5% CO2 environment, with a medium replacement after 3 days, pictures of each aortic ring were taken using live phase-contrast microscopy (Axiovert 40C, Carl Zeiss, Oberkochen, Germany). The number of sprouts was counted manually. ELISA The bone marrow-derived mononuclear cells were plated at 1.33 × 10⁶ cells/mL for 24 h to prepare the conditioned medium. After 24 h, the supernatant was stored at −20 °C. The IL-6, IL-8, MCP-1, and MMP-9 concentrations in the supernatant of the BM-MNCs were determined via ELISA, according to the protocol (BD Biosciences, San Jose, CA, USA). Statistical Analysis Differences in the continuous variables between groups were statistically assessed using one-way ANOVA or Kruskal-Wallis tests in GraphPad Prism 8 software. The data are represented as means ± SEM. The significance was set at p < 0.05. Conclusions In this study, no effect from human bone marrow cell isolates on the angiogenic behavior of experimental human endothelial cells (HUVECs) could be demonstrated. Our research holds significant relevance, as it addresses the shortage of supporting evidence regarding the effects of BM-MNCs on cultured endothelial cells. Figure 4 . Figure 4. Representative microscopy photos (10×) of HUVEC tube formation with the presence of BM-MNCs (A), quantification of HUVEC tube formation length after treatment with (B) BM-MNCs (2500, 5000, 10,000, or 20,000 cells added), and after treatment with BM-MNC-conditioned medium where A, B, C, and D represent 2500, 5000, 10,000, or 20,000 BM-MNCs, respectively, in (C) AIMV medium or (D) OptiMEM medium. Data points represent three technical replicates, and are presented as mean ± SEM. Non-significant via one-way ANOVA. Figure 5 .
Figure 5. Quantification of neovessel sprouts of mouse aortic rings after treatment with BM-MNCs (2500, 5000, 10,000, or 20,000 cells added). The graph is representative of 3 experiments performed with BM-MNCs isolated from 3 different bone marrow samples. Data are presented as mean ± SEM, with 30 data points per condition. Non-significant via Kruskal-Wallis test. Figure 6 . Figure 6. Quantification of the concentrations of IL-6, IL-8, MCP-1, and MMP-9 in OptiMEM or AIMV cell culture medium after 24 h of incubation with 1.33 × 10⁶ BM-MNCs/mL. Data points represent two measurements and are presented as mean ± SEM. Table 1 . The ranges of process performance indicators and cell populations from six manufacturing runs of the BM-MNC isolate. * This cell isolate was not used in experiments.
7,236.6
2023-09-01T00:00:00.000
[ "Biology", "Medicine" ]
Fabrication of Highly Transparent Y2O3 Ceramics with CaO as Sintering Aid Highly transparent Y2O3 ceramics were successfully fabricated with CaO as sintering aid. The microstructure evolution, optical transmittance, hardness and thermal conductivity of the Y2O3 ceramics were investigated. It was found that doping a small amount (0.01–0.15 wt.%) of CaO could greatly improve the densification rate of Y2O3. With an optimized CaO dosage of 0.02 wt.% combined with the low temperature vacuum sintering plus hot isostatic pressing (HIP-ing), Y2O3 ceramics with in-line transmittance of 84.87% at 1200 nm and 81.4% at 600 nm were obtained. Introduction Due to their unique optical and thermal properties such as high thermal conductivity, low thermal expansion coefficient, low phonon energy, and low infrared emissivity at elevated temperature [1][2][3][4][5], transparent Y 2 O 3 ceramics have a number of important applications, e.g., they are considered as a promising host material for high efficiency solid-state lasers, they can be used as supersonic infrared windows and missile domes. However, partly owing to the high melting temperature of Y 2 O 3 (2430 • C), it is hard to fabricate highly transparent Y 2 O 3 ceramics, especially at a relatively low sintering temperature [6]. It is widely known that intragranular pores can be generated and remained after sintering. In order to promote densification rate and avoid the formation of intragranular pores, cation ions with different valences are usually utilized as sintering aids. In previous studies, tetravalent sintering aids, for example, Th 4+ , Hf 4+ , and Ti 4+ were reported [7][8][9]. Zr 4+ as a sintering aid was widely studied, and highly transparent Zr-doped Y 2 O 3 ceramics were fabricated [10][11][12]. Trivalent cations like La 3+ were also proven to be an effective sintering additive for Y 2 O 3 in a number of studies [13][14][15][16]. A few studies were conducted using divalent ions, such as Ca 2+ and Mg 2+ , as sintering aids. In 1990, Katayama et al. first reported the sintering and electrical properties of 1 mol% CaO doped Y 2 O 3 sintered in air and proved that CaO was effective in improving the sinterability [17,18]. In 2001, the diffusion mechanism of defects in Ca-doped Y 2 O 3 ceramics was studied by Saito et al. [19]. In 2010, Kodo et al. investigated the grain boundary mobility of divalent cation doped Y 2 O 3 that were pressurelessly sintered in air with a doping concentration of 1 mol% [20]. However, as the proper doping concentration of CaO in Y 2 O 3 was not investigated in the previous studies, no highly transparent Y 2 O 3 ceramics with CaO as sintering aid were fabricated. In the present work, we investigated Y 2 O 3 transparent ceramics fabrication using CaO as sintering aid. Highly transparent Y 2 O 3 ceramics was successfully fabricated using vacuum sintering followed by hot isostatic pressing (HIP) technique. We report on the sintering behaviors of the Y 2 O 3 samples under different CaO doping concentrations, and the optical, mechanical, and thermal properties of the as-sintered ceramics. Ceramic Fabrication The starting materials were commercial Y 2 O 3 powder (99.999%, Jiahua Advanced Material Resources Co., Ltd., Jiangyin, China) and calcium oxide powder (CaO, 99.99%, Sigma-Aldrich, St. Louis, MO, USA). They were mixed together with the ratio of 0.01 wt.%, 0.02 wt.%, 0.05 wt.% and 0.15 wt.% of CaO, respectively. The mixed powders were ball milled for 15 h using ethanol as ball-milling media. 
After drying and sieving, the obtained powders were calcined at 1200 °C for 5 h and pressed into 20 mm diameter compacts under a manual tablet press machine. The compacts were then cold isostatic pressed (CIP-ed) under 200 MPa. The green bodies were first vacuum sintered (MT-U-1822, Meiteng, Suzhou, China) at a temperature in the range of 1450-1650 °C. After that, the Y 2 O 3 ceramics were hot isostatic pressed (10-30H, AIP, Columbus, OH, USA) at 1510 °C under 196 MPa, then annealed and polished to 3 mm in thickness. Characterizations The relative density was measured by the Archimedes method. The average grain size was measured by the software called Nano Measure, taking at least 200 grains into account for each sample. The microstructure of the as-sintered ceramics was observed using a scanning electron microscope (JSM-6360A, JEOL, Tokyo, Japan). The in-line transmission was characterized by a UV-VIS-NIR spectrometer (Lambda 950, Perkin Elmer, Waltham, MA, USA). The Vickers hardness was measured by using a microhardness tester (FM-300e, FUTURE-TECH, Kanagawa, Japan), with an applied load of 1 kg. The thermal diffusivity was measured by the laser-flash method (DLF 1200, TA Instruments, New Castle, DE, USA). Results and Discussion Figure 1 shows the relative density of Y 2 O 3 samples with different CaO doping concentrations and vacuum sintered at 1450-1650 °C. At the same sintering temperature, the relative density of the samples increased as the CaO content increased. The maximum density that could be reached also increased with the CaO doping concentration. For example, the 0.15 wt.% CaO-doped Y 2 O 3 reached its maximum relative density (99.5%) at 1600 °C, while the highest density of the 0.01 wt.% CaO-doped Y 2 O 3 was merely 92.8%. The densification rate obviously slowed down after 1550 °C, which indicates that grain growth rather than volume shrinkage is dominant in this stage. It is important to note that there was only a slight density increase for the 0.01 wt.% CaO-doped sample even when the sintering temperature was further increased. Figure 2 shows the variation of average grain size as a function of sintering temperature of the CaO-doped Y 2 O 3 ceramics. At a given sintering temperature, the samples with higher doping concentration show a higher grain growth rate and exhibit a larger average grain size. It indicates that adding CaO has obviously accelerated the rate of grain boundary migration. What's more, the influence on the grain boundary mobility tends to be greater with higher doping concentration. This phenomenon could be explained by the following defect analysis. As reported previously, the introduction of calcium oxide into yttria under high oxygen pressures can be described by Equation (1) [17], where the standard notations of Kröger and Vink are used [21]. Ca_Y and O_O denote a Ca 2+ ion on an yttrium site and an O 2− ion on its regular site, respectively, and h is an electron hole. Considering the vacuum sintering condition in this experiment, where the oxygen partial pressure is low, oxygen vacancies are easily generated according to Equation (2). Thus, the real reaction under the present experimental conditions can be expressed as Equation (3). Owing to the similar ionic radii of Ca 2+ (1.12 Å) and Y 3+ (1.02 Å), Ca 2+ can substitute on the Y 3+ site, creating one negative charge. Therefore, by adding Ca 2+ into Y 2 O 3 under vacuum sintering conditions, additional oxygen vacancies can be created.
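The displayed Equations (1)-(3) are referenced but not reproduced in this text. A hedged sketch of their likely forms, written in standard Kröger-Vink notation and reconstructed from the description above (these are the conventional reactions for CaO incorporation into Y2O3, not quotations of the paper's exact equations), is:

```latex
% Hypothetical reconstruction of Equations (1)-(3); standard Kroger-Vink forms,
% consistent with the surrounding description, not copied from the source.
\begin{align}
2\,\mathrm{CaO} + \tfrac{1}{2}\mathrm{O}_2 &\xrightarrow{\ \mathrm{Y_2O_3}\ }
  2\,\mathrm{Ca}_{\mathrm{Y}}' + 3\,\mathrm{O}_{\mathrm{O}}^{\times} + 2\,h^{\bullet} \tag{1}\\
\mathrm{O}_{\mathrm{O}}^{\times} &\rightleftharpoons
  \tfrac{1}{2}\mathrm{O}_2(g) + V_{\mathrm{O}}^{\bullet\bullet} + 2\,e' \tag{2}\\
2\,\mathrm{CaO} &\xrightarrow{\ \mathrm{Y_2O_3}\ }
  2\,\mathrm{Ca}_{\mathrm{Y}}' + 2\,\mathrm{O}_{\mathrm{O}}^{\times} + V_{\mathrm{O}}^{\bullet\bullet} \tag{3}
\end{align}
```

Each reaction is site- and charge-balanced: in (3), two Ca ions on yttrium sites carry a net charge of −2, which is compensated by one doubly charged oxygen vacancy.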
The increased concentration of CaO drives the equilibrium (3) to the right side, which contributes to the concentration increase of oxygen vacancies. Chen et al. have demonstrated that the grain boundary mobility in Y 2 O 3 is dominated by cation diffusivity, which can be enhanced by the presence of oxygen vacancies [14]. Thus, the grain boundary mobility could be promoted by the introduction of CaO. Figure 3 shows the microstructure evolution of the 0.02 wt.% and 0.15 wt.% CaOdoped samples under different sintering temperatures. At 1450 • C, both the 0.02 wt.% and 0.15 wt.% samples (Figure 3a,b) have the same degree of porosity and grain size (~0.8 µm). As the temperature was increased, the 0.15 wt.% CaO-doped samples showed an obviously faster densification and grain growth rate, which is consistent with the densification curve shown in Figure 1. At 1650 • C, the average grain size of the 0.15 wt.% CaO-doped sample had increased to over 10 µm, with a certain number of intragranular pores. In contrast, intragranular pores were not detected in the 0.02 wt.% CaO-doped samples. It means grain coarsening, rather than densification, has played the leading role in the CaO heavily doped samples. That is why for the 0.15 wt.% CaO-doped samples there was a slight density decrease in Figure 1 as the temperature was further increased from 1600 • C to 1650 • C. Therefore, further densification was difficult in the 0.15 wt.% CaO-doped samples. Figure 4 shows the SEM images of the ceramics after vacuum sintering at 1550 • C followed by HIP-ing at 1510 • C. After HIP-ing, the 0.02 wt.% CaO-doped ceramics exhibited a fully densified structure, without the detection of residual pores and secondary phases. Relative density of the samples achieved above 99.99%. In contrast, relative density of the ceramic shown in Figure 4b was just 99.8%. It is known that the driving force for densification decreases with the increase of grain size. In addition, during HIP, the plastic deformation mechanism is weaker in coarser grained samples. That is the reason why the pores in the 0.15 wt.% CaO-doped ceramics were more difficult to remove. Figure 5 shows the in-line transmission of the CaO-doped Y 2 O 3 ceramics after the HIP-ing, with a thickness of 3 mm. The 0.01 wt.% CaO-doped Y 2 O 3 ceramic was completely opaque, whose transmission line is zero throughout the whole wavelength range. The 0.02 wt.% CaO-doped Y 2 O 3 showed the highest in-line transmission over a wavelength range of 0.2-2 µm, reaching 84.87% at 1200 nm and remaining 81.41% at 600 nm. The transmission degraded when the CaO doping concentration was increased. It reveals that the optical property of the final sintered samples was very sensitive to the CaO doping concentration. Under the present sintering condition, 0.02 wt.% could be a favorable doping concentration. It demonstrates that residual pores could be effectively eliminated by adding a small amount of CaO to accelerate the densification during sintering process. As far as we know, highly transparent Y 2 O 3 ceramics with CaO as sintering additive have never been reported before. The Vickers hardness of the Y 2 O 3 ceramics vacuum sintered at 1550 • C followed by HIP-ing at 1510 • C was measured and is shown in Table 1 The thermal conductivity of the transparent Y 2 O 3 ceramics with different contents of CaO is further studied, as shown in Figure 6. 
It is calculated according to the formula λ = α·ρ·C, where λ is the thermal conductivity, α is the thermal diffusivity measured by the laser-flash method, ρ is the theoretical density of Y 2 O 3 (5.01 g/cm³), and C is the specific heat capacity of Y 2 O 3 , calculated from the expression given in [22]. In comparison with the undoped Y 2 O 3 sample, there is a slight decrease of the thermal conductivity with increasing CaO doping concentration. This is reasonable because the mismatch of the atomic mass and ionic radius between the substituted ions and the host crystalline material could lead to phonon scattering and thermal conductivity reduction [23]. However, the degradation of the thermal conductivity was very small owing to the small amount of CaO doping. For example, the thermal conductivity of the 0.15 wt.% CaO-doped sample still reached 14.6 W/mK at room temperature. Conclusions In conclusion, using CaO as a sintering aid, we have successfully fabricated highly transparent Y 2 O 3 ceramics by low-temperature vacuum sintering plus HIP-ing. The effects of the CaO doping concentration on the sintering behavior, optical transmission, mechanical strength, and thermal properties of Y 2 O 3 were investigated. It is shown that even a small variation in CaO doping can effectively affect Y 2 O 3 sinterability and the exclusion of residual pores inside the ceramics. The highest in-line transmission (81.41% at 600 nm) was achieved on samples with 0.02 wt.% CaO doping, vacuum sintered at 1550 °C and HIP-ed at 1510 °C. These samples have a Vickers hardness of 723.84 HV and a thermal conductivity at room temperature of 14.8 W/mK. Data Availability Statement: The data presented in this study are available on request from the corresponding authors. Conflicts of Interest: The authors declare no conflict of interest.
2,816.8
2021-01-01T00:00:00.000
[ "Materials Science" ]
On Shuffling of Infinite Square-free Words In this paper we answer two recent questions from Charlier et al. (2014) and Harju (2013) about self-shuffling words. An infinite word w is called self-shuffling if w = ∏_{i=0}^{∞} U_i V_i with ∏_{i=0}^{∞} U_i = ∏_{i=0}^{∞} V_i = w for some finite words U_i, V_i. Harju recently asked whether square-free self-shuffling words exist. We answer this question affirmatively. Besides that, we build an infinite word such that no word in its shift orbit closure is self-shuffling, answering positively a question of Charlier et al. Introduction A self-shuffling word, a notion which was recently introduced by Charlier et al. [2], is an infinite word that can be reproduced by shuffling it with itself. More formally, given two infinite words x, y ∈ Σ^ω over a finite alphabet Σ, we define S(x, y) ⊆ Σ^ω to be the collection of all infinite words z for which there exists a factorization z = ∏_{i=0}^{∞} U_i V_i with x = ∏_{i=0}^{∞} U_i and y = ∏_{i=0}^{∞} V_i. An infinite word w ∈ Σ^ω is self-shuffling if w ∈ S(w, w). Various well-known words, e.g., the Thue-Morse word or the Fibonacci word, were shown to be self-shuffling. Harju [5] studied shuffles of both finite and infinite square-free words, i.e., words that have no factor of the form uu for some non-empty factor u. More results on square-free shuffles were obtained independently by Harju and Müller [6], and Currie and Saari [4]. However, the question about the existence of an infinite square-free self-shuffling word, posed in [5], remained open. We give a positive answer to this question in Sections 2 and 3. The shift orbit closure S_w of an infinite word w can be defined, e.g., as the set of infinite words whose sets of factors are contained in the set of factors of w. In [2] it has been proved that each word has a non-self-shuffling word in its shift orbit closure, and the following question has been asked: Does there exist a word for which no element of its shift orbit closure is self-shuffling (Question 7.2)? In Section 4 we provide a positive answer to the question. More generally, we show the existence of a word such that for any three words x, y, z in its shift orbit closure, if x is a shuffle of y and z, then the three words are pairwise different. On the other hand, we show that for any infinite word there exist three different words x, y, z in its shift orbit closure such that x ∈ S(y, z) (see Proposition 7). Apart from the usual concepts in combinatorics on words, which can be found for instance in the book of Lothaire [7], we make use of the following notations: For every k ≥ 1, we denote the alphabet {0, 1, . . ., k − 1} by Σ_k. For a word w = uvz we say that u is a prefix of w, v is a factor of w, and z is a suffix of w. We denote these prefix and suffix relations by u ≤_p w and v ≤_s w, respectively. By w[i, j] we denote the factor of w starting at position i and ending after position j. Note that we start numbering the positions with 0. A prefix code is a set of words with the property that none of its elements is a prefix of another element. Similarly, a suffix code is a set of words where no element is a suffix of another one. A bifix code is a set that is both a prefix code and a suffix code. A morphism h is square-free if for all square-free words w, the image h(w) is square-free. A square-free self-shuffling word on four letters Let g : Σ_4^* → Σ_4^* be the morphism defined as follows: We will show that the fixed point w = g^ω(0) is square-free and self-shuffling. Note that g is not a square-free morphism, that is, it does not preserve square-freeness, as g(23) = 0130302 contains the square 3030.
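Although the paper works with infinite words, the underlying shuffle relation is easiest to see on finite words, where membership can be decided with a standard dynamic program. The sketch below is purely illustrative (it is the finite-word analogue of the set S(x, y) defined above, not part of the original paper):

```python
def is_shuffle(x: str, y: str, z: str) -> bool:
    """Return True iff z can be obtained by interleaving x and y
    while preserving the internal order of each (finite-word shuffle)."""
    if len(x) + len(y) != len(z):
        return False
    # dp[i][j] is True iff z[:i+j] is a shuffle of x[:i] and y[:j]
    dp = [[False] * (len(y) + 1) for _ in range(len(x) + 1)]
    dp[0][0] = True
    for i in range(len(x) + 1):
        for j in range(len(y) + 1):
            if i > 0 and dp[i - 1][j] and x[i - 1] == z[i + j - 1]:
                dp[i][j] = True
            if j > 0 and dp[i][j - 1] and y[j - 1] == z[i + j - 1]:
                dp[i][j] = True
    return dp[len(x)][len(y)]

# Toy example: "001122" is a shuffle of "012" with itself.
print(is_shuffle("012", "012", "001122"))  # True
```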
Lemma 1.The word w = g ω (0) contains no factor of the form 3u1u3 for any u ∈ Σ * 4 . Proof.We assume that there exists a factor of the form 3u1u3 in w, for some word u ∈ Σ * 4 .From the definition of g, we observe that u can not be empty.Furthermore, we see that every 3 in w is preceded by either 0 or 1.If 1 s u, then we had an occurrence of the factor 11 in w, which is not possible by the definition of g, hence 0 s u.Now, every 3 is followed by either 0 or 2 in w and 01 is followed by either 2 or 3. Since both 3u and 01u are factors of w, we must have 2 p u.This means that the factor 012 appears at the center of u1u, which can only be followed by 1 in w, thus 21 p u.However, this results in the factor 321 as a prefix of 3u1u3, which does not appear in w, as seen from the definition of g.Lemma 2. The word w = g ω (0) is square-free. Proof.We first observe that {g(0), g(1), g(2), g(3)} is a bifix code.Furthermore, we can verify that there are no squares uu with |u| 3 in w.Let us assume now, that the square uu appears in w and that u is the shortest word with this property.If u = 02u , then u = u 03 must hold, since 02 appears only as a factor of g(3), and thus uu is a suffix of the factor g(3)u g(3)u 03 in w.As w = g(w), also the shorter square 3g −1 (u )3g −1 (u ) appears in w, a contradiction.The same desubstitution principle also leads to occurrences of shorter squares in w if u = xu and x ∈ {01, 03, 10, 12, 13, 21, 30, 32}. If u = 2u then either 03 s u or 030 s u or 01 s u, by the definition of g.In the last case, that is when 01 s u, we must have 21 p u, which is covered by the previous paragraph.If u = u 030, then uu is followed by 2 in w and we can desubstitute to obtain the shorter square g −1 (u )3g −1 (u )3 in w.If u = 2u and u = u 03, and uu is preceded by 03 or followed by 2 in w, we can desubstitute to 1g −1 (u )1g −1 (u ) or g −1 (u )1g −1 (u )1, respectively.Therefore, assume that u = 2u 03 and as we already ruled out the case when 21 p u, we can assume that uu is preceded by 030 and followed by 02 in w.This however means that we can desubstitute to get an occurrence of the factor 3g −1 (u )1g −1 (u )3 in w, a contradiction to Lemma 1. We now show that w = g ω (0) can be written as Proof.We use the notation x = v −1 u meaning that u = vx for finite words x, u, v.We are going to show that the self-shuffle is given by the following: Now we verify that from which it follows that w is self-shuffling.It suffices to show that each of the above products is fixed by g.Indeed, straightforward computations show that hence ∞ i=0 U i is fixed by g and thus w = ∞ i=0 U i .In a similar way we show that 3 Square-free self-shuffling words on three letters We remark that we can immediately produce a square-free self-shuffling word over Σ 3 from g ω (0): Charlier et al. [2] noticed that the property of being self-shuffling is preserved by the application of a morphism.Furthermore, Brandenburg [1] showed that the morphism is square-free.Therefore, the word f (g ω (0)) is a ternary square-free self-shuffling word, from which we can produce a multitude of others by applying square-free morphisms from Σ * 3 to Σ * 3 . A word with non self-shuffling shift orbit closure In this section we provide a positive answer to the question from [2] whether there exists a word for which no element of its shift orbit closure is self-shuffling. 
The Hall word H = 012021012102 • • • is defined as the fixed point of the morphism h(0) = 012, h(1) = 02, h(2) = 1.Sometimes it is referred to as a ternary Thue-Morse word.It is well known that this word is square-free.We show that no word in the shift orbit closure S H of the Hall word is self-shuffling.More generally, we show that if x is a shuffle of y and z for x, y, z ∈ S H , then they are pairwise different.Proposition 4.There are no words x, y in the shift orbit closure of the Hall word such that x ∈ S (y, y). Proof.Suppose the converse, i.e., there exist words x, y ∈ S H such that Define the set X of infinite words as follows: In other words, X consists of words in S H which can be introduced as a shuffle of some word y in S H with itself.Now suppose, for the sake of contradiction, that X is non empty, and consider x ∈ X with the first block U 0 of the smallest possible positive length.We remark that such x and corresponding y are not necessarily unique.We can suppose without loss of generality that y starts with 0 or 10.Otherwise, we exchange 0 and 2, consider the morphism 0 → 1, 1 → 20, 2 → 210, and the argument is symmetric. It is not hard to see from the properties of the morphism h that removing every occurrence of 1 from x and y results in (02) ω .Hence the blocks in the factorizations of y after removal of 1 are of the form (02) i for some integer i.Thus the first letter of each block U i and V i that is different from 1 is 0, and the last letter different from 1 is 2. Then, U i and V i are images by the morphism h of factors of the fixed point of h.Therefore, there are words x , y ∈ S H such that x = h(x ), y = h(y ), Notice that the first block U 0 cannot be equal to 1. Indeed, otherwise x starts with 11, which is impossible, since 11 is not a factor of the fixed point of h. Clearly, taking the preimage decreases the lengths of blocks in the factorization (except for those equal to 1), and since U 0 = 1, the length of the first block in the preimage is smaller, i.e., |U 0 | < |U 0 |.This is a contradiction with the minimality of |U 0 |.Corollary 5.There are no self-shuffling words in the shift orbit closure of H. With a similar argument we can prove the following: the electronic journal of combinatorics 22(1) (2015), #P1.55 Proposition 6.There are no words x, y in the shift orbit closure of H such that x ∈ S (x, y). Proof.First we introduce a notation x ∈ S 2 (y, z), meaning that there exists a shuffle starting with the word z (i.e., U 0 = ε, V 0 = ε).Next, x ∈ S (x, y) implies that there exists z in the same shift orbit closure such that z ∈ S 2 (z, y).Indeed, one can remove the prefix U 0 of x to get z, i.e., z = (U 0 ) −1 x, and keep all the other blocks U i , V i in the shuffle product. Define the set Z of infinite words as follows: In other words, Z consists of words in S H which can be introduced as a shuffle of some word y in S H with z starting with the block V 0 .Now consider z ∈ Z with the first block V 0 of the smallest possible length.We remark that such z and a corresponding y are not necessarily unique. 
As in the proof of Proposition 4, the shuffle cannot start with a block of length 1.Again, if we remove every occurrence of 1 in y (and in z), we get (02) ω or (20) ω ; moreover, since V 0 contains letters different from 1, the first letter different from 1 is the same in y and z.So, without loss of generality we assume that both y and z without 1 are (02) ω , and the blocks U i and V i without 1 are integer powers of 02.Then, U i and V i are images by the morphism h of factors of H. Therefore, there are words z , y ∈ S H such that z = h(z ), y = h(y ), As in the proof of Proposition 4, since V 0 = 1, the length of the first block in the preimage is smaller, i.e., |V 0 | < |V 0 |.This is again a contradiction with the minimality of |V 0 |.So, we proved that if there are three words x, y, z in the shift orbit closure of the fixed point of h such that x ∈ S (y, z), then they should be pairwise distinct.Now we are going to prove that for any infinite word there exist three different words in its shift orbit closure such that x ∈ S (y, z). An infinite word x is called recurrent, if each of its prefixes occurs infinitely many times in it.Proposition 7. Let x be a recurrent infinite word.Then there exist two words y, z in the shift orbit closure of x such that x ∈ S (y, z). Proof.We build the shuffle inductively. Start from any prefix U 0 of x.Since x is recurrent, each of its prefixes occurs infinitely many times in it.Find another occurrence of U 0 in x and denote its position by i 1 .Put the electronic journal of combinatorics 22(1) (2015), #P1.55 At step k, suppose that the shuffle of the prefix of x is built: Find another occurrence of k−1 l=0 V l in x at some position j k > j k−1 .We can do it since x is recurrent.Put We note that k l=0 U l is a factor of x by the construction; more precisely, it occurs at position i k−1 . Find an occurrence of k l=0 U l at some position i Continuing this line of reasoning, we build the required factorization. Since each infinite word contains a recurrent (actually, even a uniformly recurrent) word in its shift orbit closure, we obtain the following corollary: Corollary 8.Each infinite word w contains words x, y, z in its shift orbit closure such that x ∈ S (y, z). The following example shows that the recurrence condition in Proposition 7 cannot be omitted: Example 9. Consider the word 3H = 3012021 • • • which is obtained from H by adding a letter 3 in the beginning.Then the shift orbit closure of 3H consists of the shift orbit closure of H and the word 3H itself.Assuming 3H is a shuffle of two words in its shift orbit closure, one of them is 3H (there are no other 3's) and the other one is something in the shift orbit closure of H, we let y denote this other word.Clearly, the shuffle starts with 3, and cutting the first letter 3, we get H ∈ S (H, y), a contradiction with Proposition 6. 
There also exist examples where each letter occurs infinitely many times:

Example 10. The following word x does not have two words y, z in its shift orbit closure such that x ∈ S(y, z). The idea of the proof is that the shift orbit closure consists of words of the following form: 1^*20^ω, 0^*1^ω, x itself, and all their right shifts. Shuffling any two words of those types, it is not hard to see that there exists a prefix of the shuffle which contains too many or too few occurrences of some letter compared to the prefix of x. We leave the details of the proof to the reader.

By Corollary 8, there are x, y, z in the shift orbit closure of H such that x ∈ S(y, z). To conclude this section, we give an explicit construction of two words in the shift orbit closure of H which can be shuffled to give H. We remark, though, that this construction gives a shuffle different from the one given by Corollary 8. Let:

By definition, the shift orbit closure of the Hall word is closed under h. Moreover, this shift orbit closure is also closed under h'. One of the ways to see this is the following. It is well known that the Thue-Morse word, which is the fixed point of the morphism 0 → 01, 1 → 10 starting with 0, is a morphic image of H under the morphism 0 → 011, 1 → 01, 2 → 0. Therefore, the set of factors of the Hall word is closed under reversal. Now by induction we prove that for each word v one has h'(v) = (h(v^R))^R (it is enough to prove this equality for letters and for the concatenation of two words). This implies that the shift orbit closure of the Hall word is closed under h'.

Conclusions

We showed that infinite square-free self-shuffling words exist. The natural question that arises now is whether we can find infinite self-shuffling words subject to even stronger avoidability constraints. For this we recall the notion of repetition threshold RT(k), which is defined as the least real number such that an infinite word over Σ_k exists that does not contain repetitions of exponent greater than RT(k). Due to the collective effort of many researchers (see [3,8] and references therein), the repetition threshold for all alphabet sizes is known and characterized as follows: RT(2) = 2, RT(3) = 7/4, RT(4) = 7/5, and RT(k) = k/(k−1) for k ≥ 5. A word w ∈ Σ_k^ω without factors of exponent greater than RT(k) is called a Dejean word. Charlier et al. showed that the Thue-Morse word, which is a binary Dejean word, is self-shuffling [2].

Question 12. Do there exist self-shuffling Dejean words over non-binary alphabets?
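As a quick computational check of three facts used in this section, the snippet below verifies on finite prefixes that H is generated by iterating h and starts with 012021012102, that its prefixes are square-free and avoid the factor 11, and that the reversed morphism h' (assumed here to map each letter a to the reversal of h(a)) satisfies h'(v) = (h(v^R))^R.

```python
h  = {"0": "012", "1": "02", "2": "1"}
hp = {a: w[::-1] for a, w in h.items()}        # assumption: h'(a) is the reversal of h(a)

def apply(m, w):
    return "".join(m[c] for c in w)

def is_square_free(w):
    n = len(w)
    return not any(w[i:i + L] == w[i + L:i + 2 * L]
                   for L in range(1, n // 2 + 1)
                   for i in range(n - 2 * L + 1))

H = "0"
for _ in range(10):                            # |h^10(0)| = 1536 letters
    H = apply(h, H)

assert H.startswith("012021012102")            # the prefix quoted above
assert is_square_free(H[:400])                 # square-freeness, checked on a prefix
assert "11" not in H                           # 11 is not a factor of H
for v in ("0", "021", "1201", "0120210"):
    assert apply(hp, v) == apply(h, v[::-1])[::-1]   # h'(v) = (h(v^R))^R
print("all prefix checks passed")
```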
4,296.6
2015-03-06T00:00:00.000
[ "Mathematics" ]
The Apollo Structured Vocabulary: an OWL2 ontology of phenomena in infectious disease epidemiology and population biology for use in epidemic simulation Background We developed the Apollo Structured Vocabulary (Apollo-SV)—an OWL2 ontology of phenomena in infectious disease epidemiology and population biology—as part of a project whose goal is to increase the use of epidemic simulators in public health practice. Apollo-SV defines a terminology for use in simulator configuration. Apollo-SV is the product of an ontological analysis of the domain of infectious disease epidemiology, with particular attention to the inputs and outputs of nine simulators. Results Apollo-SV contains 802 classes for representing the inputs and outputs of simulators, of which approximately half are new and half are imported from existing ontologies. The most important Apollo-SV class for users of simulators is infectious disease scenario, which is a representation of an ecosystem at simulator time zero that has at least one infection process (a class) affecting at least one population (also a class). Other important classes represent ecosystem elements (e.g., households), ecosystem processes (e.g., infection acquisition and infectious disease), censuses of ecosystem elements (e.g., censuses of populations), and infectious disease control measures. In the larger project, which created an end-user application that can send the same infectious disease scenario to multiple simulators, Apollo-SV serves as the controlled terminology and strongly influences the design of the message syntax used to represent an infectious disease scenario. As we added simulators for different pathogens (e.g., malaria and dengue), the core classes of Apollo-SV have remained stable, suggesting that our conceptualization of the information required by simulators is sound. Despite adhering to the OBO Foundry principle of orthogonality, we could not reuse Infectious Disease Ontology classes as the basis for infectious disease scenarios. We thus defined new classes in Apollo-SV for host, pathogen, infection, infectious disease, colonization, and infection acquisition. Unlike IDO, our ontological analysis extended to existing mathematical models of key biological phenomena studied by infectious disease epidemiology and population biology. Conclusion Our ontological analysis as expressed in Apollo-SV was instrumental in developing a simulator-independent representation of infectious disease scenarios that can be run on multiple epidemic simulators. Our experience suggests the importance of extending ontological analysis of a domain to include existing mathematical models of the phenomena studied by the domain. Apollo-SV is freely available at: http://purl.obolibrary.org/obo/apollo_sv.owl.
Background The science and practice of infectious disease epidemiology, like climate science, is increasingly reliant on computational simulation [1], which is performed by software applications known as epidemic simulators. The simulators require information about pathogens, host populations, rates of infection transmission, interventions, and the disease outcomes of infections [2]. Using this configuration information-which we refer to as an infectious disease scenario-a simulator's algorithm computes the progression of one or more infections in one or more populations over time, under zero or more interventions. The result of this computation-the output of the simulator-is information on which decision makers can base policy or decisions about disease control. The goal of our research for the past 4 years has been to increase the accessibility and ease of use of simulators to promote progress in the field of infectious disease epidemiology [3]. A key focus has been reducing the time and effort required to locate a simulator, access it, understand its characteristics, create an infectious disease scenario to configure it, run it, and analyze its output. As an example of the effort required, Halloran et al. spent 6 months creating a comparative study of three simulators [4]. Most of the effort was expended on representing the same scenario in the different configuration representations and then converting results into a common representation for comparisons. As an example of the syntactic and semantic differences among simulator configurations, to configure the FRED simulator version 2.0.1 [5] to simulate the closing of schools 3 days after some event occurs (such as influenza incidence reaching a particular threshold) one would place "school_closure_delay = 3" in its configuration file, whereas for FluTE version 1.15 [6] one would place "responsedelay = 3" in its configuration file (unlike FRED, this setting would also affect other interventions such as vaccination). To address this problem, we are developing a common representation for simulator configuration and output that is capable of representing the configurations and output of infectious disease simulators [3]. We use an XML Schema Document (XSD) as our primary representation because the XSD language enabled us to represent the probabilistic, mathematical, and other non-ontological knowledge required for and generated by simulation.
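To make the configuration mismatch above concrete, the sketch below maps a single shared scenario field onto the two native keys quoted for FRED 2.0.1 and FluTE 1.15. The common field name and the rendering functions are invented for illustration; only the two native keys come from the example above, and the real Apollo services exchange XSD-conformant XML rather than flat key/value files.

```python
# Toy "common representation -> native configuration" translation.
COMMON_SCENARIO = {"control_measure_response_delay_days": 3}   # hypothetical common field

def to_fred(scenario: dict) -> str:
    # FRED 2.0.1 expresses the school-closure delay with its own key.
    return f"school_closure_delay = {scenario['control_measure_response_delay_days']}"

def to_flute(scenario: dict) -> str:
    # FluTE 1.15 uses a single response delay that also applies to other
    # interventions (e.g., vaccination), so the semantics are not identical.
    return f"responsedelay = {scenario['control_measure_response_delay_days']}"

if __name__ == "__main__":
    print(to_fred(COMMON_SCENARIO))    # school_closure_delay = 3
    print(to_flute(COMMON_SCENARIO))   # responsedelay = 3
```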
We inform the design of the XSD representation by formal ontological analysis of the domain of infectious disease epidemiology, with particular attention to the inputs and outputs of nine simulators. Our goal was for the XSD to have the capability to represent the configuration and outputs of not only these nine simulators, but also other existing and future simulators. We represent the results of this analysis in an OWL ontology-called the Apollo Structured Vocabulary or Apollo-SV. Apollo-SV and XSD together can be understood as a hybrid approach to knowledge representation and reasoning as defined by Davis et al. in their seminal paper on knowledge representation [7]. In particular, Apollo-SV (1) controls the terminology used in the XSD, (2) is a source of human-readable definitions of the terms for users of the XSD, and (3) serves as a record of the ontological commitments made by the developers of the XSD. Our hypothesis was that it is feasible to develop a common representation for the configuration and output of simulators that are diverse both in their internal representations and in the pathogens, modes of transmission, geography, and interventions that they model. We previously reported our initial versions of the XSD and Apollo-SV (versions 1.0), as well as our creation of a set of Web services to transmit a common configuration to two simulators [3]. We use configurations compliant with the XSD to invoke simulators as part of these Web Services, but generated the OWL2 representation-Apollo-SV-as our core ontology. In this paper, we describe new results from our subsequent ontological analyses of additional simulators and our updated understanding of simulator configurations that we incorporated into Apollo-SV version 3.0.1. Methods Our method for the development of the common representation was formal ontological analysis with rapid implementation of the representation to configure simulators and feedback from the results of implementation into further analysis. The next sections discuss our style of ontology development, the application in which the ontology is used, and the procedures and principles we followed in constructing the OWL ontology, Apollo-SV. "Gene Ontology Style" of ontology development We developed Apollo-SV using what we refer to as the Gene Ontology style of ontology development and testing-or GO style for short. GO style is a method for ontology development that emphasizes participation of subject matter experts and frequent and early feedback to ontology developers generated from using the ontology in software applications. We adopted GO style because it was successful for the Gene Ontology and because our community of developers and users was similar in many respects. A key strength of GO style-which the Gene Ontology Consortium cites as a factor in its success-is that a community of scientists, ontologists, artificial intelligence experts, and software developers all contribute in an egalitarian fashion to the ontology and its applications [8]. The team developing Apollo-SV comprises experts in infectious disease epidemiology, simulator and other software development, disease surveillance, medicine, biomedical informatics, medical terminologies, ontological engineering, artificial intelligence, and formal logic (the last one in the list helps to ensure that OWL2 axioms that define classes are correct). All these individuals have been actively engaged in development and review of Apollo-SV, and their feedback guides design decisions. 
A second strength of the GO style of ontology development is its emphasis on early use of the ontology in applications, which identifies issues and generates rapid feed back into ontology development [9]. We discuss the application of Apollo-SV in the next section. Additional elements of the style, that have subsequently been adopted by the Open Biological and Biomedical Ontologies (OBO) Foundry as principles of ontology development, include creating textual definitions for each class and making the ontology publicly and freely available for community use, review, and input [8][9][10]. We discuss how we implemented these additional elements of the style, as well as additional OBO Foundry principles, in the section following application. The application in which the ontology is used As stated previously, Apollo-SV serves as the repository for definitions and standard terminology for the Apollo XSD. The Apollo XSD in turn is used in a set of Web services. The Web services, called the Apollo Web Services, allow a publicly available, Web-based, end-user application to access multiple epidemic simulators through requests to a single Broker service (Fig. 1). In Fig. 1, the Simple End User Application (SEUA) [11] creates an infectious disease scenario for simulation, encoded in an XML document that conforms to the Apollo XSD syntax [12], which in turn uses terminology defined by Apollo-SV. The SEUA invokes the runSimulation() method of the Broker service with the XML-encoded infectious disease scenario. The Broker service subsequently invokes the Translator service, which translates the infectious disease scenario into the native terminology and syntax of the requested simulator(s). The SEUA polls the Broker service for the current status of the simulator until the status returned is "COMPLETED." The SEUA then invokes various visualization services on the simulator output to display epidemic curves and maps in the interface. By standardizing the terminology in the Web services, Apollo-SV helps to ensure that the SEUA end user and the simulators understand the XML-encoded infectious disease scenario to mean the same thing. Towards that end, the SEUA displays the textual definitions of classes in Apollo-SV to help the end user specify her infectious disease scenario accurately and precisely. Beginning with the earliest development of Apollo-SV, exposing the terminology and definitions from Apollo-SV to subject matter experts, developers, and others in the SEUA was a significant source of critical feedback that led to additional ontological analysis as well as refinements of the terminology and definitions. Procedures and principles of Apollo-SV construction We encode the results of our ontological analyses in OWL2. Our process proceeds concurrently with development of the Apollo XSD, and issues discovered in constructing either the OWL or the XSD are fed back into the analysis. We conducted a formal ontological analysis of seven additional simulators-their configuration files, output files, documentation (including any user guides), and journal and conference papers that either described or used them. As part of this process, we reviewed terms that we extracted from these sources with the developers of the simulators to identify relevant but missing terms, to discover synonymy among terms, and to detect and resolve ambiguity. Of the seven additional simulators, four are presently connected to the Apollo Web Services. 
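As a rough illustration of the request/poll cycle of the SEUA and Broker service described above, the sketch below shows what a client might look like. The endpoint URLs, the payload handling, and every status string other than COMPLETED are assumptions made for the sake of the example; the actual Apollo Web Services define their own interface and XSD-based message syntax.

```python
import time
import requests

BROKER_URL = "http://broker.example.org/apollo"   # hypothetical endpoint

def run_simulation(scenario_xml: str, simulator_id: str) -> str:
    """Submit an XSD-conformant infectious disease scenario; return a run identifier."""
    resp = requests.post(
        f"{BROKER_URL}/runSimulation",
        data=scenario_xml,
        params={"simulator": simulator_id},
        headers={"Content-Type": "application/xml"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text.strip()          # assume the broker answers with a run id

def wait_for_completion(run_id: str, poll_seconds: float = 5.0) -> None:
    """Poll the broker until the simulator reports COMPLETED, as the SEUA does."""
    while True:
        status = requests.get(f"{BROKER_URL}/status/{run_id}", timeout=30).text.strip()
        if status == "COMPLETED":
            return
        if status == "FAILED":        # failure status name is an assumption
            raise RuntimeError(f"run {run_id} failed")
        time.sleep(poll_seconds)
```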
We wrote a textual definition for every class that we create, in keeping with the GO style and OBO Foundry principles. We also created an elucidation annotation for classes in Apollo-SV because formal ontological textual definitions are sometimes not accessible to domain experts. The elucidation restates the definition in language more familiar to subject matter experts, while still referring to the same type of entities as the definition. Also in accordance with the GO style of ontology development, we made Apollo-SV publicly available at [13], a permanent URL (PURL), to allow external scientific review, comments, and requests for additions as well as to encourage adoption of Apollo-SV. We ensured that Apollo-SV is easily accessible for browsing and download at the Web-based Ontobee portal [14], analogous to Gene Ontology browsers (the GO itself is viewable on Ontobee). The issue tracker is located at the Apollo GitHub site [15]. The PURL to the development version of Apollo-SV is at [16]. Because the Gene Ontology has "full membership" status in the OBO Foundry-a special status conferred on ontologies that conform to the OBO Foundry principles, we also followed the principles of the OBO Foundry in addition to openness and textual definitions [17,18]. Per those principles, we release it in a common format, OWL2 [19]. We also adopted the Foundry principle of orthogonality, which stipulates that ontology developers reuse preexisting ontological representations into Apollo-SV when and where appropriate. We employed two methods for ontology reuse. The first method is the OWL2 ontology-import mechanism. This method inserts into the target ontology all classes and object properties of the imported ontology. However, bulk inclusion of large ontologies is often impractical and can degrade the usability of the target ontology. Therefore, the second method we used is the Minimum Information to Reference an External Ontology Term (MIREOT) methodology [20]. Using a MIREOT Protégé plugin that we developed [21], we import selected classes, individuals, and properties from certain ontologies into Apollo-SV. We hypothesized that we would be able to reuse preexisting ontologies or significant portions of them in developing Apollo-SV. In particular, we anticipated reusing substantial portions of the Infectious Disease Ontology (IDO) [22]. IDO is an OBO ontology (but not a "full member" of the Foundry) that represents infections, infectious diseases, pathogens, and hosts from the perspectives of infectious disease as a medical subspecialty and infectious disease research. We adhered to OBO Foundry naming conventions [23]. We edited our terms to (1) avoid connectives ('and' , 'or'), (2) prefer singular nouns, (3) avoid the use of negations, and (4) avoid catch-all terms such as Unknown x. Fig. 1 The relationships of Apollo components and epidemic simulators. Apollo-SV defines the terminology used in Apollo XSD, which specifies the message syntax for the Web services. The SEUA calls the Broker service to configure simulators (messages passed along blue arrows) and to access simulator output (messages passed along red arrows). The Translator service translates Apollo messages to/from native simulator input/output. Purple ovals represent Apollo standards; blue ovals represent Apollo-developed software that use the Apollo Web services; and red ovals represent entities interacting with Apollo To help link the OWL file to the XSD, we created a Unique Apollo Label (UAL) annotation for classes in Apollo-SV. 
The UAL is the exact XSD type or element name to which the class in Apollo-SV corresponds, for example, InfectiousDisease and BasicReproductionNumber. Although not required by OBO Foundry principles, we imported Basic Formal Ontology (BFO) version 1.1 [24] into Apollo-SV as its upper ontology as do many other Foundry ontologies. The main reasons were (1) to maintain the semantics of BFO-based ontologies and their components that we reused and (2) to ensure that new classes and their associated axioms in Apollo-SV did not introduce inconsistencies to those semantics. We created description logic axioms according to the syntax and semantics inherent in OWL2 for classes in Apollo-SV (e.g., Figs. 2,3, 4 and 5). When possible, these axioms provide both necessary and sufficient criteria for class membership. Many axioms, however, define only necessary criteria, most often because the description logic semantics of OWL2 were insufficiently expressive to encode both the necessary and sufficient criteria of the class. Results Apollo-SV version 3.0.1 comprises 868 classes, of which 802 were required for describing simulator configuration and output. The remaining 66 classes are extraneous imported classes resulting from OWL2-based imports of ontologies in toto. Of the 802 classes, we created 397 (49.5 %) new classes, of which 117 classes have necessary and sufficient criteria. We imported 118 (14.7 %) classes via the methodology of Minimum Information to Reference and External Ontology Term or MIREOT (Table 1), and imported 287 (35.8 %) via OWL2-based import. The ontology comprises a total of 1180 logical axioms. High level classes in Apollo-SV The most important Apollo-SV class for users of simulators is infectious disease scenario, which represents an ecosystem at simulator time zero with at least one infection process (a class) affecting at least one population (also a class). The infectious disease scenario includes information about the infection process and its acquisition by a host organism (e.g., transmission probabilities and the durations of infectious and latent periods). It can also include information about planned or ongoing interventions to control infection (such as vaccination control measures). Representing ecosystems, populations, and censuses thus expanded the scope of Apollo-SV to population biology ( Table 2). Including population biology subsequently influenced our definitions of key terms in infectious disease epidemiology. Classes representing the infections, infection acquisitions, hosts, pathogens, and infectious diseases in an ecosystem are foundational in Apollo-SV. The reason is that the essential prediction of simulators is how many infections will occur given an infectious disease scenario. Nearly everything else that simulators predict are events that revolve around infection. They either (1) occur downstream of infection (such as disease outcomes including symptoms and death), (2) influence the probabilty of acquiring an infection (such as going to work or school or being vaccinated), or (3) occur as part of an infectious disease control strategy to prevent infection acquisition (such as school closure or quarantine). Also, because one simulator that we analyzed predicts colonization of hosts by pathogens and the processes by which hosts acquire colonizations, it was also important to represent colonization and how it differs from infection (see below). 
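As a toy rendering of the central commitment just described (an infectious disease scenario has at least one infection process and at least one population), the snippet below builds the corresponding existential restrictions with rdflib. The IRIs, the class names, and the hasPart property are placeholders, not the identifiers or axioms actually used in Apollo-SV.

```python
from rdflib import BNode, Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/apollo-sv-demo#")   # placeholder namespace
g = Graph()
g.bind("ex", EX)

for name in ("InfectiousDiseaseScenario", "InfectionProcess", "Population"):
    g.add((EX[name], RDF.type, OWL.Class))
g.add((EX.hasPart, RDF.type, OWL.ObjectProperty))      # stand-in relation

def some_values_from(sub_class, prop, filler):
    """Assert sub_class rdfs:subClassOf [ a owl:Restriction ; onProperty ; someValuesFrom ]."""
    r = BNode()
    g.add((r, RDF.type, OWL.Restriction))
    g.add((r, OWL.onProperty, prop))
    g.add((r, OWL.someValuesFrom, filler))
    g.add((sub_class, RDFS.subClassOf, r))

some_values_from(EX.InfectiousDiseaseScenario, EX.hasPart, EX.InfectionProcess)
some_values_from(EX.InfectiousDiseaseScenario, EX.hasPart, EX.Population)

print(g.serialize(format="turtle"))
```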
Foundational classes where reuse of IDO was not possible We now describe a set of foundational classes we created in Apollo-SV after attempting unsuccessfully to reuse IDO classes and their definitions. We also discuss the reasons why these classes and definitions were unworkable. Infection Apollo-SV defines infection as: A reproduction of a pathogen organism of a particular biological taxon in a tissue of a host organism from another taxon (Fig. 2). From the perspective of population biology, an infection is merely a process by which one species reproduces, surviving from generation to generation, utilizing the resources of a host species. It is the normal biology of the pathogen species. Infection is distinguished from other types of pathogen reproduction in a host-namely colonization (defined below)-by violation of the integrity of tissue in the host through tissue invasion. This tissue invasion may occur-and subsequently end-without causing any symptoms or permanent ill effects on the host. Thus, infection does not equate to disease, and we carefully distinguish between infection and infectious disease. Epidemic simulators represent infection as a process because infectious disease epidemiologists define infection as a process. For example, [25,26] define infection as the invasion of a host organism's tissue by pathogens, the multiplication of those pathogens, and the reaction of the host's tissue(s) to the pathogens and the toxins they produce. Further reinforcing the fact that infection is a process is the fact that simulators represent periods Fig. 2 of (or ontologically speaking, occurrent parts of ) the infection: the latent period and the infectious period. Before we created a class for infection in Apollo-SV, we reviewed IDO for a class that represents the process of infection, whether labeled as infection or with some other term. We found that IDO defines infection as a physical thing, or "material entity" in the terminology of Basic Formal Ontology (BFO). Specifically, it defines infection as: A part of an extended organism that itself has as part a population of one or more infectious agents and that is (1) clinically abnormal in virtue of the presence of this infectious agent population, or (2) has a disposition to bring clinical abnormality to immunocompetent organisms of the same Species [sic] as the host (the organism corresponding to the extended organism) through transmission of a member or offspring of a member of the infectious agent population. Given that epidemic simulators and the relevant basic sciences on which they are founded recognize infection as a process, we needed to create a new class in Apollo-SV to represent it. The lack of a representation of the process of infection in IDO is surprising because IDO's definitions of its classes host role and infectious agent role require a process to realize them. This process would presumably be infection. Colonization Apollo-SV defines colonization as: A reproduction of a pathogen of a particular biological taxon inside or on the surface (e.g., skin, mucosal membrane) of a host organism of another taxon, without invasion of any tissues of the host. We required this class to represent the input of the Regional Healthcare Ecosystem Analyst [27] simulator, which models the spread of methicillinresistant Staphylococcus aureus (MRSA). MRSA, as well as methicillin-sensitive varieties of S. aureus, typically colonize the nasal mucosa and skin of humans, living on these surfaces but not invading them. 
If a human Abiotic ecosystem census host subsequently becomes immunocompromised or suffers a breach of the integrity of these surfaces, this colonization may extend to infection. Colonization is an important epidemiological process because an individual may acquire colonization from another MRSA colonized host. IDO defines colonization as An establishment of localization in host process in which an organism establishes itself in a host. The latter part of the definition is more general than the former (assuming that there are other types of establishment besides localization) and thus does not differentiate this IDO class from its parent in IDO. We did not consider it further. Host Apollo-SV defines host as: An organism of a particular biological taxon that is the site of reproduction of an organism of a different taxon (Fig. 3). This definition accomodates the host undergoing infection and/ or colonization. We note that our use of site of in this definition has a precise meaning as specified in the Relation Ontology, where site of is a synonym for the contains process relation, which relates an …independent continuant and a process, in which the process takes place entirely within the independent continuant. We could not reuse IDO's definition of host, which is: An organism bearing a host role. To understand this IDO definition, it is necessary to review two additional IDO definitions: 1. Host role: A role borne by an organism in virtue of the fact that its extended organism contains a material entity other than the organism. 2. Extended organism: An object aggregate consisting of an organism and all material entities located within the organism, overlapping the organism, or occupying sites formed in part by the organism. Under these definitions, any organism that has an artificial joint, a penny in its gut, or an arrow through its chest is a host. Classifying a person with a prosthetic knee as a "host" is counterintuitive and not in keeping with how host is defined in population biology or infectious disease epidemiology (or in clinical medicine). Furthermore, the definition is based on IDO's view of infection as a material entity and does not account for the process of infection. Pathogen Apollo-SV defines pathogen as: An organism of a particular biological taxon that is the bearer of a disposition that is realized as its reproduction in the tissue of an organism of a different biological taxon (Fig. 4). Thus Apollo-SV defines a pathogen as an organism that has the capability to reproduce inside the tissue of a host organism of another biological taxon. Note that this definition is inclusive of organisms like MRSA involved in colonization: the organism still has the potential to invade tissue and establish infection and thus meets the definition. Once again, we had intended to reuse IDO. However, IDO defines pathogen as: A material entity with a pathogenic disposition. Again, this definition requires additional IDO definitions to clarify its meaning: 1. Pathogenic disposition: A disposition to initiate processes that result in a disorder. 2. Disorder: A material entity which is clinically abnormal and part of an extended organism. Disorders are the physical basis of disease. Thus, per IDO any material that causes injury is a pathogen, including the endotoxin of Clostridium difficile or an overdose of acetaminophen. This definition is not how infectious disease epidemiology uses the term pathogen. 
IDO does have a class infectious agent as a subtype to pathogen that refers specifically to organisms that can enter into a host and cause disease. The IDO definition of infectious agent, however, relies on IDO's definitions of infection and infectious disorder as material entities. To be consistent with infection as a process, we created the above definition of pathogen in Apollo-SV. Infectious Disease Apollo-SV defines infectious disease as: A disease that inheres in a host and is realized as a disease course that is causally preceded by an infection (Fig. 5). This means that the infection occurs first and creates abnormalities in the host that result in disease. This definition is compatible with the OBO Foundry definition of disease in the Ontology of General Medical Science (OGMS) [28]. We thus were able to reuse the OGMS definition of disease, in keeping with the Foundry principle of orthogonality. Note that the disease inheres only in the host. From the pathogen's perspective, there is no clinical abnormality (which is a necessary condition to meet the definition of disease in OGMS) as infection is normal biology of pathogens. IDO's definition of infectious disease is incompatible with our definition of infection as process. Infection Acquisition Apollo-SV defines infection acquisition as: The biological process of a pathogen of a particular biological taxon entering (the tissues of the body of ) a susceptible host organism of another taxon and reproducing using host resources. A susceptible host can acquire an infection from one of at least three routes: 1. From another host organism (of the same or different species) that is infectious, which we represent in Apollo-SV as the class Infection acquisition from infectious host. 2. From some object or its surface that is contaminated with the pathogen, which we represent in Apollo-SV as the class Infection acquisition from contaminated thing. 3. From self colonization with the pathogen, which we represent in Apollo-SV as the class Infection acquisition from self colonization. Note that we chose to define infection acquistion instead of transmission or transmission process. One reason was our insight that ontologically it is only the second, susceptible host that undergoes change during the process, and the term infection acquisition describes this change better than the term transmission. Another reason is that we needed to represent the acquisition of infections from contaminated things and from selfcolonization with a pathogen. In both cases, transmission from host to host is indirect (mediated through contaminated surfaces and objects and through acquistion of colonization, respectively). As with other key terms, IDO lacked an adequate class and definition for the process of infection acquisition. IDO imports transmission process and its two definitions from the Transmission Ontology: 1. A process that is the means during which the pathogen is transmitted directly or indirectly from its natural reservoir, a susceptible host or source to a new host. 2. Suggested definition: A process by which a pathogen passes from one host organism to a second host organism of the same Species [sic]. Beginning with the second definition (which for some reason the Transmission Ontology labels as a "suggested definition"), it erroneously restricts transmission to occur only between two hosts of the same species. 
It is thus not usable in infectious disease epidemiology or any other science that studies cross-species transmission, which frequently occurs in zoonoses and diseases like foot and mouth disease. The first definition has two major problems. The first problem is circularity, defining transmission process in terms of a pathogen being transmitted, with no definition of transmitted. The definition also excludes infection acquisitions from contaminated objects and self colonization and refers to the undefined terms natural reservoir and source. The second problem is an ontological one. It attributes to one process the property of being the means by which something else happens. For example, assume droplet spread of infection from one host to another by a sneeze. This definition equates the sneeze with the transmission process. That is, it says that only the sneeze exists, but it also has the property of "having transmitted the pathogen". However, equating the sneeze to the transmission process is nonsensical because for example, droplets can remain airborne and infectious for hours. Thus the pathogen may not reach (or be transmitted to) another host until long after the sneeze is over. The sneeze cannot therefore be the transmission process. In reality, there are two distinct processes: the sneeze and the subsequent acquisition of an infection by the second host. Testing Apollo-SV and its ontological commitments in software We created a capability to configure six simulators: using the SEUA, an end user creates an infectious disease scenario that conforms to the XSD and then submits it to the simulators via Web services. The SEUA then retrieves the output of the simulators and displays it on maps and graphs. This capability was the end product of iterative, concurrent development of Apollo-SV and the XSD according to our analysis of the simulators, which included feedback from implementation in the Web services and SEUA. In addition, the SEUA displays textual definitions of Apollo-SV classes to the end user. Feedback on these definitions was fed back into ontology development which resulted in ontology changes including improved definitions. We are piloting a 7th simulator whose unique ontological commitments are reflected in Apollo-SV and the XSD, but are still undergoing refinement. The six configurable simulators are (1) [29], and (6) an ebola model by Bellan et al. [30] These simulators are diverse in terms of underlying model (compartment vs. agent-based), disease (influenza, anthrax, ebola, and dengue), transmission (vector and person to person), and geography, both in terms of granularity (tract vs. county vs. entire nation) and scale (from a single state or nation to the entire globe). Discussion We developed and implemented a common representation for simulator configuration and output and used it in an application that constructs and sends infectious disease scenarios to six different epidemic simulators. Our success in representing the inputs of a diverse sample of simulators lends support to our hypothesis that a common representation is feasible. Early usage of the ontology and exposure of its definitions to subject matter experts in software resulted in ontology improvements, most notably in the definitions of the core classes of Apollo-SV that we discussed here. This result is consistent with those of other ontology development efforts. 
The ontological analysis we used to create the common representation identified abstractions that spanned simulators diverse in their core mathematical foundations (compartmental vs. agent based), pathogens, routes of transmission, geographical scope (single city or county vs. entire world), and interventions. The key abstractions were that the input of a simulator was an infectious disease scenario and that the scenario was properly understood as a representation of an ecosystem at a particular time, which corresponded to simulator time zero. We note that there is nothing specific to infectious disease in this conceptualization, which suggests that the ontology could be applied to simulation of other ecological phenomena. A novel aspect of our method was its focus on the ontological analysis of epidemic simulators. This focus quickly brought into view the key biological phenomena being simulated and their fundamental nature. Additionally, simulators-being mathematical models-make explicit ontological commitments about the core entities involved in infections and their acquisition, which led us to confront the issues involved in representing them from the outset. It is worth noting that simulators used in epidemiology are often rigorously vetted through peer review of simulator-based research, as well as peer review of the simulators themselves. A final advantage of our focus on simulators is that they make a relatively small number of ontological commitments, which allowed us to devote sufficient time to them, while still being able to implement an application that continously tested whether the evolving representation could configure an expanding set of simulators. We expect that ontological analysis of any domain for which mathematical models exist would benefit from a focus on the models. For example, for human physiology there is an extensive library of mathematical models that are the focus of the Human Physiome project [31]. Prior work on the use of ontologies for modeling and simulation identified a distinction between so-called "referential" and "methodological" ontologies [32]. The former correspond with domain ontologies: a representation of the phenomena simulated. The latter correspond with application ontologies: a representation of simulators, how they work, and parameters that specify their operations. Apollo-SV is both a domain (a.k.a. referential) and an application (a.k.a. methodological) ontology in the field of infectious disease epidemiology. We were surprised that we were unable to reuse classes from IDO for infection, pathogen, host, colonization, infectious disease, and transmission process. We conjecture that IDO's ontological analysis may have begun with a disease focus and worked from there to the nature of infection, whereas we began with a biological science perspective. Our focus differed fundamentally from IDO's concentration on how the terms are used in clinical medicine. In particular, our focus led us to a requirement to represent the process of infection, including key parts of this process such as the infectious period, as opposed to the steady-state, material-entity view of IDO. We note however that our definitions of infection, pathogen, host, and infectious disease do not conflict with how these phenomena are understood by clinical medicine and thus could be reused without difficulty by ontologies that support clinical applications. 
In fact, in the case of zoonoses and infections that result from a prior process of colonization, our representations are a marked improvement because our definition of infection acquisition permits cross-species transmission and infections resulting from self colonization, whereas IDO's definition of transmission process does not. Also, our definition of host and pathogen are more consistent with their usage by infectious disease specialists. We also could not reuse other prior work on ontologies that have overlap with Apollo-SV. This work includes the Epidemiology Ontology (EO) [33] and the Ontology for Simulation Modeling of Population Health (SimPHO) [34]. EO-like Apollo-SV-strives to meet Foundry principles [33]. However it, like IDO, also defines infection as a material entity. It erroneously defines infection acquisition as occuring only in humans and does not axiomatize its classes. Okhmatovskaia et al. do not define for SimPHO [34] any of the terms in Table 1. Further comparison is not possible because SimPHO is not publicly available for review/reuse. 2 Given that simulator configurations require representing several kinds of knowledge including probabilistic and mathematical knowledge, it was not possible to use an OWL2 representation in the Web services to configure simulators. At present the application that creates infectious disease scenarios does not invoke any description-logic reasoning supported by the axioms in Apollo-SV. Nevertheless, we found it advantageous to create the OWL2 representation and reuse it at the lower level of information representation of XSD. However, in other work, our OWL2 representation (i.e., Apollo-SV) supports reasoning in our ontology-based catalog of infectious disease epidemiology (OBC.ide), which is a catalog of datasets, publications, grey literature, and simulators [35]. The OBC.ide search interface makes use of multiple OWL2 reasoning capabilities including the "is a" hierarchy, transitive roles such as part of, and role chaining. Adaptation of Apollo-SV to this purpose required no re-axiomatization of the classes discussed here. Our future plans include expanding Apollo-SV and the XSD to cover additional simulators and types of information used in infectious disease epidemiology. Conclusions Apollo-SV captures the output of our ontological analysis of the entities in reality represented by epidemic simulator configuration and output. It also supplies the standardized terminology used in epidemic simulator configuration and output, which also includes an XSD-based syntax and database schema. We validated Apollo-SV through use in a simple end-user application that enables analysts to specify an infectious disease scenario and submit it to one or more of six simulators. Our analysis of biologicallygrounded epidemic simulators and our process of testing the ontology in software led to scientifically accurate definitions that we have found to be reusable across diverse simulators to date. When available, mathematical models of natural phenomena like epidemics are potentially useful starting points for ontology development. Endnotes 1 Closing schools is one infectious disease control strategy that simulators study for the control of influenza epidemics. 2 We are unable to find any remaining links to Sim-PHO, and past links while we were doing the work were broken at the time.
8,698.2
2016-08-18T00:00:00.000
[ "Computer Science", "Environmental Science", "Medicine" ]
Octave-spanning ultraflat supercontinuum with soft-glass photonic crystal fibers We theoretically identify some photonic-crystal-fiber structures, made up of soft glass, that generate ultrawide (over an octave) and very smooth supercontinuum spectra when illuminated with femtosecond pulsed light. The design of the fiber geometry in order to reach a nearly ultraflattened normal dispersion behavior is crucial to accomplish the above goal. Our numerical simulations reveal that these supercontinuum sources show high stability and no significant changes are detected even for fairly large variations of the incident pulse. 2009 Optical Society of America OCIS codes: (190.4370) nonlinear optics, fibers; (190.5530) pulse propagation and temporal solitons; (320.2250) femtosecond phenomena References and links 1. P. St. J. Russell, “Photonic-Crystal fibers,” J. Lightwave Technol. 24, 4729-4749 (2006). 2. D. Mogilevtsev, T. A. Birks, and P. St. J. Russell, “Group-velocity dispersion in photonic crystal fibers,” Opt. Lett. 23, 1162-1164 (1998). 3. A. Ferrando, E. Silvestre, J. J. Miret, and P. Andrés, “Nearly zero ultraflattened dispersion in photonic crystal fibers,” Opt. Lett. 11, 790-792 (2000). 4. A. Ferrando, E. Silvestre, P. Andrés, J. J. Miret, and M. V. Andrés, “Designing the properties of dispersionflattened photonic crystal fibers,” Opt. Express 9, 687-697 (2001). 5. J. M. Dudley, G. Genty, and S. Coen, “Supercontinuum generation in photonic crystal fiber,” Rev. Mod. Phys. 78, 1135-1184 (2006). 6. J. K. Ranka, R. S. Windeler, and A. J. Stentz, “Visible continuum generation in air-silica microstructure optical fibers with anomalous dispersion at 800 nm,” Opt. Lett. 25, 25-27 (2000). 7. X. Gu, L. Xu, M. Kimmel, E. Zeek, P. O’Shea, A. P. Shreenath, R. Trebino, and R. S. Windeler, “Frequency-resolved optical gating and single-shot spectral measurements reveal fine structure in microstructure-fiber continuum,” Opt. Lett. 27, 1174-1176 (2002). 8. K. L. Corwin, N. R. Newbury, J. M. Dudley, S. Coen, S. A. Diddams, K. Weber, and R. S. Windeler, “Fundamental noise limitations to supercontinuum generation in microstructure fiber,” Phys. Rev. Lett. 90, 113904 (2003). 9. A. Unterhuber, B Považay, K. Bizheva, B. Hermann, H. Sattmann, A. Stingl, T. Le, M. Seefeld, R. Menzel, M. Preusser, H. Budka, Ch. Schubert, H. Reitsamer, P. K. Ahnelt, J. E. Morgan, A. Cowey, and W. Drexler, “Advances in broad bandwidth light sources for ultrahigh resolution optical coherence tomography,” Phys. Med. Biol. 49, 1235-1246 (2004). 10. T. Hori, J. Takayanagi, N. Nishizawa, and T. Goto, “Flatly broadened, wideband and low noise supercontinuum generation in highly nonlinear hybrid fiber,” Opt. Express 12, 317-324 (2004). 11. H. Ebendorff-Heidepriem and T. M. Monro, “Extrusion of complex preforms for microstructured optical fibers,” Opt. Express 15, 15086-15092 (2007). 12. J. Y. Y. Leong, P. Petro, J. H. V. Price, H. Ebendorff-Heidepriem, S. Asimakis, R. C. Moore, K. E. Frampton, V. Finazzi, X. Feng, T. M. Monro, and D. J. Richardson, “High-nonlinear dispersion-shifted leadsilicate holey fibers for efficient 1-μm pumped supercontinuum generation,” J. Lightwave Technol. 24, 183190 (2006). 13. F. G. Omenetto, N. A. Wolchover, M. R. Wehner, M. Ross, A. Efimov, A. J. Taylor, V. V. R. K. Kumar, A. K. George, J. C. Knight, N. Y. Joly, and P. St. J. Russell, “Spectrally smooth supercontinuum from 350 nm to 3 μm in sub-centimeter lengths of soft-glass photonic crystal fibers,” Opt. Express 14, 4928-4934 (2006). 14. 
Schoot E-Catalogue 2003 Optical Glass, Schoot Glass, Mainz, Germany. 15. E. Silvestre, T. Pinheiro-Ortega, P. Andrés, J. J. Miret, and A. Ortigosa-Blanch, “Analytical evaluation of chromatic dispersion in photonic crystal fibers,” Opt. Lett. 30, 453-455 (2005). 16. E. Silvestre, T. Pinheiro-Ortega, P. Andrés, J. J. Miret, and A. Coves, “Differential toolbox to shape dispersion behavior in photonic crystal fibers,” Opt. Lett. 31, 1190-1192 (2006). #107640 $15.00 USD Received 18 Feb 2009; revised 3 Apr 2009; accepted 3 Apr 2009; published 15 May 2009 (C) 2009 OSA 25 May 2009 / Vol. 17, No. 11 / OPTICS EXPRESS 9197 17. G. P. Agrawal, Nonlinear Fiber Optics, 3 ed. (Academic Press, San Diego, CA, 2001). 18. V. L. Kalashnikow, E. Sorokin, and I. T. Sorokina, “Raman effects in the infrared supercontinuum generation in soft-glass PCFs,” App. Phys. B 87, 37-44 (2007). 19. W. J. Tomlinson, R. H. Stolen, and C. V. Shank, “Compression of optical pulses chirped by self-phase modulation in fibers,” J. Opt. Soc. Amer. B 1, 139-149 (1984). 20. N. A. Wolchover , F. Luan, A. K. George, J. C. Knight, and F. G. Omenetto, “High nonlinearity glass photonic crystal nanowires,” Opt. Express 15, 829-833 (2007). 21. A. Apolonski, B. Povazay, A. Unterhuber, W. Drexler, W. J. Wadsworth, J. C. Knight, and P. St. J. Russell, “Spectral shaping of supercontinuum in a cobweb photonic-crystal fiber with sub-20-fs pulses,” J. Opt. Soc. Am. B 19, 2165-2170 (2002). Introduction Photonic crystal fibers (PCFs) show a wide set of singular properties and, consequently, a plethora of new applications have been found in many areas of science and technology [1].One of their most interesting features is the ability to engineer the group velocity dispersion (GVD).Generally speaking, the whole dispersion derives from the combination of material and waveguide contributions.On one hand, the material dispersion component is fixed by the fabrication material, usually fused silica.However the high refractive-index contrast between the fused silica and the air holes leads to a very strong waveguide-dispersion contribution that, in fact, is very sensitive to the geometric distribution of the air holes in the photonic crystal cladding.Therefore, if we manipulate the geometry of the PCF, we can obtain very uncommon dispersion profiles.In this way, some PCF configurations that shift the intrinsic zero-dispersion wavelength (ZDW) of silica well below 1.3 µm were reported [2], as well as other PCF structures showing ultraflattened dispersion profiles [3,4]. 
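The statement that silica's intrinsic ZDW sits near 1.3 µm, and hence must be shifted or flattened by the cladding geometry, is easy to check from the material dispersion alone. The short script below uses the standard three-term Sellmeier fit for fused silica and finds where the second derivative of the refractive index changes sign; an analogous calculation for SF57 would need that glass's own Sellmeier coefficients, which are not reproduced here.

```python
# Locate the material zero-dispersion wavelength of fused silica, where the
# group-velocity dispersion (proportional to d^2 n / d lambda^2) changes sign.
import numpy as np

# standard Sellmeier coefficients for fused silica (wavelengths in micrometres)
B = np.array([0.6961663, 0.4079426, 0.8974794])
C = np.array([0.0684043, 0.1162414, 9.896161]) ** 2

def n_silica(lam_um):
    lam2 = np.asarray(lam_um, dtype=float) ** 2
    return np.sqrt(1.0 + sum(b * lam2 / (lam2 - c2) for b, c2 in zip(B, C)))

lam = np.linspace(1.0, 1.6, 601)                   # micrometres
d2n = np.gradient(np.gradient(n_silica(lam), lam), lam)
zdw = lam[np.argmin(np.abs(d2n))]
print(f"material zero-dispersion wavelength of fused silica: {zdw:.2f} um")  # ~1.27 um
```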
In addition, supercontinuum (SC) generation based on PCFs is currently a cutting-edge photonics research [5].SC provides a very attractive optical source for several applications, such as optical coherence tomography, ultrashort pulse generation, optical frequency metrology, etc.The conjunction of two unusual properties shown by PCFs, high modal confinement and ZDW's shift, has successfully allowed the achievement of ultrabroad SC spectra [5,6].Typically, SC generation in PCFs is realized by injecting the pump pulse into the anomalous dispersion region of the fiber, near the ZDW.The spectral broadening arises from the interplay among several nonlinear effects.In particular, soliton fission and Raman self-frequency shift are responsible for the long-wavelength component production, whereas dispersive wave generation originates the short-wavelength components.In this dispersion regime, the soliton fission process is strongly perturbed by nonlinear effects as the modulation instability (MI), higher-order dispersion terms, and Raman scattering, where the relative contribution of each one depends on pump pulse duration.The output is usually a SC spectrum showing significant spectral oscillations (around 20 dB), an unstable fine spectral structure, and low-coherence properties [7,8], which in many cases limit its practical applications, as for example in OCT [9].It also sets a fundamental limitation on the compression of ultrashort pulses.One way to avoid the above spectral fluctuations is to pump in the normal dispersion regime.This fact suppresses MI and soliton fission, and hence improves the coherence and spectral flatness, but with the drawback of a narrower spectral broadening [10]. Recent advances in the fabrication of soft-glass PCFs [11], i.e., PCFs made up of soft transparent materials that show a very high nonlinear-index coefficient, open new possibilities in SC generation.Up to now, the research work has focused its attention on the optimization of both the nonlinear response and the location of the ZDW [12].Some results showed that soft-glass PCFs generate a very broad SC, covering a bandwidth greater than 2000 nm, when operating at the anomalous dispersion regime.In this case, the SC suffers from the same lack of spectral flatness [13], as in fused-silica PCFs.However, one can expect that operating at the normal dispersion regime, the coherence and spectral flatness be improved, and the high nonlinearity may also compensate, at least to some extent, the above-mentioned spectral bandwidth reduction. The aim of this paper is twofold.First we recognize a soft-glass triangular PCF geometry that, as in fused-silica PCFs, shows ultraflattened normal dispersion over a wide wavelength interval around 1.55 µm.In a second stage, the above microstructured fiber provides an ultrawide (over an octave), very smooth and coherent SC when pumped with parameters corresponding to commercially available Er-doped femtosecond fiber lasers. Dispersion design The first step consists in exploring the possibilities to engineer the GVD in PCFs made up of soft glass.We have paid attention to the Schoot SF57 glass [14].This commercial leadsilicate glass exhibits very high nonlinearity and was already employed in the fabrication of complex PCF preforms showing up to 160 air holes [11].The dispersive properties of SF57 glass are significantly different to that shown by fused silica.In fact in Fig. 
1 the solid curve shows the GVD coefficient, β2, corresponding to bulk SF57 glass. In addition, SF57 glass shows a higher refractive index than fused silica (1.81 against 1.44, both at 1.55 µm), leading to a higher waveguide-dispersion contribution. It is a challenge to search for soft-glass PCF geometric parameters that achieve an ultraflattened dispersion profile in the region around 1.55 µm. To this end, we first adapted the design procedure established in [4] to the current case and used our own numerical algorithm [15] to evaluate both β2 and the effective mode area as functions of the frequency. In this way, for a soft-glass PCF with equally sized air holes, the best flattened dispersion profile we achieved is shown in Fig. 1 (short-dashed curve). The pitch and the radius of the holes of the above triangular PCF, denoted as fiber #1, are respectively. The result is still far from our goal, a positive and nearly constant β2 profile. In a second phase, we add an additional degree of freedom and consider a triangular soft-glass PCF with two families of air-hole sizes. Then, using our recently developed inverse design technique [16] and starting from the above result, we have reached the impressive ultraflattened dispersion behavior shown in Fig. 1. We aim at a low and positive β2 because it is a good choice to accomplish our next objective. Needless to say, it is possible to achieve a flatter PCF dispersion behavior if we consider a higher number of rings with different-sized air holes. However, we only take into account PCFs with relatively simple hole structures, as in the above PCF, called fiber #2, to keep up the feasibility of their fabrication. Nonlinear propagation and SC generation Our second goal is the generation of an ultrawide and very smooth SC. To this end, nonlinear propagation along the z-axis of the fiber is evaluated by integrating in a conventional way the generalized nonlinear Schrödinger propagation equation, Eq. (1) [17], whose last term of the right-hand side models nonlinear effects, such as self-phase modulation (SPM), self-steepening, formation of shock waves, and stimulated Raman scattering. In our simulations, the nonlinear-index coefficient of the SF57 glass [18] is around 15 times larger than that of fused silica, and we stress that the frequency dependence of the effective mode area is considered in our numerical calculations. The nonlinear response function includes both instantaneous electronic and delayed Raman contributions. The first term of the right-hand part in Eq. (1) describes the whole dispersion effects. In fact, we have taken into account the entire dispersion operator by performing, in the frequency domain, the multiplication of the complex spectral envelope by the corresponding dispersive phase factor; the dispersion of the SF57 glass is described by means of an analytic form based on experimental measurements [18]. Note that we have neglected linear losses since the propagation distance is only a few centimeters for all cases. It is interesting to keep in mind that in the normal dispersion regime the spectral broadening factor for femtosecond pulses is given by the parameter N = (γ P0 T0^2 / |β2|)^(1/2). In Fig.
2 we present some numerical simulations showing the spectral evolution of the pulsed light traveling through fibers #1 and #2 for three selected propagation lengths.At initial distances we point out that SPM chiefly produces the rapid expansion of the spectrum.At this first stage, we may recognize that the typical deep oscillations in the spectral profile produced by SPM result from the interference between temporally shifted spectral components.On the other hand, the spectral asymmetric behavior is a consequence of self-steepening.As the pulse propagates the spectral modulation gradually decreases due to dispersion, but self-steepening simultaneously stresses dispersion effects, preventing spectral broadening.In this case, it appears that a 10-cm fiber #2 length is enough to attain the wanted profile.In fact, the region of interest is slightly narrower, although more flattened, after 15-cm propagation.After inspection of Fig. 2, the conclusion is clear.The SC generated with fiber #2 is broader and very flat and the explanation is simple.The SC is very flat since fiber #2 shows an ultraflattened dispersion curve, as would be expected.Likewise, the greater the broadening factor N, the wider the SC.This assessment applies to fiber #2 due to two reasons.The first one is that, 2 β -coefficient is rather smaller than that of fiber #1.The second one, and more important, is that the nonlinear coefficient γ is around 2.5 times bigger.We gather the above statement when we compare the pitch value of fibers #2 and #1, which finally determine the effective mode area, eff A , of the corresponding guided modes.In other words, fiber #2 shows two key features, a more flattened dispersion and a greater nonlinear coefficient compared with fiber #1. SC flatness Note that the scale of the spectral intensity in Fig. 2 is linear.In logarithmic scale, we recognize an ultraflat SC spectrum with deviations less than 3 dB over an octave (see solid curve in Fig. 3).In order to put this result in context, we would like to emphasize that experimental octave-spanning SC generation in soft-glass PCFs was already reported [13,20].However in both experiments the SC presents fluctuations around 20 dB.A qualitatively different attempt using a fused-silica PCF was also reported [21].In this last case, the oscillations approximately reach 10 dB.We claim that the key point to attain our objective was the design of the fiber geometry in order to reach a low, positive and ultraflat dispersion shape.In principle, we expect that this behavior must be very robust against variations of the input pulse characteristics.In order to verify this high stability, we have calculated the output power spectrum for fairly large peak power fluctuations ( % 4 ± ) of the incoming pulse.The results are illustrated in Fig. 
We realize that the three curves overlap in the region of interest. Of course, our numerical simulations do not include dispersion variations resulting from imperfections of material properties and manufacture, as well as attenuation; in this sense, our results are ideal in comparison with experimental verifications. Note that in the present work we focus our attention on the flatness and wideness of the emerging SC, not on its power density level. We stress that these characteristics are preserved, even for different lasers, provided that we operate with β₂ > 0. Going one step further, we compared the output spectrum for three different commercial Er-doped femtosecond fiber lasers, among them the Femtolite Ultra Bx-60 and the CF1560-HP (λ = 1550 nm). This is a further reason why we do not consider insertion losses. We would certainly be able to obtain better results if the dispersion profile were specifically adapted to each laser system with its corresponding coupling efficiency. This discussion reveals once more the central role that the GVD design plays in SC generation.

Conclusion

Our main claim is to highlight the possibility of finding ultraflat dispersion designs based on easy-to-fabricate soft-glass PCFs. Due to the inherently high nonlinearity of the material, the above fibers are well adapted to produce octave-spanning ultraflat SC with different pumping femtosecond lasers when operating in the fiber's normal dispersion regime.

Fig. 1. The solid curve shows the GVD coefficient, β₂, corresponding to bulk SF57 glass.
Fig. 2. Normalized spectral intensity vs. wavelength for three propagation distances. Solid curves correspond to fiber #2 and broken curves to fiber #1.
Fig. 3. Output power spectrum vs. wavelength after 10-cm propagation through fiber #2 for three different peak powers of the incident pulse.
3,665.2
2009-05-25T00:00:00.000
[ "Physics" ]
New Monotonic Properties for Solutions of Odd-Order Advanced Nonlinear Differential Equations: The present paper studies the asymptotic and oscillatory properties of solutions of odd-order differential equations with advanced arguments in a noncanonical case. By providing new and effective relationships between the corresponding function and the solution, we present strict and new criteria for testing whether the studied equation exhibits oscillatory behavior or converges to zero. Our results contribute to oscillation theory by presenting theorems that improve and expand upon the results found in the existing literature. We also provide an example to corroborate the validity of our proposed criteria.

Introduction

In this paper, we are concerned with the asymptotic and oscillatory behavior of solutions of higher-order differential equations with advanced arguments (1), where α and β are quotients of odd positive integers and β ≥ α. We assume the following.

Definition 2. A solution x(⊤) of (1) is said to be oscillatory if it is neither eventually positive nor eventually negative; otherwise, it is called nonoscillatory. The equation itself is said to be oscillatory if all its solutions oscillate.

Oscillation theory, a key concept in physics and engineering, examines periodic oscillation in systems such as pendulums and electrical circuits. Characterized by amplitude, frequency, and phase, these oscillations offer insights into stability, resonance, and energy transfer. By studying these patterns, scientists and engineers can predict behaviors, design stable structures, and innovate technologies. Understanding oscillation theory is crucial for progress in mechanical engineering, electronics, and biological systems [1-6].

Odd-order differential equations, that is, differential equations whose highest derivative is of odd order, play a crucial role in many scientific and engineering fields, as they can be used to describe a variety of natural phenomena and technological applications. This type of equation is a powerful tool for analyzing dynamic systems that change over time or with other variables, such as mechanical vibrations, fluid flow, and heat transfer. The study of oscillatory solutions of odd-order differential equations is an important area of research, as it helps in understanding how systems stabilize and respond to disturbances. Many mathematical and theoretical techniques, such as spectral analysis and numerical methods, are relied upon to analyze these solutions and establish their oscillation. The relationship between symmetry and odd-order differential equations is also pivotal for simplifying and solving them: by understanding the symmetries, we can transform, reduce, and sometimes even directly solve these equations, making the concept of symmetry a powerful tool in mathematical analysis and physics.
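To make the oscillation/convergence dichotomy concrete before the general setting, it helps to look at a textbook constant-coefficient example (this model equation is not one of the equations studied in this paper; it only illustrates the terminology of Definition 2). For the third-order equation x'''(⊤) + x(⊤) = 0, the characteristic equation and general solution are

\[
r^{3}+1=0 \;\Longrightarrow\; r_{1}=-1,\qquad r_{2,3}=\tfrac{1}{2}\pm i\tfrac{\sqrt{3}}{2},
\]
\[
x(\top)=c_{1}e^{-\top}+e^{\top/2}\Bigl(c_{2}\cos\tfrac{\sqrt{3}}{2}\top+c_{3}\sin\tfrac{\sqrt{3}}{2}\top\Bigr).
\]

If c_2 = c_3 = 0, the solution is eventually of one sign and converges to zero (nonoscillatory); otherwise the growing trigonometric part dominates e^{-⊤}, the solution has arbitrarily large zeros, and it is oscillatory in the sense of Definition 2. The criteria developed below aim to guarantee exactly this kind of dichotomy, every solution either oscillating or tending to zero, for the far more general Equation (1).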
Delayed differential equations are considered an important and interesting branch in the field of applied mathematics and systems analysis.These equations are characterized by considering not only the current state of the system but also its states at previous moments in time.This makes them a powerful tool for describing systems affected by their past, such as biological, economic, engineering, and other systems.A delayed differential equation includes variables that depend on their values at previous times.This type of delay can be constant or variable, and the delay may be finite or distributed over a period of time.These equations show clear importance in many fields; in biology, for example, they can be used to model biological systems where time delay is a crucial element, such as in cellular processes or epidemics.In engineering, they are used to model systems that involve elements like the time required for signal transmission or the time interval for control.One of the principal challenges confronting these equations is their complexity; exact solutions are rare, and reliance is often placed on approximate or numerical solutions.This requires the use of advanced mathematical methods and, sometimes, specialized software.As for the more complex nonlinear delayed differential equations, finding a closed-form solution becomes an extremely difficult task (see [7][8][9][10][11][12][13][14]). Advanced arguments in differential equations typically refer to the presence of terms in the equation where the independent variable (often t) is incremented by some positive constant.These types of equations are a subset of functional differential equations and are called "differential equations with advanced arguments" or "forward delay differential equations".Advanced differential equations are critical in the modern era due to their widespread applications across various fields.Models like the SIR (Susceptible, Infected, Recovered) model use differential equations to predict the spread of diseases and the impact of interventions.They help in studying population growth, predator-prey interactions, and ecological systems.Furthermore, techniques like MRI and CT scans rely on solving differential equations to reconstruct images from raw data.Moreover, they allow scientists and engineers to predict the behavior of complex systems under various conditions and provide a deeper understanding of natural and man-made systems, facilitating innovation and discovery [15][16][17][18][19]. The authors in [20] discussed the criteria ensuring that all solutions oscillate in functional differential equations: Baculíková and Džurina [21] explored the asymptotic properties and oscillation of nth-order advanced differential equations: They obtained oscillation results based on the Riccati transformation under conditions and ω(⊤) ≥ ⊤. The authors in [7,22,23] established some oscillation criteria and solutions of the following higher-order differential equations: Zhang et al. [24] studied the oscillatory behavior of the solutions of Equation ( 6) and obtained sufficient conditions to ensure the oscillation of the solutions of Equation ( 6) under the conditions Special cases of (1) have been discussed as less general equations, of which we mention, for example, Yao et al. 
[25], who studied some results on the oscillation of third-order differential equations with advanced arguments. They provide criteria to ensure the asymptotic or oscillatory behavior of solutions of Equation (7). On the other hand, Dzurina and Baculikova [26] studied a less general case of (7). In [27], some new criteria were established for the oscillation of third-order functional differential equations of a related form. The possibility of obtaining criteria that guarantee the oscillation of solutions of Equation (1) has been very limited, compared with the case ϱ(⊤) = 0, because of the difficulty of establishing applicable relationships between the corresponding function and the solution when 0 ≤ ϱ(⊤) ≤ ϱ0 < ∞. The purpose of this paper is to provide new capabilities for identifying conditions that ensure oscillatory behavior of the solutions of Equation (1). We introduce several new relationships that link the solution to the corresponding function and combine them to reach new criteria that guarantee the oscillation of solutions of Equation (1).

This paper is organized as follows. Initially, we present the equation targeted by this study along with the necessary conditions, in addition to some relevant background and related previous studies that led to Equation (1). In the second section (Preliminaries and Existing Results), we present various lemmas drawn from different references that are used to prove our main results. In the Oscillation Results section, we present results that include the key relationships we later use, followed by the results through which we ensure the oscillation of the solutions of Equation (1); at the end of the section, we provide examples that support and confirm the validity of our results. Finally, in the Conclusions section, we briefly summarize the study and the methods used and then propose an idea for future work that may benefit researchers interested in the field.

If these conditions hold, then the advanced differential inequality (10) has no positive solution.

Notations and Definitions

Throughout this paper, we use the following notations. Moreover, assume that there exists a function satisfying the stated conditions.

Remark 1. All the functional inequalities presented in this manuscript are supposed to hold eventually, that is, for all ⊤ large enough.

Remark 2. It should be observed that if y represents a solution of Equation (1), then −y also represents a solution of Equation (1). Consequently, regarding nonoscillatory solutions of Equation (1), it suffices to restrict our attention to positive ones.

Remark 3. The previous lemma includes the relationships that connect a solution of Equation (1) with the corresponding function v(⊤). These relationships enable us to overcome the restrictive condition and thus obtain highly effective oscillation conditions.

Proof. The proof follows directly from Lemmas 4 and 5 and Theorem 1.

Corollary 2. Let ι ≥ 3 be odd and α < β. Suppose that (20) holds and that there exists a continuously differentiable function Ω satisfying the stated conditions; then (1) is almost oscillatory.

Remark 4. We observe that every solution of (42) either oscillates or approaches zero, and that one of its solutions is x(⊤) = e^{−2⊤}.
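The dichotomy in Remark 4 can also be checked numerically. Equation (42) itself is not reproduced in this text, so the short sketch below (Python, illustrative only) simply applies a generic sign-change counter, of the kind one would use on numerically integrated solutions, to the stated solution x(⊤) = e^{−2⊤} and, for contrast, to an oscillatory profile.

import numpy as np

def sign_changes(f, t0, t1, n=10000):
    # count sign changes of f on [t0, t1]; an oscillatory solution keeps producing them
    t = np.linspace(t0, t1, n)
    y = f(t)
    return int(np.sum(np.sign(y[:-1]) * np.sign(y[1:]) < 0))

x = lambda t: np.exp(-2.0 * t)          # the solution quoted in Remark 4
print(sign_changes(x, 0.0, 50.0))       # 0: eventually of one sign, i.e., nonoscillatory
print(x(50.0))                          # ~3.7e-44: it converges to zero
print(sign_changes(np.sin, 0.0, 50.0))  # 15: a profile with arbitrarily many zeros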
Conclusions

This note presents a new study of the asymptotic and oscillatory properties of a specific class of odd-order advanced differential equations in a noncanonical case. We have obtained a new comparison theorem for deducing the oscillation property of (1) from the oscillation of a pair of first-order differential equations. By providing new relationships that link the solutions of the studied equation to the corresponding function, we have established new and effective criteria for examining whether the solutions of the equation exhibit oscillatory behavior or tend to zero. Our results contribute to enhancing the understanding of the behavior of the solutions of the studied equation and to expanding and completing the studies found in the previous literature. On the other hand, the possibility of extending this study remains an inspiring research direction: existing techniques could be used to establish criteria that characterize the oscillatory behavior of solutions for wider classes of advanced higher-order differential equations.
2,195.2
2024-06-29T00:00:00.000
[ "Mathematics" ]
Air hydrodynamics of the ultrafast laser-triggered spark gap

We present space- and time-resolved measurements of the air hydrodynamics induced by ultrafast laser pulse excitation of the air gap between two electrodes at high potential difference. We explore both plasma-based and plasma-free gap excitation. The former uses the plasma left in the wake of femtosecond filamentation, while the latter exploits air heating by multiple-pulse resonant excitation of quantum molecular wavepackets. We find that the cumulative electrode-driven air density depression channel initiated by the laser plays the dominant role in the gap evolution leading to breakdown.

INTRODUCTION

Considerable work has been done over the past several decades investigating the triggering of high voltage (HV) gas discharges by intense laser pulses. Spark gap discharges are used in widespread applications, including HV surge protection and power switching, high energy laser triggering, and ignition sources in combustion engines. The theory of spark-gap discharges is rich in basic physics and has been discussed at length in the literature [1-10]. Spark gaps rely on acceleration of free electrons between the cathode and anode by the gap electric field, driving further ionization by collisional avalanche ionization. In the conventional picture, breakdown starts with the development of one or more 'streamers', i.e., avalanche-ionization-induced protrusions of charge, which, under the action of additional resistive heating of the gas and consequent lowering of neutral gas density, create a higher-conductance channel bridging the cathode and anode. Laser heating of the intra-gap gas can enable control of the discharge current path [11]. The use of low-energy ultrafast laser pulses can improve this control by generating, via multiphoton or field ionization, a continuous extended length of low-density plasma [12]. Extended focal volumes can be generated by optical elements such as cylindrical lenses or axicons [12] or by relying on nonlinear self-guiding by femtosecond filamentation [13,14]. In the case of filamentation in air, on-axis electron densities are typically ≲ 10^16 cm^-3 [15], constituting only ~0.1% fractional ionization at atmospheric density. Few-nanosecond Q-switched lasers, by contrast, can generate higher plasma densities through electron avalanche, but longitudinally extended and contiguous energy deposition is a challenge. The use of double-pulse schemes [12,16] or picosecond lasers [17] has been proposed as a solution providing higher-density contiguous plasmas. Regardless of the pulsewidth used, laser triggering of HV discharges in past work has depended on gas ionization by the laser, with the discharge initiated either by the newly conductive channel enabled by the plasma [18], or by the reduced gas density channel driven hydrodynamically by the gas heating [19-21], where in the latter case the lower density reduces the breakdown threshold electric field [22]. For femtosecond pulses, because of the relatively low plasma densities and conductivity generated, it has been proposed that the hydrodynamic response and on-axis density reduction is the primary mechanism responsible for discharge initiation [20,21]. Among the things we demonstrate in this paper is that a density depression generated with little to no ionization is equally effective in initiating a discharge. Early work by Loeb [6-8] and Meek [9] explained HV breakdown discharges in terms of streamer formation.
In all experiments using laser pre-ionization of spark gaps, it is clear that the electric-field-driven evolution of the inter-electrode plasma and gas before breakdown is crucial in determining the characteristics of the breakdown itself. However, to our knowledge, the relative roles of the conducting plasma and the hydrodynamic gas density reduction in promoting breakdown have not been assessed. In this paper, we perform space- and time-resolved measurements of the plasma and gas evolution in a high voltage electrode gap at times after the application of an ultrashort laser pulse or pulse train, up to the point of breakdown. We find that the cumulative electrode-driven air density depression channel initiated by the laser pulse plays the dominant role in the gas evolution leading to breakdown.

II. EXPERIMENTAL SETUP

The spark gap consists of two hemispherical tungsten electrodes of 1.27 cm radius spaced 3 mm−10 mm apart, with 2 mm diameter axial holes for entrance and exit of the heating laser pulse or pulse train, plus an interferometric probe pulse (see Fig. 1(a)). To generate pulse trains, single pulses from a Ti:Sapphire laser (λ = 800 nm) were first passed through a nested interferometer [28] ('pulse stacker') which generates eight replica pulses, with the inter-pulse delays controlled by motorized translation stages (~10 fs step size). For single pulse experiments, all but one of the pulse stacker arms were blocked. The pulse or pulse train was then passed through an adjustable grating compressor allowing control of the pulsewidth. Inserting the pulse stacker upstream of the compressor avoided nonlinear distortion in the stacker's beamsplitting optics. The laser was then axially focused through the electrode holes (using an f = 50 cm lens at f/45), giving a confocal parameter of 4 mm and a 1/e² intensity radius of 23 µm, with the beam waist placed midway between the electrodes. In general, the axial extent of the gas excitation was longer than the confocal parameter owing to the onset of self-focusing and filamentation, as discussed later. As shown in Fig. 1(a), the electrodes were connected in parallel with a C = 4.4 nF capacitor bank, which was charged through a 1 kΩ resistor up to +30 kV by a DC HV power supply (Spellman High-Voltage model SL30PN10). The diode, inductor, and capacitor near the power supply act as an RF choke to shunt to ground any strong transients from the spark gap breakdown. A current measurement circuit (inside the green dashed box) is inserted in series with the spark gap ground electrode for some of our measurements. The gas and plasma evolution between the electrodes was monitored by a variably delayed interferometric probe pulse (λ = 532 nm, 10 ns) electronically synchronized (~1 ns jitter) and co-propagating with the femtosecond air excitation pulse(s) (see Fig. 1(a)). The probe pulsewidth and timing jitter were small compared to the onset timescale of breakdown (see below). The interaction region was end-imaged through the hole in the positive electrode onto a folded wavefront interferometer, with the object plane adjustable. Interferometric background images (femtosecond pulse off) were taken on every shot by passing the pulse(s) through an optical chopper before the compressor. The probe beam was cleaned by a spatial filter prior to the interaction region, producing smooth, low-noise phase fronts. With use of the chopper, our single-shot interferometric measurements were limited to a noise floor of < 40 mrad. Extraction of the interferometric phase Δφ(r) was performed as in ref.
[29], yielding refractive index perturbation profiles Δn(r) axially averaged over the gap width, where r is a transverse coordinate with respect to the spark gap axis. Figure 2 shows a time sequence of air refractive index perturbation profiles Δn(r) following application of a 65 µJ, 100 fs FWHM laser pulse to a 5.5 mm electrode gap for (a) 0 V and (b) 17 kV/cm applied to the gap. Based on measurements and simulations in our prior work [29-31], the profiles in (a) are explained as follows. When a 50-100 fs laser pulse is focused into air, energy is deposited primarily through optical field ionization and non-resonant rotational Raman excitation of the air molecules (the laser bandwidth is not wide enough for vibrational Raman excitation) [32]. The laser-produced plasma recombines to the neutral gas on a < 10 ns timescale [15], while the excited molecular rotational wavepacket collisionally decoheres on a ~100 ps timescale [32]. Owing to the finite thermal conductivity of the surrounding neutral atmosphere, the heated radial zone initially retains approximately the filament core radius of ~50 µm [15,29]. The result is an extended region of high pressure at temperatures up to a few hundred K above ambient [30]. The onset of this pressure spike is much faster than the acoustic timescale of the gas, ~100 ns, set by the channel radius and the speed of sound, ~3.4 × 10^4 cm/s; a single-cycle acoustic wave is accordingly shed from the heated channel ~100 ns after the filament is formed [31], as seen in Fig. 2(a). Later panels show that by ~1 µs, the acoustic wave has long since left the filament region, leaving a density depression ('density hole') at elevated temperature and in pressure equilibrium with the surrounding gas. Over ~100 µs to ~1 ms, the density hole decays by thermal diffusion. After pressure equilibrium has been achieved, the 'area' of the density hole profile is a proxy for the laser energy deposited per unit length in the gas, as shown in ref. [33]; in Eq. (1), cv is the specific heat of air at constant volume, ρ0 and T0 are the ambient gas density and temperature, k = 2π/λ is the probe wavenumber, and n0 is the gas refractive index. While Eq. (1) was applied in ref. [33] to femtosecond laser-generated density holes, it also applies to calculating the energy deposited by any heating mechanism that is fast compared to thermal diffusion into the surrounding gas, which has a ~millisecond timescale. We use this broader applicability of Eq. (1) in much of the analysis of this paper.

Figure 2. (a) Evolution of refractive index shift profiles Δn(r) at a sequence of probe delays following heating at t = 0 by a single 100 fs, 65 µJ laser pulse in the spark gap with zero gap field. The outward-propagating yellow ring, seen in the frames at delays < 600 ns, is a single-cycle acoustic wave. (b) Same measurement as (a), but with a 17 kV/cm gap field. In the case of HV applied across the gap, the on-axis density hole is observed to deepen and widen relative to the 0 V case at all delays.

A. ROLE OF FILAMENT PLASMA AND DENSITY HOLE IN HIGH VOLTAGE BREAKDOWN

We first assess the roles of the laser-produced plasma and the gas density depression in the high voltage breakdown process. A first set of experiments was performed in which air density holes of the same depth were generated, either with or without initial plasma. In the case of a single filamenting pulse that generates plasma in the usual manner, the pulse energy was chosen (22 μJ) to produce an on-axis density hole depth Δρ/ρ0 ~ 3% at 1 μs delay after the pulse, where ρ0 is the background air density and Δρ is the on-axis density reduction.
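Since Eq. (1) itself is not reproduced in this text, the sketch below only illustrates the procedure it implies: integrating the (negative) index perturbation over the transverse plane and scaling by ambient gas properties to obtain an absorbed energy per unit length. The prefactor ρ0·cv·T0/(n0−1) is an assumed form consistent with the quantities listed above, not necessarily the exact expression of Eq. (1), and all numbers are illustrative placeholders.

# Illustrative estimate of deposited energy per unit length from an index-depression profile.
import numpy as np

rho0 = 1.2          # kg/m^3, ambient air density
cv   = 718.0        # J/(kg K), specific heat of air at constant volume
T0   = 293.0        # K, ambient temperature
n0m1 = 2.7e-4       # n0 - 1 for air near 532 nm (approximate)

def energy_per_length(r, dn):
    # r [m], dn = axially averaged index perturbation (negative in the hole); returns J/m
    # assumed form: rho0*cv*T0/(n0-1) * integral of (-dn) over the transverse plane
    return rho0 * cv * T0 / n0m1 * np.trapz(-dn * 2.0 * np.pi * r, r)

r  = np.linspace(0.0, 300e-6, 600)                    # radial grid, m
dn = -0.03 * n0m1 * np.exp(-(r / 50e-6)**2)           # ~3% deep, ~50 µm wide hole (made up)
print(energy_per_length(r, dn), "J/m (illustrative)")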
In the plasma-free case, we achieved the same Δρ/ρ0 ~ 3% hole depth at 1 μs delay by using an 8-pulse sequence of 12.5 μJ pulses (below the ionization threshold of the oxygen molecule) from the pulse stacker to rotationally heat the air's nitrogen molecules. The inter-pulse timing in the sequence was adjusted to ~8.3 ps (the rotational revival time of N2) in order to maximize the rotational wavepacket excitation and air heating [34]. On the basis of oxygen's ionization rate intensity dependence, ∝ I^8, we expect single-pulse excitation to produce at least (22 µJ/12.5 µJ)^8 ~ 90× more plasma than the pulse sequence.

Figure 3. (a) Energy deposited in the intra-gap air as a function of gap field for the case of an initial plasma (green curve, single 22 µJ laser pulse) and the case of little to no initial plasma (red curve, 8-pulse sequence with 12.5 µJ/pulse). The electrode spacing is 4 mm. For the green curve, a single laser pulse formed a plasma filament between the electrodes. For the red curve, the air in the electrode gap was heated via N2 rotational excitation by the resonant 8-pulse sequence. In both cases, the initial on-axis density hole depth was Δρ/ρ0 ~ 3% at a delay of 1 µs (cf. Fig. 1(b)). Each point is an average over 25 consecutive laser shots, while the error bars correspond to the standard deviation. The plotted points terminate where breakdown occurs.

Below ~13 kV/cm, the energy absorbed in the gas (~0.05 µJ) in both cases is consistent with the Δρ/ρ0 ~ 3% density hole imprinted by the single pulse or pulse train. Above ~13 kV/cm, there is increasing gas heating in the single-pulse (plasma) case, consistent with electron impact ionization and resistive heating driven by the high voltage. At the ~13 kV/cm threshold, the energy gained by an electron over a mean free path in air of ~0.5 μm is ~1 eV [35,36]. This is sufficient for electrons to reach several eV over multiple collisions or in the tail of the distribution, enough energy to surmount the nitrogen vibrational ²Πg shape resonance peaking past ~2 eV [1]. We speculate that, because of this vibrational energy sink, gap fields below ~13 kV/cm are unable to accelerate electrons sufficiently to accumulate the ~12-15 eV needed for impact ionization of O2 and N2. Both the additional carriers and the higher-energy electrons then heat the air via elastic and inelastic collisions, rapidly deepening the density hole and increasing the deposited energy until the onset of breakdown at ~22 kV/cm, as seen in the green curve. In both plasma-free and plasma cases, however, the breakdown threshold is ~22-23 kV/cm, and it occurs roughly where the energy absorbed by the gas (as measured by the density hole volume) is comparable, at ~0.5 µJ. This suggests that the density hole is the main factor in setting the breakdown threshold. To the extent that pre-existing free electrons are involved, their acceleration in the gap field serves mainly to heat the air and generate the density hole. In the nominally plasma-free case (red curve), pre-breakdown gas heating appears to have occurred, but mainly at fields just below the breakdown level. In another view of the dynamics induced by a single pulse, Fig. 3(b) shows the deepening and widening of the density hole with increasing gap field, until breakdown occurs at ~23 kV/cm. In the next set of experiments, we examined the effect of the energy of a single filamenting (plasma-producing) laser pulse on the onset of breakdown.
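Before moving on, two of the numbers quoted in this subsection can be reproduced with one-line arithmetic; the rotational constant is a standard literature value and the I^8 scaling is the usual multiphoton order assumed for O2 at 800 nm, neither taken from this work's raw data.

c_cm_s = 2.998e10                 # speed of light, cm/s
B_N2   = 1.9896                   # N2 rotational constant, cm^-1 (standard value)
T_rev  = 1.0 / (2.0 * B_N2 * c_cm_s)
print(T_rev * 1e12, "ps")         # ~8.4 ps, cf. the ~8.3 ps inter-pulse spacing used here

E_single, E_train = 22.0, 12.5    # pulse energies in µJ, from the text
print((E_single / E_train)**8)    # ~92, i.e., the ~90x plasma ratio from the I^8 O2 yield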
Figure 4 plots energy deposition, determined via Eq. (1), as a function of gap field (gap length 5.5 mm) for four laser pulse energies. It is seen that for higher pulse energy, more energy is deposited in the gap and the gap breakdown field (where the curves terminate) is reduced. At fields below the sharper upturn of each curve, the energy deposition is seen to increase slowly or stay roughly constant. For each curve, the upturn is associated, as in Fig. 3, with the onset of sufficient electron acceleration for impact ionization of O2 and N2, along with increasing gas heating and widening/deepening of the density hole, which increases channel conductivity. The higher initial electron densities generated by higher laser energies serve to more quickly establish the density hole volume needed for the onset of breakdown, which therefore occurs at lower gap fields.

B. EFFECT OF SPARK GAP ELECTRODE SEPARATION

Each curve in Fig. 5(a) terminates at the breakdown field, near ~15 kV/cm. For the smaller gaps, the effective breakdown field is slightly larger (~16 kV/cm) because, as seen in Fig. 1(b), the field peaks over a smaller fraction of the gap width. For the longer gaps, the peak field occupies a larger fraction of the gap, and the breakdown field converges to ~14-15 kV/cm. In all of these runs, the laser vacuum confocal parameter is ~4 mm, centred on the gap, with a 23 µm spot size. However, filamentary propagation is responsible for extended plasma generation, increasing the electron density away from the vacuum beam waist by many orders of magnitude. This is seen in the filamentation simulations of Fig. 5(b), in which the pulse propagates left to right and which are compared to the multiphoton ionization yield, ∝ I^8, of a linearly propagating pulse. These simulations were performed with a GPU implementation of the unidirectional pulse propagation equation, which includes the full molecular response of air [37,38]. By comparison, the multiphoton ionization yield of a linearly propagating pulse is many orders of magnitude lower away from the linear beam waist. It is seen that the filament peak electron density occurs slightly upstream of the linear beam waist, consistent with self-focusing and filamentation.

The plots of Fig. 5(a) show that, at low fields, the energy deposition per unit length remains roughly constant and is higher for smaller gaps. This is consistent with laser energy deposition from the filament plasma, whose axially averaged electron density is higher for short gaps (see Fig. 5(b)). Only once the field increases to ~8 kV/cm do the curves for all gaps begin to converge, as resistive heating (by electrons driven by the gap field) begins to dominate.

C. TIME DEPENDENCE OF INTRA-GAP GAS HEATING

Without applied HV, after the filament-heated air sheds its single-cycle acoustic emission and achieves pressure equilibrium (as seen in Fig. 2(a) for delays ≳ 200 ns), the density hole decreases in depth and widens as thermal diffusion to the surrounding air proceeds. With the application of HV across the gap, the gas is heated continuously after the initial filament energy deposition (as seen in the widening and deepening density hole of Fig. 2(b)). Figure 6 shows plots of the gas heating as a function of time in a 5.5 mm gap for a range of voltages at a fixed filamenting laser pulse energy (65 μJ, 100 fs). The highest voltage was intentionally kept just below the breakdown threshold. By t = 200 ns (indicated by the vertical black line), the acoustic wave has just been shed from the density hole; it is only after that time that Eq. (1) can be used to determine absorbed energy.
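The thermal-diffusion timescales invoked above (tens of microseconds for the initial ~50 µm channel, approaching a millisecond once the hole has widened to a few hundred microns) follow from a one-line estimate; the diffusivity below is a standard room-temperature value for air, not a number taken from this work.

D_air = 1.9e-5                        # m^2/s, thermal diffusivity of room-temperature air
for a in (50e-6, 300e-6):             # initial core radius and a widened channel, m
    print(a, a**2 / (4.0 * D_air))    # ~3e-5 s and ~1e-3 s, i.e., tens of µs to ~1 ms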
By ~200 ns, the gap HV has already heated the air, as indicated by the increase of the density hole volume with gap voltage, and the heating is seen to continue out to at least ~100 µs. This heating remains predominantly localized to the density hole region imprinted by the initial laser pulse, as seen in Fig. 2(b) and Fig. 3(b).

Figure 6. Density hole volume (a proxy for absorbed energy) vs. probe delay for a range of gap voltages; only after the vertical line at 200 ns can Eq. (1) be applied. The density hole volume is seen to continuously increase out to the maximum probe delay of 100 µs, with an increasingly rapid increase with gap voltage. Each point is an average of 25 shots, and the curves are best fits to the points. Even by 200 ns, higher gap voltages are seen to produce more heating.

D. PRE-BREAKDOWN GAP CURRENT MEASUREMENTS

To verify that the pre-breakdown air heating is resistively driven by the gap voltage, we inserted an auxiliary current monitor, shown by the dashed green box in Fig. 1(a). It consists of a 100 MΩ resistor in parallel with a miniature gas discharge tube (LittelFuse CG110) linked to ground from the gap electrode. The discharge tube acts as a shunt to ground for the extremely large transient currents generated at spark gap breakdown. The measured gap current transients (Fig. 7) show a sharp rise at the arrival of the laser pulse, followed by a rollover and then an exponential decay, beginning at approximately the vertical dashed black lines; the sharp-rise measurement is limited by the probe and oscilloscope frequency response. As the gap field increases, the rollover part of the curves shortens and the response approaches a pure exponential decay following the sharp rise, with a decay time of 115 μs (see the fit in Fig. 8(a)). The transported charge provides a rough estimate of the free-electron density in the channel, n_e ≈ Q/(e π a² d), where a ~ 50 μm is the approximate air channel radius (see Fig. 2(b)) and d = 4 mm is the gap spacing. Using Q ~ 2 pC gives n_e ~ 4 × 10^11 cm^-3. This is orders of magnitude lower than the electron density generated in our femtosecond filaments (see Fig. 5(b)), indicating that electrons from the filament plasma contribute little directly to these current pulses.

Figure 8 shows the results of current measurements made using the two sets of laser parameters of Fig. 3: the single filamenting pulse (22 µJ, 100 fs) and the sequence of eight pulses (12.5 µJ, 100 fs each) of lower intensity, separated by the N2 rotational revival interval of 8.3 ps. As discussed earlier, the pulse energies were chosen so that the resulting density depression was Δρ/ρ0 ~ 3% in each case. The impulse response of the gap to the single pulse is shown in Fig. 8(a) for a 22 kV/cm gap field. Figure 8(b) shows peak current vs. gap field for the single-pulse (red) and 8-pulse (blue) cases, and a laser-free curve (green) for comparison; without the laser, the current below breakdown is zero. As the gap voltage increases, the peak current for the single-pulse case increases faster than for the 8-pulse case, owing to its higher initial electron density and to the increasing air heating and density hole volume (as seen in Fig. 6), which increase the air channel conductance. Even though the 8-pulse train generates less than ~1% of the electron density of the single pulse (see the earlier discussion), inspection of Fig. 8(b) shows that its resulting peak current is roughly 50% of the single-pulse case until the two converge as the gap voltage approaches the breakdown threshold.

Figure 8. Current measurements (below the breakdown threshold) using the green-boxed current monitor depicted in Fig. 1(a). (a) Transient gap current at a gap field of 22 kV/cm induced by a single 22 µJ, 100 fs pulse; this is in the impulse-response limit (see Fig. 7(a) and discussion). The dashed red line is an exponential fit with a 115 μs decay time. (b) Peak gap current vs.
gap voltage below the breakdown threshold for the single-pulse (red, plasma) and 8-pulse (blue, no plasma) cases. The red and blue curves converge and terminate where breakdown occurs at ~25 kV/cm. The no-laser case (green) is shown for comparison; breakdown occurs near ~30 kV/cm.

This strongly suggests that the main role of the laser, whether it acts by generating filament plasma or by rotational excitation, is to initially heat the air and increase the density hole volume to provide enhanced channel conductance; thereafter, free carriers are largely supplied by the gap electrodes. In addition to the air heating, the filament plasma mainly provides initial free carriers in the gap, while for the rotationally heated gas the initial carriers are likely provided by corona/streamers. In both cases, at sufficiently high gap voltage, the carriers increase in number owing to impact ionization and continue to resistively heat the gas, increasing the density hole volume and conductance. Just before breakdown, the currents are comparable (Fig. 8(b)), as are the density hole volumes (Fig. 3).

IV. CONCLUSIONS

We measured, for the first time to our knowledge, the spatial and temporal dynamics of gas heating in an ultrashort laser-triggered spark gap prior to breakdown. To elucidate the relative roles of the plasma and the air density depression induced by the laser, we performed longitudinal interferometry through the gap electrodes along with gap current measurements. We find that under all conditions, resistive heating driven by the applied gap field acts to widen and deepen the intra-gap air density channel, leading to eventual breakdown for sufficient field strength. In the case of plasma and density hole generation by a single filamenting pulse, the electrons are driven by the gap field and further heat the air, widening and deepening the hole. In the case of little plasma but comparable density hole generation from an 8-pulse sequence, the lower current initially provided by corona/streamers is preferentially channeled through the higher-conductance hole, heating it further. As the gap voltage is increased, impact ionization increases the gap current to the point where the current and the density hole volume in the two cases are comparable; this leads to similar breakdown thresholds. To summarize, once a density hole is created between the electrodes, regardless of how it was formed, it acts as a preferred channel through which subsequent gap-field-driven current may flow. This current collisionally heats the channel further, widening and deepening it and increasing its conductance and current, leading to eventual breakdown for sufficient gap field. For ease of diagnostic access, we examined relatively short spark gaps; however, we expect this scenario to apply to much longer femtosecond filament-triggered discharges.
5,483.8
2020-05-28T00:00:00.000
[ "Physics" ]
Developing Small-Cargo Flows in Cities Using Unmanned Aerial Vehicles : Modern technology allows for the simplification of a number of functions in industry and business. Many companies have achieved a high level of robotisation and automation in the use of services, including companies operating in the transport sector, where smart systems help to control load planning, the issuing of documents, the tracking and transportation of shipments, etc. Drones can be exploited as smart assistants in delivering cargo in cities. Since it is a new technology capable of working autonomously, it presents various legal, psychological, and physical challenges. This article presents an analysis of the scientific literature on the development of small-cargo flows using drones and a research methodology on the development of the use of drones, presenting a model which helps to address the issue of cargo delivery in cities. Introduction Integrating UAVs into urban freight logistics offers benefits such as traffic relief, faster deliveries, cost efficiency, environmental sustainability, improved accessibility, and enhanced safety, ultimately contributing to more efficient and sustainable urban freight transportation systems.Electric vertical take-off and landing vehicles (eVTOL) are expected to be the key drivers for urban air mobility (UAM) scenarios by satisfying on-demand air travel needs in the short or medium term [1]. Unmanned aerial vehicles (UAVs) deliver goods with fewer emissions than traditional delivery vehicles, thus contributing to environmental sustainability in cities.By reducing dependence on fossil fuel-powered vehicles, the use of UAVs contributes to reducing air pollution and its associated health risks. It should be noted that the development of urban freight flows using unmanned aerial vehicles (UAVs) is important for several reasons.UAVs can bypass congested roads, reducing traffic congestion in cities, especially during peak hours.This reduces trafficrelated delays and disruptions and ensures smoother freight transport.Unmanned aerial vehicles (UAVs) can deliver goods faster than traditional ground-based transport modes.This is particularly advantageous for the delivery of urgent consignments such as medical supplies and organs, where speed is a crucial criterion. The donation-transplant network's complexity lies in the need to reconcile standardised processes and high levels of urgency and uncertainty due to organs' perishability and location.Both punctuality and reliability of air transportation services are crucial to ensure the safe outcome of a transplant [2]. UAVs can reduce delivery costs by optimising the routes and requiring minimal human intervention.This results in cost savings for businesses and consumers and makes delivery more affordable.UAVs can also reach areas that are difficult for conventional vehicles to reach, including densely populated urban areas and remote locations.This enhances accessibility to goods and services, particularly for residents in underserved areas.UAVs operate above ground traffic, reducing the risk of accidents and collisions on busy city streets.This enhances the overall road safety and minimises the potential for accidents involving delivery vehicles. 
Relevance.According to statistical data, cargo flows are constantly increasing, which leads to higher flows of freight vehicles not only in urban areas but also in rural areas.Heavy vehicular traffic in cities is one of the major reasons behind the search for new technologies.An excessive number of cars in cities causes traffic congestion, which puts a strain on urban logistics, leading to economic and environmental problems [3].The number of people living in cities is growing rapidly [4], making it difficult to satisfy consumer needs.Logistic companies are implementing digitised solutions in their operations, but, regardless, there are obstacles that hinder this progress. The delivery of goods to the end user is known as the last mile [5].It is usually the most expensive and cost-intensive segment in the transport chain [6].Giant companies, such as Amazon, DHL, or Jingdong, have been solving these problems using unmanned aerial vehicles (UAVs) for last-mile deliveries to the final destination [7]. Good market research and the right application of drones in logistics would allow this new technology to become indispensable [8].Currently, UAV infrastructure has many obstacles that make the implementation of this new transport system still complicated and requiring new solutions. The following main research problem of freight transport can be distinguished: the lack of the capacity to properly identify the characteristics of urban freight transport in urban transport systems.This affects their ability to make effective decisions to support the implementation of sustainable transport policies such as urban freight models.Local authorities do not take a systemic approach to urban freight transport.This results in a lack of clearly defined policy objectives or corresponding performance indicators.There is a lack of comprehensive research on how urban freight patterns are applied to improve the implementation of measures.As a result, a reliable link between the policy objectives supported by sustainable urban transport models and the means of policy implementation can hardly be established [9]. Topological analyses based on complex networks help to better understand the characteristics of these networks and the characteristics of their dynamic behaviours.This can help to study phenomena such as robustness, resilience, or propagation processes [10].To reduce all logistics costs, companies are now changing to an air mode, but it is necessary to clarify which shipments should be sent by said air mode.Several other parameters such as shipment value, shipment volume, product type, and reliability of the shipping method should be considered while choosing the shipping method.Before choosing between air and sea shipping methods, it is necessary to carefully calculate and compare the costs [11]. According to Comi et al. [12], the long-term effects of transport-land use interactions can be considered using LUTI-type modelling, mainly in the development of localisation models for urban distribution centres and large shopping centres. Comi et al. [13] stated that it could be useful to have an overview of a city's similarities or differences in terms of freight transport.According to Comi et al. 
[13], this type of framework can serve as a useful ex ante assessment guideline to identify the different classes of factors for each sustainability goal noted.It should also allow planners to check whether the experimental results in a city are consistent with the results obtained in the city through the goals defined for other cities.Nuzzolo et al. [14] propose a travel chainordering model to simulate retailer restocking in an urban-metropolitan area.It is part of a general modelling framework developed by the authors to simulate urban freight demand, taking into account demand and logistics subsystems.Nuzzolo et al. [14] proposed that the logistics subsystem of the modelling system could be divided into two parts: the first one, which defines the order of the travel chain, and the second, which takes into account the choice of stopping places.Nuzzolo et al. [14] focused on the specification and calibration of a travel chain-booking model using data collected in the city centre of Rome.Also, Nuzzolo et al. [15] analysed agent-based modelling (ABS) for load distribution modelling as a challenge and an opportunity for future developments in this research field.According to Nuzzolo et al. [15], different stakeholders are involved in urban load distribution, and ABS allows for considering many types of agents, each with its own specific objective function, behaviour, specific characteristics, needs, and aspirations.As stated by Nuzzolo et al. [15] using this modelling approach, an agent that acts to achieve one (or more) goals, guided by certain criteria, interacting with other representatives and learning from their own experience, represents the interested party.Nuzzolo et al. [15] found, in their review of articles, that the impact of a wide set of urban logistics measures can be assessed and that research methods in this area are improving, often coupling agent-based simulation with another model (e.g., vehicle routing). Nuzzolo et al. [16] mentioned that this paper focuses on models for estimating vehicle O-D matrices by an item/quantity approach.Considering the complexity of representing the restocking phenomenon, estimating the vehicle OD matrix from a given quantity or delivery O-D matrix is quite difficult, and only the literature reports some applications for test cases.Nuzzolo et al. [16]'s proposed modelling framework overcomes these limitations by specifying the number of pre-trip stops for restocking and sequential delivery location selection.Nuzzolo et al. [16] also considered that restockers may behave differently in relation to trip characteristics. The main problem of the topic is the fact that insufficient attention is paid to the use of UAVs for transporting small cargo in cities.The aim of the paper is to analyse the current and continuously evolving situation of UAV adaptation, define the possibilities of their use in cities, conduct a qualitative study and build a model to solve the problems relating to cargo delivery in cities, and present conclusions. The main objectives of this article are the following: to identify the main aspects in the transportation of cargo by unmanned aerial vehicles (UAVs); to identify problems in the development of small freight flows, to analyse first and last mile features; to define unmanned aerial vehicles; to carry out research to identify problem areas in the application of drones; and to develop a transport model that will help to solve the main urban logistics problems. 
The research methods hereby applied are a scientific literature analysis and an expert survey. Problems in the Development of Small Freight Flows Prices.Cargo transportation volumes depend on price.Prices are set in light of the specific characteristics of a mode of transport, where two options are available: the first option is charging based on short-term marginal costs, while the second one involves increasing fees through short-term marginal costs to cover all transport costs (i.e., costs of operation, loading, etc.) [17].Fuel costs account for the major share of the total costs in the transport sector [18].The growing price of petrol and diesel increases the cost of transport; thus, companies increase their cargo transportation mark-ups to avoid losing profits.In order to save on logistics services, reducing fuel consumption to the minimum is important.The cost of fuel is the key component in setting the transport price [19]. By using unmanned navigation, overall transportation costs, including fuel costs, can be expected to decrease by transitioning to more efficient, safer, and better-managed traffic flows.However, to accurately assess potential changes in the cost structure, further research is needed, taking into account specific factors such as vehicle type, routes, and regional differences.Unmanned navigation can change the cost structure for road transport.Unmanned navigation systems can use more detailed information about road conditions, traffic flow, work zones, and other factors to choose optimal routes.This can reduce fuel costs, as the route is planned to avoid traffic jams, road closures, or other obstacles.Unmanned vehicles can be programmed to carry cargo or passengers at the optimal speed and select the most economical engine modes to reduce fuel consumption.Unmanned vehicle systems can coordinate their actions with other vehicles to maximise road usage.This can reduce traffic congestion, accelerate movement, and reduce waiting time, which is usually associated with fuel consumption.Also, unmanned navigation systems can monitor and analyse traffic conditions in real time and make decisions to avoid situations that could increase fuel costs, such as aggressive driving, excessive braking, or speed fluctuations. For this reason, the use of new technologies such as UAVs could help to reduce the vehicle numbers on the roads for small parcel deliveries and also reduce fossil fuel costs. Environmental pollution.The transport sector is a rapidly growing sector with the highest greenhouse gas emissions [20].Epidemiological studies have shown that air pollution contributes to a wide range of adverse human health effects, including respiratory and cardiovascular diseases [21].Varying carbon dioxide emissions result in companies facing volatility in transport service prices [22].Vehicles must comply with emission requirements, and companies are encouraged to purchase newer vehicles that are less polluting or to look for new technologies for the delivery of goods.The use of UAVs for the delivery of small parcels would help to reduce the number of freight vehicles on the roads and, at the same time, air pollution in the cities. 
Infrastructure.In many places, road infrastructure is not properly adapted for cargo transportation.Unpaved unsuitable roads and a low number of terminals contribute significantly to transport problems.Therefore, in order to transport small cargo efficiently, companies need to spend considerable resources on infrastructure development to deliver freight quickly and efficiently [23]. Long transportation times.With the growth of e-commerce and the growing number of people, small-cargo flows will continue to increase as a global trend [24].Therefore, with increasing numbers of orders, the transport sector will continuously be pressured to deliver cargo to end users efficiently and as quickly as possible.Transport congestion significantly reduces traffic efficiency [25].Electric vertical take-off and landing vehicles are expected to be the key drivers for urban air mobility (UAM) scenarios by satisfying on-demand air travel needs in the short or mid-term and also for small-cargo transportation. Safety and security.Ensuring the safety and security of unmanned aerial vehicles in urban areas is crucial, as they can pose risks, including collisions, invasion of privacy, and misuse.It is important to pay attention to the implementation of strict rules governing the operation of UAVs in urban areas, including the requirements for pilot certification, the registration of UAVs, and compliance with flight restrictions.There is also a need to use geo-fencing technology to create virtual boundaries around sensitive areas, such as airports, government buildings, and congested public spaces, to prevent UAVs from entering restricted airspace. UAVs need to broadcast real-time identification and location information so that authorities can track their movements and identify operators in the event of incidents or violations.Equipping UAVs with collision avoidance systems such as radar and optical sensors to detect and avoid obstacles in their flight path would reduce the risk of collisions with buildings, vehicles, and other unmanned aircraft. Attention should also be paid to the implementation of encryption and authentication mechanisms to prevent unauthorised access to UAVs and their control systems, thus reducing the risk of hijacking or cyber-attacks. It is important to establish guidelines for UAV operators to respect the privacy rights of individuals, including restrictions on surveillance and data collection activities.It is also important to develop protocols for responding to emergencies involving UAVs, such as accidents, malfunctions, or unauthorised intrusions, to reduce potential risks to public safety. First and Last Mile The location of the first and last mile also causes major disruptions in a city's overall logistics system.In order to avoid traffic congestion and gridlocks and make efficient use of small delivery companies, new solutions are being searched for to meet the needs of consumers without causing harm to the city.The recent emergence of self-service parcel terminals offers the possibility of picking up an order at a specific location, but this does not fully satisfy consumers. The possibility of using unmanned aerial vehicles has been receiving increasing attention.UAVs are a new mode of cargo transportation that improves ecology, speeds up delivery times, and frees up the city [26]. 
The operating costs of UAVs depend on energy, and the optimisation of delivery is closely linked to optimal weight ratios and the price of the drone [27].Their adjustable height allows these vehicles to travel to even hard-to-reach locations. The full adaptation of UAVs in densely populated metropolises will make the delivery of small freight much cheaper than using existing courier or shuttle services.To achieve a fully automated transportation of goods by UAVs, a system that works flawlessly and is able to react by itself to certain failures in real time is needed [28]. Due to their relatively low emissions, drones are a better solution than motorcycles or trucks [29].The net emissions of drones are quite low compared to traditional modes of transport, but they still exist [30].Aircraft can help reduce air pollution in large cities, as most of them are powered by electricity. Increasing consumer demand and the many problems of road transport make it inefficient to transport small goods in the last mile of a delivery by the existing modes of transport.The last mile or last kilometre is the last leg of a journey comprising the movement of goods from the transportation hub to the final destination for the consumer.In order to save the environment and deliver goods faster, a new and recently emerging technology-unmanned aerial vehicles-would come in highly handy.This technology can reduce the environmental problem of transport, allowing goods to be delivered to hard-to-reach places much faster than by any currently existing mode of transport. Adaptation UAVs in Cities 2.3.1. Choosing Unmanned Aerial Vehicles and Their Control and Software The increase in e-commerce and parcel deliveries has caught most shops and parcel delivery services unprepared, with delays, misdirection, or loss of parcels, leading to high customer dissatisfaction.All this is leading e-shops and parcel delivery services to increasingly look for alternative delivery methods.One of these is the delivery of parcels and goods by unmanned aerial vehicles, known as drones. In Lithuania, drone delivery is still at a very early stage of development, as in most other countries.Until the beginning of this year, Lithuania had rules on the use of drones, and, since this year, certain European Commission regulations have come into force, setting out rules and requirements for the owners and pilots of drones in the EU.Compliance with these rules and requirements does not prohibit the transport of goods or parcels by UAVs.However, aircraft that are designed to carry dangerous goods, people, or fly over people are subject to certification requirements. The delivery of goods and parcels by UAVs is a solution that can reduce delivery times, road congestion, environmental pollution, and delivery costs. However, there are still a number of challenges, such as adopting rules and directives allowing the transport of goods and parcels by UAVs, ensuring customer privacy, and integration into existing supply chains.The first trials are underway in Lithuania, although delivery by UAVs is not new, as the potential of UAVs for parcel delivery had been discussed as early as in 2013, when Amazon started testing its fleet of UAVs under development.Later, in 2020, a major step was taken towards the legalisation of parcel delivery by UAVs when the US Federal Aviation Administration approved new rules allowing the operation of aircraft weighing more than 250 g over people and moving vehicles. 
In addition, Amazon and several other companies, such as UPS and Wing, a subsidiary of Google, have obtained certificates allowing them to operate a fleet of unmanned aircraft.Amazon even has a target of delivering parcels within half an hour of ordering and sees UAVs as the technology with the most potential to achieve this goal. In Lithuania, the delivery of goods by unmanned aerial vehicles (UAVs) is also not a new technology.Topocentras carried out a demonstration delivery where a mobile phone was delivered by UAVs from the parking lot of a shopping centre to a nearby skyscraper.In 2020, a Lithuanian record for parcel delivery by UAVs was set when a parcel was flown 5 km away to a real customer. Stringent Technological Requirements The concept of unmanned aerial delivery is quite simple.An order is created and placed on a mobile app or website and processed at a local delivery point.The parcel is packed in a special box, which is hooked onto an unmanned aerial vehicle (UAV) and delivered to the customer's home.The UAV is an essential element in this chain and is subject to stringent technological requirements. The aircraft used to deliver parcels can be remotely and autonomously controlled.They must be equipped with warning systems for obstacle detection and avoidance, and their rotating parts must be protected. One example is UPS, which has recently developed its fleet of delivery aircraft using wingcopters.This technology features a patented guide rotor mechanism that includes two flight modes: multi-rotor, which allows the aircraft to hover in the air, and fixed-wing, which allows it to fly forward.This allows the aircraft to take off and land vertically.Aerodynamic solutions ensure that the aircraft remains stable even in adverse weather conditions.The aircraft can cover a distance of up to 100 km with a parcel weighing around 2 kg.Amazon's newest parcel delivery aircraft has similar features to its predecessor from UPS.This aircraft can travel up to 24 km with a parcel weighing around 2 kg.Wing's parcel delivery aircraft are distinguished by their 1 m wingspan, which allows them to cover a distance of up to 20 km with a parcel weighing around 1.3 kg. There Are Three Main Types of Drones Multi-rotor drones have strong robotic arms and the highest pick-up capacity compared to other types of drones.They can be used for longer deliveries and for transporting heavy parcels.The drone's arms ensure that the cargo can be properly secured.They can transport cargo over longer distances.Hybrid drones have a slightly lower lifting capacity compared to multi-rotor drones.They have a lighter body and can fly to higher altitudes to avoid interference and obstacles.Hybrid UAVs represent a versatile solution for a wide range of aerial tasks, including surveillance, mapping, environmental monitoring, and cargo transportation.Their ability to harness the strengths of multiple propulsion technologies makes them well-suited for demanding and dynamic operational scenarios. 
A hybrid UAV is a type of drone that incorporates a blend of propulsion systems, combining the advantages of different power sources for improved performance and versatility. Rather than relying solely on one type of propulsion, such as electric motors or internal combustion engines, hybrid UAVs integrate multiple power technologies. These may include combinations of electric motors, traditional fuel engines, fuel cells, or even renewable energy sources such as solar panels. UAVs with hybrid propulsion systems offer several benefits: they can fly for longer durations and cover greater distances than drones powered solely by electric batteries. This extended flight time is advantageous for missions requiring prolonged aerial surveillance, mapping, or data collection. The combination of different power sources allows hybrid UAVs to carry heavier payloads without compromising flight performance, which enables the integration of advanced sensors, cameras, or other equipment for diverse applications. Hybrid UAVs can also adapt to varying mission requirements and environmental conditions by leveraging different power sources as needed; this flexibility enables optimal energy management and performance optimisation based on specific mission objectives. By incorporating redundant power systems, hybrid UAVs offer improved reliability and safety during flight operations: redundancy minimises the risk of power failure and ensures continued operation even in the event of a system malfunction.

Fixed-wing drones can travel the planned distance at high speed and in a very short time, so delivery times are short. The only drawback is that the packaging has to be light, as such drones cannot carry much weight.

UAVs use an autonomous autopilot system. The terminal is equipped with a maintenance centre used for storing, charging, and servicing drones. Delivery drones operate in an environment where the public may be exposed to aviation risks. The system should therefore be designed so that drones are treated as aircraft or helicopters, subject to the same aviation safety principles and general regulations. It should be mentioned that the main obstacle during their deployment would be the acceptance of the new mode of transport by people living in urban areas.

As UAVs are still relatively rare, people are reluctant to accept untested innovations immediately. The deployment of a model can take from a few months to several years, depending on government restrictions and public attitudes. It is also important to stress that the implementation of this proposal will require completely new governmental regulations and legal provisions to avoid problems.

Also, with FlytOS smart modules and sensors with an integrated SBC (Nvidia Jetson Nano, DJI Manifold 2, Raspberry Pi 3B+/4) integrated into the drones, these drones will be able to land and take off accurately and avoid collisions. Such integration with a UTM engine or national airspace services can provide more information on the airspace, flight warnings, and weather conditions for optimal route planning, avoiding no-fly zones and manned aircraft.

Definition of Unmanned Aerial Vehicles

UAVs are still a new technology, and only recently have their performance and use in the transport sector started to be explored [8]. The following is a list of the definitions of UAVs provided by different authors (see Table 1):
• Beard and McLain, 2012 [31]: A cargo drone is an electric or semi-electric vehicle with a certain number of rotors, capable of transporting cargo from point A to point B by air. (Type of air transport of cargo)
• Giones and Brem, 2017 [32]: A cargo drone is the first major step towards protecting nature in the logistics sector. (Environmental protection)
• Layne, 2015 [33]: A cargo drone is a vehicle for transporting very small loads in urban areas. (The future of urban logistics)
• Patel, 2016 [34]: A cargo drone is an electric vehicle offering the functions of cargo transport, mapping, surveillance, and photography. (Multifunctional means of transport)
• Wang, 2016 [35]: A cargo drone is a means of transporting goods in case of emergency. (Lightning-fast mode of transport)
• Goodchild and Toy, 2018 [36]: A cargo drone is an electric or semi-electric vehicle for transporting small cargo in hard-to-reach areas. (Transporting freight in hard-to-reach areas)
• Chauhan et al., 2019 [37]: A cargo drone is a means of transporting small cargo to reduce environmental pollution. (Environmental protection)

Different authors describe cargo UAVs differently, but they all agree that UAVs are a new and evolving mode of transport for small cargo. Most authors emphasise the advantage of this mode of transport in preserving nature. As unmanned aerial vehicles run on electricity, they are an excellent solution to replace existing modes of transport, especially in urban or hard-to-reach areas.

With the growth of e-commerce and increasing numbers of people, the flow of small cargo will only increase [24]. Therefore, as the number of orders increases, so will the pressure on the transport sector to deliver cargo to end users efficiently and within the shortest possible period of time. Transport congestion significantly reduces the efficiency of cargo delivery [25]. UAVs can reduce delivery times by up to 75% [38,39]. The delivery of small cargo and fast-food meals can contribute to meeting new consumer needs not only in metropolises but also in remote regions.

Barriers to the Use of UAVs

Unmanned aerial vehicles, like all technical devices, have parts that are subject to wear and tear.

Regulation of cargo UAVs. The most important aspect of cargo transportation is governmental regulation, rules, and responsibilities, without which transportation in the airspace would be impossible [40]. Most people think that UAVs are uncontrollable, invasive, and disruptive devices in the airspace [41]. For this not to be the case, this new means of transport requires new legal regulations. The entire regulatory framework should be based on the protection of the landscape, settlements, people, airspace, and traffic [42]. Governments should ensure the presence of the necessary infrastructure. This will require highly accurate navigation and a coherent, connected overall system [43]. It could also include banning UAVs from certain areas, such as airports, military camps, government buildings, schools, and parks [44].

The problem of UAV routing. One of the most important problems in the last mile of the transportation of small loads by UAVs is the vehicle routing problem [45].

Technical barriers of UAVs. The most commonly discussed problems include flight range, aircraft speed, batteries, and carrying capacity [46]. It is also necessary to determine how much and what kind of new infrastructure will be needed for UAVs. This problem can be addressed by using a combination of UAVs and trucks for delivery [47].
Public attitudes towards UAVs. It is expected that, with more information and positive examples, public attitudes towards UAVs will improve significantly in the future [48]. Exposure to noise can cause people to become irritable, stressed, and sleep deprived, and it also has negative effects on the cardiovascular and metabolic systems [49].

Impact of unmanned aerial vehicles on wildlife. A number of research works have shown that wildlife-vehicle collisions are a major problem in many countries [50]. As drones usually fly at low altitudes, they will also pose a risk to wildlife and disturb the natural environment [51]. Automated drones may fail to detect flying birds, or may scare them away with their sound or collide with them, thus injuring the animal and damaging the cargo being transported and the UAV itself.

The regulation of unmanned aerial vehicle (UAV) corridors and designated zones. Figure 1 shows the regulation of unmanned aerial vehicle (UAV) corridors. In Lithuania, the regulation of UAV corridors and designated zones is governed by several institutions and legal acts. The Lithuanian Transport Safety Administration (TSA) is responsible for establishing and enforcing air traffic management and safety rules; it may participate in and coordinate the process related to UAV corridors and designated zones. The Civil Aviation Administration (CAA) may also have a role in regulating the use of UAV corridors and designated zones; this institution can provide recommendations regarding airspace usage and safety. The Special Forces Aviation Battalion (SPJ AVBAT) is the part of the Lithuanian Armed Forces responsible for the execution of military UAV operations and related technical aspects. The State Border Guard Service of the Republic of Lithuania (VSAT) may be responsible for the management and utilisation of UAV corridors where these are related to border protection or territory surveillance.

The legal framework relevant to the regulation of UAV corridors and designated zones may include civil aviation regulations, airspace usage rules, national security requirements, etc. This could encompass various legal acts, such as the Civil Aviation Act, security regulations, state border protection rules, and so on.

Formulating a Scientific Problem

If properly adapted, drones in urban logistics can operate separately or be integrated with other modes of transport, allowing for a more efficient use of infrastructure and for maximising the quality of transport for customers.
For drones to gain a foothold in the market, the problem of their application in logistics must be solved. One of the biggest obstacles to the adoption of drones is not technological but legal. In many countries, there are no laws allowing UAV cargo transport, or such laws are very limited. The creation of this legal framework is severely hampered by people's ill will towards this technology. People are not used to having unmanned vehicles constantly flying over their heads, and the fear that they may be used for surveillance rather than cargo transport prevents the rapid development of unmanned aircraft infrastructure and the creation of legal regulations.

The main aim of this article is to assess the applicability of UAVs in logistics and to develop a model that complies with certain legal regulations and meets people's needs and societal attitudes, which would allow the flow of small-cargo shipments using UAVs to be increased.
Methodology of Research on the Development of Small-Cargo Flows Using Unmanned Aerial Vehicles The qualitative research method was selected, as it is more acceptable for analysing the current problems in small-cargo transport and finding a solution to these problems through the use of a new mode of transport-unmanned aerial vehicles.A researcher has to take into account the requirements of their research participants.The form of a standardised interview and questionnaire was chosen to obtain experts' answers and reflections.The experts chosen for this qualitative research were privately presented with 10 questions in the form of a questionnaire.Following Kardelis [52], the questionnaire was designed according to all the research requirements and met the following criteria: • The exact procedures and requirements for submitting answers to the questions were specified; • An explanation was provided as to why the problem was being analysed and why this qualitative research was being conducted; • All the questions were designed to be as simple as possible, so that the respondent would know exactly what information their answer would convey; • The questions were precise and specific in order to obtain a correct understanding of the experts' views on the chosen topic; • Understandable answer options within a limited scope were selected to accurately reflect the views of the experts interviewed; • To ensure the anonymity of the experts, several questions were close-ended; • The questions were formulated so as to give the experts the freedom to answer the questions simply, offering multiple choices; • To ensure the accuracy of the questionnaire and retain the experts' attention throughout this research, the questionnaire was brief and clear, allowing us to collect strong and correct expert opinions. The key research objectives were the following: • to identify the main aspects affecting transportation by cargo UAVs; • to define the role of UAVs in the transport sector; • to analyse the types of existing drones that could be used to deliver small loads; • to investigate whether the proposed use of drones as a solution to the problem will contribute to improving the transport of small goods. Also, generally, in the expert research approach, the aggregated opinion of a group of experts is taken as the solution to the problem at hand (the outcome of the solution).If a decision is to be made on the basis of expert judgements, the degree of agreement between the experts' opinions is assessed.It is essential to determine the consistency of the experts' opinions by applying multi-criteria assessment methods.The reliability of the panel's judgements depends on the level of knowledge of the individual experts and the number of members.Assuming that the experts are sufficiently accurate measurers, it can be said that the reliability of the expertise of the panel as a whole increases with the number of experts.The type of survey used in this study was essentially a variant of the expert evaluation method described above. In our case, the chosen method was important enough to clarify the consistency of the experts' opinions. 
To identify the objectives of this study, 10 different experts were selected for questioning. This number of experts was chosen to ensure the accuracy and quality of the assessment of the consistency of their opinions. In order to reveal the competences of the experts, they were asked to state their length of service in logistics, their experience in the transport of small goods, and their university degree. All the experts in the study had between 7 and 20 years of current work experience in the logistics sector, the minimum experience in the field of small goods' transport was 6 years, and all the experts interviewed held a Master's degree from a university. The questionnaire, as mentioned above, contained ten different questions (five closed and five open). To ensure the accuracy of the experts' answers, the qualitative questionnaire was administered in a separate private room, with no unauthorised people present at the time. This method allowed us to ensure the anonymity of the respondents and the accuracy of the answers. A list of questions was drawn up for the questionnaire, together with a justification as to why each particular question was being asked and what the answer would reveal.

All the included questions were based on an analysis of the problems and areas of operation of UAVs. The questions covered several problematic areas of UAV operation and deployment, namely societal, economic, and political ones.

Methodology of Assessment of Expert Opinions

Kendall's coefficient of concordance was used to calculate the agreement between the experts' opinions and to exclude non-concordant assessments. A group of m selected experts assessed the object quantitatively using n quality indicators.

The selected experts (E_1, E_2, ..., E_m) were presented with the questionnaire, and quantitative importance scores (B_1, B_2, ..., B_n) were awarded to the quality criteria (X_1, X_2, ..., X_n) of the object on the basis of the respondents' experience, knowledge, and opinions. The most important quality criterion received the highest score, with scores awarded in descending order down to the lowest score, which was 1. In the course of the analysis of the questionnaire, a table of the scores awarded by the experts was drafted (see Table 2). The concordance between the experts' opinions was then calculated with Kendall's concordance coefficient W from the resulting estimates and scores [53]. The score B_{ij} of each criterion was first converted into a rank R_{ij}; the most important criterion received rank 1, with ranks increasing towards the least important criterion, which received the highest rank. The scores were converted into ranks as

R_{ij} = n + 1 - B_{ij},

where m is the number of experts, n is the number of criteria, and B_{ij} is the score awarded by the expert. The concordance coefficient W is based on the sum of the ranks R_i of each indicator across the experts (i = 1, 2, ..., n):
R_i = \sum_{j=1}^{m} R_{ij}.

Specifically, S is the sum of the squared deviations of the rank sums R_i from their overall mean \bar{R}:

S = \sum_{i=1}^{n} (R_i - \bar{R})^2,

where the overall mean of the rank sums is

\bar{R} = \frac{1}{n} \sum_{i=1}^{n} R_i = \frac{m(n + 1)}{2}.

To obtain the average rank of each criterion, the sum of its ranks is divided by the number of experts (i = 1, 2, ..., n):

\bar{R}_i = \frac{1}{m} \sum_{j=1}^{m} R_{ij},

where R_{ij} is the rank assigned by a respondent to the criterion and m is the number of respondents. For each criterion, the difference R_i - \bar{R} between its rank sum and the constant mean value, and the square of this difference (see Table 3), were then computed and added up to give the total sum S, where S is the actual sum of squares (in the absence of tied ranks). Kendall's concordance coefficient is then

W = \frac{12 S}{m^2 (n^3 - n)}.

In practice, the concordance coefficient is used once its threshold value has been clarified, below which the estimates can no longer be considered concordant. When the number of objects is greater than n > 7, the significance of the concordance coefficient is assessed with Pearson's (chi-squared) criterion \chi^2. The random variable is calculated as

\chi^2 = m(n - 1)W

and follows the \chi^2 distribution with v = n - 1 degrees of freedom. In our study, the level of significance \alpha was selected, and the critical value was taken from the \chi^2 distribution table with v = n - 1 degrees of freedom. If the calculated \chi^2 value is greater than the critical value, the experts are considered concordant.

When the number of indicators n is between 3 and 7, the \chi^2 distribution should be applied with caution, as the critical \chi^2 value may be higher than the calculated one; in that case, probability tables for the concordance coefficient or tables of the critical value of S at 3 ≤ n ≤ 7 have to be used.

The minimum value of the concordance coefficient (W_{min}) at which the experts' opinion on a certain criterion can still be considered concordant, for a significance level \alpha and v = n - 1 degrees of freedom, is

W_{min} = \frac{\chi^2_{v,\alpha}}{m(n - 1)},

where \chi^2_{v,\alpha} is the critical Pearson statistic.

Concordance between Experts' Opinions

To check the concordance between the experts' opinions, the respondents were asked to rank the most important factors with the greatest impact on the delivery of small cargo in cities on a scale from 1 to 9, where 9 was the most important and 1 the least important factor. These answers helped us identify the factors that slow down the delivery of small goods and make it problematic. All the respondents were asked one question listing nine answers, with a letter assigned to each influencing factor in sequential order: A: hard-to-reach delivery address; B: shortage of drivers; C: insufficient pace of upgrading the roads and assignment of new addresses; D: environmental fees for cargo transport; E: price of transportation of the first and last mile; F: transportation time; G: increasing competition; H: expensive fuel; and I: inefficient use of transport (empty kilometres).
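Before turning to the collected rankings, the following minimal Python sketch illustrates how the score-to-rank conversion, W, the \chi^2 statistic, and W_{min} defined above can be computed. The score matrix, function names, and significance level are illustrative assumptions only; the study's actual scores and ranks are reported in Tables 2-5.

```python
# Minimal sketch of the concordance computation described above (Kendall's W).
# The scores below are random and purely illustrative.
from scipy.stats import chi2


def scores_to_ranks(scores, n):
    """Convert importance scores (n = most important ... 1 = least important)
    into ranks (1 = most important ... n = least important)."""
    return [n + 1 - b for b in scores]


def kendalls_w(ranks):
    """Kendall's W for m experts ranking the same n criteria without ties.

    ranks: list of m lists; ranks[j][i] is the rank expert j gave criterion i."""
    m = len(ranks)
    n = len(ranks[0])
    rank_sums = [sum(expert[i] for expert in ranks) for i in range(n)]   # R_i
    mean_rank_sum = m * (n + 1) / 2                                      # R-bar
    s = sum((r - mean_rank_sum) ** 2 for r in rank_sums)                 # S
    w = 12 * s / (m ** 2 * (n ** 3 - n))                                 # W
    chi_sq = m * (n - 1) * w                                             # chi^2
    return w, chi_sq


if __name__ == "__main__":
    import random

    random.seed(0)
    # Illustrative scores for m = 10 experts and n = 9 criteria (A-I),
    # each expert awarding the scores 9 (most important) ... 1 (least).
    scores = [random.sample(range(1, 10), 9) for _ in range(10)]
    ranks = [scores_to_ranks(s, n=9) for s in scores]

    w, chi_sq = kendalls_w(ranks)
    critical = chi2.ppf(0.95, df=9 - 1)        # critical value at alpha = 0.05
    w_min = critical / (10 * (9 - 1))          # minimum W for concordance
    print(f"W = {w:.3f}, chi^2 = {chi_sq:.2f}, critical = {critical:.2f}, "
          f"W_min = {w_min:.3f}, concordant: {chi_sq > critical}")
```

With the study's actual rankings (m = 10 experts, n = 9 factors), the same functions would simply be applied in place of the random illustrative scores.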
All the experts' answers on the most important factors with the greatest impact on the delivery of small cargo in cities are presented in Table 4. The experts' answers were converted into ranks using the conversion formula given above and, according to the data in the table, the squared deviations of the rank sums were added to obtain the total sum S (the actual sum of squares in the absence of tied ranks). Kendall's concordance coefficient W was then calculated from S as described above. As the number of objects was greater than n > 7, the significance of the concordance coefficient was assessed with Pearson's (chi-squared) criterion \chi^2, with v = n - 1 degrees of freedom; the level of significance \alpha was chosen from the \chi^2 distribution, as can be seen in Table 5. The lowest value of the concordance coefficient (W_{min}) expresses the experts' opinion on a given criterion as concordant at the given significance level \alpha and v = n - 1 degrees of freedom. To sum up, if the calculated \chi^2 value is greater than the critical \chi^2 value, the experts' opinions are concordant, and the ranks reflect the common opinion of all the experts.

Analysis of Research Results

This Section first discusses the problems analysed in the literature and the qualitative empirical findings. As mentioned above, the aim was to investigate the problems of the transport of small cargo, the existing modes of transport, the situations in which UAVs provide the most benefits in logistics, and the possible applications of UAVs in small-cargo transport. This analysis was mainly based on the literature review and the experts' insights, examining the respondents' opinions on the most prominent challenges and drawbacks related to the current use of UAVs, analysing the resources required for drone deployment and their maintenance, and also answering questions related to the cost of using drones compared to other last-mile delivery methods.
Main ways to reduce first- and last-mile problems in urban logistics. The bar chart below lists the methods identified by the experts that they believe reduce the first- and last-mile problem (see Figure 2). In our open-ended question, almost all the experts identified two main ways to reduce first- and last-mile problems in urban logistics: bans on heavy goods vehicles in cities and the installation of self-service parcel terminals in convenient locations in a city. Eight experts named both of these factors.

Key Factors to Consider When Introducing New Modes of Transport in Urban Logistics

In their answers, the experts pointed to increasing the sustainability of cities and reducing environmental pollution and social impact as the key factors. The experts divided urban sustainability into three main criteria: economic efficiency, environmental protection, and social wealth creation.

The problems arising from freight transport are quite diverse. The experts considered the environmental and accessibility problems associated with cargo transportation and distribution, particularly in urban areas, to threaten the viability and sustainability of urban areas. The efficient distribution of cargo reduces congestion and emissions. There are many solutions to these main problems, and the experts grouped them into four categories:
• functional impact on the whole city and, in particular, the technical response to circulation needs by integrating the flow of goods into the overall traffic;
• economic consequences, as cargo transport is related to the quality and efficiency of the servicing road;
• integration into land-use planning;
• social and environmental impacts with a direct effect on the quality of life.

Current modes of transport of small goods in cities. The bar chart below shows the currently available and used modes of transport of small goods in urban areas identified by the experts (see Figure 3).
The experts' answers show that courier services and distribution are the main and most commonly used methods of delivery of small cargo. The main objective of distribution is accessibility and cost reduction. It must always be ensured that customers have access to a sufficient quantity of products and are able to receive the replenishment of goods quickly and effortlessly.

Resources required for drones. Drones have certain requirements that need to be met before they can be used. The experts identified some of these special conditions, such as the right temperature, fast delivery, and trained personnel during take-off and landing to receive a special package. In addition to trained personnel, special premises and warehouses must also be available to operate drones. At an organisational level, local warehouses are most often used for small deliveries. The experts highlighted activities related to the drones themselves as a necessary resource. They shared the view that drones are the most cost-effective way of delivering goods in the last-mile context when delivering to hard-to-reach locations or when the cargo needs to be received urgently.

Most suitable cargo for UAV delivery. The pie chart below shows the experts' views on the most suitable cargo for transportation by UAVs (see Figure 4). All deliveries using UAVs could be classified as small deliveries, as UAVs are not currently capable of delivering heavier loads due to their limited payload and relatively new technology. Three experts indicated that human organs, blood, vaccines, and other small medical supplies are the most suitable cargo for UAVs. Several experts also mentioned that drones could take over lightweight and expensive cargo, such as jewellery, but there is a high likelihood of such cargo being stolen. A total of 40% of the experts replied that UAVs would be able to transport small and inexpensive cargo, which would be less likely to be damaged in the case of accidents; such cargo would not require additional insurance and could be carried more easily within urban infrastructure.
Most suitable type of UAV for transporting small cargo. The pie chart below shows the experts' answers on the most suitable type of UAV for transporting small goods (see Figure 5). In their answers, 70% of the experts said that the most suitable type of UAV is a hybrid drone, as it is solid and strong enough for delivering cargo of different weights.

Key challenges related to the use of drones. The main challenges are related to the weight and sensitivity of the items being transported. Four experts pointed out that the purchase price of drones is currently one of the biggest challenges. They also said that drones change and develop very quickly, which may lead to price changes in the future as the technology becomes more affordable. The total cost of the use of drones includes maintenance, storage, and the training of operators.
Reasons hindering deliveries by UAVs. The bar chart below illustrates the experts' answers as to why small goods are still not delivered by UAVs (see Figure 6). According to the experts' answers, the main reason for the relatively slow development of the transportation of small cargo by UAVs is the safety of people and of personal information; this was identified by 9 out of the 10 experts. As drones are mostly unmanned and fly along already established air corridors, accidents can happen in which drones fall and injure people walking on the ground.

Advantages of unmanned vehicles. The bar chart below shows the experts' responses on the advantages of UAVs for small-cargo transportation (see Figure 7). The chart shows that all the experts recognised the advantage of UAVs as a means of reducing environmental pollution. Nine experts also identified the speed of UAVs: as drones do not require the existing roads and there is no congestion in the airspace, cargo can be delivered directly to the destination in the fastest available way.
Proposed Model of Operation of Unmanned Aerial Vehicles

The analysis of the scientific literature and the survey of the experts showed that the transport of small goods in the first and last logistics mile is one of the most important and most difficult parts of the urban logistics system to manage. This part of the chain is constantly looking for the most efficient way to deliver goods to the final consignees. The most challenging delivery situations are in densely populated and rapidly expanding cities. The e-commerce network is constantly expanding, and the demand for small parcel deliveries is constantly increasing. Optimising the first- and last-mile delivery of small consignments is a major focus, and new perspectives are constantly being sought to address this problem. In the parcel industry, parcels arrive from post offices at a central warehouse, from which they are then distributed to other destinations, such as other post offices. In this transfer option, several parcels for different customers are brought to decentralised facilities that are easily accessible to the customers. This decentralised location may be either a parcel locker or a shop. Compared to home delivery, the delivery of multiple customer shipments to a decentralised pick-up location saves time and costs for the service provider, which speeds up the handling time for increasing volumes of shipments, reduces the delivery costs, and facilitates urban mobility. Self-service parcel terminals, usually located in high-traffic areas, are stationary, unattended delivery machines operating 24 h a day, 7 days a week. They store small goods for delivery to the final recipient and often also provide the opportunity to send parcels. Drones can be an excellent choice for such parcel services between the post office and the terminal.

In the UAV systems already developed, the drone currently makes a direct flight to the customer's home or business, delivers the parcel, and returns to the base. This back-and-forth delivery model has drawbacks: in a distributed network of UAVs delivering packages, the drone network is used in one direction only. The drone delivers the cargo directly to the customer and returns empty. Delivering directly to each customer in this way is more expensive, as it requires twice as many resources, twice as much airspace, twice as much navigation, twice as long tracking times, and twice as much battery power; everything is doubled, while the same end result is achieved. In addition, an empty return journey is a complete waste of time and an inefficient use of the drone. To improve the quality of life in cities and effectively apply the first- and last-mile concept, it is essential to develop unmanned aerial vehicles (UAVs) as an alternative for the transport of small loads. UAVs speed up the delivery of small goods and reduce the costs incurred when delivering with conventional freight vehicles.
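To make the "everything is doubled" argument concrete, the following back-of-the-envelope sketch compares the legs flown per parcel moved in the direct back-and-forth model and in a model in which the drone also brings an outgoing shipment back from the locker. The distance, energy figure, and pickup availability are illustrative assumptions, not measurements from this study.

```python
# Back-of-the-envelope comparison of the two delivery patterns discussed above.
# All quantities are illustrative assumptions, not figures from this study.

LEG_KM = 4.0             # assumed one-way distance, terminal -> parcel locker
WH_PER_KM = 30.0         # assumed energy use of a small delivery drone


def legs_per_parcel(deliveries: int, pickups: int) -> float:
    """Each flight is one outbound leg plus one return leg.

    In the direct model the return leg is empty (pickups = 0); in the combined
    model an outgoing shipment from the locker rides the return leg."""
    flights = deliveries                     # one flight per delivered parcel
    legs = 2 * flights
    parcels_moved = deliveries + min(pickups, flights)
    return legs / parcels_moved


if __name__ == "__main__":
    for label, pickups in [("direct back-and-forth", 0),
                           ("combined delivery + pickup", 100)]:
        lpp = legs_per_parcel(deliveries=100, pickups=pickups)
        energy = lpp * LEG_KM * WH_PER_KM
        print(f"{label}: {lpp:.1f} legs and ~{energy:.0f} Wh per parcel moved")
```

Under these assumptions, the direct model spends two legs and roughly twice the energy per parcel, while the combined model halves both, which is the efficiency gain the proposed system aims to capture.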
The delivery times for parcels via unmanned aerial vehicles (UAVs) to mailboxes or parcel lockers can be influenced by various factors.The distance between the distribution centre or hub and the destination mailbox or parcel locker will affect the delivery time.Shorter distances generally result in quicker deliveries.Flight speed and efficiency in the UAVs' flight can impact the delivery times.Faster UAVs can cover distances more quickly, reducing the delivery times.The operational hours of a UAV delivery service will determine when the deliveries can take place.Deliveries may be limited to certain hours of the day, typically during daylight hours and in good weather conditions.Adverse weather conditions such as high winds, rain, or fog can affect UAV operations and cause delays in the deliveries.Compliance with airspace regulations and obtaining the necessary permissions or clearances can influence the delivery routes and timings.Delays may occur if airspace restrictions are in place or if there is congestion in the airspace.The size and weight of the parcels that can be carried by the UAVs will affect the delivery times.Larger or heavier parcels may require additional time for loading and unloading.UAVs' battery life and the need for recharging between deliveries can impact the delivery times.UAVs may need to recharge or swap batteries, which can add to the overall delivery time.The delivery times for parcels via UAVs to mailboxes or parcel lockers will vary depending on these factors and the specific policies and capabilities of the drone delivery service provider.Typically, drone delivery services aim to provide timely and efficient deliveries within a reasonable timeframe.It is important to emphasise that the "Regular updating of Google Maps" means that Google periodically refreshes the data and information available on Google Maps to ensure their accuracy and relevance.This includes updating map imagery, street views, business listings, road information, and other geographical data.The frequency of updates to Google Maps can vary depending on several factors, including satellite imagery, street view, user contributions, partnerships, and data providers.The frequency of these updates to satellite imagery can vary depending on the availability of new imagery from the satellite providers.In some areas, imagery may be updated annually or even more frequently, while, in other areas, updates may occur less frequently.Street-view imagery is updated periodically as Google sends out Street View vehicles to capture street-level images.The frequency of these updates depends on factors such as the popularity of the area, changes in the road infrastructure, and the available resources for data collection.Users can contribute to Google Maps by adding or editing information about places, businesses, roads, and other features.These contributions can help keep map data up to date between the official updates from Google.Google may have partnerships with other companies or data providers that contribute to these map updates.These updates may occur on a separate schedule from Google's own data collection efforts.The frequency of the updates to Google Maps can vary widely depending on the type of data being updated, the availability of new information, and other factors.Google aims to provide the most up-to-date and accurate mapping data possible to its users. 
We propose a model for a new combined system of delivering lightweight, small goods to newly installed self-service parcel terminals in cities. This method is perfectly suited to online orders. It would use maps that are updated on a regular basis. By locating the address of the consignee, the system could automatically suggest the nearest self-service parcel terminal, thus ensuring the safest and fastest delivery of small shipments to the right consignee in urban areas. It would also allow consignees to choose their preferred delivery time if the goods are to arrive when they are away and cannot claim their package immediately.

An application should be introduced where customers can check the status of their parcels, accessible on any smart device with an internet connection. A website should also be developed for creating orders for the transport of goods by unmanned aerial vehicles. This system would have shipment status updates and last-mile tracking capabilities. The website would allow users to check the status of their shipments in real time and would automatically send emails and notifications at different stages of the delivery (see Figure 8).

All shipments in this system would be delivered to a single terminal equipped for loading and unloading drones. Once an order has been placed online and the cargo has been received at the terminal, an unmanned aerial vehicle (UAV) with delivery authorisation will use the terminal's drone navigation systems and scanners to locate the shipment, pick it up, and deliver it to the selected location. It will also pick up cargo from the self-service parcel terminal to be taken back to the loading terminal, thus ensuring the return or reshipment of shipments. This will make the use of UAVs more efficient, as self-service parcel terminals are designed not only for picking up small goods but also for sending them out. The aim of this system is to speed up the delivery of goods, make good use of the airspace, and reduce pollution and congestion in cities.
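As a rough illustration of how the system could automatically suggest the nearest self-service parcel terminal from the consignee's coordinates, the sketch below picks the closest terminal by great-circle distance and estimates the straight-line flight time. The terminal names, coordinates, and cruise speed are hypothetical placeholders, not parameters of the proposed system.

```python
# Minimal sketch of nearest-terminal selection for the proposed model.
# Terminal locations, names, and the cruise speed are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

TERMINALS = {                      # hypothetical parcel lockers (lat, lon)
    "Locker A": (54.6872, 25.2797),
    "Locker B": (54.7120, 25.2622),
}
CRUISE_SPEED_KMH = 60.0            # assumed drone cruise speed


def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def suggest_terminal(consignee):
    """Pick the terminal closest to the consignee and estimate the flight time."""
    name, pos = min(TERMINALS.items(), key=lambda kv: haversine_km(consignee, kv[1]))
    dist = haversine_km(consignee, pos)
    minutes = dist / CRUISE_SPEED_KMH * 60
    return name, dist, minutes


if __name__ == "__main__":
    terminal, km, mins = suggest_terminal((54.6980, 25.2700))
    print(f"Nearest terminal: {terminal} ({km:.1f} km, ~{mins:.0f} min straight-line flight)")
```

In practice the suggestion would also take the regulated air corridors and no-fly zones into account rather than the straight-line distance alone.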
When an order is placed on the website, the customer will have to enter their details, such as full name, email address, mobile phone number, and delivery location. A PIN and a barcode will be generated and sent to the person who placed the order and to the drone selected to deliver the order, specifically to the drone's smart information system. Orders can be placed online at home or at the selected self-service parcel terminal location using a touch-screen system. It shall also be possible to print out a barcode at the self-service parcel terminal to be attached to the parcel so that the drone can recognise it. After a parcel is placed into the self-service parcel terminal, the order-processing department will receive the information in the system and, once everything has been planned, instruct the UAV to transport the parcel to the warehouse. This system is a comprehensive UAV automation solution for the shipment of small cargo in the first- and last-mile transport stage.

Unmanned aerial vehicle (UAV) delivery refers to the transportation of cargo from point A to point B using unmanned aerial vehicles. Such UAVs are either autonomous or remotely controlled by human pilots. The infrastructure that supports drone delivery operations requires the seamless integration of reliable drone hardware and software. All shipments will be completed from the loading terminal to the selected self-service parcel terminal, or vice versa. If a shipment is forwarded further, a drone will pick up the small cargo from the selected self-service parcel terminal and deliver it to the loading terminal, from which the cargo will be loaded onto road vehicles for onward transportation.

Adaptation of Self-Service Parcel Terminals

The chosen drone will be able to carry up to 27 kg of cargo per flight. It will be able to take off from and land on smart self-service parcel terminals automatically; these terminals will be specially designed to load and unload small cargo automatically. Dronedek mail self-service parcel terminals will be used to this end. The self-service parcel terminals integrate seamlessly into automated processes, including the sorting, scanning, and storing of express mail, and will have high-tech features such as facial recognition and ID scanning.

Dronedek mail self-service parcel terminals have a wide range of technical features, making them the most advanced on the market in terms of drone delivery capacity. However, there are the issues of maintenance costs and longevity: the average service life of a drone is 10 years.

Safety. Emergencies such as flight system failure, bad weather, or other disasters can happen at any time. In addition to the standard drone fuses already installed in such an unmanned device, FlytNow provides an emergency landing option. It is possible to set up selected landing points along a transport corridor and, in the event of a disaster, land the drone at the nearest emergency point. The drone will also be equipped with advanced geo-fencing features, allowing operators to draw a polygon on the map along the delivery route to prevent drones from straying outside the specified area (no-fly zones).

To protect the drone in unavoidable situations, a safety parachute will be installed at the top of the drone to avoid accidents or loss of communication and to allow safe landing without damaging the expensive equipment and the drone itself.
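The geo-fencing feature mentioned above can be illustrated with a minimal sketch: a ray-casting test that checks whether the drone's position is still inside a polygon drawn along the delivery corridor. The corridor coordinates are placeholders, and a production system would rely on the flight controller's own geo-fencing rather than this simplified check.

```python
# Minimal sketch of a geo-fence check: keep the drone inside a polygon drawn
# along the delivery corridor. The corridor coordinates are placeholders.

def inside_polygon(point, polygon):
    """Ray-casting test: True if the (lat, lon) point lies inside the polygon."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from the point cross this polygon edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) / (y2 - y1) * (x2 - x1)
            if x < x_cross:
                inside = not inside
    return inside


CORRIDOR = [(54.68, 25.26), (54.70, 25.26), (54.70, 25.29), (54.68, 25.29)]


def check_position(position):
    """Trigger the emergency-landing routine if the drone leaves the corridor."""
    if not inside_polygon(position, CORRIDOR):
        return "outside corridor - divert to nearest emergency landing point"
    return "inside corridor - continue"


print(check_position((54.69, 25.27)))   # inside the placeholder corridor
print(check_position((54.75, 25.30)))   # outside the placeholder corridor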
Routes and transport corridors. To establish accurate transport corridors in the urban airspace, it will first be necessary to obtain authorisations for the transport of small goods in cities; in the United States, for example, the transport of goods by unmanned aerial vehicles (UAVs) is strictly forbidden without the approval of the Federal Aviation Administration (FAA). Anyone flying a drone is responsible for flying in accordance with FAA guidelines and regulations, which means that, as a drone pilot, one needs to know the rules of the sky and where it is and is not safe to fly. The regulations also provide information on airspace restrictions, especially around airports, so that drones do not pose a danger to people or other aircraft. FAA-Recognized Identification Areas (FRIAs) are defined geographic areas where drones can be flown without remote ID equipment, and the FAA provides a free digital toolkit of outreach materials to federal, state, and other partners to inform drone operators that flying in certain areas is prohibited. Similarly, in order to establish precise transport corridors in the Vilnius airspace, it will first be necessary to obtain permits to transport small goods in the city.

As far as routine UAV operations are concerned, the project will start on a small scale and will be developed further. There will be three initial routes on which the drone takes off from the terminal, flies to the post office, and then returns along the same corridor. Deliveries will be made only when the route is clear and, for now, only during the day; if there is demand, the possibility of delivering small goods at night could be considered. This is technically viable but would require enhanced security systems and aviation approval. The battery endurance of the chosen drone is sufficiently high relative to the size of the Vilnius urban area, so waiting for permission to land or take off should not be a problem. With an initial fleet of three UAVs, the maximum capacity of the system will only be limited by the capacity of the lockers; there are currently two dispatch points, with 32 lockers each.

In the drone delivery system, drones will not travel using the current road route maps. UAVs need a different route and an air corridor that bypasses no-fly zones and tall buildings and reaches an existing post office as quickly as possible. The drones will be provided with continuously updated maps, and these will be incorporated into the airspace. They will fly over buildings, and the shortest regulated air corridor will be created in agreement with the Lithuanian Transport Safety Administration. The drones will be programmed to transport cargo automatically along the established air corridor.

Designating drone terminals involves clear signage and markings to indicate their purpose and areas of operation, making it easy for operators to identify authorised areas while adhering to safety regulations and operational guidelines. Drone terminals are often labelled with signs or symbols such as "Drone Landing Zone" or "UAV Operations Area", clearly identifying the specific locations assigned to drone activities. Yellow markings typically denote the primary landing and take-off areas for drones within the terminal; these areas serve as designated zones for launching and recovering drones safely.
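The statement above that locker capacity, rather than the three-drone fleet, will be the binding constraint can be illustrated with a back-of-the-envelope estimate. The round-trip time, operating hours, and customer collection rate below are assumptions made purely for illustration and are not parameters of the planned service.

```python
# Back-of-the-envelope capacity estimate for the initial setup described above:
# three UAVs and two dispatch points with 32 lockers each. Round-trip time,
# operating hours, and the collection rate are illustrative assumptions.

FLEET_SIZE = 3
LOCKERS = 2 * 32                     # total locker compartments at the dispatch points
ROUND_TRIP_MIN = 20.0                # assumed flight + load/unload time per delivery
OPERATING_HOURS = 8.0                # assumed daylight-only operation
PICKUPS_PER_LOCKER_PER_DAY = 1.0     # assumed customer collection rate

flights_per_drone = OPERATING_HOURS * 60 / ROUND_TRIP_MIN
fleet_capacity = FLEET_SIZE * flights_per_drone        # parcels the fleet can fly per day
locker_capacity = LOCKERS * PICKUPS_PER_LOCKER_PER_DAY # parcels the lockers can turn over

daily_throughput = min(fleet_capacity, locker_capacity)
print(f"Fleet could fly {fleet_capacity:.0f} parcels/day; "
      f"lockers can turn over {locker_capacity:.0f}; "
      f"effective throughput ~{daily_throughput:.0f} parcels/day")
```

Under these assumptions the fleet could fly about 72 parcels a day while the 64 lockers turn over only 64, so the lockers, not the drones, would cap the daily throughput.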
Navigation on the user's side is tracked by a map provided by the delivery system, to be followed by the drone like any other tracking system, except that the route is by air instead of on the ground.In any case, the drone's algorithm detects barriers and obstacles, manages its path, and sets it in such a way as to reach the user, while a trained professional located at the terminal monitors the drone's journey in real time. As the drones will be operating over public airspace, security is taken very seriously, so the only take-off and landing points are at the top of the post office, self-service parcel terminal stations, and loading terminals.The drones will also fly along pre-defined "air corridors" between the parcel stations, which will be chosen to pose the least risk to the people below, including flying over covered walkways.The air corridors are designed so that we know who is in the route area, and the altitude is such that we can adequately separate the drones from known obstacles.Since drones are equipped with a safety system, it will always be able to deviate slightly from a straight path during the journey to avoid obstacles and ensure safety. As mentioned above, based on the results of the literature analysis, the main aspects of UAV freight transport were defined, the problems of the development of small freight flows were identified, the first-and last-mile characteristics were analysed, and a model for the development of small freight flows in urban areas using UAVs was developed on the basis of the results of the expert survey method.It was then submitted to the same experts for evaluation. In order to test the applicability of the model, new questions were prepared for the experts previously interviewed on how to reduce the problems of small goods' transport in urban logistics, how to solve the problem of security, and how to integrate terminals, post offices, and UAVs.The answers of the experts were positive, but some potential glitches in the implementation of the model were identified.The experts identified that the main bottleneck during deployment would be the adoption of a new mode of transport by the people living in these urban areas.As UAV transport is relatively rare, people are not immediately receptive to innovation.The deployment of this model could take from months to years, depending on the public's attitudes and government restrictions.The experts also stressed that the model requires new government regulations and legal aspects. Experts' observations and suggestions for improving the model.As road transport is the most polluting mode of transport, the experts suggested that, instead of using the existing warehouse, a completely new terminal should be built to accommodate different modes of transport.This would further reduce transport costs and slowly achieve the European Union's ambition of combining transport modes in the future. 
In summary, the use of unmanned aerial vehicles (UAVs) with the highest level of intelligence, automation, safety, and reliability would enable this delivery method to overcome the difficult road conditions and traffic congestion common in urban areas. Offering UAVs for the urban delivery of small goods as an innovative logistics solution would lead to the exploration of new routes. In a large and growing market, where more efficient first- and last-mile delivery is important, a new combined system of UAVs and postal machines could be the future solution for the faster and more sustainable delivery of small parcels to the end user.

1. The conducted analysis of the scientific literature showed that predicting the future development of UAVs and the effectiveness of this technology in the transport sector for first- or last-mile deliveries is a difficult task. The future market situation and the development of drones will depend on the improvement of UAVs, the readiness of society to accept this new mode of transport, and their cost-effectiveness in a certain region for a certain function.

2. Society's acceptance of drones and their regulation were identified as the key barriers to the development and integration of UAVs in the transport sector. The accommodation of such cargo flows requires a reliable airspace management system and new legal regulations to support the commercial delivery of cargo using drones.

3. The research conducted through the application of the expert survey method identified key factors related to improving urban sustainability and reducing environmental pollution and social impacts. The main and most commonly used modes of delivery of small goods were courier services and distribution. The security of people and of personal information were identified as the key reasons for the relatively slow development of the transportation of small goods by UAVs. The research results also highlighted the advantages of UAVs in terms of their ability to reduce environmental pollution and their speed.

4. During the expert study, the application possibilities of drones in logistics were evaluated, and a model was created that would meet certain legal regulations, people's needs, and society's preferences, allowing the flow of small-cargo transportation to be increased with the help of drones.

Figure 2. Ways to reduce first- and last-mile problems in cities identified by the experts (compiled by the authors).
Figure 3. Experts' answers distinguishing the existing modes of transport of small goods in the city (compiled by the authors).
Figure 4. Experts' answers on the most suitable cargo for UAV deliveries (compiled by the authors).
Figure 5. Experts' answers on the type of UAVs best suited for small-cargo transport (compiled by the authors).
Figure 6. Experts' answers on the main reasons preventing the transport of small goods by UAVs (compiled by the authors).
Figure 7. Experts' answers on the advantages of UAVs for small-cargo transportation (compiled by the authors).
Figure 8. Model of the development of small-cargo flows using unmanned aerial vehicles in cities (compiled by the authors).
Table 2. Scores of importance awarded to the experts' opinions (compiled by the authors).
Table 3. Ranks of expert opinions and their use (compiled by the authors).
Table 5. Ranks of expert answers and their sum and average (compiled by the authors).
18,191.6
2024-05-01T00:00:00.000
[ "Engineering", "Environmental Science", "Law", "Computer Science" ]
Single-state state machines in model-driven software engineering: an exploratory study Models, as the main artifact in model-driven engineering, have been extensively used in the area of embedded systems for code generation and verification. One of the most popular behavioral modeling techniques is the state machine. Many state machine modeling guidelines recommend that a state machine should have more than one state in order to be meaningful. However, single-state state machines (SSSMs) violating this recommendation have been used in modeling cases reported in the literature. We aim for understanding the phenomenon of using SSSMs in practice as understanding why developers violate the modeling guidelines is the first step towards improvement of modeling tools and practice. To study the phenomenon, we conducted an exploratory study which consists of two complementary studies. The first study investigated the prevalence and role of SSSMs in the domain of embedded systems, as well as the reasons why developers use them and their perceived advantages and disadvantages. We employed the sequential explanatory strategy, including repository mining and interview, to study 1500 state machines from 26 components at ASML, a leading company in manufacturing lithography machines from the semiconductor industry. In the second study, we investigated the evolutionary aspects of SSSMs, exploring when SSSMs are introduced to the systems and how developers modify them by mining the largest state-machine-based component from the company. We observe that 25 out of 26 components contain SSSMs. Our interviews suggest that SSSMs are used to interface with the existing code, to deal with tool limitations, to facilitate maintenance and to ease verification. Our study on the evolutionary aspects of SSSMs reveals that the need for SSSMs to deal with tool limitations grew continuously over the years. Moreover, only a minority of SSSMs have been changed between SSSM and multiple-state state machine (MSSM) during their evolution. The most frequent modifications developers made to SSSMs is inserting events with constraints on the execution of the events. Based on our results, we provide implications for developers and tool builders. Furthermore, we formulate hypotheses about the effectiveness of SSSMs, the impacts of SSSMs on development, maintenance and verification as well as the evolution of SSSMs. Introduction Models play a central role in model-driven engineering (MDE) (Whittle et al. 2014).While models are typically used to facilitate team communication and serve as implementation blueprints, in the area of embedded systems modeling, models have been extensively used for such goals as code generation, simulation, timing analysis and verification (Liebel et al. 2014).One of the most popular modeling techniques used to specify the behavior of software are state machines. Many guidelines have been proposed on how one should model system behavior using state machines (Ambler 2005;Dennis et al. 2009;Prochnow 2008;Schaefer 2006).One of the recommendations commonly repeated both in books (Ambler 2005;de San Pedro and Cortadella 2016;Dennis et al. 
2009) and online resources,1,2 is that a state machine model is only meaningful if it contains more than one state, and if each state represents different behavior. The intuition behind this guideline is that a model should contain non-trivial information, otherwise it merely clutters the presentation of ideas (Ambler 2005). Single-state state machines (SSSMs), affectionately known as "flowers" due to their graphical representation shown in Fig. 1, violate this recommendation, yet they are known to have been used, e.g., as models of decision making in conversational agents (Kronlid 2006) and in the supervisory control of discrete event systems (Chen and Lafortune 1995). From the growing body of software engineering literature we know that software developers do not always follow recommendations or best practices and often have valid reasons not to do so (Businge et al. 2013; Palomba et al. 2018; Tufano et al. 2017).

1 GYAN http://gyan.fragnel.ac.in/~surve/OOAD/SCD/SC Guide.html
2 https://www.stickyminds.com/article/state-transition-diagrams

Fig. 1 A flower model (SSSM). The circle represents the single state, and the arrows going from and to the same state represent the transitions. The incoming arrow indicates the initial state.

We believe that understanding why a widespread recommendation is not followed in practice is the first step towards improvement of modeling tools and practice. In this paper, we extend our previous study on understanding the use of SSSMs in practice (Yang et al. 2020). In our previous study, we conducted an exploratory case study at ASML, the leading manufacturer of lithography machines. We employed the sequential explanatory strategy (Easterbrook et al. 2008). We first mined the archive for 26 components totalling 1500 models to understand the prevalence of SSSMs (RQ1) as well as the role played by SSSMs (RQ2). Then we discussed our quantitative findings with software architects to understand why they opt for SSSMs (RQ3) and what advantages and disadvantages of SSSMs they perceive (RQ4).

We observed that SSSMs make up 25.3% of the models considered. These SSSMs are often used with other models as design patterns to achieve developers' goals. We identified five such design patterns that are repeatedly used in multiple components. The used SSSMs and design patterns provided industrial evidence on how developers deal with an existing code base and tool limitations, which are common problems in MDE adoption (Liebel et al. 2014). Given that ASML has a large portion of its code base developed with traditional software engineering practices, 20.3% of SSSMs are used on the boundary of the "model world" to interface model-based components with existing code-based components. Most SSSMs (64.7%) are used to circumvent the limitations of the modeling tools used by ASML, e.g., the lack of means to specify data-dependent behavior. As a workaround, developers have to implement the intended behavior with hand-written code. Because of that, the majority of the SSSMs for this purpose are also used on the boundary to interface models with hand-written code inside the components. Apart from dealing with the common MDE challenges, around 7.6% of SSSMs are designed to ease long-term maintenance of the models. Our interviews also revealed that SSSMs pass verification easily, which is considered both an advantage and a disadvantage by developers. This implies a trade-off between the effort spent on designing a model that maximizes the advantage of verification and the extra cost caused by downstream problems due to inadequate verification.

Building on our previous study (Yang et al. 2020), we explored how SSSMs evolve in this study (RQ5) with the aim of obtaining a complementary view of how developers use SSSMs in practice. In particular, we investigated, for a representative component, when SSSMs are introduced in the systems (RQ5.1) and how developers modify SSSMs (RQ5.2). We answered these questions by mining the historical data of the largest state-machine-based component in the company and manually inspecting the modifications developers made during the evolution of SSSMs. We observed that the SSSMs introduced to ease maintenance and verification appeared in the early phase of component development and their number did not increase over the years. However, over the years more and more SSSMs were needed to deal with tool limitations. In particular, encapsulating data-dependent behavior implemented with hand-written code is the main reason why developers introduced additional SSSMs in recent years. This observation suggests that practitioners should thoroughly evaluate the strengths and limitations of modeling tools, taking the future development of their applications into account. Furthermore, we observed that less than 6% of models were changed between SSSM and MSSM during their evolution, implying that most SSSMs are stable. The stability of these SSSMs is also reflected in the number of transitions. SSSMs are more likely to become MSSMs than the other way around. The predominance of evolution from SSSMs to MSSMs can be seen as an example of the increasing complexity of a system, suggesting possible applicability of Lehman's laws of software evolution (Lehman 1979) to models operating in a hybrid model/code context, and calling for further research into this topic. By comparing work-in-progress revisions (available on Git) and integration revisions (available on ClearCase), we observed that developers often make a series of modifications to SSSMs in response to the review that occurs before integration. This indicates that the changes of SSSMs might be driven by peer discussion in the review process, suggesting future research on model review practice. When it comes to the modifications developers made to SSSMs, the typical modification we found is adding events with constraints and conditions on the execution order of the events and removing events, as opposed
to modifying the execution order of the existing events.This observation suggests that the tool builders should consider prioritizing and facilitating the addition and removal of events when designing a user interface. Based on our results from these two studies, we formulate some implications for developers who would like to adopt state-machine-based solutions, as well as for tool builders and researchers. The remainder of this paper is organized as follows.Section 2 presents the preliminaries related to this study.In Section 3, we present our study context.In Section 4, we present our first study aimed at understanding the prevalence of SSSMs, the role played by them, why developers use them and the advantage and disadvantage perceived by developers.In Section 5, we present the study of evolution of SSSMs.We discuss threats to validity in Section 6.We then discuss the implications in Section 7. The related work is discussed in Section 8. Finally, the conclusions are presented in Section 9. Preliminaries We introduce the notion of SSSM and the relevant parts of the tool-chain used at ASML. Single-State State Machine Intuitively, in its simplest form a state machine is a collection of states and transitions between them.Some state machine modeling languages, such as UML state machines, have additional mechanisms (e.g., nested states and state variables) that can represent state information.We exclude the nested states and state variables from consideration as the nested states and the values of state variables can be flattened into simple states (Petrenko et al. 2004;Kim et al. 1999). In our study, we consider a state machine as a single-state state machine (SSSM) if the state machine has syntactically only one state.We call any other state machine a multi-state state machine (MSSM).For example an MSSM can have more than one state, nested states or make use of state variables. A State Machine Modeling Tool: ASD Analytical Software Design (ASD) is a commercial state machine modeling tool developed by Verum (2014).It provides users with means of designing and verifying the behavior of state machines, and subsequently generating code from the verified state machines. Model Type and Relation There are two types of components in a system developed with ASD, namely an ASD component and a foreign component.The ASD components depend on each other in a Client-Server manner where a client component uses its server components to perform certain tasks.The ASD components consist of Interface Models (IM) and Design Models (DM) which are specified by means of state machines.The DM implements the internal behavior of a component, specifying how it uses its server components.The relation uses is realized by three types of events: call event, reply event and notification event (Fig. 
2, left). According to the ASD manual, an event is analogous to a method or callback that a component exposes. The declaration of a call event contains the event name, parameters, and the return type. A call event with a "void" return type has the "VoidReply" reply event, while one with a "valued" return type can use all user-defined reply events. For instance, call event task([in]p1:string, [out]p2:int):void is a void-type call event with an input and an output parameter. Notification events with output parameters are used to inform clients in synchronous or asynchronous ways, similar to callback functions in such programming languages as C and Python. The IM specifies the external behavior of a component. It prescribes to the client components of the ASD component in which order the events can be called and what replies they can expect, i.e., the interface protocol. The same IM can be implemented by multiple DMs. In cases such as component reuse, ASD components interact with foreign components, non-model components implemented as hand-written code. To support communication between ASD components and foreign components, the external behavior of a foreign component is represented by an IM. Figure 2 (right) shows an ASD-based alarm module where ASD component Alarm uses ASD component Sensor and a foreign component Siren. In the remainder of the paper, we also refer to foreign components as code-based components.

Verification and Code Generation One of the major benefits of using ASD is the possibility to formally verify the behavior of the models. For each component, the verification performed by ASD can be summarized in two steps. First, ASD verifies whether each DM has correct behavior, in the sense that its behavior is deterministic and does not contain any deadlocks or livelocks. It should also not perform illegal sequences of calls. The role of the IM in this check is just to provide the verification tool with information on which calls are considered illegal. For our alarm module example, ASD checks whether the calls of DM Alarm occur in the order specified in IMs ISensor and ISiren. Second, ASD verifies whether the DM of a component, together with the IMs of the components it uses, correctly refines the IM of the component, i.e., whether the refinement relation holds between the DM and the IM. Verifying this refinement guarantees that the IM can be used as an abstract representation of the DM's behavior in further analysis of the system. For our alarm module example, ASD verifies whether DM Alarm, together with IMs ISensor and ISiren, refines IM IAlarm correctly. Code in the selected target language (e.g., C++) can be automatically generated once the system is free of behavioral errors. Note that the IM and DM have different roles, not only in system modeling but also in verification and code generation. The IM provides an abstract view of the behavior of a component while the DM provides a detailed view. Both IM and DM are used to understand the software, to communicate between engineers, and to verify behavioral correctness. However, only the DM contains the implementation details that are used to generate code.

Study Context To get a deeper understanding of the use of SSSMs in the embedded systems industry, we conducted an exploratory case study that consists of two complementary studies presented in Sections 4 and 5. A case study is an empirical method aimed at investigating contemporary phenomena in a context (Runeson and Höst 2009; Yin 1994).
We follow the recommendation of Runeson and Höst and intentionally select a case of analysis to serve the study purpose (Runeson and Höst 2009).We conduct our exploratory case study at ASML.The company uses the commercial state machine modeling toolchain Analytical Software Design (ASD) developed by Verum (Verum 2014), described in Section 2.2, to develop the control software of their embedded systems, providing a paradigmatic context to our study.The company uses ASD to design and verify the behavior of state machines, and subsequently generate code from the verified state machines. We obtain all components developed with ASD in the system, except for those that are not accessible due to international legislation or contain strategic intellectual property.These 26 components are continuously maintained; code generated based on these models runs on the machines produced by ASML.Each component is formed by multiple interacting IMs and DMs.In total, we obtain 924 IMs and 576 DMs, with the number of IMs per component ranging from 2 to 349, and DMs from 0 to 284.Table 1 gives an overview of the 26 components.For the sake of confidentiality, we refer to these components as A, . . ., Z and cannot share the models.Note that, other than these 26, components developed with traditional software engineering still make a large portion of the software system of the machines.Therefore, these 26 components have to interact with the existing code-based components. 4 Understanding the Use of SSSMs (Yang et al. 2020) In this section, we present our previous study that investigated the prevalence of SSSMs (RQ1), the role SSSMs play (RQ2), the reason why developers use them (RQ3) and the advantages and disadvantages of them perceived by developers (RQ4) (Yang et al. 2020).In Section 5 we extend this study to explore the evolutionary aspects of SSSM.We present our method in Section 4.1 The results for RQ1-4 are presented in Sections 4.2, 4.3 and 4.4.We then discussed threats to validity in Section 6. Methods We employed sequential explanatory strategy which consists of a quantitative phase and a qualitative phase (Easterbrook et al. 2008).Figure 3 gives a high-level overview of our research method.To answer RQ1, we study the prevalence of SSSMs by analysing models of the 26 components.To answer RQ2, i.e., to understand the role played SSSMs we combine two complementary approaches.On the one hand, according to Wittgenstein (Wittgenstein 2009), the meaning is determined by use.Thus we exploit structural dependencies (cf.(Antoniol et al. 1998;Dong et al. 2007)) to identify the implemented by and uses relations between IMs and DMs, i.e., the use of models.On the other hand, we expect the role of the SSSM to be reflected in its name, in the same way the names of objects have been extensively used to uncover the responsibilities of software objects (Garcia et al. 2013;Nurwidyantoro et al. 2019;Kuhn et al. 2007). In the qualitative phase, we conduct a series of interviews to answer RQ3 and RQ4.The interviews were recorded and audio was transcribed.To derive and refine the theory based on the obtained qualitative data, we employ Straussian grounded theory because it allows us to ask under what conditions a phenomenon occurs (Stol et al. 
2016). We opt for an iterative process to reach saturation. It is important to note that in the sequential explanatory strategy the results from the quantitative phase are used to inform the subsequent qualitative phase. This means that the concrete study design for RQ3 and RQ4, e.g., the interview questions, is determined by the results of RQ1 and RQ2. For example, depending on the number of identified SSSMs, we opt for different interview strategies; if the number of SSSMs is small enough, then we can request the experts to explain the reasons behind every SSSM. Otherwise, we need to prompt the discussion based on the findings we obtained from the analysis of structural dependencies and names. We detail the procedures of the qualitative phase in Section 4.4.1.

Prevalence Analysis (RQ1) We answer RQ1 by analysing the frequency of SSSMs in the 26 components in Table 1.

Data Analysis We analyse 1500 ASD models corresponding to components A-Z. We first convert each model into an Ecore model (Steinberg et al. 2008) using a tool developed by ASML. The conversion process is lossless, i.e., the Ecore models can be converted back to the original ASD models. We then use the EMF Model Analysis tool (EMMA) (Mengerink et al. 2017) to measure the number of states #state and the number of state variables #sv. An SSSM is a model with #state = 1 and #sv = 0.

Fig. 3 Overview of our research methods

This tendency for using SSSMs mainly for IMs can also be observed in smaller components. In 13 out of 26 components more than 50% of IMs are modeled as an SSSM. On the contrary, only 26 SSSM-DMs are present, and they occur in 11 out of 26 components. Furthermore, although SSSMs are generally popular among IMs, different components show different degrees of usage; SSSMs make up more than 70% of IMs in components E, I, Q, V and W, but less than 10% in components A, R and T.

Role of SSSMs (RQ2) Since SSSM-IMs are the lion's share of SSSMs, when answering RQ2, RQ3 and RQ4 we focus exclusively on SSSM-IMs. We start with the collection of structural relations between models and of the names of models, followed by an analysis of the results.

Data Analysis To study what roles the SSSM-IMs play, we split IMs into three mutually exclusive locations, namely:
1. disconnected (disc): IMs that are neither implemented nor used by a DM.
2. boundary (bd): IMs that are used by at least one DM but not implemented by any DM, or IMs that are implemented by at least one DM but not used by any DM. They are on the boundary of the "model world", independently of whether code is present on the other side of the boundary.
3. non-boundary (nb): IMs that are implemented by at least one DM and used by at least one DM.

We use EMMA (Mengerink et al. 2017) to extract the structural relations implemented by and uses from the models, and classify IMs based on these three locations. To get complementary insights, we analyse the names of models. We follow commonly used preprocessing steps (cf. (Thomas et al. 2014)) including tokenization based on common naming conventions such as underscores, camelCase and PascalCase (syntok 2014), stemming (Wiese et al.
2011) and removal of stop words and digits using the NLTK package (Tookkit 2014).We also observe that the names often contain abbreviations with the sequence of capitals, e.g., IOStream.Hence, prior to tokenization we manually collect a set of abbreviations from the names, compute how frequently they are used per model and remove them from the names.As a result, for each component we obtain two documentterm matrices with models acting as documents.The matrices describe the frequency of terms (including the abbreviations) that occur in a collection of the names of SSSM-IMs and MSSM-IMs, respectively. We conjecture that the terms appearing in the SSSM-IM set while not in the MSSM-IM set (Exclusive), and the terms that appear in both sets (Shared) with high frequency in the SSSM-IM set might suggest the role of SSSM-IMs.Therefore, for each component we further obtain the sets of Exclusive and Shared terms.To identify the "most important" shared terms we compute the odds ratio of each term, i.e., ratio of the share of SSSM-IMs containing term t and the share of MSSM-IMs containing term t. Results Table 2 is a contingency table showing how many SSSM-IMs and MSSM-IMs fall into each location group.We observe that overall bd-models are more likely to be SSSM, while nb-models are more likely to be MSSM. However, such an overall assessment might obscure differences between the components, in particular since component B is much larger than the remaining components.Hence, per component we apply statistical techniques to determine whether for an IM being an SSSM depends on the location group it belongs to.Since only component B has disconnected models, we exclude disc from the statistical analysis.For each component, we construct a 2 × 2 contingency table recording the number of SSSM-IMs and MSSM-IMs for each location.To analyse the contingency tables we opt for Fisher's exact test (Fisher 1922) rather than a more common χ 2 test: indeed, many components have few IMs and the normal approximation used by χ 2 requires at least five models in each group, i.e., at least 20 IMs per component.The null hypothesis of Fisher's exact test is that the type of IM (SSSM vs. MSSM) is independent of its location (bd vs. nb).Figure 4 shows the p-values obtained: for 9 out of 26 components the p-value is smaller than the customary threshold of 0.05 and the odds ratio (i.e., the ratio of the share of SSSM-IMs from boundary and the share of MSSM-IMs from boundary) is larger than one.This means that we can reject the null hypothesis for these 9 components, i.e., the type of IM depends on whether it is on the boundary of the "model world".We also observe that the components where the null hypothesis can be rejected tend to have more IMs than those where the null hypothesis cannot be rejected. Next, we identify the terms frequently used in names of the IMs.In total, we obtain 472 terms from the names of IMs for components A-Z.Table 1 gives an overview of the number of Exclusive terms, the number of Exclusive terms with more than five occurrences (Exclusive&Frequent), the number of Shared terms, and the number of Shared terms with an odds ratio larger than one (Shared&OR>1), as well as the number of Shared terms with frequencies higher than five and an odds ratio larger than one (Shared&OR> 1&Frequent). 
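As a rough illustration of the Exclusive/Shared split and the odds ratio used above, the following Python sketch computes them from two collections of tokenized model names. The example names and the simplified preprocessing are assumptions for illustration; the actual study also applied stemming, stop-word removal and abbreviation handling as described earlier.

```python
def term_stats(sssm_names, mssm_names):
    """Compute Exclusive terms, Shared terms, and per-term odds ratios from two
    collections of already-tokenized model names (preprocessing simplified here)."""
    sssm_set = {t for name in sssm_names for t in name}
    mssm_set = {t for name in mssm_names for t in name}

    exclusive = sssm_set - mssm_set   # terms appearing only in SSSM-IM names
    shared = sssm_set & mssm_set      # terms appearing in both sets

    odds = {}
    for t in shared:
        # share of SSSM-IMs containing the term vs. share of MSSM-IMs containing it
        p_sssm = sum(t in name for name in sssm_names) / len(sssm_names)
        p_mssm = sum(t in name for name in mssm_names) / len(mssm_names)
        odds[t] = p_sssm / p_mssm
    return exclusive, shared, odds

# Hypothetical tokenized names, for illustration only.
sssm = [["data", "store"], ["foreign", "sensor"], ["data", "access"]]
mssm = [["alarm", "control"], ["sensor", "control"]]
exclusive, shared, odds = term_stats(sssm, mssm)
print(exclusive)   # terms used only in SSSM-IM names
print(odds)        # odds ratio of the shared term "sensor"
```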
We observe that some terms are exclusively used in SSSM-IMs. However, only components D, K, N and S contain exclusive terms with more than five occurrences, as shown in Table 1. The three such terms in component D are "data", "foreign" and "barrier". Components K, N and S have one such term: "access". Based on this observation, we conjecture that developers might think SSSMs particularly suit certain functionality related to "data", "foreign", "barrier" and "access". We do not further investigate the low-frequency Exclusive terms because we expect them to be less likely to disclose the common roles SSSMs play.

Out of the 26 components, 22 have terms shared between SSSM-IMs and MSSM-IMs. 15 components have shared terms with an odds ratio larger than one, i.e., the models containing the term in their names are more likely to be SSSMs. As shown in Table 1, such terms are frequent in nine components. For component B, Fig. 5 shows frequently occurring shared terms with an odds ratio greater than one. We anonymize the domain-specific terms and refer to them as t1,...,t5 for confidentiality reasons. Term "foreign" belongs to group Shared&OR>1&Frequent in component B but to group Exclusive&Frequent in component D. This suggests that the roles reflected by the same term might be implemented differently in different projects. Moreover, it seems that domain-specific terms are very important, as they top the odds-ratio list.

Fig. 5 Frequency and odds ratio of terms that belong to Shared&OR>1&Frequent for component B

In the other eight components that have a non-empty group Shared&OR>1&Frequent, there are in total nine domain-specific terms identified as t6,...,t14 and five non-domain-specific terms: "error", "servic", "seqenc", "measur" and "data". The terms from groups Exclusive&Frequent and Shared&OR>1&Frequent, and the corresponding occurrences in the names of the SSSM-IMs from the 26 components, are summarized in Table 3. These are the terms repeatedly used in the names of SSSM-IMs. We conjecture that the terms in Table 3 encode the reasons why developers use SSSM-IMs and use these terms to prompt discussion in the follow-up interviews.

Procedure Following the sequential explanatory research strategy, we refine the concrete steps for the qualitative phase based on the outcomes of the quantitative phase.

Iterative process We start the process by considering the largest component (component B), as we expect it to produce the richest theory. We conduct semi-structured interviews with architects of the component under consideration, perform open coding of the interview transcripts to derive categories of SSSM-IMs, perform a member check to mitigate the threat of misinterpretation (Buchbinder 2011), and label the SSSM-IMs in all components using the categories derived. If at this stage all SSSM-IMs have been labeled, saturation has been reached and the process terminates. Otherwise, we select a not yet considered component with the largest number of unlabeled SSSM-IMs and iterate. Figure 6 summarizes the process we follow.
Interview design The interview questions stem from the quantitative findings.First of all, reflecting on the findings for RQ2 we ask why do developers use SSSMs more often on the boundary of the "model world" than in other parts?To discuss the goals of using disconnected, boundary and non-boundary SSSM-IMs, we provide a list of SSSM-IMs for each location and ask: what goals do you intend to achieve with an SSSM-IM in disconnected/boundary/non-boundary parts?Next, for each term identified either as Exclu-sive&Frequent or as Shared&OR> 1&Frequent, we provide a list of SSSM-IMs containing the term and ask two questions: what responsibilities does the term imply?and why do you use SSSMs to implement these responsibilities?To obtain as rich information as possible, we send a list of SSSM-IMs to our interviewees before the interviews, allowing them to refamiliarize themselves with the models.We do not disclose the interview questions prior to the interview.To answer RQ4, we ask developers about advantages of using single-state state machines and the disadvantages.We have the interviews in a meeting room with a whiteboard.Interviewees can draw on the whiteboard for explanation.We take photos of the whiteboard after interviews. Coding procedures After initial interviews, we conduct open coding on the interview transcripts, identifying the goals that developers attempt to achieve, the solutions they employ and the location of the used SSSM-IMs (boundary/non-boundary/disconnected).For example, when we ask questions about term "foreign", we obtain the following answer: "We want to create formal models that is why we use ASD.The problem here is the outside world is not formal.So it can behave as expected or unexpected, we don't know ... If people follow the rules, all boundaries need to be armored.The important aspect is that the calls from foreign side must be accepted by every state.As foreign IM, you cannot restrict anything because you don't know the behavior of foreign (components)". Based on this answer we identify the developers' goal as protecting formal models from informal and unknown foreign behavior, the solution they employ should not restrict the order of events from foreign side, and the location of the SSSM-IM is boundary.The solution is augmented by details with photos that we took from the whiteboard.We refer to the detailed solution as design pattern.Each design pattern can be 1) an SSSM-IM, 2) a combination of an SSSM-IM and the DM(s) that implement it, or 3) a set of SSSM-IMs and other models.The open coding process results in a set of categories that consist of goals, locations and design patterns.For instance, category armoring the boundaries of models emerges from the previous example.Next, we perform axial coding to group these categories based on the core reason behind, i.e., why developers would like to achieve the goal?For instance, the core reason behind category armoring the boundaries of models is that models have to work with the existing code base.In addition, we also identify the advantages and disadvantages from our interviewees' answers. Member check The first author conducts the coding tasks.In order to ensure that the categories are correctly identified, we perform member check (Buchbinder 2011) with our interviewees.The member check is a validation activity that requests informant feedback to improve the accuracy of the derived the theory.This resulting adjustment on categories is represented by the dashed line in Fig. 6. 
Label SSSM-IMs The first author reviews each SSSM-IM and labels it based on the derived categories. For instance, we can determine whether a model is an instance of category armoring the boundaries of models by checking whether it is on the boundary and implements the design pattern we identified for this category.

Reasons of Using SSSM-IMs (RQ3) We reach saturation with three face-to-face interviews and two interviews conducted through email. Table 4 provides an overview of our results. We identify four core reasons why developers use SSSM-IMs: 1) using models together with the existing code base, 2) dealing with tool limitations, 3) facilitating maintenance and 4) easing verification. For each core reason, developers have at least one goal to achieve with SSSM-IMs. 353 out of 354 SSSM-IMs can be explained by the core reasons and goals listed in Table 4. Before discussing Table 4, we briefly review the model that cannot be explained by it. It is a disconnected SSSM-IM that should have been removed once it was no longer used ("dead code"). We refer to the design patterns that involve a set of models as D1,...,D5, as shown in Fig. 7. For the sake of generalizability, we do not explain the design pattern that is used to achieve goal EaseRefactoring because it is specific to the semantics of the modeling language provided by the ASD suite. In the remainder of this section we discuss the reasons, goals and design patterns shown in Table 4.

Using Models together with Existing Code Base As mentioned, a large portion of the software base was developed with traditional software engineering methods. Hence, the model-based components need to interact with the existing code-based components. The behavior of the models is formally verified, and the models can only interact with each other according to the protocol specified in the IMs. By nature, when communicating with foreign components, model-based components operate under the assumption that foreign components behave as specified. However, due to the lack of a formal specification, the behavior of code-based components is not formally verified and often unknown. This means that developers need a mechanism to "protect" models from the non-verified and unexpected behavior of code-based components.

To achieve this goal, developers came up with design pattern D1 shown in Fig. 7. The core idea of this pattern is to create a layer which at first accepts any order of calls from the code side and then forwards only the allowed order of calls to the model side. By implementing this idea, neither the code-based components nor the model-based components are aware of each other's presence.

Next we discuss how the elements in the pattern work together. Developers would like to protect Core, which is a group of models, from the non-verified behavior of the code-based components Foreign Client and Foreign Server. IMs IForeign are SSSM models which allow any order of input events, while DMs Armor forward only the allowed calls specified in IMs IProtocol, which describe the order of events expected by Core. In order to trace the unexpected behavior of Foreign Client and Foreign Server, DMs Armor also record protocol deviations with Logger, so that it is easier to distinguish failures caused by protocol violations from failures caused by functional errors.
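To make the armoring idea concrete, the following is a minimal Python sketch of the D1 layer described above. The class and event names (ArmorLayer, the allowed-protocol mapping, the logger) are illustrative stand-ins rather than the actual ASD models, and the order check is simplified to a table of permitted next events.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("armor")  # plays the role of Logger in D1

class ArmorLayer:
    """Accepts any call from the foreign (code) side, but forwards to the
    core models only the calls that the protocol (the IProtocol IM) allows."""

    def __init__(self, core, protocol):
        # protocol: mapping from the current protocol state to the calls allowed next,
        # a simplified stand-in for the order of events specified in IProtocol.
        self.core = core
        self.protocol = protocol
        self.state = "idle"

    def call(self, event, *args):
        allowed = self.protocol.get(self.state, {})
        if event not in allowed:
            # Protocol deviation: record it instead of passing it to the core,
            # so protocol violations can be told apart from functional errors.
            log.warning("protocol violation: %r not allowed in state %r", event, self.state)
            return None
        self.state = allowed[event]              # advance the protocol state
        return getattr(self.core, event)(*args)  # forward the allowed call

class Core:
    """Stand-in for the verified core models."""
    def arm(self):    return "armed"
    def disarm(self): return "disarmed"

# Usage: only arm -> disarm -> arm ... is forwarded; anything else is only logged.
armor = ArmorLayer(Core(), protocol={"idle": {"arm": "armed"}, "armed": {"disarm": "idle"}})
print(armor.call("arm"))   # forwarded
print(armor.call("arm"))   # violation: logged, not forwarded
```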
Dealing with tool limitations ASD suite has several limitations preventing developers from specifying the intended behavior of models.As workarounds, developers have to manually implement the behavior with general-purpose programming languages.This also results in the use of code between models inside a model-based component and raises the need of interfacing with the code. DataEncapsulation One of the limitations of ASD suite, is the lack of a way to specify data-dependent behavior: one can declare parameters for the events in models to pass data transparently from one model to the other but the control decision cannot be made based on a parameter value. 3The pass-by data eventually ends up in code where the data-dependent behavior can be programmed.To work around this limitation, developers store and manage data in hand-written code known as data stores inside the model-based components.The developers' goal is to have a mechanism allowing the models to read and write each piece of data.Design pattern D2 in Fig. 7 is used to achieve the goal. In the system under study, each piece of data in a data store is associated with an ID.For the sake of example, assume that a control decision has to be made based on the comparison of two data values associated with ID d1 persistently stored in DataStore1 and DataStore2 respectively.Because models can only pass data transparently, there is a need to implement hand-written code known as Algorithm which offers call events triggering the comparison task, and returns reply events that inform about the result.To obtain the control decision based on the comparison, DM DataFunction is used to fetches the data corresponding to d1 from DataStore1 and DataStore2.Then it passes the fetched data to Algorithm to obtain the result. Based on the received reply, DataFunction synchronously returns a reply to the client models that ask for a decision.For complex applications, DataFunction needs to intensively interact with data stores and Algorithm in order to derive results.To reduce the coupling between data-aware code and data-independent models, IM im4 is an SSSM which only specifies the call events and the possible replies so that the underlying data-related interactions between code and DataFunction are hidden from the models that only expect a decision.Similar to IM im4, IM im3 only specifies the signatures of independent functions implemented with code. When it comes to data access, a write operation for data associated with a specific ID is required to be performed before a read operation for the corresponding data.Naturally, developers would like to specify the required order in IMs im1 and im2 so that the interaction protocol between DataFunction and these IMs is explicitly defined, and subsequently verified before code generation.However, since data-dependent behavior is not supported by ASD, im1 and im2 are SSSMs which only specify the signatures of call events and replies for the intended data operations.The interaction protocol, in this case, is implicitly encoded in code for these data stores, requiring test efforts to examine correctness. 
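The data-encapsulation idea in D2 can be sketched as follows. This is an illustrative Python analogue, not the generated ASD code; the store, algorithm and ID names are hypothetical. It shows how a DataFunction-like layer hides the data-dependent decision (made in hand-written code) behind a simple call that returns only a reply.

```python
class DataStore:
    """Hand-written code holding data addressed by IDs (a stand-in for DataStore1/2)."""
    def __init__(self, values):
        self._values = values
    def read(self, data_id):
        return self._values[data_id]

def compare(a, b):
    """Stand-in for the hand-written Algorithm: the actual data-dependent logic."""
    return "greater" if a > b else "not_greater"

class DataFunction:
    """Fetches the data behind an ID from both stores, delegates the comparison to
    the algorithm, and returns only a reply, so client models never see the data."""
    def __init__(self, store1, store2):
        self.store1 = store1
        self.store2 = store2
    def decide(self, data_id):
        a = self.store1.read(data_id)
        b = self.store2.read(data_id)
        return compare(a, b)   # client models only receive the reply event

# Usage: the client asks for a decision about ID "d1" and gets a reply,
# without knowing how or where the underlying values are stored.
df = DataFunction(DataStore({"d1": 7}), DataStore({"d1": 3}))
print(df.decide("d1"))  # -> "greater"
```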
EventCollector Another tool limitation that influences how developers design software is that client models cannot select a subset of notification events to receive from their server models.This means that the client models have to receive all notification events from their server models even though some of notification events are out of their interest.To model a case where multiple client models are interested in different subsets of notification events from the same server model, design pattern D3 in Fig. 7 is used.Instead of interfacing with the server model directly, clients interface with a hand-written EventCollector which works as a router forwarding each notification event to the corresponding client according to the events that developers specify with SSSM-IMs e1,e2 and e3.Because each DM can only implement one IM developers have to inject the hand-written router between models. LibraryReuse ASD suite provides reusable libraries, such as a timer, implemented by models that can be used across different applications.However, the available libraries are limited compared to their counterparts available for general-purpose programming languages.For instance, one of missing libraries is timestamp library.As a workaround, developers use hand-written code to wrap the timestamp-related operations (e.g., converting timestamp format) into functions with output parameters (e.g., for obtaining converted timestamp).The SSSM-IMs specify the signatures of the hand-written functions so that the generated code from the models can seamlessly reuse these libraries. GlobalLiteralValue Since ASD suite does not provide means of specifying global constants as most programming languages have, developers have to use the actual literal values wherever they need them.For example, assume that we would like to use a global constant Size to store the value of the buffer size set to 100.To avoid the errors that could be introduced by hard-coding this value, developers implement SSSM-IMs and SSSM-DMs to store the value which can be obtained by calling corresponding events.Developers specify an SSSM-IM that offers call event getBufferSize ([out]p:int):void.In the corresponding SSSM-DM, the call is augmented with the corresponding output integer,i.e., getBufferSize(100).In this case, by calling event getBufferSize(n), other models that need the value can obtain variable n that holds integer 100. Facilitating Maintenance In four cases, SSSM-IMs are used to facilitate maintenance. CallMapping Client models often need to call a sequence of events on different server models.To reduce the coupling between the client model and its server models, developers implement a mapper which consists of an SSSM-IM and an SSSM-DM between the client and its servers (see D4 in Fig. 7).The SSSM-IM only specifies the signature of a void call event that can be triggered by the client model.The mapping of the call event triggered by the client model to a sequence of intended call events on other server models is specified in the corresponding SSSM-DM. FeatureSelection As the system under study is specified using principle from software product line engineering, developers separate features shared by all products from productspecific features to be configured at runtime (Capilla et al. 2014).D5 in Fig. 
7 shows a design pattern supporting this separation.For the sake of an example, assume a system needs to construct different sequences of actions for the same task based on the runtime configuration of the product type.For each product, the sequence construction is triggered by the same call event Construct.To hide the product-specific details from the common models, IFeatureFwd specifies the signature of Construct which is implemented by Fea-tureVar1 and FeatureVar2.Common, as the common feature shared by all products, needs to call Construct to trigger the sequence construction on the correct variant based on the runtime configuration.However, involving Common in this feature selection breaks the separation of concerns, i.e., Common has to be aware of that different products exist.To avoid this, FeatureSwitch is implemented.At runtime FeatureSwitch reads the product type from a data store and forwards Construct to the appropriate product-specific implementation (i.e., FeatureVar1 or FeatureVar2). Since IFeature has to hide the feature selection and product-specific details from Common, it is identical to IFeatureFwd acting as an interface offering Construct.When Common calls Construct, the feature selection is performed, followed by the sequence construction based on the selection.Common is, hence, not aware of any product-specific information.Developers expect that by using this pattern the coupling between common parts and product-specific parts can be reduced and the variants can be extended without modifying the common parts. EaseRefactoring Developers also consider the ease of refactoring.Assume a model repeatedly triggers a task implemented by a sequence of e1,..., e8.Hard-coding this sequence at several invocation sites is error-prone.Moreover, any change to the sequence such as renaming an event, has to be performed at all invocation sites.Hence, developers use a solution akin to procedure abstraction to specify a sequence of events only once and reuse it wherever needed.Since the concrete solution is specific to the semantics of ASD, we do not disclose further details.Documentation IMs are sometimes used to document the signatures of functions.In such cases, developers use disconnected SSSMs to communicate the design. Easing Verification The efficiency of verification is another concern in modeling.Prior to the verification step typically carried out by a model checker, the tool-chains need to convert state machine specifications into a model checker formalism which represents the state space of the models.Behavioral correctness of models with a large state space takes a lot of time to verify.Hence, the verification step slows down the design and maintenance of the models.In our case study, we found a situation where an SSSM-IM is used to avoid verification on a large state space. 
The intention of the developers was to create an interface such that the number of triggers on event a should be larger than the number of triggers on event b.The corresponding state space contains all possible combinations such that a is triggered exactly one more time than b, two more times, etc.During the verification step, the model checker has to visit every single state in the state space.To ease the verification step, developers simplify the model to an SSSM with events a and b, dropping the requirement that the number of triggers on event a should be larger than the number of triggers on event b: "Scalability is a good reason to not verify this explicitly, as it does not matter if the max difference between #a -#b is 1, 2, 9 or 100.Abstracting from the exact difference makes the verification scalable, at the cost of less guaranteed correctness. (Dis)advantages of SSSM-IMs (RQ4) When it comes to the advantages and disadvantages of using SSSM-IMs, the interviewees share the same opinion.The main perceived advantage of SSSMs is the ease of verification: "The main advantage is that a flower model is stateless, it imposes no restrictions so verification passes easily and perhaps more importantly: it is easier to implement a Foreign component faithfully".Moreover, since SSSM-IMs impose no restrictions on the order of events, changes to the calling order on the client side also easily pass the verification, reducing the maintenance effort.However, the ease of verification also means that the model "will likely always pass verification" hiding potential bugs and compromising potential verification benefits.Taking both the advantage and the disadvantage of SSSM-IMs into account interviewees recommend caution when using SSSM-IMs: "people (developers) need to have a very good reason for it because it does not check anything".Furthermore, according to the observations of the interviewed architects, it usually takes a lot of time for developers to learn how to design models in a way that development, maintenance and verification can be facilitated. Evolution of SSSMs As discussed in Section 4, SSSMs are widely used in different components for various reasons, although the widespread modeling guidelines suggest not to use them.Our discussion with developers implies that SSSMs can pass verification easily, which may ease the development but also potentially hide defects.However, it is unknown yet when SSSMs are introduced in the components, and whether and how they have been modified by the developers.Understanding the life-cycle of SSSMs and the actions taken by developers to modify them can help us better understand the phenomenon of how developers use SSSMs in practice, and provide suggestions to researchers and tool builders.Therefore, to obtain a complementary view of under what circumstances SSSMs are being used, we analyze the evolution of SSSMs in the change histories of software components.Specifically, we posed the following questions: RQ 5.1 When were SSSMs introduced?This question aims at understanding when the need for SSSMs occurs.Specifically, we study whether SSSMs were introduced as soon as the development starts or whether they surged into systems due to certain maintenance needs.To this aim, we investigated the trends in the history of the SSSMs. RQ 5.2 How do developers modify SSSMs? 
With this question, we aim to understand whether and how developers modify SSSMs. We conjecture that there might be several evolutionary scenarios; developers might only add or remove the transitions of SSSMs, add or remove states, or combinations of these. In particular, we study the following questions:
- 5.2a Do SSSMs become MSSMs, and vice versa?
- 5.2b Do developers modify transitions of models that stay SSSM throughout their entire history?
- 5.2c What modifications are involved when SSSMs are modified to become MSSMs and when MSSMs are modified to become SSSMs?

We answer these questions by mining model repositories and manually categorizing the changes that developers made to SSSMs.

Study Subject To study the evolution of SSSMs, we examined the availability of the historical data for the 26 components from Table 1. After an investigation, we selected component B as our study subject. We made this decision because the other components have little historical data available. The lack of historical data is attributed to the way developers version their models and to the infrequent modifications requested by customers. Next, we elaborate on these two reasons.

Currently, the company uses two ways of working, Git-based and Break-Out-Archive (BOA)-based, illustrated in Fig. 8. Both ways of working combine two types of version control systems: Git and IBM Rational ClearCase. A component developed with the Git-based way of working has a dedicated Git repository that tracks revisions made by developers. When a certain feature of the component is finished and verified using ASD, developers submit the snapshot to the ClearCase repository of the component. This snapshot is then integrated with the rest of the system. In contrast, a component developed with the BOA-based way of working does not have a dedicated Git repository. When one or more such components need to be modified, developers create a new Git repository and import a snapshot of all the relevant components. Once the modification is finished, developers submit the snapshot to the ClearCase repository and abandon the Git repository. Of the 26 components listed in Table 1, three components (B, C, and D) are developed with the Git-based way of working; the other components are developed with the BOA-based way of working. Only very few revisions (fewer than five) are available on ClearCase for these BOA-based components. We confirmed our observation with the developers who are responsible for these components. Indeed, some components do not evolve, as stated in one of the replies: "we basically only have a single version created when the model was first introduced." We therefore further investigated the components that were developed with the Git-based method (i.e., components B, C, and D) by collecting the revisions from the master branch of their Git repositories and from the integration stream of their ClearCase repositories.

Table 5 shows the number of model revisions and the average number of revisions (per model) available from ClearCase, as well as from the Git repositories. Considering the average number of revisions, models from components C and D have only a few revisions available per model in their ClearCase and Git repositories. We confirmed this information with the developers responsible for these components: "Component D is running at the customer for quite some time. No issues so far, so that's why it doesn't have many versions".
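As a rough illustration of how per-model revision counts such as those summarized in Table 5 could be collected from a Git repository, the sketch below counts the commits touching each model file. The repository path and the .im/.dm file extensions are assumptions for the sake of the example; the paper does not state how its mining was implemented.

```python
import subprocess
from collections import Counter
from pathlib import Path

def revisions_per_model(repo_path, extensions=(".im", ".dm")):
    """Count, for every model file, how many commits on the current branch touched it.
    The file extensions for interface/design models are assumed, not taken from the paper."""
    counts = Counter()
    for path in Path(repo_path).rglob("*"):
        if path.suffix not in extensions:
            continue
        rel = path.relative_to(repo_path)
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--follow", "--oneline", "--", str(rel)],
            capture_output=True, text=True, check=True,
        )
        counts[str(rel)] = len(out.stdout.splitlines())
    return counts

# Usage (hypothetical repository path):
# counts = revisions_per_model("/path/to/component-B")
# print(sum(counts.values()) / len(counts))  # average number of revisions per model
```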
It can be seen that component B has the largest (average) number of revisions available because it has the longest maintenance history and is the first ASD-based component in the company. Our previous study (Section 4) has shown that studying component B as the first step is an efficient way of deriving a theory that can be applied to other components. Based on our observations, we decided to conduct an exploratory study with component B.

Fig. 8 Git-based and BOA-based ways of working

Data Collection and Analysis To answer RQ 5.1, we collected the snapshots from the Git repository of component B. The chronological order of commits from the master branch is not necessarily the order of actual commits because the history of a Git repository is represented by a graph of commits rather than a linear chain of commits (Bird et al. 2009). However, in this study we limited our scope to the master branch due to the differences between the master branch and the other branches. First, the master branch versions the models that are ready to be reviewed by other developers or to be submitted to the ClearCase repository, while other branches version the development of machine-specific features and different releases, or the fix of certain bugs. According to the developers responsible for the components, these branches can be deleted or merged when a certain development task is finished. Second, the models submitted to the development branches may not be complete or executable (e.g., exhibiting syntactic errors). Third, developers have different habits of committing to their own development branches (e.g., some developers commit at the end of the working day while others commit when a certain task is finished). These differences require different interpretations of the mined results. As an exploratory study on the evolution of SSSMs, we investigated the master branch, leaving the evolutionary differences present in other branches out of our scope.

We collected the snapshots of the Git repository of component B in the order they appeared in the master branch. We applied the method discussed in Section 4.3.1 to identify SSSMs. For each snapshot, we measured the number of MSSM-IMs, MSSM-DMs, SSSM-IMs and SSSM-DMs, as well as the number of SSSM-IMs that are used for achieving the goals we discussed in Table 4. By analyzing the growth of the number of these models over the years, we aim to understand whether the trends differ between SSSMs and MSSMs, and between SSSMs used by developers for achieving different goals. To answer RQ 5.2, for each model from the ClearCase repository of component B, we collected all the revisions in chronological order. It should be noted that the history on ClearCase is a subset of the history on Git. That is, some of the commits on Git eventually appear on ClearCase for integration. The developers of component B tag the Git commits that are submitted to ClearCase. Analyzing data from both Git and ClearCase helps us understand how SSSMs evolve during development and integration. Based on the method described in Section 4.3.1, we classified each revision as SSSM or MSSM. To understand whether developers modify transitions of SSSMs, we measured the number of transitions for each model revision. Next, we identified the revisions which are classified differently from their previous revision in the master branch.
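A minimal sketch of this classification and life-cycle extraction step is given below. It assumes each revision has already been reduced to its state count and state-variable count (as measured with EMMA); the concrete counts in the usage example are invented for illustration and only mirror the artificial seven-revision example discussed next for Fig. 9.

```python
def is_sssm(revision):
    """A revision is an SSSM if it has exactly one state and no state variables."""
    return revision["states"] == 1 and revision["state_vars"] == 0

def life_cycle(revisions):
    """Return the sequence of SSSM/MSSM phases a model goes through, i.e. record
    only the revisions whose classification differs from the previous revision."""
    phases = []
    for rev in revisions:                      # revisions in master-branch order
        label = "SSSM" if is_sssm(rev) else "MSSM"
        if not phases or phases[-1] != label:  # an SSSM-MSSM-change (or the first revision)
            phases.append(label)
    return phases

# Hypothetical revision data mirroring the r1-r7 example of Fig. 9 (state counts invented).
revs = [
    {"states": 1, "state_vars": 0},  # r1 SSSM
    {"states": 1, "state_vars": 0},  # r2 SSSM
    {"states": 3, "state_vars": 0},  # r3 MSSM
    {"states": 3, "state_vars": 0},  # r4 MSSM
    {"states": 2, "state_vars": 1},  # r5 MSSM
    {"states": 1, "state_vars": 0},  # r6 SSSM
    {"states": 1, "state_vars": 0},  # r7 SSSM
]
print(life_cycle(revs))  # -> ['SSSM', 'MSSM', 'SSSM']
```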
Figure 9 illustrates the classification with an artificial example. The model has seven revisions r1-r7 on the master branch. Revision r1 is an SSSM and the first revision of the model in the repository. The modification by developers results in revision r2, which is also an SSSM. Similarly, revisions r4-r5 and r7 belong to the same class as their previous revisions. Revisions r3 and r6, however, fall into a different class compared to their previous revisions. We define the life-cycle of a model as the series of revisions that introduce the model to the system, transform the model from an SSSM into an MSSM, or transform the model from an MSSM into an SSSM. In the remainder of this paper, we refer to such transformations as SSSM-MSSM-changes. By identifying these revisions, we extracted the life-cycle of the models in the component. The life-cycle of the example shown in Fig. 9 is SSSM → MSSM → SSSM.

Next, we categorized SSSM-MSSM-changes following an open-coding process based on the Git commit message associated with each SSSM-MSSM-change and the differences between the before-change model revision (i.e., r2 and r5 in Fig. 9) and the after-change model revision (i.e., r3 and r6 in Fig. 9). This open-coding task was conducted by the first author, who has the necessary knowledge of ASD. Since most SSSM-MSSM-changes were made before 2015, and many of the developers who made the changes have since moved to other development groups or left the company, we found it not feasible to conduct member checking.

When were SSSMs Introduced? (RQ 5.1)

Figure 10 shows the number of MSSM-IMs, MSSM-DMs, SSSM-IMs, and SSSM-DMs present in the Git repository over time. The figure shows an initial surge in 2013 because the first two Git commits are two large squashes of commits from an SVN repository, which was used for the initial development of component B and was removed after importing the latest snapshot into the Git repository.

Overall, the total number of models in this component has grown over the years after the deployment of the component in the machines. As we learned from its developers, component B is the central controller of the machines, coordinating different machine actions. Therefore, the component is likely to be extended or modified when a new feature is added to the machines. Developers started using SSSMs before the first deployment of the component and continuously introduced more SSSM-IMs over the years. The growth of all these types of models slowed down noticeably after 2016, which indicates that the component has gradually matured. In contrast, SSSM-DMs were introduced before the first deployment and their usage remains stable throughout the history. With Fig. 11, we zoom in on the trend for the SSSM-IMs that are used by developers for the core reasons presented in Table 4. After the initial development of the component, eight SSSM-IMs used for easing maintenance and verification were introduced in June 2013, and the number of SSSM-IMs for this purpose did not grow significantly afterward. A closer look at the commit that contributes to the significant increase in June 2013 reveals that developers introduced the SSSMs when developing a machine-specific feature. These SSSMs abstract machine-specific details away from the client models (see pattern Feature-Selection in Fig.
7). Differently, the number of SSSM-IMs that are used to work with the existing code base mainly increased between 2015 and 2017. This implies that the need for interfacing component B with foreign components increased during that period. The number of SSSM-IMs that serve as a workaround for tool limitations grew continuously over the years. By further zooming in on the trends for the SSSMs that deal with different tool limitations, as shown in Fig. 12, we found that the demand for SSSMs for different tool limitations varies over time. The implementation of the patterns EventCollector and DataEncapsulation is the main driver behind the growth. The need for the SSSMs from pattern EventCollector grew strikingly in 2016 and became relatively stable afterward. By inspecting the related commits, we found that the rapid growth was caused by the implementation of a system design that requires component B to subscribe to a set of events, receive the events at runtime, and perform the corresponding actions based on the received events. The introduced SSSMs forward the events to the target parts of component B that are responsible for the corresponding actions (Table 4).

Fig. 11 Growth of the number of SSSM-IMs that are used for different reasons.

Due to another tool limitation, developers cannot specify data-dependent behavior. The SSSMs in pattern DataEncapsulation are used to encapsulate data-dependent behavior implemented in the foreign code (Table 4). The need for data encapsulation with SSSMs appeared from the early phase and grew continuously as developers extended the functionality of the component. In particular, it became the main reason for introducing more SSSMs into the component in recent years.

How Do Developers Modify SSSMs? (RQ 5.2)

RQ 5.2a: Do SSSMs become MSSMs, and vice versa? Table 6 shows how many IMs and DMs are always SSSM, always MSSM, or with SSSM-MSSM-changes in the Git and ClearCase repositories. Note that the total number of models in the table is 630 rather than the 633 we reported in the previous study (Section 4), because three models were removed from the repositories since our previous data collection activities. A glance at this table shows that the models from the Git and ClearCase repositories evolve differently. We validate this observation by applying the χ2 test to the contingency table (Table 6). The null hypothesis is that the evolution of models (i.e., always SSSM, always MSSM, or with SSSM-MSSM-changes) is the same regardless of the source of the models (ClearCase vs. Git). The computed p-value is 0.002892, which is smaller than the customary threshold of 0.05. Therefore, we can reject the null hypothesis and conclude that the models from the Git and ClearCase repositories evolve differently. The difference can be attributed to the fact that developers use these two VCSs differently, as we explained in Section 5.1. As can be seen from Table 6, the models from the ClearCase repository are more likely to stay always SSSM or always MSSM, while the models from the Git repository are more likely to change between SSSM and MSSM. This is explained by the fact that the Git repository stores the work-in-progress revisions and therefore discloses more modifications. Moreover, IMs and DMs have also evolved differently, as shown in Tables 7 and 8.
To validate this observation, we applied the χ2 test to contingency tables that show how many IMs and DMs are always SSSM, always MSSM, or with SSSM-MSSM-changes in the Git and ClearCase repositories (Tables 7 and 8). The null hypothesis of the test is that the life-cycle of a model (i.e., always SSSM, always MSSM, or with SSSM-MSSM-changes) is independent of the type of model (IM vs. DM). The test result shows that the relation between the life-cycle of models and the type of models is significant: the computed p-values obtained for the models from both repositories, as well as the p-values adjusted with a Bonferroni correction, are all smaller than 0.00001. Since the adjusted p-values are smaller than the customary threshold of 0.05, we can reject the null hypothesis and conclude that IMs and DMs have evolved differently. Most DMs stay MSSM throughout their history; however, a DM that is not always an MSSM is more likely to be modified with SSSM-MSSM-changes.

The common message conveyed by the data from both repositories is that for most of the models developers did not make SSSM-MSSM-changes during their maintenance activities. In the ClearCase repository, 506 out of the 630 models (i.e., 80.3%) were MSSM when they were created and have not been changed into SSSMs during their evolution. One hundred and twelve models (i.e., 17.7%) were SSSM when they were created and remain SSSM throughout their evolution history, leaving 12 models (i.e., 2%) that evolved with SSSM-MSSM-changes. Similarly, only 35 models (i.e., 5.6%) from the Git repository have been modified with SSSM-MSSM-changes.

Figure 13 shows the life-cycles of the models modified with SSSM-MSSM-changes. The most frequent life-cycle followed by the models is SSSM → MSSM. Moreover, models can switch between SSSM and MSSM multiple times during their evolution. For example, as shown in Fig. 13, there is an IM modified with four SSSM-MSSM-changes. A closer look at the corresponding revisions and commit messages reveals that the last three SSSM-MSSM-changes were made by the same developer within 10 days to redo a bug fix. In these commits, the developer first reverted the model to the before-fix revision and then further modified the model to fix the bug.
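The independence tests reported in this subsection can be reproduced with standard statistics tooling. The sketch below uses scipy; the contingency table is reconstructed from the counts quoted in the text (112/506/12 for ClearCase, 108 and 35 for Git, with the Git "always MSSM" cell derived as 630 − 108 − 35), so it should be read as an illustration rather than as the exact Table 6.

```python
from scipy.stats import chi2_contingency

# Rows: always SSSM, always MSSM, with SSSM-MSSM-changes.
# Columns: ClearCase, Git.  Counts reconstructed from the text (see caveat above).
table = [[112, 108],
         [506, 487],
         [ 12,  35]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.6f}")  # on the order of 0.003,
                                                       # consistent with the value reported above

# For the IM-vs-DM comparison (Tables 7 and 8) the same call is made once per
# repository; a Bonferroni correction then multiplies each p-value by the
# number of tests before comparing against the 0.05 threshold.
def bonferroni(p_values):
    n = len(p_values)
    return [min(1.0, pv * n) for pv in p_values]
```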
In particular, we observed that the SSSM-MSSM-changes for 23 models are not present in the ClearCase repository. For example, in the Git repository there are two models transformed from MSSM to SSSM and later back to MSSM (i.e., MSSM → SSSM → MSSM), which is not visible in the ClearCase repository. This is because consecutive SSSM-MSSM-changes committed to the Git repository might not be visible in the ClearCase repository, as only the to-be-integrated revisions are committed to ClearCase. Figure 14 shows the 35 models that have been modified with SSSM-MSSM-changes in the Git repository. We can observe that SSSM-MSSM-changes are often made one after another within a short period of time; 17 models have been modified with consecutive SSSM-MSSM-changes within one month. For example, model m2.im was transformed from an SSSM into an MSSM 19 minutes after its creation. Such quick changes were made before introducing the model to the ClearCase repository. Therefore, the model was an MSSM when it first appeared in the ClearCase repository. Since this model was not modified with SSSM-MSSM-changes after the first integration, it appears to be always MSSM in the ClearCase repository. In total, 27 SSSM-MSSM-changes made to 23 models (i.e., m1-23) are only visible in the Git repository, while 20 SSSM-MSSM-changes made to 12 models (i.e., m24-35) are visible in both repositories. The SSSM-MSSM-changes that are only visible in the Git repository reflect the intermediate decisions or corrections that developers made before integrating their changes into the system.

Additionally, the models that were modified with SSSM-MSSM-changes often start as an SSSM and later undergo revisions that transform them into an MSSM. This observation is particularly reflected by the consecutive SSSM-MSSM-changes that developers committed to the Git repository after the creation of SSSMs. As can be seen from Fig. 13, four out of six life-cycles start with SSSM. In total, 31 out of 35 models follow these four life-cycles, transforming models from an SSSM into an MSSM and possibly going back and forth multiple times between SSSM and MSSM throughout their evolution. This observation implies that behavioral restrictions are not necessarily specified when the model is created. Instead, developers may create an SSSM as the initial implementation and refine the behavior of the model with more states later.

RQ 5.2b: Do developers modify transitions of models that stay SSSM throughout their entire history? 112 models from the ClearCase repository and 108 from the Git repository remain SSSMs throughout their history. For these models, we observed stability not only in terms of the number of states, but also in the number of transitions. For 74 out of the 112 models obtained from the ClearCase repository and 64 out of the 108 models from the Git repository, developers have not changed the number of transitions after creating them.

RQ 5.2c: What modifications are involved when SSSMs are modified to become MSSMs and when MSSMs are modified to become SSSMs? Next, we discuss what actions developers take to modify the models. Table 9 reports the result of our open-coding task. Action Event insertion with constraint is the most frequent action developers take, followed by Event removal and Event insertion with conditions. (As a consequence of removing events, the constraints on the execution order of those events, or the conditions on their execution, are removed as well.) Figure 14 shows when the actions occur in the evolution of SSSMs. These actions are based on the concepts of conditions and constraints, and we first explain the difference between the two with the example shown in Fig. 15. Figure 15(a) shows an SSSM-IM with two call events, initialize and stop, and one reply event ok. The client model of this SSSM-IM can call initialize and stop in any order and receives reply ok. After adding a condition to the existing events as shown in Fig.
15(b), the model gives reply event ok and transits to state idle in response to call event stop only if it is in state busy. Otherwise, the model gives no response and ignores event stop. Similarly, the model does not respond to event initialize when it is in state busy, as the model has already been initialized. Conditions created by developers with multiple states allow models to accept all call events but give different replies to their client models based on their own state. Adding such a condition does not require the client model to change the calling order of events initialize and stop; thus, no change is propagated from the IM to its upper-layer client models. Differently, adding a constraint to an IM requires the client models to call events in a certain order: the constraint specifies under which circumstances a certain call event triggers an exception. In the example shown in Fig. 15(c), the model throws an exception if its client model calls event initialize when it is in state busy. When the exception behavior is explicitly specified in the model, the verification tool checks whether the client model calls events in the expected way. The need for co-change depends on how the client model calls the events; to satisfy the verification tool, the client model needs to be modified if it can trigger the exception under any possible circumstance.

Fig. 15 An example showing the differences between constraint insertion and condition insertion. "a/b" indicates that event b is sent to the client model when event a is called; "-" indicates that no response is given by the model.

A typical usage of action Event insertion with constraints is implementing the concept of an iterator, as available in many programming languages (e.g., Java). Developers intend to implement multiple FIFO (first-in-first-out) lists to store the elements that need to be processed at runtime. The lists are implemented with hand-written code. Initially, the IMs of these lists have only the events append and remove, which are called by the client models to add and remove elements. In later revisions, developers implement the concept of an iterator by adding the events iterator and next from iterator. The client model can instantiate an iterator by calling event iterator and traverse the elements by calling event next from iterator. Developers then add constraints to the model so that the client model is only allowed to call next from iterator when event iterator has already been called (i.e., an iterator is instantiated) and the list is not empty.

We observed an interesting case (model m23) where the developer takes action Constraint insertion to restrict the execution order of the existing events. The before-change revision is an SSSM with commit message "...version for first review", while the after-change revision is an MSSM with commit message "...rework after review", indicating that the action was taken in response to review feedback. This observation suggests that developers examine whether constraints are needed when reviewing models.
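To make the condition/constraint distinction concrete, the sketch below re-implements the Fig. 15 interface in plain Python. ASD models are of course not written this way, so this is only an analogy: the single-state variant replies ok to any order of calls, the conditioned variant silently ignores calls that do not fit its state, and the constrained variant raises an exception (an illegal call) that the verifier would force client models to avoid.

```python
class SSSM:                                   # Fig. 15(a): one state, no restrictions
    def initialize(self): return "ok"
    def stop(self): return "ok"

class ConditionedMSSM:                        # Fig. 15(b): replies depend on the state,
    def __init__(self): self.busy = False     # but every calling order is accepted
    def initialize(self):
        if self.busy:
            return None                       # "-": event ignored, no reply
        self.busy = True
        return "ok"
    def stop(self):
        if not self.busy:
            return None
        self.busy = False
        return "ok"

class ConstrainedMSSM(ConditionedMSSM):       # Fig. 15(c): a wrong order is an error,
    def initialize(self):                     # so client models may need to be adapted
        if self.busy:
            raise RuntimeError("illegal: initialize while busy")
        self.busy = True
        return "ok"
```

In this analogy, adding the condition does not break any client, whereas adding the constraint breaks every client that can call initialize twice in a row.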
When modifying SSSMs, developers are more likely to add constraints to the execution of newly introduced events. Action Event insertion with constraint is taken when developers add new events whose execution does not depend on the execution of the existing events. Figure 16 shows such an example, where the events subscribe and unsubscribe and the constraints on the execution order of these two events are introduced in the revision. The new events and constraints (Fig. 16b) do not impact the execution of the existing event construct. That is, event construct can still be called in any order regardless of the state of the model. In this case, to satisfy the verification tool, developers only need to ensure that the client model calls the new events subscribe and unsubscribe in the desired way so that exceptions will not be triggered. Action Event insertion with constraints often takes place when developers would like to add a new service that is not coupled with the existing service (i.e., the new service and the existing service can be used by their client models independently). Similarly, action Event insertion with conditions is also widely used when a new service is introduced to the models.

When it comes to transforming an MSSM into an SSSM, developers take the actions Constraint removal, Condition removal, and Event removal. A typical scenario for performing Constraint removal is when developers implement pattern Model armor, which allows them to remove the constraints from the IMs that interface with the foreign code, and to add models that take the role of armor and forward the intended events to upper-layer clients (see Fig. 7). Such modifications on the boundary side of the component do not require changes to the core parts of the component. This modification shows that pattern Model armor was not always implemented from the beginning; instead, the implementation of the pattern can be the result of refinements.

Fig. 16 An example of applying action Event insertion with constraints.

As can be observed, developers often perform Event removal to delete unnecessary events. An interesting example, shown in Fig. 17, is a revision for fixing a bug (as indicated in the commit message). Before the action takes place, the MSSM-IM has three events: initialize, enable, and enabled. Among them, initialize and enable are events that can be triggered by its client models. The MSSM-IM does not send the reply enabled to its clients until a notification event from its server occurs. This design consequently blocks the clients from processing other critical tasks if the notification event does not happen in time. To fix this bug, developers remove the events enable and enabled that block the clients, resulting in an SSSM (as shown in Fig. 17(b)).

Our result shows that SSSM-MSSM-changes are more likely to be the consequence of adding or removing events than of modifying the execution order of the existing events.

Fig. 17 An example of applying action Event deletion.

Threats to Validity

As any empirical study, ours is subject to several threats to validity.
Threats to construct validity examine the relation between theory and observation. Since there is no clear definition of single-state state machines in the literature and guidelines, we operationalize the intuitive notion of an SSSM and provide our own definition. To ensure that our definition corresponds to the developers' perception of SSSMs, we explained our definition of SSSMs to the interviewees and made sure that they understood it. While it is possible that some MSSMs can be reduced to SSSMs according to some formal notion of equivalence (e.g., trace equivalence), developers tend not to think about those MSSMs when talking about SSSMs. This is why we exclude this case from consideration and treat MSSMs that are equivalent to SSSMs as MSSMs.

Threats to internal validity concern factors that might have influenced the results. In our interview study, we derived our interview questions and strategy from our quantitative findings, which reduces the risk of asking meaningless questions that could bias our interviewees. Moreover, to avoid misinterpretation of developers' ideas, we performed member checks with our interviewees on the categories that emerged from the Grounded Theory process. To assure the completeness of the reasons for using SSSMs, we conducted several iterations of interviews until all SSSMs from the 26 components could be explained by the collected reasons. To answer RQ 5.2c, we manually classified the modifications developers made to SSSMs by comparing before-change revisions and after-change revisions. This open-coding process is inevitably interpretative and hence subjective. The open coding was conducted only by the first author because of the required knowledge of the commercial modeling tool. We were not able to conduct member checking with the authors of the revisions because most of the changes were made before 2015, and since then many of the authors have moved to other company units or left the company.

Threats to external validity concern the generalizability of our conclusions beyond the studied context. We studied 26 model-based components for the first study (Section 4). Our second study (Section 5) was limited to a single component. However, this is the only component from this company that has more than 10 revisions per model (on average). Studying the evolution of state-machine-based software is still a challenging subject due to the lack of data. First, the use of MDE for the purpose of verification is still very limited even though the need is already evident, as surveyed by Liebel et al. (2018). Second, since the built-in verification tool formally verifies the correctness of models, the number of revisions developers make to these models might be inherently lower than the number they make to hand-written code. As shared by the developers we contacted, component D has been deployed on the customers' machines, but it does not (yet) evolve much because no issues have been found by the customers so far. The lack of data can impact the generalizability of the findings. With this preliminary study we intend to increase the understanding of the evolutionary aspects of state-machine-based software with evidence from industry.
Moreover, we are aware that we limited our study to components from a single company developed with the same modeling tool. We believe the conclusions and observations derived from this context are complementary to the existing literature, which mainly consists of broad surveys on the challenges of MDE adoption, by providing concrete industrial examples. To increase generalizability, one future direction could be to replicate our study in other companies or on models developed with other tools.

Discussion and Implication

As the main contribution, our study identified why developers use SSSMs and how SSSMs change during their evolution. Based on our empirical results, we provide implications for developers (Section 7.1), tool builders (Section 7.2), and researchers (Section 7.3). Some of the implications derived from our empirical study are consistent with the findings provided by other survey and interview studies on MDE adoption. Different from these studies, which provide a broad insight into MDE adoption, our study aims for more in-depth insights into a specific phenomenon in state machine modeling by applying mixed methods (i.e., interviews and repository mining) in an industrial context. Therefore, we think it is still interesting to confront their conclusions with our findings.

Implications for Developers

Consider how to integrate models with the existing code base. In our study we found that developers introduce armoring to interface model-based components with code-based components in order to protect models from unexpected behavior. In addition, we observed that the usage of SSSMs for interfacing with the existing code base is increasing as more functionality is implemented. Our observation (in Section 5.3) suggests that practitioners should consider how to integrate models with the existing code base in a scalable way if they would like to use MDE to develop only part of their systems and that part needs to be integrated with hand-written code. Furthermore, practitioners may consider taking the quality (e.g., availability, scalability, and maintainability) of the provided integration solutions into account when evaluating candidate modeling tools. This implication concurs with one of the challenges that has been reported to hinder MDE adoption in companies (MacDonald et al. 2005; Mohagheghi and Dehlen 2008; Staron 2006; Jolak et al. 2018): using MDE together with the existing code base.

Be aware of the trade-off between domain-specificity and general-purpose programming language constructs. The trade-off between general-purpose modeling languages and domain-specific ones (Van Der Straeten et al. 2008) is a frequently discussed concern about MDE. Domain-specific languages, on the one hand, often offer a higher degree of specialisation for a certain modeling domain or purpose. On the other hand, they might be less flexible and expressive (van Deursen et al.
2000). We observed that a large share of SSSM-IMs are used to interface with hand-written code whose behavior cannot be modelled with ASD because of a tool limitation (Table 4). In particular, as we observed in our evolution study (Section 5.3), due to the lack of means to specify data-dependent behavior with the tool, the need for encapsulating data-dependent behavior implemented in hand-written code has been growing continually over the years and has become the main reason for using SSSMs in recent years. Under-specifying the order of events for data manipulation operations requires additional review and test effort. This implies that, before adopting a certain modeling language and tool, practitioners need to evaluate the benefit gained from domain-specificity and the cost caused by the loss of general-purpose language constructs, based on their application domain, while taking their long-term development and maintenance needs into account. This implication agrees with the suggestion provided by Corcoran (2010) that "one must determine whether a given MDE approach reduces complexity visible to the developer, or whether it simply moves complexity elsewhere in the development process."

Create reusable design using the modeling tool. Apart from developing patterns for interfacing with the existing code base and dealing with tool limitations, we observed that developers also invest effort in creating patterns that are expected to ease long-term maintenance. They use SSSM-related design patterns to realize such software design principles as low coupling (e.g., CallMapping) and separation of concerns (e.g., FeatureSelection). Furthermore, future refactoring is facilitated by SSSMs implementing the idea of "packaging up sub-steps". We observed that these patterns were introduced in the early phase of the maintenance of component B and were widely reused in other components. Our observation implies that practitioners can consider building up reusable design patterns when using a certain modeling tool, to ease their development in future projects developed with the same tool. This implication is in line with earlier findings on MDE adoption (Hutchinson et al. 2014) and software engineering practice in general (Ampatzoglou et al. 2011).

Balance the modeling trade-off between the ease of modeling and the adequacy of verification. As discussed by Chaudron et al. (2012), developers who work with traditional UML modeling, i.e., who use models merely for analysis, understanding, and communication, have to make a trade-off between the effort spent on modeling and the risk of problems caused by imperfections (e.g., incompleteness, redundancy, and inconsistencies) in downstream development. For instance, when a model serves as a blueprint of the protocol between two components, the under-specified parts in the model might be implemented inconsistently due to different interpretations by different developers, later incurring repair costs. However, investing a lot of effort in continuously refining such blueprints is not always possible (Lange et al.
2006a). Our results imply a similar trade-off that developers need to make in the context of using models for verification. Under-specifying the behavior of models might hide defects from the verification tools. However, spending too much effort on creating a more precise model with a restricted order of events slows down the development process. Moreover, developers might need to spend more effort on performing changes to such models because passing verification becomes non-trivial. Our study on the evolution of SSSMs shows that developers are more likely to change an SSSM into an MSSM than the opposite (Section 5.4). Sometimes, developers consecutively make multiple SSSM-MSSM-changes within a short period of time, transforming the models back and forth between SSSM and MSSM. These work-in-progress changes are often not eventually integrated into the system, implying that a series of refinements has to take place before integration.

Implications for Tool Builders

Help developers with integration. Our work calls for improving the support for integrating models and code-based components. The need to integrate models with the existing code base (Liebel et al. 2014; Hutchinson et al. 2011; Whittle et al. 2013b) and to integrate models from different domains (Tolvanen and Kelly 2010; Torres et al. 2019) has often been mentioned. However, not many studies propose how this integration can be facilitated by improving modeling tools. To provide suggestions to MDE tool builders about integration, Greifenberg et al. survey eight design patterns proposed for integrating generated and hand-written object-oriented code (Greifenberg et al. 2015). One of the discussed design patterns is the GoF design pattern Delegation (Gamma et al. 1993), which allows generated code (delegator) to invoke methods of the hand-written code (delegate) declared in an explicit interface (delegate interface). The ModelArmor design pattern we identified (Fig. 7) implements a similar idea; DM Armor takes the role of delegator, invoking methods of code-based components specified in IM IForeign. However, as opposed to Delegation, ModelArmor takes into account the different properties of models and code (i.e., verified behavior vs. non-verified and unpredictable behavior), ensuring that models are protected from the unexpected behavior of the code. Our work implies that, when selecting design patterns for integration, tool builders should consider the different properties of generated and hand-written code. Furthermore, tool builders can (partially) automate the implementation of the integration patterns, reducing the manual development effort.

Facilitate library reuse. Apart from interfacing with existing code-based components, we have observed that developers have to use code to implement what cannot be expressed by models (Section 4.4.4). For example, due to the lack of reusable common libraries, developers implement in code the behavior that requires such libraries. To address this challenge, tool builders can work in two directions. First, one can consider enriching common functionality often used in different applications with built-in models to reduce the need for interfacing with libraries provided by general-purpose programming languages. Second, given the rich reusable libraries in general-purpose programming languages, tools should provide a way to easily reuse these libraries, similar to the wrapping mechanism that allows, e.g., Python programs to communicate with C/C++ (Beazley 1996).
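Both tool-builder implications above concern the boundary between verified models and ordinary code. As a rough analogy of the ModelArmor/Delegation idea discussed under "Help developers with integration", the sketch below shows an armor that sits between unverified foreign code and a verified model and forwards only the events the model is specified to handle. The event names and the notion of an allowed event alphabet are invented for illustration; real ASD interface models are far richer.

```python
class VerifiedModel:
    """Stands in for a formally verified component; accepts a fixed event alphabet."""
    HANDLED = {"initialized", "stopped"}

    def handle(self, event: str) -> None:
        assert event in self.HANDLED          # guaranteed by the armor below
        print(f"model handles {event}")

class ModelArmor:
    """Forwards only the intended events from foreign code to the model."""
    def __init__(self, model: VerifiedModel):
        self.model = model

    def notify(self, event: str) -> None:     # called by the unverified foreign code
        if event in VerifiedModel.HANDLED:
            self.model.handle(event)
        else:
            # Unexpected behaviour is absorbed at the boundary instead of reaching
            # the verified core; in practice it could be logged or mapped to an error.
            pass

armor = ModelArmor(VerifiedModel())
armor.notify("initialized")   # forwarded to the model
armor.notify("spurious")      # dropped at the boundary
```

A tool could generate such boundary code automatically, which is exactly the kind of (partial) automation suggested above.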
Meet wider specification and verification needs. We have observed that developers attempt to implement global constants with SSSMs (Section 4.4.4). This practice indicates the need to support concepts shared by multiple models. However, implementing such concepts is hindered by a well-known verification challenge: the state explosion problem (Clarke et al. 2001; Baldoni et al. 2018). Modeling tools such as Uppaal (Behrmann et al. 2006) support the use of global variables (e.g., bounded integers and arrays) that can influence the control flow in the models. However, such tools run a larger risk of facing state explosion when dealing with real-life applications (Doornbos et al. 2012). This implies that a trade-off between supporting global variables and the risk of state explosion has to be resolved by tool designers. A possible resolution could be to adopt hybrid solutions (Doornbos et al. 2012; Xing et al. 2010) that translate models from one tool to another to meet wider verification needs.

Implications for Researchers

As befits an exploratory case study (Runeson and Höst 2009), we propose hypotheses about the use of SSSMs in modeling practice. These hypotheses should be verified in follow-up studies.

H1: The design patterns in Section 4.4.2 help developers to achieve the corresponding goals. We have seen that SSSMs are extensively used for various reasons and goals. Studies on the effectiveness of GoF design patterns in OOP languages (Gamma et al. 1993) have shown that design patterns do not always achieve the claimed advantages (Ampatzoglou et al. 2015; Zhang and Budgen 2011). Moreover, passing verification easily with SSSMs might be a potential risk. This suggests a need to investigate the effectiveness of these SSSM-related design patterns in order to apply them with confidence.

H2.1: SSSMs shorten the development time and ease modification tasks of their client models, compared to MSSMs. H2.2: The models that use or implement SSSM-IMs have more post-release defects compared to the models that work with MSSM-IMs. These two hypotheses are derived from our interviewees' perception (RQ4, Section 4.4.7). It is, however, unknown how SSSMs actually impact development, maintenance, and verification activities. Investigating the impacts of SSSMs, the type of model that minimizes modeling effort, is a starting point toward a better understanding of the trade-off between the effort spent on designing a model that maximizes the advantage of verification and the extra cost caused by downstream problems due to inadequate verification. We expect that the investigation of this trade-off can broaden the ongoing discussion of modeling trade-offs, which currently focuses on UML modeling (Chaudron et al. 2012; Raghuraman et al. 2019b).

H3: Most models remain either SSSMs or MSSMs and are not modified with SSSM-MSSM-changes. The validation of this hypothesis may provide suggestions for tool builders. If both hypotheses H2.2 and H3 hold, this may indicate a need to detect, during commit activities, the SSSMs that might be associated with post-release defects, in order to avoid problems.
H4: Most of the SSSM-MSSM-changes are related to the introduction or removal of events rather than to the modification of the execution order of the existing events. We observed this tendency in our study on a single component from a single company; it therefore requires empirical validation. In particular, validating H4 can help us understand which SSSM-MSSM-changes are more likely to occur, and to further investigate how SSSM-MSSM-changes to a model impact other models that depend on it and whether any tools are required to support the evolution. Many studies have investigated API breaking changes (Brito et al. 2018; Mostafa et al. 2017) in the context of traditional coding, proposing suggestions and tools for library and client developers. In MDE, breaking changes also deserve attention. Adding events or adding conditions to the existing events are non-breaking changes, as they do not force client models to change. However, other modifications, such as removing events or adding constraints to the existing events, are breaking changes that require changes to client models. A further investigation is required to understand how likely developers are to introduce breaking or non-breaking changes when modifying SSSMs. Moreover, a further exploration of which kinds of modifications occur more often than others can help tool builders prioritize and facilitate certain actions (e.g., the addition and removal of events) when designing a user interface.

Beyond the specific hypotheses, we suggest that researchers further study the evolution of models. We observed that SSSMs are a minority and that most of them have been SSSMs since their introduction to the system. In particular, SSSMs are more likely to become MSSMs than the other way around. The predominance of evolution from SSSMs to MSSMs can be seen as an example of the increasing complexity of a system. This implies a possible applicability of Lehman's laws of software evolution to models operating in a hybrid model/code context, and suggests further research into this topic. By comparing the Git history (work-in-progress revisions) and the ClearCase history (integration revisions), we observed (in Section 5.4) that multiple SSSM-MSSM-changes often occur consecutively within a short period of time before the final revisions are available in the integration repository (ClearCase). Based on the commit messages, it can be inferred that some of them were made in response to review feedback or a request to redo a bug fix. However, it remains unclear why the previous revision was unsatisfying, due to the lack of explanation from the authors of the commits. This observation also implies that changes to models might be driven by peer discussion in the review process, suggesting future research on the role and practice of peer review in model evolution. In addition, given that the resulting permissive verification is perceived as a risk by our interviewees, we suggest proposing possible alternatives to SSSM-IMs by investigating the order in which events are actually called during system operation. One can consider analysing the execution traces of the generated code with pattern mining techniques widely studied in the fields of model learning (Yang et al. 2019; Wieman et al. 2017; Aslam et al. 2018), specification mining (Lemieux et al. 2015; Lo et al. 2011), and process mining (van der Aalst 2011; van der Werf et al. 2009; Gupta et al. 2018).
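As a first step in that direction, candidate ordering constraints could be mined from execution traces with a very simple precedence heuristic. The trace format and event names below are invented (they mirror the FIFO-list example of Section 5.4), and real specification-mining and process-mining tools are far more elaborate; this is only a sketch of the underlying idea.

```python
def precedence_candidates(traces):
    """Return ordered pairs (a, b) such that, in every trace, every occurrence of
    event b is preceded by at least one earlier occurrence of event a.
    Such pairs are candidates for a 'call a before b' constraint on the IM."""
    events = {e for trace in traces for e in trace}
    pairs = {(a, b) for a in events for b in events if a != b}
    for trace in traces:
        seen = set()
        for event in trace:
            # b occurred although a has not been seen yet -> (a, b) is violated
            pairs -= {(a, event) for a in events if a != event and a not in seen}
            seen.add(event)
    return pairs

# Hypothetical runtime traces of the FIFO-list interface:
traces = [
    ["append", "iterator", "next_from_iterator", "remove"],
    ["append", "append", "iterator", "next_from_iterator", "next_from_iterator"],
]
print(precedence_candidates(traces))
# ('iterator', 'next_from_iterator') survives, suggesting the constraint
# "next_from_iterator only after iterator" observed in Section 5.4.
```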
MDE Adoption and Practice

Our study is closely related to a series of empirical studies on MDE adoption and practice. Mohagheghi and Dehlen (2008) identified the need for more empirical evidence on MDE subjects by reviewing 25 papers. Twenty-one of these papers were experience reports from single projects, while four reported comparative studies. The review attempted to identify the benefits and limitations of MDE. It found that improvements in software quality and productivity gains and losses are not well reported in these papers, making it hard to generalize the results. Therefore, the authors call for more empirical evidence on MDE subjects to help researchers understand MDE adoption, practice, and experience. Since then, many empirical MDE studies have been conducted to understand how MDE is adopted and applied in practice (Hutchinson et al. 2011; Whittle et al. 2013a; Hutchinson et al. 2014; Whittle et al. 2013b; Farias et al. 2013; Pourali and Atlee 2018; Chaudron et al. 2012; Liebel et al. 2014; Mohagheghi et al. 2013). These papers explored different dimensions of MDE adoption and practice, using mostly interviews and surveys. Liebel et al. (2014) and Liebel et al. (2018) conducted a survey with 113 MDE practitioners to assess the current state of practice and the challenges in the development of embedded systems. The study found that embedded software engineers use MDE mainly for simulation, code generation, and documentation. The overall benefits gained from MDE outweigh its negative effects; the challenges perceived by engineers mainly lie in the sufficiency and interoperability of tools.

To understand the impact of tools on MDE adoption, Whittle et al. (2013b) conducted 20 interviews with MDE practitioners, resulting in a taxonomy of tool-related considerations. In addition, the study also reveals that MDE tools in many cases add complexity to development, although they were expected to help developers deal with the complexity of systems. One of the problems that contributes to the insufficiency of tools is a lack of consideration for how developers actually work and think. To resolve this problem, there is a need to study how developers model systems and what challenges they face.

Several studies investigated the challenges developers face in modeling (Pourali and Atlee 2018; Chaudron et al. 2012). Pourali and Atlee (2018) identified the gap between users' expectations of UML modeling tools and their actual experience. The study evaluates eight modeling tools by recruiting 18 students experienced in UML modeling to conduct four modeling tasks. It found that the students mainly have difficulties in fixing inconsistencies, which are therefore most in need of consideration from tool builders. Inconsistencies and other forms of imperfection (e.g., redundancy and incompleteness) might cause downstream problems, as discussed by Chaudron et al. (2012) based on a series of surveys and interviews, raising the question of how much modeling is good enough in the context of using UML as a communication vehicle and implementation blueprint. Our study further reveals that this question remains when the use of models is extended to verification.

Furthermore, several studies went beyond the technical aspects of MDE adoption and practice, exploring the organizational, managerial, and social factors that lead to successful adoption of MDE (Hutchinson et al. 2014; Hutchinson et al. 2011; Whittle et al.
2013a). Based on a series of surveys and semi-structured interviews with MDE practitioners from industry, the authors conclude that an iterative and progressive approach, organizational commitment, and motivated users are required to successfully adopt MDE in industry.

Similar to these studies on MDE adoption and practice, we aimed to obtain empirical evidence to help researchers and tool builders better understand how developers use MDE in practice. Specifically, we enriched the existing knowledge of MDE practice through the lens of why developers use SSSMs, which are not recommended by a widespread modeling guideline, and how they use them.

Guideline Adherence

Our study is inspired by the literature on how and why software developers (do not) follow programming and modeling guidelines or best practices.

A large body of literature has investigated the occurrence of violations of common wisdom in traditional coding practice. These studies observed that such violations often occur when the code is first introduced to the system. Tufano et al. (2015) studied when and why code smells are introduced by mining software repositories. The result shows that most of the time code smells are introduced in the development phase rather than in the evolution phase that common wisdom expects, which implies that potentially poor design can be detected by performing quality checks during commit activities to avoid worse problems in the future. Similarly, a study on Eclipse interface usage by third-party plug-ins found that a significant portion of Eclipse third-party plug-ins uses "bad" interfaces and that this bad usage was not removed from the systems (Businge et al. 2015). This phenomenon is further confirmed by a study on how code readability changes during software evolution (Piantadosi et al. 2020). The result shows that unreadable code is a minority and that most of the unreadable pieces have been unreadable since their creation. Following the same strategy, our study investigated the reasons behind violations of a widespread modeling recommendation, namely not to use SSSMs, and the evolution of these SSSMs. We observed the same phenomenon: the violations occur when the models are created, and only a small share of models change between SSSM and MSSM.

Studies on guideline adherence have also been conducted to understand UML modeling practice. Lange and Chaudron (2004) formulated a collection of rules to assess the completeness of UML models and further explored to what extent developers violate these rules in practice. The result shows a large number of rule violations, suggesting that the incompleteness of models should be addressed. Lange et al. (2006b) further conducted a controlled experiment to explore the effect of modeling conventions on defect density and modeling effort. The results show that the defect density in UML models is reduced when using modeling conventions, although the improvement is not statistically significant. Different from these studies, our study explored the reasons behind violations in state-machine modeling practice.

Evolution in MDE

Our evolution study on SSSMs is related to the studies of evolution in MDE. Mens et al.
(2005) proposed a framework to support the evolution of UML models. The framework includes a classification of model inconsistencies and the formalism of description logic, which can be used to formulate logic rules that detect model inconsistencies. In MDE practice, not only the models evolve, but also the meta-models in which the models are expressed (Mengerink et al. 2018; Mens et al. 2007; Favre 2005). A number of studies have investigated the evolution of meta-models (Mengerink et al. 2018; Gruschko et al. 2007; Etzlstorfer et al. 2017; Sprinkle et al. 2009). Mengerink et al. (2018) empirically studied how domain-specific languages (DSLs) evolve by mining an industrial repository. The study distinguishes between syntactic changes and semantic changes, and found that most of the DSL evolution is a redefinition of its semantics. An interesting extension of our study could be to investigate the syntactic and semantic changes of state-machine models.

Co-evolution between different model artifacts is one of the challenges in model evolution. Approaches have been proposed to facilitate the co-evolution between meta-models and conforming models (Jongeling et al. 2020; Mengerink et al. 2016; Hebig et al. 2016a). Moreover, recent work by Khelladi et al. (2020b) and Khelladi et al. (2020a) proposed an approach to support the co-evolution of code and metamodels, i.e., when metamodels change, the co-evolution propagates the metamodel changes to the code that depends on the metamodel. Our study observed that many SSSMs are used for interfacing with existing code. It remains an interesting study to explore the co-evolution between the SSSMs on the boundaries of the model world and the hand-written code that interfaces with these SSSMs.

Model Repository Mining

Our study is also related to studies that mine model repositories. Pattern and clone detection is one of the goals of mining model repositories (Babur 2018; La Rosa et al. 2015; Stephan and Rapos 2019; Stephan and Cordy 2015). Similar to our work, Stephan and Cordy (2015) mine model repositories to detect patterns. Their study predefined a set of patterns using models and identified the models that are similar to the patterns within a given threshold. Differently, our exploratory study identifies the patterns by mining a type of model that is not recommended by modeling guidelines and discussing the mined results with developers. As one of the main findings, we discovered several design patterns, as shown in Fig. 7. Our study can further be extended with a pattern mining approach to detect instances of the discovered patterns in the entire model base.

Some studies mined MDSE repositories to investigate the quality of hand-written code and code generated from models. He et al. (2016) mined 16 MDE projects and concluded that the code generated from models presents more code smells than what developers usually produce in their hand-written code. By mining MDSE repositories and non-MDSE repositories, Rahad et al.
(2021) further identified that hand-written code fragments from MDSE repositories suffer more from technical debt and code smells, compared to hand-written code in non-MDSE repositories. These two studies pointed out that traditional coding guidelines are violated by code generators and developers in MDSE practice. Our study empirically shows that developers violate a widespread modeling guideline in order to integrate models with the existing code base. Together, these studies imply that the adoption of MDSE may introduce violations of the coding and modeling guidelines that are considered common wisdom in software engineering practice. To improve MDSE practice, guidelines, and tools, the results of these studies call for more empirical studies that discover the workarounds and compromises developers make when adopting MDSE.

Several studies have been conducted to mine UML models. Robles et al. (2017) and Hebig et al. (2016b) contributed datasets with UML diagrams mined from GitHub. These datasets enable several mining studies that advance the understanding of and techniques for UML modeling. Osman et al. (2018) developed techniques to automatically classify UML models into hand-made diagrams that are part of a forward-looking development process and diagrams reverse engineered from the source code. Raghuraman et al. (2019a) mined software repositories and identified that projects with UML models present in the repositories are less prone to defects compared to projects without UML models. This finding confirms the intuition that the use of UML models can improve the quality of software.

Conclusion

With the aim of understanding why developers violate a widespread modeling guideline, we conducted an exploratory study to understand under which circumstances developers use SSSMs in their practice. Our exploratory study consists of two complementary studies. We first investigated the prevalence and role of SSSMs in the domain of embedded systems, as well as the reasons why developers use them and their perceived advantages and disadvantages. We employed a sequential explanatory strategy, including repository mining and interviews, to study 1500 state machines from 26 components at ASML, a leading company manufacturing lithography machines for the semiconductor industry. Then, we investigated the evolutionary aspects of the SSSMs, exploring when SSSMs are introduced to the systems and how developers modify them, by mining the largest state-machine-based component from the company.

We observed that 25 out of 26 components contain SSSMs. The SSSMs make up 25.3% of the model base. Our interviews suggest that SSSMs are used to interface with existing code, to deal with tool limitations, to facilitate maintenance, and to ease verification. Our study on the evolutionary aspects of SSSMs reveals that the need for SSSMs to deal with tool limitations has grown continuously over the years. Moreover, we observed that the majority of the SSSMs are stable and have not been changed during their evolution. The most frequent modifications developers make to SSSMs are inserting events with constraints and conditions on the execution of the events.

Based on our results, we provide implications for modeling tool builders and developers. Furthermore, we formulate four hypotheses about the effectiveness of SSSMs, the impacts of SSSMs on development, maintenance, and verification, as well as the evolution of SSSMs.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Figure and table captions:
Fig. 2 Model relations. Left: types of events. Right: example of an ASD module. I*** stands for an IM.
Fig. 6 Steps in the qualitative phase.
Fig. 9 A set of revisions for a model. Rectangles represent SSSMs; triangles represent MSSMs.
Fig. 12 Growth of the number of SSSM-IMs used to deal with different tool limitations.
Fig. 13 Life-cycle of SSSMs from the Git and ClearCase repositories.
Fig. 14 Evolution of SSSMs present in the Git repository. The numbers indicate the modification actions shown in Table 9 and the shapes indicate the type of models.
Table 1 Overview, prevalence of SSSMs, and frequency of the identified terms for the selected state-machine-based projects. (Note: a percentage cannot be computed when a component does not include DMs. Table 1 shows the prevalence of SSSMs in the 26 components: 25 out of 26 components contain SSSMs, making up 25.3% of the 1500 state machines. Component B is the largest of the 26 components; in component B, 31% of the IMs are SSSMs, while only 4% of the DMs are.)
Table 9 note: Column #Revisions indicates the number of revisions that are the result of the corresponding action. Models that were not finished, e.g., models disconnected from the rest of the models in the component and thus unable to fulfill any role in the system, are excluded.
Table 3 Terms that belong to the groups Exclusive&Frequent and Shared&OR>1&Frequent and the number of SSSM-IMs that contain the term.
Table 4 Why developers use SSSM-IMs, identified from the 26 components: the core reason, goal, location, design pattern, and the number of instances (SSSM-IMs).
Table 5 Number of model revisions in total and the average number of revisions from the Git and ClearCase repositories.
Table 6 Number of models from the Git and ClearCase repositories that are always SSSM, always MSSM, and with SSSM-MSSM-changes.
Table 7 Number of IMs and DMs from the Git repository that are always SSSM, always MSSM, and with SSSM-MSSM-changes.
Table 8 Number of IMs and DMs from the ClearCase repository that are always SSSM, always MSSM, and with SSSM-MSSM-changes.
Table 9 Actions that developers take to modify SSSMs.
23,167.2
2021-09-10T00:00:00.000
[ "Computer Science", "Engineering" ]
Accounting for residual errors in atmosphere–ocean background models applied in satellite gravimetry

The Atmosphere and Ocean non-tidal De-aliasing Level-1B (AOD1B) product is widely used in precise orbit determination and satellite gravimetry to correct for transient effects of atmosphere–ocean mass variability that would otherwise alias into monthly mean global gravity fields. The most recent release is based on the global ERA5 reanalysis and ECMWF operational data together with simulations from the general ocean circulation model MPIOM consistently forced with fields from the corresponding atmospheric dataset. As background models are inevitably imperfect, residual errors will consequently propagate into the resulting geodetic products. Accounting for uncertainties of the background model data in a statistical sense, however, has been shown before to be a useful approach to mitigate the impact of residual errors leading to temporal aliasing artefacts. In light of the changes made in the new release RL07 of AOD1B, previous uncertainty assessments are deemed too pessimistic and thus need to be revisited. We here present an analysis of the residual errors in AOD1B RL07 based on ensemble statistics derived from different atmospheric reanalyses, including ERA5, MERRA2 and JRA55. For the oceans, we investigate the impact of both the forced and intrinsic variability through differences in MPIOM simulation experiments. The atmospheric and oceanic information is then combined to produce a new time-series of true errors, called AOe07, which is applicable in combination with AOD1B RL07. AOe07 is further complemented by a new spatial error variance–covariance matrix. Results from gravity field recovery simulation experiments for the planned Mass-Change and Geosciences International Constellation (MAGIC) based on GFZ's EPOS software demonstrate improvements that can be expected from rigorously implementing the newly available stochastic information from AOD1B RL07 into the gravity field estimation process.

Supplementary Information The online version contains supplementary material available at 10.1007/s00436-022-07773-4.

Introduction

For over two decades now, the satellite gravimetry missions GRACE (Tapley et al. 2004) and GRACE-FO (Landerer et al. 2020) have been monitoring and are continuing to monitor large-scale mass changes on Earth. The twin satellites are tracking ice mass loss in both Greenland (Velicogna and Wahr 2005; Sasgen et al. 2020) and Antarctica (Velicogna et al. 2014, 2020), changes in terrestrial water storage (Rodell et al. 2018) including the severity of drought (Boergens et al. 2020), and also sea-level change and ocean bottom pressure variations related to internal ocean dynamics (Hamlington et al. 2020; Dobslaw et al. 2020). All those processes are characterised by spatial divergence in mass transports in the Earth system that are well resolved by the monthly gravity field solutions obtained from satellite gravimetry. Mass changes in, e.g., the atmosphere and ocean, however, also have significant variations at much shorter, i.e. sub-monthly, time scales. Without prior information, these high-frequency mass transport signals would degrade the monthly gravity field solutions through the effects of temporal aliasing. For this reason, they are usually accounted for in the gravity field estimation by application of a priori background model data.

Non-tidal variations in the atmosphere and oceans are routinely subtracted in satellite gravimetry processing through the Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) data product (Shihora et al.
2022a, b) specifically prepared within the US-German Science Data System of the GRACE and GRACE-FO missions.AOD1B was recently updated to its most recent release AOD1B RL07 and is expected to be used as a background model in the next GRACE and GRACE-FO Level-2 releases.AOD1B RL07 is based on 3-hourly atmospheric data from the ERA5 reanalysis (Hersbach et al. 2023) by the European Centre for Medium-Range Weather Forecasts (ECMWF) as well as simulated ocean bottom pressure (OBP) variations from the MPIOM ocean model (Jungclaus et al. 2013) forced with ERA5 atmospheric data.Even though background models are occasionally updated and thereby improved over time, they will necessarily remain imperfect.As a result, high-frequency signals not removed from the GRACE and GRACE-FO sensor data will lead to residual temporal aliasing artefacts in the monthly solutions.In fact, the errors due to imperfect dealiasing are considered to be among the largest contributors to the overall GRACE and GRACE-FO error (Flechtner et al. 2016). There are different approaches for mitigating the impact of residual aliasing errors in GRACE data processing.Most notably, several studies have shown that including an estimation of the uncertainty of the background model data can help improve the quality of the gravity field solutions.Zenner et al. (2010) and Kvas et al. (2019) suggested that including uncertainty estimations allows for a weighting of the measurements according to the associated model error.As a result, measurements associated with a larger model uncertainty have a reduced impact on the final gravity field solutions and therefore mitigate some of the effects from residual temporal aliasing.Similarly, employing the uncertainty estimates of ocean tide models has been shown to have a positive impact on the gravity solutions in dedicated performance simulation studies (Abrykosov et al. 2021). While assessments of the residual errors of AOD1B have been performed in the past (Dobslaw et al. 2016;Poropat et al. 2019), they were only based on AOD1B RL05.This assessment is now believed to be no longer representative for the uncertainties of AOD1B RL07 given the numerous changes made over the last two releases (Shihora et al. 2023b).In this study, we thus derive a new estimation of the non-tidal atmosphere and ocean background model errors associated with AOD1B RL07 that can be readily used in the gravity field estimation process of satellite gravimetry as well as in simulation studies.Our update is especially timely in light of the ongoing efforts towards developing future generations of satellite gravimetry missions, which include both double-pair constellations and novel quantum gravity concepts (Schlaak et al. 2022;Zhou et al. 2023). We start this work by assessing the signal content in AOD1B RL07 and how the represented variability has changed, especially for the oceanic component in Sect. 2. This is done by comparing the AOD1B update to the ITSG2018 daily gravity field solutions (Kvas et al. 2019;Mayer-Gürr et al. 
2018).We then develop an estimation of the uncertainties in the atmospheric component of AOD1B through a comparison of the employed ERA5 reanalysis data to other state-of-the-art atmospheric reanalyses (Sect.3).In Sect.4, we focus on the uncertainties in the oceanic component using ensemble simulations where we quantify both the impact of the atmospheric forcing and the impact of the intrinsic variability.The derivation of a new time-series of true errors representative of the uncertainties within AOD1B RL07 is presented in Sect.5, and the computation of new error variance-covariance matrices for the application in simulation studies is described in Sect.6.The paper concludes with early application examples of the newly derived stochastic information for GRACE-like simulations (Sect.7) and a summary in Sect.8. Comparing AOD1B to ITSG daily solutions We start by examining the signal content of AOD1B RL07 in relation to residual signals remaining in previously published GRACE/GRACE-FO gravity field time-series.A comparison of the new release RL07 to its predecessor RL06 shows that the largest differences in variability are found in the oceanic domain.In contrast, the atmospheric differences over the continents are much smaller (Shihora et al. 2023b).This can be expected given the lack of assimilated observations in the ocean simulation.In turn, this also suggests that the uncertainties of AOD1B are going to be dominated by the dynamic contribution of the simulated ocean bottom pressure.To assess the degree to which the oceanic mass variations are not captured in RL07, we make use of a series of daily gravity field solutions provided by the Institute of Geodesy at Graz University of Technology (ITSG).ITSG-GRACE2018 (ITSG2018 in the following) is provided in terms of spherical harmonic coefficients up to degree and order 40 and is based on a combination of GRACE measurements and prior stochastic information in a Kalman smoother framework (Kvas et al. 2019;Mayer-Gürr et al. 2018).These daily solutions incorporated AOD1B RL06 in its processing in conjunction with the previous estimate of the associated AOD1B uncertainties.They thus represent residual mass variations not captured by AOD1B RL06.Given their global coverage and connection to actual GRACE observation, the daily gravity field solutions have already been applied successfully in several oceanic applications (Bonin and Save 2020) and were also utilised in the assessment of differences in high-frequency ocean model simulations (Schindelegger et al. 2021). For our analyses, we synthesise an equiangular onedegree grid based on spherical harmonic coefficients from daily ITSG2018 solutions for 2004-2006.Similar to the approaches of Eicker et al. (2020) and Schindelegger et al. (2021), we transform a binary land-ocean mask from spherical harmonics onto the same grid and reject all grid points with a value below 0.8 to generate a coastal buffer.As a reference, the resulting standard deviation of the residual OBP signal of the ITSG2018 solutions is shown in Fig. 1 for three frequency bands as obtained from a fourth order Butterworth filter.The variability is shown for 3-10 days (Fig. 1a), 10-30 days (Fig. 1b) and 30-60 days (Fig. 1c). For the highest frequencies, the residual OBP variations are mainly located in coastal regions as well as in the Southern Ocean in resonant basins and in the band of the Antarctic Circumpolar Current (ACC).Especially in the Bellingshausen Basin, the residual variability reaches values up to 2 hPa, i.e. 
2 cm in equivalent water-height. The picture is similar for the moderate frequencies (10-30 days), although the strongest signals are now found south-west of Australia. For the longest periods we consider here (30-60 days), the residual OBP variability is generally much weaker, suggesting that the OBP variations at these frequencies are better captured by the AOD1B RL06 background model data which were subtracted during the satellite data processing. As a reference, we also show the standard deviation of AOD1B RL07 in the same frequency bands in Fig. 2. Comparing both figures indicates that for the shortest periods, the residual ITSG variability matches the overall variability of the background model, i.e. the residual circulation signal is proportional to the overall signal. This is especially visible in the Southern Ocean. In other parts, such as the northern part of the Pacific, and especially for longer periods, that correspondence is significantly reduced. Next, we consider the impact of the new release AOD1B RL07 on the residual OBP variations. As the ITSG2018 time-series already considers the AOD1B RL06 background model data, we compare the ITSG signal content only to the update of AOD1B, i.e. the difference (RL07-RL06). We then assess the impact of the model update by computing explained variances using
$$\mathrm{expl.\,var.} = \frac{\sigma^2(\mathrm{ITSG}) - \sigma^2(\mathrm{ITSG} - \Delta\mathrm{AOD1B})}{\sigma^2(\mathrm{ITSG})},$$
where $\Delta\mathrm{AOD1B}$ is the update to the AOD1B background model data through RL07. The results are shown in Fig. 3 for the same three frequency bands as before. In all three bands, there are clearly regions where the update to AOD1B captures part of the residual circulation signal (red) and regions where the update does not capture the residual variability (blue). Blue areas are especially prevalent for the highest frequencies in the lower latitudes. While the same is true for the 10-30-day and 30-60-day bands, the effect is less pronounced there. However, it should be noted that the negative explained variances could also indicate that the ITSG solution does not capture the variability properly, which may particularly hold for the highest frequencies (Schindelegger et al. 2021). Regions where the AOD1B update captures the residual variability are in all three cases found in the band of the ACC, as well as in the Arctic Ocean for the medium and long periods. Comparing the results to the amount of residual variability presented in Fig. 1, we find that the regions with negative explained variances correspond largely to areas where there is very little residual variability present in the ITSG time-series. This is especially clear for the shortest periods, where the ITSG variability around the equator is close to zero. As a result, the explained variances, which are a metric relative to the ITSG variability, likely appear highly exaggerated there. When focusing only on regions with a significant amount of residual circulation signal, however, it turns out that these correspond to the areas with a positive explained variance. Examples are, for instance, the Bellingshausen Basin for the 3-10-day band, or off the coast of south-western Australia in the 10-30-day case. So while in many regions, especially at lower latitudes, the two datasets do not correspond well, there are some regions, such as parts of the Southern Ocean and the Arctic, where the variability is better captured by AOD1B RL07. Those conclusions are also consistent with previous evidence based on satellite altimetry and GRACE-FO along-track data reported by Shihora et al. (2022a).
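As an illustration of this step, a minimal Python sketch of the band-pass filtering and the per-grid-point explained-variance metric is given below. The grid size, variable names, and synthetic input data are placeholders; this is not the processing code behind the product, only a sketch of the computation described above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(series, low_days, high_days, dt_days=1.0, order=4):
    """Band-pass filter a (time, lat, lon) array with a 4th-order Butterworth
    filter, keeping periods between low_days and high_days."""
    fs = 1.0 / dt_days                                     # sampling rate [1/day]
    sos = butter(order, [1.0 / high_days, 1.0 / low_days],
                 btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, series, axis=0)

def explained_variance(itsg_obp, aod_update, low_days, high_days):
    """Fraction of the band-limited ITSG residual variance explained by the
    AOD1B update (RL07-RL06); negative values are possible."""
    r = bandpass(itsg_obp, low_days, high_days)            # residual signal
    d = bandpass(aod_update, low_days, high_days)          # model update
    return (np.var(r, axis=0) - np.var(r - d, axis=0)) / np.var(r, axis=0)

# Synthetic daily fields on a coarse grid (2004-2006), for illustration only.
rng = np.random.default_rng(0)
itsg = rng.standard_normal((1096, 18, 36))
update = 0.3 * itsg + 0.7 * rng.standard_normal(itsg.shape)

for band in [(3, 10), (10, 30), (30, 60)]:
    ev = explained_variance(itsg, update, *band)
    print(band, float(np.median(ev)))
```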
Based on the results presented so far, there are local and regional improvements when considering the new release of AOD1B. However, not all of the residual oceanic mass variations present in the ITSG2018 daily solutions are captured. Hence, the uncertainty of the background model data can be expected to show significant changes compared to the earlier estimation of Dobslaw et al. (2016), which calls for a new error assessment. In the following section, we thus focus on the calculation of a new realisation of true errors for AOD1B RL07, based on model differences from atmospheric reanalyses as well as differences in ocean model simulations, for subsequent use in satellite gravity data analysis. 3 Atmospheric surface pressure differences AOD1B considers the non-tidal mass variations from both the oceans and the atmosphere. For RL07, the atmospheric component is based on the ECMWF's ERA5 reanalysis data (Hersbach et al. 2023) until 2017, followed by operational ECMWF data from 2018 onward. While the reanalysis data are constrained through observations, they will still include errors induced by insufficient or conflicting observations as well as insufficient modelling of atmospheric dynamics. These uncertainties are typically distributed globally and depend, for example, on parametrisations, orography, etc. A common approach to address them is through the comparison of numerical weather model (NWM) fields published by different institutions. We therefore compare the ERA5 surface pressure data to two other state-of-the-art atmospheric reanalyses: the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA2) (Gelaro et al. 2017) from NASA's Global Modeling and Assimilation Office, and the Japanese 55-year Reanalysis (JRA55) (Kobayashi et al. 2015) from the Japan Meteorological Agency. Some characteristics regarding resolutions and data assimilation schemes are given in Table 1. As all three reanalyses feature a different horizontal resolution, we unify all datasets by remapping to a regular 0.5° grid following Dobslaw et al. (2016). We further resample to a six-hourly temporal resolution and subtract the mean surface pressure in each case. This is also in line with the resolution of the previous error assessment. To eliminate the impact of low frequencies and of high-frequency atmospheric tidal signals which are not part of AOD1B, we subsequently apply a bandpass filter with cut-off periods of 1 and 30 days. In all three cases, the largest differences in surface pressure variations are found in the Southern Ocean and Antarctica. Differences between ERA5 & MERRA2 as well as between MERRA2 & JRA55 tend to be larger than those between ERA5 & JRA55 (Fig. 4). Relative to the surface pressure variability shown in Fig. 5a, which exceeds 10 hPa in high latitudes, the differences between the reanalyses reach up to 30% for low latitudes (< 20°) and only 10% for higher latitudes (> 20°). These small differences between the reanalyses show that surface pressure variations are generally well captured by all three of them. The result is not surprising, given that the reanalyses share essentially the same physics and observational data considered for assimilation. Only in regions where the density of observations is sparse, e.g. in Antarctica, are larger differences found. Based on the differences in the reanalyses presented in this section, we thus choose to base the atmospheric component of the new uncertainty estimation on the differences between ERA5 and MERRA2.
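A minimal sketch of this comparison step (common grid, temporal mean removal, 1-30-day band-pass, difference map) is given below. The synthetic DataArrays merely stand in for the remapped reanalysis fields; all file and variable names, grid sizes, and amplitudes are illustrative assumptions rather than the actual processing setup.

```python
import numpy as np
import xarray as xr
from scipy.signal import butter, sosfiltfilt

def band_limited_anomaly(ps, low_days=1.0, high_days=30.0):
    """Subtract the temporal mean and band-pass filter surface pressure
    to periods between low_days and high_days (4th-order Butterworth)."""
    dt_days = (ps.time[1] - ps.time[0]) / np.timedelta64(1, "D")
    anom = ps - ps.mean("time")
    sos = butter(4, [1.0 / high_days, 1.0 / low_days], btype="bandpass",
                 fs=1.0 / float(dt_days), output="sos")
    data = sosfiltfilt(sos, anom.values, axis=anom.get_axis_num("time"))
    return anom.copy(data=data)

# Synthetic stand-ins for two reanalyses, already on a common 0.5-degree grid
# (real data would first be remapped, e.g. with DataArray.interp), thinned here.
time = np.arange(np.datetime64("2004-01-01"), np.datetime64("2004-07-01"),
                 np.timedelta64(6, "h"))
lat = np.arange(-89.75, 90.0, 0.5)[::10]
lon = np.arange(0.25, 360.0, 0.5)[::10]
rng = np.random.default_rng(0)

def fake_reanalysis(scale):
    data = 1.0e3 * scale * rng.standard_normal((time.size, lat.size, lon.size))
    return xr.DataArray(data, coords={"time": time, "lat": lat, "lon": lon},
                        dims=("time", "lat", "lon"), name="sp")

era5, merra2 = fake_reanalysis(1.00), fake_reanalysis(0.97)
diff = band_limited_anomaly(era5) - band_limited_anomaly(merra2)
sigma_map = diff.std("time")      # map of band-limited SP differences [Pa]
```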
MPIOM ensemble simulations Next to the residual uncertainties in the atmospheric component, we also consider uncertainties in the ocean domain.In contrast with the atmospheric mass variations provided by NWMs, the OBP variability is not observationally constrained.Instead, it is based on free-running forward simulations with the Max-Planck-Institute for Meteorology Ocean Model (MPIOM) (Jungclaus et al. 2013) forced using atmospheric fields from the ERA5 reanalysis.More details on the configuration of the ocean model are given in Shihora et al. (2022a).Given the lack of observational constraints, the residual uncertainties in AOD1B RL07 are expected to be much larger compared to the atmospheric component as it was already the case for the previous estimation.To get an estimate of the residual uncertainty of the oceanic component of AOD1B, we set up an ensemble simulation using MPIOM. In particular, we focus on two sources of uncertainty.The first source is based on the differences in the atmospheric reanalyses which will result in differences in the ocean dynamics and consequently also in differences in the OBP variability.In the following, we will refer to the variability induced by the atmosphere as forced variability.As a second contribution, chaotic intrinsic variability can arise through non-linear ocean processes.While they are typically associated with smaller scales, they can map into larger variations through non-linear interactions (Arbic et al. 2012;Zhao et al. 2021).We will refer to these variations as intrinsic variability going forward. Forced variability Excluding tides, high-frequency mass variations in the oceans are largely caused by atmospheric surface winds leading to a redistribution of water masses.These wind-driven barotropic changes are particularly pronounced in middle to high latitudes.Gradients in atmospheric surface pressure over the oceans can also drive relevant OBP variations (Ponte 1993).While at low frequencies the ocean surface can be expected to compensate the atmospheric pressure anomalies, at higher frequencies, the response is non-equilibrium (i.e.involves currents and mass motion).While atmospheric reanalyses such as ERA5 capture atmospheric dynamics with a great deal of realism, they are of course not perfect and residual uncertainties remain as shown in the previous section.This includes presented differences in surface pressure but also differences in other relevant atmospheric fields such as surface winds. To assess the impact of these atmospheric differences on the MPIOM simulations, we perform three model experiments from 1995 to 2020.One is forced using atmospheric ERA5 data.This simulation is thus equivalent to the simulation used in AOD1B RL07 with the only difference being that for the simulation here we have used a 3-hourly forcing frequency in order to be consistent with the other two experiments.The second and third MPIOM runs use either the MERRA2 or the JRA55 reanalysis data for the atmospheric forcing.In both of these cases, we also use 3-hourly forcing.The atmospheric forcing considers contributions from atmospheric pressure, near-surface horizontal wind speed and stresses, solar radiation, precipitation, cloud cover, temperature and dew point temperature.We extract bottom pressure fields from the ocean model, subtract the temporal mean and bandpass filter the results using 1-and 30-day cut-off periods. As a reference, we show the OBP variability, i.e. standard deviation, of the ERA5 forced run alone in Fig. 
5b. Regions with the highest variability, exceeding 5 hPa, are found in the Southern Ocean in the region of the ACC, especially in the Bellingshausen Basin and the South-Australian Basin. Additionally, high variability is found in shelf areas as well as in the Arctic Ocean, where OBP variations are largely driven by barotropic pressure changes (Bingham and Hughes 2008). We now turn to the differences between the simulations with varied forcing. Figure 6 shows the standard deviation of OBP differences between the ERA5 and MERRA2 runs (a), the ERA5 and JRA55 runs (b), and the MERRA2 and JRA55 runs (c). In all three cases, the largest differences in variability match the regions that show a high variability in the first place, as shown in Fig. 5b. The largest signals are found again in the Southern Ocean, where the wind-driven barotropic variability is generally high, but where differences in the atmospheric reanalysis data are also largest (see Fig. 4). In these regions, the OBP differences reach values of up to 1 hPa, which amounts to 15-20% of the variability in those regions, and even over 50% east of the Drake Passage. This highlights the sensitivity of the high-frequency mass variations in the Southern Ocean to the atmospheric forcing. Comparing the individual subfigures reveals that the differences between the ERA5 and JRA55 forced runs are smaller than the difference of either to the MERRA2 forced simulation. Again, this is consistent with the differences in atmospheric surface pressure (Fig. 4), where the differences between ERA5 and JRA55 tend to be the smallest. Intrinsic variability Next, we turn to the estimation of the contribution due to intrinsic variability in MPIOM. Oceanic intrinsic variations emerge not through variations in the atmospheric forcing but from mesoscale turbulence on scales of O(10-100) km and O(10-100) days (Sérazin et al. (2015) and references therein). The impact of this intrinsic variability is also found at much larger and longer (i.e. interannual) scales (Penduff et al. 2011), suggesting a spontaneous inverse cascade to these scales (Sérazin et al. 2018). Studies show that this intrinsic variability has a significant impact on a number of oceanic variables, such as sea-level (Sérazin et al. 2015), ocean heat content (Penduff et al. 2019), transports (Cravatte et al. 2021) or OBP (Zhao et al. 2021), on various time-scales. In order to disentangle the impact of the atmospheric forcing from the intrinsic variations, a common approach is to perform parallel ocean simulations with identical forcing that only differ in some small initial perturbations. Differences in the resulting variability can then be used to examine intrinsic variations (Sérazin et al. 2015). We employ the same approach in the following.
Fig. 5: Standard deviation of ERA5 surface pressure (SP) and simulated ocean bottom pressure (OBP) from MPIOM using atmospheric ERA5 forcing data. Results are bandpass filtered using a 4th-order Butterworth filter to only contain periods in the 1-30-day band.
Fig. 6: Standard deviation of ocean bottom pressure (OBP) differences from three different MPIOM simulations where the atmospheric forcing is varied. Subfigures show the results for differences between ERA5 vs. MERRA2 atmospheric forcing (a), ERA5 vs. JRA55 forcing (b), as well as MERRA2 vs. JRA55 forcing (c). All OBP fields are bandpass filtered using a 4th-order Butterworth filter to only contain periods in the 1-30-day band.
We perform three MPIOM simulations that are all based on ERA5 atmospheric forcing data. All simulations start in the year 1994 from a transient ERA5 run started in 1960, which is itself based on a 2000-year-long spin-up simulation with daily climatological forcing. One simulation, the reference, uses the "correct" initial conditions for the year 1994. For the second simulation, we start the run in 1994 but employ the initial conditions of the year 1993, thus shifting the initial state by 1 year. The third run uses the initial conditions after only 1000 years of the spin-up. This represents a larger difference in the initial conditions for comparison. We then compute differences in 3-hourly OBP between all three simulations and band-pass filter the results to only include periods in the 1-30-day range. The results in terms of standard deviations of OBP differences are given in Fig. 7. Figure 7a shows the difference between the reference simulation and the simulation with the initial conditions shifted by 1 year, whereas Fig. 7b shows the difference between the reference and the simulation using the initial conditions after half the spin-up. Figure 7c shows the difference between the two simulations using shifted initial conditions. We explicitly note here the difference in scale, which is in this case given in Pa, compared to previous figures. The impact of the initial conditions varies strongly with latitude. Differences are largest in the Southern Ocean along the band of the ACC and reach values of up to 30 Pa. In contrast with the impact of the atmospheric forcing, the differences are thus smaller and amount only to 30-50% of the forced variability in the Southern Ocean. Other regions show a very limited sensitivity to the choice of initial conditions. Most of the open ocean and also the coastal areas are largely unaffected. Comparing the three standard deviation differences with each other shows that the spatial distribution and magnitude are very similar, suggesting that the distance in time by which the initial conditions are shifted is not relevant here. As the MPIOM configuration used here and in AOD1B is not eddy-permitting, there is likely little mesoscale activity and consequently also a reduced amount of intrinsic variability (Penduff et al. 2019). This might explain why the impact of the initial conditions is comparatively small, both compared to the forced variability and compared to results presented by Zhao et al. (2021).
Fig. 7: Standard deviation of ocean bottom pressure (OBP) differences from three different MPIOM simulations where the initial conditions are varied. Subfigures show the results for differences between using the correct initial conditions for 1994 vs. a 1-year shift in initial conditions (a), 1994 vs. initial conditions after half of the spin-up simulation (i.e. 1000 years of spin-up) (b), as well as a one-year shift vs. using the ocean state after half of the spin-up (c). All OBP fields are bandpass filtered to only contain periods in the 1-30-day band.
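A minimal sketch of how the forced and intrinsic contributions can be quantified from such ensemble output is given below. The array names, sizes, and synthetic amplitudes are placeholders for the actual MPIOM fields; only the band-pass-and-difference logic described above is illustrated.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_1_30(obp, dt_days=0.125):
    """Band-pass 3-hourly OBP fields to the 1-30-day period band
    (4th-order Butterworth), time along axis 0."""
    sos = butter(4, [1.0 / 30.0, 1.0], btype="bandpass",
                 fs=1.0 / dt_days, output="sos")
    return sosfiltfilt(sos, obp, axis=0)

def diff_std(run_a, run_b):
    """Standard deviation of band-limited OBP differences between two runs."""
    return np.std(band_1_30(run_a) - band_1_30(run_b), axis=0)

# Synthetic (time, lat, lon) OBP fields in Pa, 120 days of 3-hourly output.
rng = np.random.default_rng(1)
shape = (8 * 120, 18, 36)
obp_era5       = 100.0 * rng.standard_normal(shape)
obp_merra2     = obp_era5 + 20.0 * rng.standard_normal(shape)
obp_ref        = obp_era5
obp_shifted_ic = obp_era5 + 5.0 * rng.standard_normal(shape)

forced_spread    = diff_std(obp_era5, obp_merra2)       # forcing varied
intrinsic_spread = diff_std(obp_ref, obp_shifted_ic)    # initial state varied
intrinsic_fraction = intrinsic_spread / forced_spread   # cf. the 30-50% quoted above
```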
AOe07 time-series In the previous sections, we have analysed and compared atmospheric mass variations from different state-of-the-art reanalyses, which gives an estimate of the residual atmospheric uncertainty.Similarly we have examined the oceanic uncertainty in the forced variability and intrinsic chaotic variability in MPIOM through ensemble simulations.Next, we combine the individual components to derive a single timeseries of true errors representative for AOD1B RL07 that can be used in the processing of either satellite gravimetry or in simulation studies for MAGIC. The final time-series should merge the atmospheric information over the continents with the oceanic uncertainties elsewhere.For the atmosphere, we choose to use the differences between the ERA5 and the MERRA2 reanalyses.This combination is compatible with the ERA5 based AOD1B RL07 but also shows the larger differences in atmospheric mass variations and is thus deemed a better estimate of the residual uncertainties.For the ocean component, we combined the impact of the forced variability and the intrinsic variability in OBP.This is done through a new MPIOM simulation forced with atmospheric MERRA2 data that is also based on a shift in the initial conditions by one year.The uncertainty information over the oceans is then given by the difference between the new simulation to the reference simulation based on ERA5 data.Atmospheric tides in the surface pressure data and atmospherically induced tides in the simulated ocean bottom pressure are estimated and subtracted in the same way as for AOD1B RL07 and as described by Balidakis et al. (2022).The atmospheric and oceanic components are subsequently combined into a single time-series of 6-hourly fields and then highpass filtered using a cut-off frequency of 30 days in order to only represent the high frequencies relevant for satellite gravimetry. There is, however, an issue that needs to be addressed especially for the ocean component.As the uncertainty estimation is based on differences between OGCM simulations with MPIOM only, the uncertainties are likely underestimated in their magnitude.As indicated by Quinn and Ponte (2011), model differences as they are used here tend to underestimate the high-frequency OBP variations, especially when they are based on simulations using the same ocean model.Similarly, the impact of the intrinsic variability greatly depends on the resolution of the ocean model (Sérazin et al. 2015).We here use, just as in AOD1B RL07, MPIOM's TP10L40 configuration based on a 1-degree tri-polar grid and thus expect the intrinsic variations to be under-represented in the ensemble configuration.While the spatio-temporal pattern of OBP variability in the Southern Ocean is captured, its magnitude and certain peculiarities (e.g. the Argentine Gyre) are likely underestimated.Similar to the approach of the previous uncertainty estimation (Dobslaw et al. 
2016), we thus calculate a scaling factor for the oceanic component. This is done through a comparison of the oceanic uncertainty estimate to the residual OBP variations in the ITSG2018 daily solutions after RL07 has been subtracted. We then select a global scaling factor to adjust the uncertainty time-series up to the ITSG variability without exceeding it. Note that the global factor of 2.4 is applied to the oceanic component of the time-series only. For the atmospheric contribution over the continents, the strong dependence on the assimilated barometer data means that no scaling is required. We note, however, that this procedure does not introduce any actual GRACE or GRACE-FO gravity information from the ITSG solutions into the error time-series. The approach merely compares the overall magnitude of the variability. The thus derived final error time-series, labelled AOe07, can then be considered an estimation of the residual uncertainties in the AOD1B RL07 background model. In Fig. 8a, we show the standard deviation of the new AOe07 time-series. In addition, Fig. 8b shows the previous uncertainty estimation AOerr, which was developed by Dobslaw et al. (2016) to represent the model deficiencies of AOD1B RL05. AOerr was based on operational and ERA-Interim atmospheric data from the ECMWF as well as simulated OBP from the OMCT ocean model. The AOerr time-series is available for the 12-year period from 1995 to 2006 as part of the ESA ESM. Globally, the new time-series features a smaller variability, reflecting the improvements made to the AOD1B background model data over the years. The reduction is visible both over the oceans and over the continents. The discrepancy in the accuracy of the background model data between the atmospheric component over land and the simulated ocean bottom pressure is still reflected in the new time-series. The largest uncertainties are found in the coastal areas, where the amplitude of the barotropic high-frequency variations is also comparatively high. Additionally, increased uncertainties are found in the Southern Ocean, as described in Sect. 4.1. Compared to the previous release, the magnitude is smaller in the Southern Ocean, although the regions with enhanced uncertainties remain similar. In contrast with the previous estimation, however, the uncertainties in the Arctic Ocean are significantly reduced, which can be attributed to MPIOM's much better representation of the Arctic Ocean based on a tri-polar grid. Technically, AOe07 is available as a 6-hourly series of fully normalised Stokes coefficients from a spherical harmonic expansion up to degree and order (d/o) 180. The time-series covers the 26 years from 1995 to 2020 and thus extends the previous version by 14 years. The data can be accessed, together with the previous uncertainty assessment, via the ESA ESM repository under ftp://ig2-dmz.gfzpotsdam.de/ESAESM/. Variance-covariance matrices One way to include the error estimation of background models is through the use of a variance-covariance matrix (VCM) which represents the spatio-temporal uncertainties. Such a VCM can then be used either in the gravity field estimation process or in dedicated simulation studies, as shown by Abrykosov et al.
(2021) for the case of ocean tides. In the latter case, the approach offers an opportunity to significantly improve the gravity field retrieval performance if the non-tidal background stochastic modelling is improved as well. In this section, we derive a new VCM based on the updated AOe07 time-series that captures the spatio-temporal uncertainties of the non-tidal atmosphere and ocean high-frequency mass variations. The calculation of the VCM is based on the computation of both variances and covariances between the Stokes coefficients via
$$\mathrm{cov}\left(X_{l_1,m_1}, X_{l_2,m_2}\right) = \frac{1}{N}\sum_{i=1}^{N}\left(X_{l_1,m_1}(t_i) - \bar{X}_{l_1,m_1}\right)\left(X_{l_2,m_2}(t_i) - \bar{X}_{l_2,m_2}\right), \qquad (1)$$
where $X_{l_1,m_1}$ stands for the C and S Stokes coefficients of degree $l_1$ and order $m_1$, $\bar{X}$ represents the temporal mean value, and $N$ is the number of epochs. Variances are computed analogously using $l_1 = l_2$ and $m_1 = m_2$. Based on Eq. 1, a fully populated stationary VCM is calculated up to d/o 40. This then matches the resolution of current GRACE daily solutions such as ITSG2018. Additionally, a second, diagonal matrix, containing thus only variances, is calculated up to the full d/o 180. Like the AOe07 time-series, both matrices are publicly available under Shihora et al. (2023a). We note that the VCMs computed in this way do not include any regularisation as it is implemented, e.g., for ocean tides (Abrykosov et al. 2021). In contrast with the tidal case, the AO VCM is based on a much larger number of epochs (i.e. 26 years of 6-hourly data) and in preliminary tests does not pose any problems in the application. Nonetheless, it should be kept in mind that the VCM is not regularised. In the following, we will turn to possible applications in GRACE-like and Next-Generation-Gravity-Mission (NGGM) simulations. Application in simulation studies In the context of satellite gravimetry, the background model uncertainties (as given by AOe07 and the VCM) can be applied in a variety of ways. Specifically, within the GRACE and GRACE-FO data processing, they can be applied in the gravity field estimation process to weight observations according to the uncertainty of the associated background model information. Secondly, AOe07 can be used in dedicated satellite gravimetry simulation studies. In particular, the estimation of gravity field retrieval errors is regularly performed in the context of future gravity mission scenarios and their comparison. Here we present an example application of AOe07 in a Mass-Change and Geosciences International Constellation (MAGIC) simulation scenario that consists of a polar pair at 488 km and an inclined pair at 397 km altitude. As a preparatory step, we perform a GRACE-FO baseline simulation for the year 2002 using GFZ's EPOS-OC software (Zhu et al. 2004). The time-variable source model for the simulation is based on the ESA Earth-System-Model (ESA ESM), which also provides the so-called DEAL coefficients (Dobslaw et al. 2016). The DEAL coefficients contain the unperturbed de-aliasing model that is applied as a background model in the case that perfect model-based AO predictions are assumed. To arrive at a realistically perturbed background model for the simulations, the DEAL coefficients are perturbed using either the old AOerr product, which results in a perturbed background model that corresponds to AOD1B RL05, or the new AOe07 series, which is designed to represent the capabilities of AOD1B RL07.
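As an illustration of the VCM construction of Eq. (1) above, a minimal sketch follows. The coefficient ordering, the synthetic input series, and the sample-covariance normalisation are assumptions of this sketch rather than properties of the published product.

```python
import numpy as np

def empirical_vcm(coeffs):
    """Fully populated, stationary variance-covariance matrix estimated from a
    time series of Stokes coefficients.

    coeffs: array of shape (n_epochs, n_coeffs); each row holds the C_lm and
    S_lm coefficients of one 6-hourly epoch in a fixed (assumed) ordering."""
    anom = coeffs - coeffs.mean(axis=0, keepdims=True)   # remove temporal mean
    return anom.T @ anom / (coeffs.shape[0] - 1)         # sample covariance

# Illustrative numbers: d/o 40 corresponds to (40 + 1)**2 = 1681 C and S
# coefficients in total; the short synthetic series below stands in for the
# full 26-year, 6-hourly AOe07 time-series.
n_coeffs = (40 + 1) ** 2
rng = np.random.default_rng(2)
series = 1.0e-12 * rng.standard_normal((4 * 365 * 4, n_coeffs))

vcm = empirical_vcm(series)        # (1681, 1681), cf. the d/o 40 product
variances = np.diag(vcm)           # diagonal-only variant, cf. the d/o 180 product
```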
In the simulation, we perform two gravity field recoveries, using either DEAL + AOerr or DEAL + AOe07 as a background model, and subsequently subtract the HIS component (i.e. terrestrially stored water, continental ice-sheets and solid Earth) of the ESA ESM to derive gravity field retrieval errors. We note that the retrieval errors obtained in this way contain both the AO-aliasing error and the HIS-aliasing. Alternatively, the gravity field recoveries can be performed using either DEAL + AOerr + HIS or DEAL + AOe07 + HIS, which then excludes any HIS-aliasing errors from the final retrieval errors. Figure 9 illustrates the gravity field retrieval error for the new AOe07 error time-series (red, dashed) in comparison with the previous AOerr time-series (blue, solid). The mean HIS signal is given in black as an indicator of the actual geophysical signal that needs to be captured by a satellite mission. Thin lines represent monthly results for 2002, while the bold lines indicate the yearly mean results. Subfigure (a) shows the results including only the AO-aliasing error, while (b) includes the AO-aliasing contribution as well as the HIS-aliasing. The simulation results suggest an overall reduction in the retrieval error of a few mm EWH, which we consider relevant. For the mean results shown here, the RMSE is reduced by about 30 mm. While some reduction is also seen in parts at low degrees, the reduction is best visible for degrees above about d/o 30, and as a result, the mean HIS signal is recoverable to a slightly higher spatial resolution. While there are slight variations from month to month, the results are fairly consistent for all computed months of the year 2002, indicating that, when AO errors are considered, the simulated gravity retrieval error is reduced using the new AOe07 error estimation. Most likely, this can be attributed to the overall reduced amplitude of the new error time-series compared to its predecessor. Comparing the two subfigures shows that the impact of the HIS-aliasing is much smaller than the AO contribution. This also underlines why we specifically focus on the AO contribution in the error estimation. It also corroborates the decision of the GRACE project team to abstain from the application of a hydrological de-aliasing model for Level-2 gravity field processing. Summary and conclusions For this study, we have produced and examined a new quantification of the remaining uncertainties in the latest release 07 of the non-tidal Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) product. The newly updated uncertainty estimation is called AOe07 and can be used both for the processing of monthly gravity field solutions, to mitigate the impact of residual temporal aliasing, and for dedicated simulation studies of future satellite gravimetry mission concepts. We have shown that the new release of AOD1B does indeed improve the representation of the oceanic high-frequency variability by comparing both RL07 and RL06 to the daily ITSG2018 gravity field solutions in regions where ITSG displays a significant amount of variability. Considering that the represented variability of AOD1B is less accurate over the oceans compared to the atmospheric contribution over land (Shihora et al. 2023b), an update of the residual uncertainty estimation was deemed necessary.
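For context, the degree-amplitude comparison shown in Fig. 9 amounts to converting per-degree Stokes-coefficient differences into equivalent water height. A minimal sketch of that conversion is given below; the constants, the zero load Love numbers, and the synthetic error coefficients are placeholder assumptions, and the actual EPOS-OC recovery step is not reproduced here.

```python
import numpy as np

A_E = 6.3781363e6      # Earth radius [m]
RHO_E = 5517.0         # mean density of the Earth [kg m^-3]
RHO_W = 1000.0         # density of water [kg m^-3]

def degree_amplitude_ewh(dclm, dslm, love_k):
    """Per-degree amplitude of Stokes-coefficient differences expressed as
    equivalent water height [m], using a Wahr et al. (1998)-type scaling.

    dclm, dslm: (lmax+1, lmax+1) arrays indexed [l, m]; entries with m > l
    should be zero (the random stand-in below ignores this detail).
    love_k:     load Love numbers k_l (zeros used here as a placeholder)."""
    l = np.arange(dclm.shape[0])
    amp = np.sqrt((dclm ** 2 + dslm ** 2).sum(axis=1))
    scale = A_E * RHO_E / (3.0 * RHO_W) * (2.0 * l + 1.0) / (1.0 + love_k)
    return scale * amp

# Synthetic retrieval-error coefficients for the two simulated cases.
lmax = 120
rng = np.random.default_rng(3)
err_aoerr = 1.0e-12 * rng.standard_normal((2, lmax + 1, lmax + 1))
err_aoe07 = 0.7 * err_aoerr                   # stand-in for the reduced errors
k_l = np.zeros(lmax + 1)                      # replace with real load Love numbers

amp_old = 1.0e3 * degree_amplitude_ewh(err_aoerr[0], err_aoerr[1], k_l)  # [mm EWH]
amp_new = 1.0e3 * degree_amplitude_ewh(err_aoe07[0], err_aoe07[1], k_l)
```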
The new estimation is based on model inter-comparisons both for the atmosphere and the ocean. For the atmospheric part, we consider model differences between the latest ECMWF reanalysis ERA5, which is the basis of AOD1B RL07, and other state-of-the-art reanalyses. For the oceanic part, we rely on differences between MPIOM simulations with varied atmospheric forcing and perturbed initial conditions. As, however, the oceanic component is based on model differences from the same ocean model, the uncertainties are most likely underestimated. Similar to the previous release by Dobslaw et al. (2016), this issue has been addressed through a time-invariant global scaling factor that is applied to the entire oceanic domain. The scaling factor is determined based on a comparison of the model differences to the residual variability in the daily ITSG data. While this is not an ideal approach, it allows the magnitude of the variability to be adequately captured. However, we note that the ratio of intrinsic versus forced variability is smaller than in comparable studies (Zhao et al. 2021) and that the contribution of the intrinsic variability is under-represented in certain regions. We attribute this to the limited spatial resolution of the MPIOM configuration, which does not resolve the mesoscale processes that are the ultimate driver of the intrinsic variability. We believe that, given the ongoing progress towards the next generation of satellite gravimetry missions and the associated studies, the timely availability of applicable background model uncertainties is important, so AOe07 has been made available nonetheless. In a possible future refinement, additional investigations that specifically address the impact of the ocean model resolution, as well as differences based on other ocean models including high-resolution shallow-water codes, should be considered, along with a larger number of model runs to better capture the effects of the intrinsic variability. In that way, an even more realistic uncertainty estimate could be provided. Based on the AOe07 time-series, we also provide a variance-covariance matrix that explicitly represents the (time-averaged) spatial correlations for further use in simulation studies. A successful example of the application of VCMs is given in Abrykosov et al. (2021) for the case of ocean tides. Lastly, we have demonstrated the application of the new data-set in an exemplary satellite gravimetry simulation for the future MAGIC constellation. The results, which compare the gravity field retrieval error when using either the new AOe07 data or the previous error estimation, show a general improvement in the monthly retrieval by about 30 mm EWH, especially at higher degrees, and thus demonstrate a better signal recovery by a few degrees. We recommend that simulation studies be based on the ESA ESM DEAL coefficients in combination with AOe07 to arrive at a realistically perturbed de-aliasing model that is compatible with recent developments in background models.
Fig. 3: Amount of variance explained by the update of AOD1B, i.e. the difference (RL07-RL06), using the residual circulation signal from ITSG2018 daily gravity field solutions as a base time-series.
Fig. 8: Standard deviation of the new time-series of true errors adapted for AOD1B RL07 (a) and the previous error estimation (b). In both cases, the 12 years from 1995 to 2006 are used.
Fig. 9: Simulated degree amplitudes of gravity retrieval errors in mm EWH considering only the AO noise contribution. Results using the previous AOerr (new AOe07) estimation are given in blue (red).
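One possible reading of the global scaling step recalled in the summary above is sketched below. Whether the factor is fixed by a grid-point-wise minimum, a basin average, or some other criterion is not specified in the text, so the criterion used here is an assumption, and the synthetic inputs are for illustration only.

```python
import numpy as np

def global_ocean_scale(ocean_err_std, itsg_residual_std, ocean_mask):
    """Largest single factor that lifts the modelled ocean-error variability
    towards the ITSG residual variability without exceeding it at any ocean
    grid point (one possible reading of the criterion described above)."""
    ratio = itsg_residual_std[ocean_mask] / ocean_err_std[ocean_mask]
    return float(np.nanmin(ratio))

# Synthetic standard-deviation maps, for illustration only.
rng = np.random.default_rng(4)
err_std  = np.abs(rng.standard_normal((18, 36))) + 0.1
itsg_std = 2.4 * err_std * rng.uniform(1.0, 2.0, size=(18, 36))
mask = np.ones((18, 36), dtype=bool)
print(global_ocean_scale(err_std, itsg_std, mask))   # >= 2.4 by construction
```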
9,087.4
2024-04-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Bootstrapping the O(N) Archipelago We study 3d CFTs with an $O(N)$ global symmetry using the conformal bootstrap for a system of mixed correlators. Specifically, we consider all nonvanishing scalar four-point functions containing the lowest dimension $O(N)$ vector $\phi_i$ and the lowest dimension $O(N)$ singlet $s$, assumed to be the only relevant operators in their symmetry representations. The constraints of crossing symmetry and unitarity for these four-point functions force the scaling dimensions $(\Delta_\phi, \Delta_s)$ to lie inside small islands. We also make rigorous determinations of current two-point functions in the $O(2)$ and $O(3)$ models, with applications to transport in condensed matter systems. Introduction Conformal field theories (CFTs) lie at the heart of theoretical physics, describing critical phenomena in statistical and condensed matter systems, quantum gravity via the AdS/CFT correspondence, and possible solutions to the hierarchy problem (and other puzzles) in physics beyond the standard model. Quite generally, they serve as the endpoints of renormalization group flows in quantum field theory. The conformal bootstrap [1,2] aims to use general consistency conditions to map out and solve CFTs, even when they are stronglycoupled and do not have a useful Lagrangian description. In recent years great progress has been made in the conformal bootstrap in d > 2, including rigorous bounds on operator dimensions and operator product expansion (OPE) coefficients , analytical constraints [33][34][35][36][37][38][39][40][41][42][43][44][45], and methods for approximate direct solutions to the bootstrap [46][47][48][49], including a precise determination of the low-lying spectrum in the 3d Ising model under the conjecture that the conformal central charge is minimized [50]. These results have come almost exclusively from analyzing 4-point correlation functions of identical operators. It is tantalizing that even more powerful constraints may come from mixed correlators. In [51] some of the present authors demonstrated that semidefinite programming techniques can very generally be applied to systems of mixed correlators. In 3d CFTs with a Z 2 symmetry, one relevant Z 2 -odd operator σ, and one relevant Z 2 -even operator ǫ, the mixed correlator bootstrap leads to a small and isolated allowed region in operator dimension space consistent with the known dimensions in the 3d Ising CFT. With the assistance of improved algorithms for high-precision semidefinite programming [52], this approach has culminated in the world's most precise determinations of the leading operator dimensions (∆ σ , ∆ ǫ ) = (0.518151(6), 1.41264 (6)) in the 3d Ising CFT. The immediate question is whether the same approach can be used to rigorously isolate and precisely determine spectra in the zoo of other known (and perhaps unknown) CFTs, particularly those with physical importance. In this work we focus on 3d CFTs with O(N) global symmetry, previously studied using numerical bootstrap techniques in [15,22]. We will show that the CFTs known as the O(N) vector models can be similarly isolated using a system of mixed correlators containing the leading O(N) vector φ i and singlet s, assumed to be the only relevant operators in their symmetry representations. We focus on the physically most interesting cases N = 2, 3, 4 (e.g., see [53]) where the large-N expansion fails. We do additional checks at N = 20. A summary of the constraints on the leading scaling dimensions found in this work are shown in figure 1. 
We also make precise determinations of the current central charge JJ ∝ C J for N = 2, 3. This coefficient is particularly interesting because it describes conductivity properties of materials in the vicinity of their critical point [54]. The 3d O(2) model (or XY model) has a beautiful experimental realization in superfluid 4 He [55] which has yielded results for ∆ s that are in ∼ 8σ tension with the leading Monte Carlo and high temperature expansion computations [56]. Our results are not yet precise enough to resolve this discrepancy, but we are optimistic that the approach we outline in this work will be able to do so in the near future. More generally, the results of this work give us hope that the same techniques can be used to to solve other interesting stronglycoupled CFTs, such as the 3d Gross-Neveu models, 3d Chern-Simons and gauge theories coupled to matter, 4d QCD in the conformal window, N = 4 supersymmetric Yang-Mills theory, and more. The structure of this paper is as follows. In section 2, we summarize the crossing symmetry conditions arising from systems of correlators in 3d CFTs with O(N) symmetry, and discuss how to study them with semidefinite programming. In section 3, we describe our results and in section 4 we discuss several directions for future work. Details of our implementation are given in appendix A. An exploration of the role of the leading symmetric tensor is given in appendix B. [15]. Further allowed regions may exist outside the range of this plot; we leave their exploration to future work. Crossing Symmetry with Multiple Correlators Let us begin by summarizing the general form of the crossing relation for a collection of scalar fields φ i = (φ 1 , φ 2 , φ 3 , . . .). We take the φ i to have dimensions ∆ i and for the moment we do not assume any symmetry relating them. Taking the OPE of the first two and last two operators, the 4-point function looks like: The subscripts ∆, ℓ refer to the dimension and spin of the operator O. We refer to [51] for details about how to compute the conformal blocks g ∆ ij ,∆ kl ∆,ℓ (u, v) in any dimension and for arbitrary values of ∆ ij . We also have the symmetry property λ ijO = (−1) ℓ λ jiO . Crossing symmetry of the correlation function requires that OPEs taken in different orders must produce the same result. As an example, exchanging ( It is convenient to symmetrize/anti-symmetrize in u, v, which leads to the two equations: The functions F ij,kl ∓,∆,ℓ are symmetric under exchanging i ↔ k and j ↔ l. O(N ) Models We now restrict our discussion to the case where φ i transforms in the vector representation of a global O(N) symmetry. When the fields entering the four-point function are charged under global symmetries, the conformal block expansion can be organized in symmetry structures corresponding to irreducible representations appearing in the OPE φ i × φ j . This gives the equations 1 In what follows, we will use s, s ′ , s ′′ , . . . to refer to the singlet scalars in increasing order of dimension. For example, s is the lowest-dimension singlet scalar in the theory. Similarly, t, t ′ , t ′′ , . . . and φ, φ ′ , φ ′′ , . . . refer to scalars in the traceless symmetric tensor and vector representations, in increasing order of dimension. We would like to supplement the above equations with crossing symmetry constraints from other four-point functions. The simplest choice is to consider all nonvanishing fourpoint functions of φ i with the lowest dimension singlet scalar operator s. 
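For reference, the symmetrized and antisymmetrized combinations $F_{\mp}$ referred to above are conventionally defined along the lines of the following expression. This is a sketch in the spirit of the conventions of [51]; the precise normalization used in this paper may differ.
$$F^{ij,kl}_{\mp,\Delta,\ell}(u,v) \;\equiv\; v^{(\Delta_k+\Delta_j)/2}\, g^{\Delta_{ij},\Delta_{kl}}_{\Delta,\ell}(u,v) \;\mp\; u^{(\Delta_k+\Delta_j)/2}\, g^{\Delta_{ij},\Delta_{kl}}_{\Delta,\ell}(v,u),$$
where $u$ and $v$ are the usual conformal cross-ratios and $\Delta_{ij} \equiv \Delta_i - \Delta_j$.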
Another interesting choice would be the lowest dimension scalar in the traceless symmetric tensor representation t ij . However the OPEs t ij × t kl and t ij × φ k contain many additional O(N) representations, increasing the complexity of the crossing equations. We leave the analysis of external t ij operators to the future. Thus we consider the four-point functions φ i φ j ss and ssss , which give rise to four additional sum rules after grouping the terms with the same index structure. In total this leads to a system of seven equations: Note that the final line represents two equations, corresponding to the choice of ±. We can rewrite these equations in vector notation as where V T , V A , V V are a 7-dimensional vectors and V S is a 7-vector of 2 × 2 matrices: (2.9) A Note on Symmetries We are primarily interested in theories with O(N) symmetry. However, our bounds will also apply to theories with the weaker condition of SO(N) symmetry. This point deserves discussion. The group O(N) includes reflections, so its representation theory is slightly different from that of SO(N). In particular ǫ i 1 ...i N is not an invariant tensor of O(N) because it changes sign under reflections. For odd N = 2k + 1, O(2k + 1) symmetry is equivalent to SO(2k + 1) symmetry plus an additional Z 2 symmetry. For even N = 2k, the orthogonal group is a semidirect product O(2k) ∼ = Z 2 ⋉ SO(2k), so it is not equivalent to an extra Z 2 . Let us consider whether the crossing equations must be modified in the case of only SO(N) symmetry. In theories with SO(2) symmetry, the antisymmetric tensor representation is isomorphic to the singlet representation. (This is not true for O(2) because the isomorphism involves ǫ ij .) However in the crossing equation (2.7), antisymmetric tensors appear with odd spin, while singlets appear with even spin. Thus, the coincidence between A and S does not lead to additional relations in (2.7). For theories with SO(3) symmetry, the antisymmetric tensor representation is equivalent to the vector representation. Thus, antisymmetric odd spin operators appearing in φ × φ may also appear in φ × s. This does not affect (2.7) because there is no a priori relationship between λ φφO and λ φsO . However, it is now possible to have a nonvanishing four-point function φ i φ j φ k s proportional to ǫ ijk . Including crossing symmetry of this four-point function cannot change the resulting dimension bounds without additional assumptions. The reason is as follows. Any bound computed from (2.7) without using crossing of φφφs is still valid. Hence, the bounds cannot weaken. However, because any O(3)-invariant theory is also SO(3)-invariant, any bound computed while demanding crossing of φφφs must also apply to O(3)-invariant theories. So the bounds cannot strengthen. Crossing for φφφs only becomes important if we input that λ φφO λ φsO is nonzero for a particular operator. 2 This would guarantee our theory does not have O(3) symmetry. For SO(4), the new ingredient is that the antisymmetric tensor representation can be decomposed into self-dual and anti-self-dual two-forms. As explained in [10], this leads to an additional independent sum rule where A ± represent self-dual and anti-self-dual operators. By the same reasoning as in the case of SO (3), this crossing equation cannot affect the bounds from (2.7) without additional assumptions. 
We can also see this directly from (2.10) together with (2.7): in the semidefinite program used to derive operator dimension bounds, we may always take the functional acting on (2.10) to be zero. An exception occurs if we know an operator is present with λ φφO A + = 0 but λ φφO A − = 0 (or vice versa). Then we can include that operator with other operators whose OPE coefficients are known (usually just the unit operator) and the resulting semidefinite program will be different. For SO(N) with N ≥ 5, no coincidences occur in the representation ring that would be relevant for the system of correlators considered here. In conclusion, (2.7) and the semidefinite program discussed below remain valid in the case of SO(N) symmetry. Bounds on theories with SO(N) symmetry can differ only if we input additional information into the crossing equations that distinguishes them from O(N)-invariant theories (for example, known nonzero OPE coefficients). Bounds from Semidefinite Programming As explained in [51], solutions to vector equations of the form (2.7) can be constrained using semidefinite programming (SDP). We refer to [51] for details. Here we simply present the problem we must solve. To rule out a hypothetical CFT spectrum, we must find a vector of linear functionals α = (α 1 , α 2 , ..., α 7 ) such that for all traceless symetric tensors with ℓ even, (2.12) α · V S,∆,ℓ 0, for all singlets with ℓ even. (2.15) Here, the notation " 0" means "is positive semidefinite." If such a functional exists for a hypothetical CFT spectrum, then that spectrum is inconsistent with crossing symmetry. In addition to any explicit assumptions placed on the allowed values of ∆, we impose that all operators must satisfy the unitarity bound Additional information about the spectrum can weaken the above constraints, making the search for the functional α easier, and further restricting the allowed theories. A few specific assumptions will be important in what follows: • The 3d O(N) vector models, which are our main focus, are believed to have exactly one relevant singlet scalar s, O(N) vector scalar φ i , and traceless symmetric scalar t ij . 3 We will often assume gaps to the second-lowest dimension operators s ′ , φ ′ i , t ′ ij in each of these sectors. These assumptions affect (2.12), (2.14), and (2.15). • Another important input is the equality of the OPE coefficients λ φφs = λ φsφ . This is a trivial consequence of conformal invariance. It is important that φ and s be isolated in the operator spectrum for us to be able to exploit this constraint. For instance, imagine there were two singlet scalars s 1,2 with the same dimension. Then (λ fake φφs ) 2 = λ 2 φφs 1 + λ 2 φφs 2 would appear in (2.7). This combination does not satisfy λ fake φφs = λ φs i φ . • We will sometimes assume additional gaps to derive lower bounds on OPE coefficients. For instance, to obtain a lower bound on the coefficient of the conserved O(N) current in the φ i × φ j OPE, we will need to assume a gap between the first and second spin-1 antisymmetric tensor operators. As an example, (2.17) shows a semidefinite program that incorporates symmetry of λ φφs and the assumption that φ i , s are the only relevant scalars in their respective sectors: (2.17) 3 Additional relevant scalars could be present in other representations. The final constraint in (2.17) imposes the appearance of φ i , s in the OPEs and incorporates the equality λ φφs = λ φsφ . 4 It replaces two otherwise independent constraints on V S and V V . 
As previously mentioned, if we assume no gap between φ i , s and the next operators in each sector, enforcing symmetry of the OPE coefficients will have no effect: indeed each of the terms in this constraint would be independently positive-semidefinite, since the other inequalities imply α · V S,∆s+δ,0 0 and α · V V,∆ φ +δ,0 ≥ 0 for δ arbitrary small. Finally, one might want to enforce the existence of a unique relevant scalar operator, with dimension ∆ t , transforming in the traceless symmetric representation. In this case the symmetric tensor constraint is replaced by 3 Results O(2) To begin, let us recall the bounds on ∆ φ , ∆ s computed in [15] using the correlation function . Like the Ising model bounds computed in [12,50], this single-correlator bound has an excluded upper region, an allowed lower region, and a kink in the curve separating the two. The position of this kink corresponds closely to where we expect the O(2) model to lie, and one of our goals is to prove using the bootstrap that the O(2) model does indeed live at the kink. 5 If we assume that s is the only relevant O(2) singlet, then a small portion of the allowed region below the kink gets carved away, analogous to the Ising case in [51]. Adding the constraints of crossing symmetry and unitarity for the full system of correlators φφφφ , φφss , ssss does not change these bounds without additional assumptions. However, having access to the correlator φφss lets us input information special to the O(2) model that does have an effect. We expect that φ is the only relevant O(2) vector in the theory. One way to understand this fact is via the equation of motion at the Wilson-Fisher fixed point in 4 − ǫ dimensions, This equation implies that the operator φ 2 φ i is a descendent, so there is a gap in the spectrum of O(2)-vector primaries between φ i and the next operator in this sector, which is a linear combination of φ i φ 4 and φ i (∂φ) 2 . The equation of motion makes sense in perturbation theory ǫ ≪ 1. However, it is reasonable to expect gaps in the spectrum to be robust as As explained above, it is natural to impose this gap in our formalism. Another important constraint is symmetry of the OPE coefficient λ φφs = λ φsφ . Including these constraints gives the region in figure 3, which we show for increasing numbers of derivatives Λ = 19, 27, 35 (see appendix A). We now have a closed island around the expected position of the O(2) model, very close to the kink in figure 2. The bounds strengthen as Λ increases. However, the allowed regions apparently do not shrink as quickly as in the case of the 3d Ising CFT [52]. Thus, our determination of (∆ φ , ∆ s ) is unfortunately not competitive with the best available Monte Carlo [56] and experimental [55] results (though it is consistent with both). 6 We conjecture that including additional crossing relations (such as those involving the symmetric tensor t ij ) will give even stronger bounds; we plan to explore this possibility [56]. The red lines represent the 1σ (solid) and 3σ (dashed) confidence intervals for ∆ s from experiment [55]. The allowed/disallowed regions in this work were computed by scanning over a lattice of points in operator dimension space. For visual simplicity, we fit the boundaries with curves and show the resulting curves. Consequently, the actual position of the boundary between allowed and disallowed is subject to some error (small compared to size of the regions themselves). We tabulate this error in appendix A. in future work. 
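Schematically, the Wilson-Fisher equation of motion invoked above for the gap in the $O(2)$-vector sector reads (illustrative normalization only)
$$\partial^2 \phi_i \;\propto\; \lambda\, (\phi_j \phi_j)\, \phi_i ,$$
so that the operator $\phi^2 \phi_i$ is a descendant of $\phi_i$ rather than an independent primary, leaving a gap up to operators of the schematic form $\phi_i \phi^4$ and $\phi_i (\partial\phi)^2$.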
In addition to gaps in the O(2)-vector and singlet sectors, we also expect that the O(2) model has a single relevant traceless symmetric tensor t ij . Let us finally impose this condition by demanding that t ′ ij has dimension above D = 3 and scanning over ∆ t along with ∆ φ , ∆ s . The result is a three-dimensional island for the relevant scalar operator dimensions, which we show in figure 4. Our errors for the symmetric-tensor dimension ∆ t are much more competitive with previous determinations. By scanning over different values of (∆ φ , ∆ s ) in the allowed region and computing the allowed range of ∆ t at Λ = 35, we estimate which is consistent with previous results from the pseudo-ǫ expansion approach [57] giving ∆ t = 1.237(4). [56] and the pseudo-ǫ expansion approach [57]. Note that our estimate for ∆ t in (3.2) was computed with Λ = 35, so it is more precise than the region pictured here. O(N ), N > 2 The bounds for N > 2 are similar to the case of N = 2. In figure 5, we show the allowed region of (∆ φ , ∆ s ) for theories with O(3) symmetry, assuming φ and s are the only relevant scalars in their respective O(N) representations, and using symmetry of the OPE coefficient λ φφs . We expect that an additional scan over ∆ t would yield a 3d island similar to figure 4. By performing this scan at a few values of (∆ φ , ∆ s ), we estimate which is consistent with previous results from the pseudo-ǫ expansion approach [57] giving ∆ t = 1.211(3). In figure 6, we show the allowed region of (∆ φ , ∆ s ) for the O(4) model, with the same assumptions as discussed above for O(3). A clear trend is that the allowed region is growing with N. For example, at Λ = 19, the O(4) allowed region isn't even an island -it connects to a larger region not shown in the plot. Increasing the number of derivatives to Λ = 35 shrinks the region, but not by much. The trend of lower-precision determinations at larger N reverses at some point. For example, in figure 1, the allowed region for N = 20 is smaller again than the O(4) region. The relative size of the O(4) region and the O(20) region is Λ-dependent, and we have not studied the pattern for general N in detail. Finally, we remark that all of the constraints on operator dimensions found above can be reinterpreted in terms of constraints on critical exponents. Following standard critical exponent notation (see [53]), the relations are given by Current Central Charges Let J µ ij (x) be the conserved currents that generate O(N) transformations. J µ ij (x) is an O(N) antisymmetric tensor with spin 1 and dimension 2. Its 2-point function is determined by conformal and O(N) symmetry to be (3.5) We call the normalization coefficient C J from Eq. (3.5) the current central charge. 7 The conserved current J µ ij appears in the sum over antisymmetric-tensor operators O A in Eq. (2.7). A Ward identity relates the OPE coefficient λ Jφφ to C J . In our conventions where C free J = 2 is the free theory value of C J [60]. It is well known that the conformal bootstrap allows one to place upper bounds on OPE coefficients, or equivalently a lower bound on C J . To find such a bound, we search for a functional α with the following properties (cf. eq. (2.17)): Notice that compared to (2.17), we have dropped the assumption of the functional α being positive on the identity operator contribution and we chose a convenient normalization for α. It follows then from the crossing equation (2.7) that Therefore, finding a functional α sets a lower bound on C J . 
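Returning to the remark above about critical exponents: the standard dictionary between scaling dimensions and exponents, whose displayed form did not survive extraction, reads as follows in d = 3. The relation for the symmetric-tensor dimension uses the common convention for the crossover exponent and should be checked against the conventions of [53].

\Delta_\phi = \frac{d-2+\eta}{2} = \frac{1+\eta}{2}, \qquad
\Delta_s = d - \frac{1}{\nu} = 3 - \frac{1}{\nu}, \qquad
\Delta_t = d - \frac{\phi_c}{\nu} = 3 - \frac{\phi_c}{\nu},

where η is the anomalous dimension, ν the correlation-length exponent, and φ_c the crossover exponent.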
To improve the bound, we should minimize the RHS of (3.8). We thus seek to minimize the corresponding objective subject to the constraints (3.7). This type of problem can be efficiently solved using SDPB. In this way, we set a lower bound on C J for all allowed values of ∆ φ , ∆ s . We can also set an upper bound on C J , provided we additionally assume a gap in the spin-1 antisymmetric tensor sector. At this point it is not clear what gap we should assume, but to stay in the spirit of our previous assumptions, we will assume that the dimension of the second spin-1 antisymmetric tensor satisfies ∆ J ′ ≥ 3, so that the current J µ ij is the only relevant operator in this sector. We now search for a functional α (different from the one above) that satisfies the analogous positivity conditions and is suitably normalized; the constraints on α coming from the singlet and traceless symmetric-tensor sectors stay the same as in (3.7). An upper bound on C J then follows from (2.7). Our upper and lower bounds on C J , expressed as a function of ∆ φ and ∆ s , are shown in figures 7 and 8 for O(2) and O(3) symmetry, respectively. The allowed region for a given N consists of a 3d island in (∆ φ , ∆ s , C J ) space. This determines the current central charge to within the height of the island. For the two physically most interesting cases, N = 2 and N = 3, we find the values quoted in (3.14). Recently, the current central charge attracted some interest in studies of transport properties of O(N)-symmetric systems at criticality [54], where C J can be related to the conductivity at zero temperature. In particular, it was found in [54] that the asymptotic behavior of the conductivity at low temperature is given by an expansion (3.15) in which σ Q = e 2 /ℏ is the conductance quantum. Here, σ ∞ is the (unitless) conductivity at high frequency and zero temperature, which is related to C J . Furthermore, C T is the central charge of the theory, C is the JJs OPE coefficient, and γ is one of the JJT OPE coefficients, where T is the energy-momentum tensor. B and H xx are the finite-temperature one-point function coefficients. Of all the CFT data that goes into (3.15), we have determined σ ∞ and ∆ s for the O(N) vector models in this work, while C T was estimated using bootstrap methods before in [15]. The OPE coefficients C and γ cannot be determined in our setup, but could in principle be obtained by including the conserved current J µ ij as an external operator in the crossing equations. The one-point functions B and H xx are in principle determined by the spectrum and OPE coefficients of the theory [61]. However, to compute them we would need to know the high-dimension operator spectrum. This is still out of the reach of the conformal bootstrap approach. Of particular interest for physical applications is the N = 2 case, which describes superfluid-to-insulator transitions in systems with two spatial dimensions [62]. Some examples of such systems are thin films of superconducting materials, Josephson junction arrays, and cold atoms trapped in an optical lattice. In these systems the parameter σ ∞ is the high-frequency limit of the conductivity. This quantity has not yet been measured in experiments, but was recently computed in quantum Monte Carlo simulations [62] and [63] to be 2πσ ∞ MC = 0.355(5) and 0.359(4), respectively. Our result 2πσ ∞ Bootstrap = 0.3554(6) is in excellent agreement with those determinations and is an order of magnitude more precise.
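As a small illustration of how a quoted number follows from the geometry of the island, the Python sketch below turns a set of allowed (∆ φ , ∆ s , C J ) points into a central value with an uncertainty given by half the island height; the sample points are invented placeholders, not bootstrap output.

import numpy as np

# Allowed (Delta_phi, Delta_s, C_J / C_J^free) points; placeholder values only.
allowed = np.array([
    [0.5191, 1.511, 0.904],
    [0.5193, 1.513, 0.906],
    [0.5195, 1.512, 0.905],
])
cj = allowed[:, 2]
center = 0.5 * (cj.max() + cj.min())
half_height = 0.5 * (cj.max() - cj.min())   # "height of the island"
print(f"C_J / C_J^free = {center:.4f} +/- {half_height:.4f}")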
Conclusions In this work, we used the conformal bootstrap with multiple correlators to set more stringent bounds on the operator spectrum of 3d CFTs with O(N) symmetry. The multiple correlator approach works in this setting similarly to the case of Z 2 -symmetric CFTs -including mixed correlators opens access to parts of the spectrum that are inaccessible with a single correlator. In this work we considered mixed correlators of an O(N) singlet and an O(N) vector, gaining access to the sector of O(N) vectors. We can then additionally input assumptions about the operator spectrum in that sector. As a result, we exclude large portions of the allowed space of CFTs. This reaffirms conclusions from previous works on the 3d Ising model: it is important and fruitful to consider multiple crossing equations. We believe that including mixed correlators will be rewarding in many other bootstrap studies that are currently ongoing. Specifically, for O(N) symmetric CFTs, we found that the scaling dimensions of the lowest O(N) vector scalar φ and O(N) singlet scalar s are constrained to lie in a closed region in the (∆ φ , ∆ s ) plane. Our assumptions, besides conformal and O(N) symmetry, were crossing symmetry, unitarity, and -crucially -the absence of other relevant scalars in the O(N) singlet and vector sectors. This is completely analogous to the Z 2 -symmetric case where similar assumptions isolate a small allowed region around the Ising model in the (∆ σ , ∆ ǫ ) plane. Our allowed regions represent rigorous upper and lower bounds on dimensions in the O(N) models. In principle, this approach could be used to compute the scaling dimensions of the O(N) models to a very high precision, assuming that the allowed region will shrink to a point with increased computational power. However, our results suggest that the region either does not shrink to a point, or the convergence is slow in the present setup. Therefore, our uncertainties are currently larger than the error bars obtained using other methods. 8 In particular, we have not yet resolved the discrepancy between Monte Carlo simulations and experiment for the value of ∆ s in the O(2) model. in other dimensions (such as in 5d [22,28,30,64,65]) may also help to shed light on these issues. We plan to further explore these questions in the future. In addition to scaling dimensions, it is also important to determine OPE coefficients. Here we presented an example in the computation of the current central charge C J . In the case of O(2) symmetry, this yields the current most precise prediction for the highfrequency conductivity in O(2)-symmetric systems at criticality. It will be interesting to extend these mixed-correlator computations to other OPE coefficients in the O(N) models such as the stress-tensor central charge C T and 3-point coefficients appearing in JJs and JJT which control frequency-dependent corrections to conductivity. Pursuing the latter will require implementing the bootstrap for current 4-point functions, a technical challenge for which efforts are ongoing in the bootstrap community. More generally, the results of this work make it seem likely that scaling dimensions in many other strongly-interacting CFTs can be rigorously determined using the multiple correlator bootstrap. It will be interesting to study mixed correlators in 3d CFTs with fermions and gauge fields -it is plausible that similar islands can be found for the 3d Gross-Neveu models and 3d Chern-Simons and gauge theories coupled to matter. 
In 4d, we hope that by pursuing the mixed correlator bootstrap we will eventually be able to isolate and rigorously determine the conformal window of QCD. It also be interesting to apply this approach to theories with conformal manifolds to see the emergence of lines and surfaces of allowed dimensions; a concrete application would be to extend the analysis of [14,23] to mixed correlators and pursue a rigorous study of the dimension of the Konishi operator in N = 4 supersymmetric Yang-Mills theory at finite N. The time is ripe to set sail away from our archipelago and explore the vast ocean of CFTs! A Implementation Details As described in [51], the problem of finding α satisfying (2.11) can be transformed into a semidefinite program. Firstly, we must approximate derivatives of V S , V T , V A , and V V as positive functions times polynomials in ∆. We do this by computing rational approximations for conformal blocks using the recursion relation described in [51]. Keeping only the polynomial numerator in these rational approximations, (2.11) becomes a "polynomial matrix program" (PMP), which can be solved with SDPB [52]. Three choices must be made to compute the PMP. Firstly, κ (defined in appendix A of [52]) determines how many poles to include in the rational approximation for conformal blocks. Secondly, Λ determines which derivatives of conformal blocks to include in the functionals α. Specifically, we take Some of these derivatives vanish by symmetry properties of F . The total number of nonzero components of α is Finally, we must choose which spins to include in the PMP. We use Mathematica to compute and store tables of derivatives of conformal blocks. Another Mathematica program reads these tables, computes the polynomial matrices corresponding to the V 's, and uses the package SDPB.m to write the associated PMP to an xml file. This xml file is then used as input to SDPB. Our settings for SDPB are given in table 1. Finally let us conclude with some comments on the precision of the plots presented in the main text. Conformal blocks of correlation functions involving operators of nonequal dimensions depend nontrivially on the difference of the dimensions. Hence, when computing the boundary of various allowed regions, it is convenient to perform a scan over a lattice of points. The vectors generating the lattice points are shown in table 2. The smooth regions shown in figs. 1, 3, 5, and 6 are the results of a least-squares fit, subject to the constraint that allowed lattice points should lie inside the curves while excluded ones lie outside. In table 2 we also show the maximal perpendicular distance of these points to the curves. The bounds on C J shown in figures 7 and 8 were computed for the lattices of points that were found to be allowed in figures 3 precision 448 576 768 896 findPrimalFeasible True True True True findDualFeasible True True True True detectPrimalFeasibleJump True True True True detectDualFeasibleJump True True True True B Symmetric Tensor Scan In this appendix we collect some detailed scans of the allowed region of (∆ φ , ∆ s , ∆ t ) space Qualitatively the picture is the same for each value of N and we expect that the projections of the 3d plot into the (∆ φ , ∆ s ) plane will look similar for even higher values of N. In particular, the lowest allowed values of ∆ t are obtained at the lower left corner of the allowed region in the (∆ φ , ∆ s ) plane, while the greatest values are obtained at upper right corner of the allowed region. 
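The following Python sketch illustrates the kind of bookkeeping described for the lattice scan in appendix A: it uses the convex hull of the allowed points as a stand-in for the least-squares boundary fit and reports how far points sit from that boundary. This is only a rough proxy for the constrained curve fit actually used, and the lattice points are synthetic.

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Synthetic lattice points in the (Delta_phi, Delta_s) plane.
allowed = rng.normal([0.5193, 1.5125], [0.0004, 0.006], size=(40, 2))
excluded = rng.normal([0.5193, 1.5125], [0.0015, 0.020], size=(200, 2))

hull = ConvexHull(allowed)                    # proxy for the fitted boundary curve
# hull.equations rows are (nx, ny, b) with nx*x + ny*y + b <= 0 for points inside the hull.
signed = excluded @ hull.equations[:, :2].T + hull.equations[:, 2]
dist_outside = signed.max(axis=1)             # > 0 means the point lies outside the proxy boundary
print(f"{np.mean(dist_outside > 0):.0%} of 'excluded' points fall outside the proxy boundary")
print(f"largest separation of an excluded point from the boundary: {dist_outside.max():.4f}")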
This allows us to find general bounds on ∆ t without doing a whole scan over the (∆ φ , ∆ s ) plane; it is enough to find bounds on ∆ t at the corner points. Figure 11: Allowed points in the (∆ φ , ∆ s ) plane for different values of ∆ t in O(4) symmetric CFTs at Λ = 19 (dark blue). The light blue shows the allowed region at Λ = 35 without any assumptions on the symmetric tensor spectrum. The green rectangle is the Monte Carlo estimate [59].
7,775.6
2015-04-29T00:00:00.000
[ "Physics" ]
Remineralization Effect of Zamzam Water on Initial Artificial Carious Lesion of Permanent Teeth Introduction: Chemical testing showed that Zamzam water is completely safe to drink and has health benefits due to its high percentage of sodium, calcium, magnesium, and many other minerals. The purpose of this study was to evaluate the remineralization effect of Zamzam water on extracted premolars using the Vickers Microhardness test. Methods: Teeth samples (N=40) with artificially induced carious lesions were divided randomly into four groups: Study group (I) treated with agitated Zamzam water (n=10), study group (II) treated with non-agitated Zamzam water (n=10), control positive group (III) treated with sodium fluoride (n=10), and control negative group (IV) treated with deionized water (n=10). Teeth were subjected to microhardness testing before and after artificial demineralization and after remineralization treatment within the four groups. Results: Following treatment with different solutions in both study and control groups, there was an increase in microhardness after remineralization but with varying degrees. The highest increase in microhardness was shown after remineralization with sodium fluoride followed by agitated Zamzam water. Conclusion: Zamzam water with agitation causes an increase in the microhardness of the enamel surface after demineralization. Zamzam water is an effective remineralizing agent in initial carious lesions, and its efficacy is comparable to that of sodium fluoride. Introduction Dental caries is one of the most common oral health problems, particularly among children and young adults [1,2]. The primary prevention of dental caries focuses on limiting cariogenic diets, and dental plaque control through individual and professional oral hygiene measures, as well as increasing tooth resistance to acid attack [3]. Increasing tooth resistance to acid attack can be achieved by fluoride which gives hardness and durability to the enamel surface and protects it from caries [4][5][6]. Water is one of the most essential dietary elements, and its consistency has a significant impact on human health. The use of Zamzam water is very popular [7]. Many Muslims believe that the water of the Zamzam Well is divinely blessed, able to satisfy both hunger and thirst, and cure illness. Pilgrims make efforts to drink this water during their pilgrimage. People living nearby the Zamzam Well might drink the water more than others. The Zamzam Well has been in use for about 4,000 years, according to Arab historians. Zamzam Well is located in the holiest mosque of Muslims, in the city of Makkah. It is about 40 meters deep and surrounded by hills of igneous rocks. For Muslims, drinking water from the Zamzam Well has a unique meaning. Every year, millions of Muslims drink this water as sacred water, especially during pilgrimages and Umrah [8]. In 1971, the Ministry of Agriculture and Water Resources sent samples of Zamzam water to European laboratories for analysis, in order to determine the water's potability. According to the findings of the water samples examined by European laboratories, Zamzam water has a unique physique that makes it beneficial water [9]. The amount of calcium, sodium, potassium, and magnesium salts in Zamzam water differs significantly from other water. The percentage of these minerals was slightly higher in Zamzam water. Most importantly, fluoride has an effective germicidal action [10,11]. 
In Zamzam water, the four toxic elements arsenic (As), cadmium (Cd), lead (Pb), and selenium (Se) have been found below the toxic level of human consumption. Hence, Zamzam water is safe to drink [12]. The study aims to test the effect of agitated and non-agitated Zamzam water on the microhardness of the artificially initiated carious lesion on the buccal enamel surface in comparison to sodium fluoride gel and deionized water. Materials And Methods This study has been approved by Alrass Dental College, Qassim University IRB (IRB-DRC/12M/4-20) in December 2019. The teeth sample consisted of 40 premolars, extracted for orthodontic reasons. Teeth were washed with deionized water to remove any debris. Teeth were stored in 20 ml deionized water in which 0.1% thymol was added, then kept in 37°C incubators [13]. The total sample (N=40) was divided randomly into study group I (n= 10), study group II (n= 10), positive control group (n=10), and negative control group (n=10). The orthodontic resin was used to encircle each tooth till the cementoenamel junction. After the setting of orthodontic resin, a low-speed saw machine was used to cut the roots of the teeth (Figure 1). FIGURE 1: Low-speed saw machine. Then, crowns were embedded in polyvinyl chloride (PVC) which was filled with orthodontic resin as well ( Figure 2). FIGURE 2: Polyvinyl chloride (PVC) filled with orthodontic resin to hold the crown. Teeth grinding was done using 400 grit carbide paper discs in the automatic machine ( Figure 3). This procedure formed a flat surface on each tooth for microhardness testing [14]. Color coding of the specimens from one to ten in each group was done ( Figure 4). As baseline reading, all teeth were subjected to microhardness assessment on the buccal surface. Three readings for each tooth were recorded in order to cover as many areas as possible of the buccal surface. The initial microhardness was determined by Vickers Microhardness Device at a load of 500g ( Figure 5). FIGURE 5: Vickers Microhardness device for microhardness testing. Demineralization protocol to initiate caries-like lesions guided by previous studies with minor modifications [15,16]. The samples were soaked in 20 ml demineralized solution. The potential of Hydrogen (pH) of the solution was adjusted to reach around 4.2 by the addition of potassium hydroxide. The solution was kept at 37°C in the incubator for four days. Then teeth were removed from the demineralized solution and washed with deionized water. After the demineralization process, other readings of the groups were obtained using Vickers microhardness device a at load of 500g. Each group was treated accordingly: For study group (I), each tooth was immersed in 30 mL of agitated Zamzam water for five minutes. Agitation was performed using a brush. Then treated teeth were rinsed with deionized water for two minutes and placed in deionized water at a 37°C incubator till the next day. This procedure of treating teeth with agitated Zamzam water was repeated daily for two weeks. For study group (II), similar procedure was performed using non-agitated Zamzam water instead of agitated. For control positive group (III), teeth were treated with sodium fluoride gel using the same previous method. For control negative group (IV), teeth were treated as in study groups except that deionized water was used instead of Zamzam water. A third reading for each group was obtained after treatment with agitated Zamzam water, non-agitated Zamzam water, sodium fluoride gel, or deionized water. 
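For reference, the Vickers number reported by the instrument follows the standard relation HV = 1.8544 F/d², with the load F in kgf and the mean indentation diagonal d in mm; the short Python sketch below applies it to hypothetical readings (three per tooth, as in the protocol above). The diagonal values are illustrative, not study data.

def vickers_hardness(load_kgf, diagonal_mm):
    # Standard Vickers relation: HV = 1.8544 * F / d^2 (F in kgf, d in mm).
    return 1.8544 * load_kgf / diagonal_mm ** 2

load = 0.5                                   # the 500 g load used in the protocol above
diagonals_mm = [0.0489, 0.0492, 0.0487]      # hypothetical indentation diagonals for one tooth
readings = [vickers_hardness(load, d) for d in diagonals_mm]
print(f"mean VHN for this tooth: {sum(readings) / len(readings):.1f}")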
Microhardness was recorded by the Vickers Microhardness device at a load of 500 g. Analysis of variance for calculating the significant differences between the different variables was performed. Statistical parameters (mean and standard deviation) were calculated. Statistical testing was performed and a p-value <0.05 was used as the level of significance. Table 1 presents the microhardness mean value and standard deviation of the enamel surface when evaluated at baseline, then after demineralization, and finally after treatment with agitated Zamzam water in study group (I). The highest mean value of microhardness was recorded after remineralization (388.197 ± 2.13), while the lowest mean value was recorded after demineralization (368.097 ± 4.6). Figure 6 shows that following treatment with different solutions in both study and control groups, there was an increase in microhardness after remineralization but to varying degrees. The highest increase in microhardness was shown after remineralization with sodium fluoride, followed by agitated Zamzam water. Discussion The primary prevention of dental caries includes increasing the outer enamel surface resistance to acid dissolution and enhancing remineralization. Fluoride has been commonly used for the prevention of dental caries since the 1930s [17]. Experimental studies reported that sodium fluoride is successful in remineralizing initial carious lesions and resisting carious attacks [18][19][20]. As a result, sodium fluoride was used as the positive control in this study. On the other hand, deionized water was used as a negative control. In this study, the microhardness of the sound enamel surface was recorded from the buccal surface because it has a higher microhardness value compared to other surfaces. This is due to the differences in the mineral composition of the surfaces [21]. A study found that the differences in the mineral composition were due to variations in crystal orientation between the buccal and lingual surfaces [22]. Accordingly, the buccal surface was chosen to be measured in this study. After soaking the teeth in the demineralized solution, there was a significant reduction in the microhardness of the enamel surface in all groups. This is an indication of enamel surface demineralization. However, an increase in microhardness was observed after the application of sodium fluoride gel, which was statistically significant (p-value <0.005). Another study reported that remineralization of the initial carious lesion occurs as a result of fluoride ions' interaction with the hydroxyapatite crystals. As a result, a new crystalline substance that differs from fluorapatite will be formed [23]. Fluoride has been reported to react with hydroxyapatite crystals not only on the surface layer but also on the subsurface layers, giving it a crushing strength greater than the original demineralized material [24]. This explains the rise in microhardness values in teeth treated with sodium fluoride gel observed in this study. In addition, agitated Zamzam water was effective in raising the microhardness value of demineralized surfaces. As mentioned in a previous study, the incorporation of Zamzam water elements (fluoride, magnesium, calcium) in apatite crystals helps in increasing acid dissolution resistance [10]. The end result of the chemical reaction is not well understood; however, the presence of fluoride components in Zamzam water is responsible for the chemical reaction between Zamzam water constituents and apatite crystals [8].
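A minimal Python sketch of the significance testing described above (one-way ANOVA across the four groups at the 0.05 level) is given below; the microhardness values are synthetic, generated only to roughly match the reported mean for group I, and the parameters for the other groups are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "agitated Zamzam (I)":      rng.normal(388.2, 2.1, 10),   # ~ reported post-treatment mean
    "non-agitated Zamzam (II)": rng.normal(380.0, 3.0, 10),   # invented
    "NaF gel (III)":            rng.normal(395.0, 2.5, 10),   # invented
    "deionized water (IV)":     rng.normal(370.0, 4.5, 10),   # invented
}
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}, significant at 0.05: {p_value < 0.05}")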
One of the limitations of this study is the small number of samples in each group. In addition, teeth were treated for five minutes daily for up to 14 days only, to simulate the use of Zamzam water as a remineralizing agent in the oral cavity. However, increasing the duration of treatment beyond two weeks could further increase the microhardness values; further studies are needed to confirm this. Conclusions In this study, an increase in microhardness after remineralization was noted in both the study and control groups when treated with different solutions. Among the study groups, the highest mean value of microhardness was recorded after treatment with agitated Zamzam water. Therefore, Zamzam water is an effective remineralizing agent in initial carious lesions and its efficacy is comparable to that of sodium fluoride. However, instructing patients to use Zamzam water as a remineralizing agent in dental practice requires further research. Although this study has statistically demonstrated the remineralization ability of Zamzam water, further studies are needed to confirm the exact scientific mechanism involved in the remineralization process. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Alrass Dental Research Center, Qassim University issued approval DRC/12M/4-20. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2,678.2
2022-12-01T00:00:00.000
[ "Medicine", "Materials Science" ]
On resonant mixed Caputo fractional differential equations Abstract The purpose of this study is to discuss the existence of solutions for a boundary value problem at resonance generated by a nonlinear differential equation involving both right and left Caputo fractional derivatives. The proofs of the existence of solutions are mainly based on Mawhin's coincidence degree theory. We provide an example to illustrate the main result. Introduction and preliminaries Mathematical structures describe complex systems that involve multiple elements interacting with one another in various forms. These interactions exist in physics, electromagnetism, mechanics, biology, signal processing, finance, economics, and many more. In order to make sense of the data extracted from such elements, the evolution of the data against time is utilized. The immediate observation would be a system of differential equations. Upon solving such differential equations, the obtained function will carry information that can be used to understand the data at hand and further predict future information related to the data. Special classes of differential equations are boundary value problems (BVPs) and nonlinear fractional integro-differential equations [1,2]. Fundamental investigation of these types of fractional differential equations is pertinent for interpreting data that evolve in such a form. Thus, studying the existence and uniqueness of solutions of integro-differential equations may benefit data modeling and formulation via fractional integro-differential equations. In [23], the existence of solutions for integral boundary value problems of mixed fractional differential equations under resonance was studied, and a very recent study [24] introduced a new method to convert boundary value problems for impulsive fractional differential equations to integral equations. Recently, much attention has been given to the solvability of such types of differential equations, which involve both left and right fractional derivatives. Further, several works are also devoted to this type of study; for details, see [3,4,7,[12][13][14]. In this study, we consider the existence of solutions for a mixed-derivative boundary value problem, referred to below as problem (P), where f ∈ C([0, 1] × R, R), 0 < θ, υ < 1 with θ + υ > 1, and the notations D θ 1- and D υ 0 + refer to the right and left fractional derivatives in the Caputo sense, respectively. Note that problem (P) is at resonance, since the associated homogeneous fractional boundary value problem admits nontrivial solutions. In this study we establish sufficient conditions that will help us to show that there is at least one solution for problem (P). Many difficulties occur when we deal with the presence of mixed-type fractional derivatives having order less than one, and there are only a few studies related to this case. Moreover, the current literature on the study of BVPs at resonance having mixed-type fractional-order derivatives is not satisfactory and the topic has not been extensively studied so far. There are some initial attempts such as the following.
In [4], the authors investigated the existence and uniqueness of solution by the use of some fixed point theorems for the following type BVP: Similarly, under certain conditions on f in [14], the authors studied and proved, by using Krasnoselskii's fixed point theorem, the existence of solutions for the following type nonlinear BVPs: which involve the right Caputo and the left Riemann-Liouville fractional derivatives, respectively. In [19,20], some partial treatments were provided for the following hybrid type nonlinear fractional integro-differential equations: where f , g are continuous functions and λ ∈ R + for all t ∈ J = [a, b]. Thus we check for a solution of Eq. (1) subject to u ∈ C 1 (J, R). Next we recall the following definitions and auxiliary lemmas related to fractional calculus theory, for details, see [17,22,25]. Definition 1 The left and right Riemann-Liouville fractional integrals with order θ > 0 on [a, b] of a function y are defined respectively by Definition 2 The left and right Caputo derivatives respectively, where n = [θ ] + 1, and [θ ] is the integer part of θ . In the next lemma we present some properties associated with fractional integrals and derivatives in the Caputo sense. Lemma 3 The homogenous equation and similarly, In addition, the following properties are correct: Next we need the following definitions and a theorem for the development of our results. Let X and Y be two Banach spaces (real), and let us define a linear operator L : dom L ⊂ X → Y . Then we have the following definition. Now if we define P : X → X and Q : Y → Y as continuous projections such that Im P = ker L, ker Q = Im L. Then which leads to is invertible, and we denote its inverse by K P . Definition 5 Let ⊂ X be a bounded open subset and dom L ∩ = ∅. Then the map N : Note that since Im Q is isomorphic to ker L, that is, J : Im Q → ker L isomorphism, the equation Lx = Nx is equivalent to The next theorem is given in [21]. Theorem 6 Let L be a Fredholm operator with index zero and N be L-compact on . Further, the following conditions are satisfied: Lemma 7 Let L be given by Proof Let x ∈ ker L. From Lemma 3, the equation Lx = 0 has a solution By applying the boundary conditions (BC), we can easily get b = 0, then it follows that x(t) = a t υ (υ+1) , a ∈ R. Now, let y ∈ Im L, then there exists a function x ∈ dom L such that Applying the operator I θ 1 -then I υ 0 + to both sides of equation (3), we get Condition x(0) = 0 implies Since Conversely, let y ∈ X and satisfy (4), set x(t) = I υ 0 + I θ 1 -y(t) + I υ 0 + a, a ∈ R, then x ∈ dom L and satisfies Lx = y, thus y ∈ Im L. It follows that the proof is complete. Lemma 8 The operator L : dom L ⊂ X → X is a Fredholm operator with index zero. The linear projection operators P, Q : X → X satisfy , Furthermore, the operator K p : Im L → dom L ∩ ker P defined by K p y = I υ 0 + I θ 1 -y is the inverse of L| dom L∩ker P and satisfies Proof The continuous operator Q is a projector, indeed It is easy to check that Im L = ker Q. Let y = (y -Qy) + Qy, then it follows that y -Qy ∈ ker Q = Im L, Qy ∈ Im Q and Im Q ∩ Im L = {0}, so that X = Im L ⊕ Im Q. Then we obtain dim ker L = 1 = dim Im Q = co dim Im L = 1, that is, L is a Fredholm operator with index zero. Now we claim that the continuous operator P is a projector. In fact Obviously, Im P = ker L. Then, setting x = (x -Px) + Px, we have that X = ker P + ker L. Further this leads to ker L ∩ ker P = {0}, that is, X = ker L ⊕ ker P. Now, we show that a generalized inverse of L is K P . 
If we let y ∈ Im L, in view of Lemma 3, it yields and for x ∈ dom L ∩ ker P, we obtain Since x(0) = 0 and Px = 0, we get This shows that Applying the definition of K p , we obtain y , that is, y . This completes the proof. Existence of solutions In order to solve problem (P), we assume the following conditions: (H1) There exist some functions α, β ∈ C([0, 1], R + ) such that (H3) There exists a constant M * > 0 such that, for Proof We will show that QN( ) is a bounded operator and K P (I -QN)( ) is a compact operator. Since is a bounded set, then there is a constant r > 0 such that x ≤ r, ∀x ∈ . Let x ∈ , then in view of condition (H1) we have which yields QN( ) is a bounded operator. Next, we prove that K P (I -Q)N( ) is compact. For x ∈ , and by condition (H1), we get On the other hand, using the definition of K P and together with (5), (10), and (11), we get It follows that is actually uniformly bounded. Now we prove K P (I -Q)N( ) is equicontinuous. For this, let x ∈ , and for any t 1 , t 2 ∈ [0, 1], t 1 < t 2 , we have Let us estimate the term |I θ 1 -(I -Q)Nx(s)|. We have thus it is bounded and (13) becomes Thus it follows that K P (I -Q)N( ) is equicontinuous on [0, 1]. Hence, we easily deduce that K P (I -QN) : → X is a compact operator. Proof Suppose that x ∈ 1 , then x = (x -Px) + Px ∈ dom L\ ker L. That is, (I -P)x ∈ dom L ∩ ker P and Px ∈ ker L, i.e., LPx = 0, thus from Lemma 8, we get That means Since Lx = λNx, then then (15) can be estimated as Using (14) and (17) yields which shows that 1 is a bounded set. Theorem 13 Assume that conditions (H1)-(H3) hold. Then problem (P) has at least one solution in X. Proof We can easily prove that using Lemma 8 and Lemma 9, the conditions of Theorem 6 are satisfied. Then the proof follows similar steps as in [15]. We conclude by Theorem 13 that problem (P) has a solution in X. Conclusion Nonlinear fractional integro-differential equations are important and widely applied in many areas. In particular to have mixed fractional terms on both sides, that is, having fractional integrals or fractional derivatives on the left-and right-hand side respectively, is an important class that is not fully studied in the literature. There are some real difficulties to examine the existence and uniqueness of solutions for these types of equations, and further properties for these types of equations have been studied by few researchers using different techniques (see, for example, [4,9,14] for partial treatment). In this work we establish sufficient conditions and prove that there is at least one solution for problem (P): under certain condition.
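As a small numerical illustration of the operator K P y = I υ 0+ I θ 1- y used above, the Python sketch below evaluates the nested fractional integrals by quadrature using the standard Riemann-Liouville forms; the orders and the sample function y are illustrative choices, not taken from the paper.

import math
from scipy.integrate import quad

theta, upsilon = 0.7, 0.6                    # 0 < theta, upsilon < 1 with theta + upsilon > 1

def I_left(a, g, t):
    # Left Riemann-Liouville integral (I^a_{0+} g)(t) = 1/Gamma(a) * int_0^t (t-s)^(a-1) g(s) ds
    if t == 0:
        return 0.0
    val, _ = quad(lambda s: (t - s) ** (a - 1) * g(s), 0, t)
    return val / math.gamma(a)

def I_right(a, g, t):
    # Right Riemann-Liouville integral (I^a_{1-} g)(t) = 1/Gamma(a) * int_t^1 (s-t)^(a-1) g(s) ds
    val, _ = quad(lambda s: (s - t) ** (a - 1) * g(s), t, 1)
    return val / math.gamma(a)

def y(s):
    return math.cos(math.pi * s)             # sample continuous function, illustrative only

def K_P_y(t):
    # K_P y = I^upsilon_{0+} I^theta_{1-} y, evaluated by nested quadrature
    return I_left(upsilon, lambda r: I_right(theta, y, r), t)

print(K_P_y(0.0))    # 0.0, consistent with the boundary condition x(0) = 0
print(K_P_y(0.5))    # value at t = 0.5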
2,515.8
2020-10-27T00:00:00.000
[ "Mathematics" ]
Effect of the Epoxide Contents of Liquid Isoprene Rubber as a Processing Aid on the Properties of Silica-Filled Natural Rubber Compounds In this study, we examined the feasibility of using epoxidized liquid isoprene rubber (E-LqIR) as a processing aid for truck and bus radial (TBR) tire treads and investigated the effects of the epoxide content on the wear resistance, fuel efficiency, and resistance to extraction of the E-LqIRs. The results confirmed that, compared to the treated distillate aromatic extract (TDAE) oil, the E-LqIRs could enhance the filler–rubber interactions and reduce the oil migration. However, the consumption of sulfur by the E-LqIRs resulted in a lower crosslink density compared to that of the TDAE oil, and the higher epoxide content decreased the wear resistance and fuel efficiency because of the increased glass-transition temperature (Tg). In contrast, the E-LqIR with a low epoxide content of 6 mol% had no significant effect on the Tg of the final compound and resulted in superior wear resistance and fuel efficiency, compared to those shown by TDAE oil, because of the higher filler–rubber interactions. Introduction Significant efforts have been made to increase the fuel efficiency and wear resistance of truck and bus radial (TBR) tires owing to the recent environmental regulations and emergence of electric vehicles [1]. The tire tread is the only part of the vehicle that is in contact with the ground. Therefore, the rolling resistance of the tire tread greatly affects the fuel efficiency of the vehicle, and the wear resistance of the tire tread is important for the long-term use of the tire. Moreover, vehicles such as trucks and buses typically carry heavy loads and are used for long-distance transportation; consequently, it is necessary to increase the fuel efficiency and wear resistance of TBR tires. One such effort involves the replacement of the carbon black that is used in TBR tire treads with silica and silane coupling agent (SCA) [2][3][4]. The TBR tire treads generally use natural rubber (NR) as the base rubber; however, the silica-SCA-NR interactions are weak because of the interference by the proteins and lipids present in NR [5,6]. These weak filler-rubber interactions decrease the wear resistance of the TBR tires and, as a result, the use of silica in TBR tire treads is limited compared to that of carbon black. In addition, with the recent advances in autonomous driving, it is expected that automated cargo trucks can be operated for 24 h a day. As a result, another issue with TBR tire is that processing aids such as TDAE oil migrate to the tire surface over time, resulting in deterioration of the physical properties of the vulcanizates [7,8]. Liquid rubber has received considerable interest as a new processing aid to solve the oil migration problem. Its applicability as a processing aid in passenger car radial tires has been actively studied in recent times, confirming that the use of liquid rubber as a processing aid can improve the processability and alleviate the oil migration problems [9][10][11]. However, studies on the application of liquid rubber in TBR tire treads have not been reported to date; moreover, no quantitative analysis has been conducted on the effect of liquid rubber on the vulcanizate structure of these compounds. 
In addition, it has been reported that the low filler-rubber interactions of the silicafilled NR compounds can be increased through chemical interactions between the hydroxyl groups of silica and epoxide group of rubber by introducing epoxide groups in the NR [12]. Such increased filler-rubber interaction can be improved wear resistance. In this study, E-LqIRs with different epoxide contents were prepared. E-LqIRs were used as a processing aid, and the properties of the compounds according to the epoxide contents were compared. E-LqIR can improve not only wear resistance but also the oil migration problem. Therefore, E-LqIR can be expected to have excellent properties for the TBR tire tread. However, no study has been conducted on how liquid rubber with high functionality such as E-LqIR acts on the vulcanizate structure of compounds. Therefore, we tried to quantitatively analyze the effect of E-LqIRs on the vulcanizate according to the epoxide contents through vulcanizate structure analysis. Polymerization All the materials were purged with nitrogen. Cyclohexane (99%, Samchun Chemical Co., Seoul, Korea) was used as the organic solvent, and n-butyllithium (2.0 mol/L in cyclohexane, Sigma-Aldrich Corp., Seoul, Korea) was the anionic initiator. Isoprene (99%, Samchun Chemical Co., Seoul, Korea) was used as a monomer, and tetrahydrofuran (THF; 99%, Duksan General Chemical Co., Seoul, Korea) was used as a polar modifier to increase the reaction rate. In addition, n-octyl alcohol (99%, Yakuri Pure Chemicals Co. Ltd., Kyoto, Japan) was used as the termination agent. Differential Scanning Calorimetry The glass-transition temperatures (T g ) of the E-LqIRs were measured by differential scanning calorimetry (DSC; DSC-Q10, TA Instruments, New Castle, DE, USA). The E-LqIR samples (3-6 mg) were analyzed from −80 to 100 • C at a heating rate of 10 • C/min. Payne Effect The degree of filler-filler interactions of the compounds was determined by following the standard procedure described in ASTM D8059, using a rubber processing analyzer (RPA2000, Alpha Technologies, Hudson, OH, USA). The storage modulus (G ) of the compounds after the first stage of mixing was measured at 60 • C within a strain range of 0.01-40.04%. The silica agglomerates did not disintegrate under a low strain; thus, the storage modulus was high in the low strain region and decreased when a higher strain was applied. This is called the Payne effect, which can be quantified by the change in the storage modulus (∆G ), and represents the degree of filler-filler interaction. In this study, ∆G was calculated by subtracting the value at a strain of 40.04% from that at 0.28% and was used as an indicator of the degree of filler dispersion within the rubber compounds. Mooney Viscosity The processability of the rubber compound was evaluated using the standard procedure described in ASTM D164. The sample was preheated to 100 • C for 1 min. Next, a Mooney viscometer (VluChem Ind Co., Seoul, Korea) was used to measure the torque produced by a rotor rotating at 2 rpm for 4 min within a space filled with unvulcanized rubber. Bound Rubber Content After the first mixing stage, a sample of each compound (0.2 ± 0.01 g) was placed on a filter paper and immersed in toluene (100 mL) for 6 days at 25 • C to extract the unbound rubber. Next, the toluene contained in the extracted unbound rubber was cleaned with acetone and dried. 
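A short Python sketch of the Payne-effect metric defined a few paragraphs above (∆G′ as the storage modulus at 0.28% strain minus the value at 40.04% strain) is given below; the strain sweep is synthetic, standing in only for the RPA measurement at 60 °C.

import numpy as np

strain = np.logspace(-2, np.log10(40.04), 60)             # % strain, 0.01 to 40.04
g_prime = 2000.0 / (1.0 + (strain / 1.5) ** 0.8) + 300.0  # kPa, made-up sweep shape
delta_g = np.interp(0.28, strain, g_prime) - np.interp(40.04, strain, g_prime)
print(f"Delta G' = {delta_g:.0f} kPa  (larger value -> stronger filler-filler interaction)")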
The bound rubber content was computed based on the sample weights before and after the experiment using Equation (1) as follows: Here, w fg is the combined weight of the filler and gel, w t is the weight of the specimen, m f is the weight fraction of the filler in the compounds, and m r is the weight fraction of the polymer in the compounds. Curing Characteristics The curing characteristics of the compounds were evaluated using the minimum torque (T min ), maximum torque (T max ), scorch time (t 10 ), and optimal curing time (t 90 ). These characteristics were measured using a moving die rheometer (MDR; Myung Ji Co., Seoul, Korea) at a vibration angle of ±1 • and temperature of 150 • C, over 30 min. Crosslink Density and Vulcanizate Structure Analysis The vulcanized specimens (10 mm × 10 mm × 2 mm) were sequentially immersed in THF (99%, Samchun Chemical Co., Seoul, Korea) and n-Hexane (95%, Samchun Chemical Co., Seoul, Korea) at 25 • C for 1 day to remove any organic additives in the specimens. The weights of these treated specimens were then recorded. Next, the specimens were immersed in toluene at room temperature for 1 day, and the resulting swollen specimens were weighed. The total crosslink density was calculated using the Flory-Rehner equation (Equation (2)) expressed as Here, ν is the crosslink density (mol/g), M c is the average molecular weight between the crosslink points (g/mol), V r is the volume fraction of rubber in the swollen gel at equilibrium, vs. is the molar volume of the solvent (cm 3 /mol), ρ r is the density of the rubber vulcanizates (g/cm 3 ), and χ is the polymer-solvent interaction parameter. Furthermore, the chemical crosslink density of the unfilled compounds was calculated using the Flory-Rehner (Equation (2)) and Kraus equations (Equation (3)). Here, V r0 is the volume fraction of unfilled rubber in the swollen gel at equilibrium, V r is the volume fraction of rubber in the swollen gel at equilibrium, and ϕ is the volume fraction of the filler . Subsequently, the degree of filler-rubber interaction was calculated as the difference between the total crosslink density (chemical crosslink density + filler-rubber interaction) and the chemical crosslink density. Mechanical Properties Dumbbell-shaped vulcanizate specimens (length = 100 mm; width = 25 mm) were tested at a speed of 500 mm/min using a universal testing machine (UTM, KSU-05M-C, KSU Co., Ansan, Korea) to evaluate their mechanical properties, including the tensile strength, modulus, and elongation at break. The sample testing was performed according to the standard procedure described in ATSM D 412. Abrasion Resistance The abrasion resistance was measured using an abrasion tester (DIN: Deutsche Industrie Normen, KSU Co., Ansan, Korea) according to the standard procedure described in DIN 53516. In this process, an abrasive sheet was rotated on the surface of cylindrical specimens (diameter = 16 mm; length = 8 mm) at 40 ± 1 rpm under a load of 5 N, and the subsequent mass loss was measured. Viscoelastic Properties The viscoelastic properties of the compounds were evaluated by measuring the storage modulus (E), loss modulus (E ), and tan δ at 0.2% strain and 10 Hz using a dynamic mechanical analyzer (DMA Q800, TA Instrument, New Castle, DE, USA), between −80 and 100 • C. Synthesis of Epoxidized Liquid Isoprene Rubbers The whole synthesis process is shown in Scheme 1. At first, LqIR was synthesized by anionic polymerization in a nitrogen-purged reactor at 50 • C. 
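Since the displayed forms of Equations (1) and (2) did not survive extraction, the Python sketch below uses the commonly cited expressions for the bound rubber content and the Flory-Rehner crosslink density, matched to the symbols defined above; treat both as assumed reconstructions, and all numerical inputs are illustrative only.

import math

def bound_rubber_percent(w_fg, w_t, m_f, m_r):
    # Commonly used form: BdR(%) = 100 * (w_fg - w_t * m_f) / (w_t * m_r),
    # i.e. the gel mass minus the filler mass, relative to the polymer mass in the specimen.
    return 100.0 * (w_fg - w_t * m_f) / (w_t * m_r)

def flory_rehner_crosslink_density(V_r, V_s, rho_r, chi):
    # Commonly used Flory-Rehner form, giving a crosslink density in mol/g.
    numerator = -(math.log(1.0 - V_r) + V_r + chi * V_r ** 2)
    denominator = rho_r * V_s * (V_r ** (1.0 / 3.0) - V_r / 2.0)
    return numerator / denominator

# Illustrative inputs only (not values from the paper).
print(f"bound rubber: {bound_rubber_percent(w_fg=0.135, w_t=0.20, m_f=0.33, m_r=0.50):.1f} %")
print(f"total crosslink density: "
      f"{flory_rehner_crosslink_density(V_r=0.25, V_s=106.3, rho_r=0.92, chi=0.39):.2e} mol/g")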
Cyclohexane and n-butyllithium were used as solvent and initiator, respectively. THF was added at a molar ratio of 0.25 relative to the initiator to accelerate the reaction [13]. Subsequently, isoprene was introduced into the reactor under nitrogen atmosphere. The polymerization of LqIR lasted 5 h, and then the reaction was terminated by adding n-octyl alcohol (1.2 M; in excess with respect to the initiator) (Scheme 2). After terminating the reaction, the LqIR solution was removed from the reactor and placed in a three-necked round-bottom flask along with aqueous hydrogen peroxide and formic acid. The heterogeneous solution containing the cyclohexane and aqueous phases was stirred at a speed of 1000 rpm using a high-speed stirrer at 30 • C and allowed to react at the suspension interface for 24 h (Scheme 3) [14]. The E-LqIRs were prepared with various epoxide contents by adjusting the ratios of hydrogen peroxide and formic acid. After the epoxidation reaction, the aqueous phase containing hydrogen peroxide and formic acid was removed to prevent residual reaction, and the E-LqIRs were obtained by evaporating cyclohexane from the E-LqIR solution using a vacuum evaporator. The macrostructures and microstructures of the E-LqIRs were analyzed by GPC and 1 H NMR spectroscopy, and T g was measured using DSC. Synthesis of Epoxidized Liquid Isoprene Rubbers The whole synthesis process is shown in Scheme 1. At first, LqIR was synthesized by anionic polymerization in a nitrogen-purged reactor at 50 °C. Cyclohexane and n-butyllithium were used as solvent and initiator, respectively. THF was added at a molar ratio of 0.25 relative to the initiator to accelerate the reaction [13]. Subsequently, isoprene was introduced into the reactor under nitrogen atmosphere. The polymerization of LqIR lasted 5 h, and then the reaction was terminated by adding n-octyl alcohol (1.2 M; in excess with respect to the initiator) (Scheme 2). After terminating the reaction, the LqIR solution was removed from the reactor and placed in a three-necked round-bottom flask along with aqueous hydrogen peroxide and formic acid. The heterogeneous solution containing the cyclohexane and aqueous phases was stirred at a speed of 1000 rpm using a high-speed stirrer at 30 °C and allowed to react at the suspension interface for 24 h (Scheme 3) [14]. The E-LqIRs were prepared with various epoxide contents by adjusting the ratios of hydrogen peroxide and formic acid. After the epoxidation reaction, the aqueous phase containing hydrogen peroxide and formic acid was removed to prevent residual reaction, and the E-LqIRs were obtained by evaporating cyclohexane from the E-LqIR solution using a vacuum evaporator. The macrostructures and microstructures of the E-LqIRs were analyzed by GPC and 1 H NMR spectroscopy, and Tg was measured using DSC. Preparation of Rubber Compounds and Vulcanizates The rubber compounds were synthesized using an internal mixer (300 cc, Mirae Scientific Instruments Inc., Gwangju, Korea) and the formulations presented in Table 1. A fill factor of 80% of the mixer volume was used. The input unit was parts per hundred rubber (phr), and the compounds were added in proportions relative to the amount of rubber. Preparation of Rubber Compounds and Vulcanizates The rubber compounds were synthesized using an internal mixer (300 cc, Mirae Scientific Instruments Inc., Gwangju, Korea) and the formulations presented in Table 1. A fill factor of 80% of the mixer volume was used. 
The input unit was parts per hundred rubber (phr), and the compounds were added in proportions relative to the amount of rubber. (1) amount of silane coupling agent was calculated as 8 wt% of the weight of silica. (2) with different epoxide contents were used. The mixing procedure is outlined in Table 2. For the first and second stages, the initial temperatures were 100 and 50 • C, respectively, and the dump temperature ranges were 150-155 and 80-90 • C, respectively. After mixing was completed in each stage, the compounds were transformed into sheets using a two-roll mill. Finally, the vulcanizates were prepared by pressing the compounds in a hydraulic press at 150 • C, for the optimal curing time (t 90 ). Synthesis of Epoxidized LqIR To prepare the E-LqIRs as processing aids, a low-molecular-weight LqIR (M n : 3639 g/mol) having excellent flow properties was polymerized [15]. After that, the LqIR was used for the preparation of E-LqIRs. E-LqIRs with different epoxide contents was prepared by varying the contents of formic acid and hydrogen peroxide. Figure 1 and Table 3 show the GPC, 1 H NMR, and DSC results of the E-LqIRs. The GPC results indicate that as the epoxide contents increased, the molecular weight increased due to interactions such as hydrogen bonding and self-crosslink between E-LqIRs [16,17]. In the 1 H NMR spectra, signals were seen at 4.6-4.8, 4.8-5.0, and 5.0-5.2 ppm owing to the olefinic methine protons of the 3,4-, 1,2-, 1,4-addition units of polyisoprene, respectively. The signals originating from epoxy methane protons were observed at 2.7 ppm [18,19]. Additionally, signals arising from the n-octyl alcohol, added as a reaction terminator for LqIR, were observed at 3.6 ppm. The epoxide contents of the E-LqIRs were calculated using the areas of the peaks at 4.6-4.8, 5.0-5.2, and 2.7 ppm (Equation (4)) and found to be 6.0, 22.1, and 34.4 mol%, respectively. The results showed that the epoxide content increased with decreasing 1,4-addition content. Epoxide contents (mol where A represents the area of each peak corresponding to different concentrations. The T g values of the E-LqIRs were measured using DSC and were found to increase with increasing epoxide content as a result of the interactions between the epoxide groups. These interactions limited the chain mobility [20,21]; consequently, the T g values of the E-LqIRs increased by 0.78-0.88 • C for every 1 mol% increase in the epoxide content. The resulting T g values were −72.46, −60.28, and −47.44 • C. The functionality-the number of epoxide groups for every E-LqIR chain-was calculated using the GPC and NMR results. The average unit number of chains was calculated (Equation (5)) using the molecular weights of the isoprene unit (68.12 g/mol) and epoxidized isoprene unit (84.12 g/mol) as well as M n . The average number of epoxide groups per chain was calculated (Equation (6)) using the epoxide content obtained from Equation (4). The Payne effect analysis, shown in Figure 2 and Table 4, revealed the existence of filler-filler interactions in the uncured compounds [22,23]. In general, the storage modulus (G ) reduces with increasing strain amplitude as a result of the breakdown of the fillerfiller network; however, a higher ∆G value represents a stronger filler-filler interaction. Therefore, the degree of silica dispersion was determined based on the ∆G values because a low ∆G implies better silica dispersion within a compound. 
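The Python sketch below reconstructs the spirit of Equations (4)-(6) described above (epoxide content from the 1H NMR peak areas, then units and epoxide groups per chain from Mn); the exact displayed equations were lost in extraction, so the formulas and peak areas shown are illustrative assumptions.

M_ISOPRENE = 68.12          # g/mol, isoprene repeat unit
M_EPOXIDIZED = 84.12        # g/mol, epoxidized isoprene repeat unit

def epoxide_mol_percent(a_epoxy, a_34, a_14):
    # Assumed form of Equation (4): epoxide methine area (2.7 ppm) over the sum of the
    # three peak areas used in the text (2.7, 4.6-4.8, and 5.0-5.2 ppm).
    return 100.0 * a_epoxy / (a_epoxy + a_34 + a_14)

def units_per_chain(mn, x_epoxide):
    # Assumed form of Equation (5): Mn divided by the composition-weighted unit mass.
    mean_unit_mass = (1.0 - x_epoxide) * M_ISOPRENE + x_epoxide * M_EPOXIDIZED
    return mn / mean_unit_mass

a_epoxy, a_34, a_14 = 0.062, 0.10, 0.838     # hypothetical normalized peak areas
x = epoxide_mol_percent(a_epoxy, a_34, a_14) / 100.0
n_units = units_per_chain(3639.0, x)         # Mn of the base LqIR quoted in the text
print(f"epoxide content ~ {100 * x:.1f} mol%, ~{n_units:.0f} units per chain, "
      f"~{n_units * x:.1f} epoxide groups per chain (cf. Equation (6))")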
The results showed that the ∆G value of the E-LqIR compounds were smaller than that of the TDAE oil compound because the epoxide group in the E-LqIR interacts with silanol group on the silica surface in the E-LqIR compounds. Therefore, it was confirmed that the dispersion of silica was improved by increasing the epoxide content of the E-LqIRs. The Tg values of the E-LqIRs were measured using DSC and were found to increase with increasing epoxide content as a result of the interactions between the epoxide groups. These interactions limited the chain mobility [20,21]; consequently, the Tg values of the E-LqIRs increased by 0.78-0.88 °C for every 1 mol% increase in the epoxide content. The resulting Tg values were −72.46, −60.28, and −47.44 °C. The functionality-the number of epoxide groups for every E-LqIR chain-was calculated using the GPC and NMR results. The average unit number of chains was calculated (Equation (5)) using the molecular weights of the isoprene unit (68.12 g/mol) and epoxidized isoprene unit (84.12 g/mol) as well as Mn. The average number of epoxide groups per chain was calculated (Equation (6)) using the epoxide content obtained from Equation (4) filler network; however, a higher ΔG' value represents a stronger filler-filler interaction. Therefore, the degree of silica dispersion was determined based on the ΔG' values because a low ΔG' implies better silica dispersion within a compound. The results showed that the ΔG' value of the E-LqIR compounds were smaller than that of the TDAE oil compound because the epoxide group in the E-LqIR interacts with silanol group on the silica surface in the E-LqIR compounds. Therefore, it was confirmed that the dispersion of silica was improved by increasing the epoxide content of the E-LqIRs. Figure 3 and Table 5 show the results of Mooney viscosity, bound rubber, and curing characteristic measurements. Increasing the epoxide content led to higher silica dispersion and decreased the Mooney viscosity and Tmin values. Furthermore, decreasing the occluded rubber content reduced the bound rubber contents [24]. The curing characteristic measurements revealed that the ΔT value of the E-LqIRs was smaller than that of the Figure 3 and Table 5 show the results of Mooney viscosity, bound rubber, and curing characteristic measurements. Increasing the epoxide content led to higher silica dispersion and decreased the Mooney viscosity and T min values. Furthermore, decreasing the occluded rubber content reduced the bound rubber contents [24]. The curing characteristic measurements revealed that the ∆T value of the E-LqIRs was smaller than that of the TDAE oil compound because of the consumption of sulfur by the double bonds of E-LqIRs. On the other hand, as the number of double bonds of E-LqIRs decreased with increasing epoxide content, the amount of E-LqIR that could have reacted with the NR via sulfur also decreased, resulting in a smaller ∆T value. Figure 4 illustrates the proposed interactions of the E-LqIRs with the silica-filled NR compound vulcanizates. The E-LqIRs that do not interact with silica ( Figure 4a) can act as lubricants. These E-LqIRs can also be extracted during the pretreatment stage of the swelling tests owing to the absence of interactions. In addition, when ring-opening occurs, an interaction between E-LqIRs is formed (Figure 4b), which increases the molecular weight of E-LqIRs. The E-LqIRs that form hydrogen bonds and direct silica-epoxy bonds with the hydroxyl group of silica (Figure 4c) can act as silica-covering agents. 
Figure 4 illustrates the proposed interactions of the E-LqIRs with the silica-filled NR vulcanizates. E-LqIRs that do not interact with silica (Figure 4a) can act as lubricants; because of the absence of interactions, they can also be extracted during the pretreatment stage of the swelling tests. In addition, when ring-opening occurs, interactions form between E-LqIRs (Figure 4b), which increases their molecular weight. E-LqIRs that form hydrogen bonds and direct silica-epoxy bonds with the hydroxyl groups of silica (Figure 4c) can act as silica-covering agents; these E-LqIRs cannot be extracted during the pretreatment stage of the swelling tests [25]. Finally, some E-LqIRs that interact with silica can become coupled with the NR through crosslinking with sulfur and can act as coupling agents (Figure 4d). Crosslink Density and Vulcanizate Structure Analysis. During the pretreatment stage of the swelling tests, the organic matter present in the vulcanizates was extracted using two different organic solvents: THF and n-hexane. Figure 5 and Table 6 show the amounts of extracted organic matter. The proportion of the 10 phr of TDAE oil in the vulcanizate specimen was 5.44 wt%. The TDAE compound showed the highest amount of extracted organic matter, 8.48 wt% (5.44 wt% from the 10 phr of TDAE oil plus 3.04 wt% from additives such as stearic acid, 6PPD, and TMQ), because the oil does not form chemical bonds with the other materials in the compound and is therefore easily extracted by organic solvents. The E-LqIRs, by contrast, were more resistant to extraction because of the interactions of their epoxide groups with silica. Assuming that the TDAE oil was completely extracted and that the amounts of additives extracted were the same, the fractions of E-LqIR extracted decreased with increasing epoxide content (66.6%, 53.2%, and 41.8% for E-06, E-22, and E-34, respectively). This result confirms that increasing the epoxide content of E-LqIRs can reduce the oil migration problem when they are used as processing aids in tires [8]. To determine the effect of the epoxide content of the E-LqIRs on the vulcanizate structures of the rubber compounds having various filler contents, the total crosslink density was calculated as the sum of the filler-rubber interactions and the chemical crosslink density via vulcanizate structure analysis [25-33]. Figure 6 and Table 6 present the results of the analysis. The chemical crosslink density obtained by adding E-LqIRs was lower than that obtained by adding TDAE oil because of the additional consumption of sulfur by the double bonds of the E-LqIRs. On the other hand, more filler-rubber interactions occurred in the E-LqIR compounds than in the TDAE oil compound because sulfur acted as a coupling agent between the E-LqIRs and the rubber. A higher epoxide content increased the amount of E-LqIR covering the silica surface, thereby reducing the adsorption of the cure accelerator on the silica surface [25]; additional sulfur consumption also decreased because of the smaller number of double bonds in the E-LqIRs. As a result, the chemical crosslink density increased with epoxide content. On the other hand, the higher epoxide content reduced the filler-rubber interactions because the number of double bonds available for crosslinking with sulfur decreased.
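The vulcanizate-structure analysis cited above [25-33] is not reproduced here, so the following sketch uses the standard Flory-Rehner relation purely as a stand-in to show how swelling data are converted into a crosslink density, and how a filler-rubber interaction term can be obtained by difference; the interaction parameter, the solvent molar volume, and the splitting procedure are assumptions, not the authors' exact method.

```python
import math

# Stand-in sketch, not the authors' procedure: Flory-Rehner crosslink density
# from the rubber volume fraction v_r measured in the swollen, extracted gel.
def flory_rehner(v_r, chi=0.39, v_solvent=106.3):
    """Crosslink density (mol/cm^3); chi and the solvent molar volume
    (toluene, assumed) are illustrative values."""
    return -(math.log(1.0 - v_r) + v_r + chi * v_r ** 2) / (
        v_solvent * (v_r ** (1.0 / 3.0) - v_r / 2.0))

def filler_rubber_interaction(total_density, chemical_density):
    """Interaction term as the difference between the total crosslink density
    and the chemical crosslink density, mirroring the decomposition in the text."""
    return total_density - chemical_density

print(flory_rehner(0.20))  # ~1.5e-4 mol/cm^3 for these placeholder inputs
```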
Mechanical Properties and DIN Abrasion Loss. Table 7 presents the mechanical properties of the vulcanizates. In the stress-strain curves, the 300% moduli show the same trend as the total crosslink density of the compounds. The moduli of the E-LqIR compounds are smaller than that of the TDAE oil compound because the consumption of sulfur resulted in a lower total crosslink density in the E-LqIR compounds. Within the E-LqIR compounds, the crosslink density decreased slightly as the epoxide content increased, so the 300% moduli were also slightly lower, but the values remained generally similar. The wear resistance of the compounds was evaluated through DIN abrasion tests, and the corresponding results are also listed in Table 7. The E-06 compound, with high filler-rubber interactions, demonstrated superior wear resistance, although its total crosslink density was lower than that of the TDAE oil compound [12,34].
In the case of E-22 and E-34, however, the filler-rubber interactions decreased as the epoxide content increased because the E-LqIRs acted as covering agents instead of coupling agents. Therefore, the E-22 and E-34 compounds showed lower wear resistance than the TDAE oil compound, owing to the lower total crosslink density of the E-LqIR compounds. Figure 8 and Table 8 show the results of the dynamic viscoelastic property analysis. The tan δ value at 60 °C is an indicator of the rolling resistance (RR) of a tire; a lower value indicates higher fuel efficiency [35]. In addition, the loss modulus (E″) at 0 °C is an indicator of the wet grip performance of a tire, and a higher value indicates better wet grip performance [36,37]. Typically, rubber compounds exhibit a high tan δ value near Tg because of the hysteresis of the rubber chains [34]. Moreover, the tan δ value at 60 °C is generally affected more by the agglomeration and deagglomeration of the silica network than by the hysteresis of the rubber; as a result, the tan δ value at 60 °C decreases as the dispersion of silica increases [34]. However, the Tg of the E-LqIRs increases with increasing epoxide content, so the tan δ peak of the E-LqIR compounds becomes lower and broader with increasing epoxide content [38,39]. Accordingly, the value of tan δ at 60 °C increased as the tan δ peak became broader. On the other hand, the value of E″ at 0 °C increased with increasing epoxide content because of the broader rubber hysteresis, indicating superior wet grip performance. The E-06 compound did not show any significant effect on the Tg because of its low epoxide content; as a result, the E-06 compound showed a lower value of tan δ at 60 °C than the TDAE oil compound, owing to its superior silica dispersion and higher filler-rubber interactions. Thus, compared with the TDAE oil compound, the E-22 and E-34 compounds exhibited lower fuel efficiency, owing to the effect of their epoxide contents on the Tg, while the E-06 compound exhibited comparatively higher fuel efficiency.
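For readers less familiar with how these two indicators are read off the dynamic mechanical data in Figure 8, the small sketch below interpolates tan δ at 60 °C (rolling-resistance proxy) and E″ at 0 °C (wet-grip proxy) from a temperature sweep; the data arrays and the simple linear interpolation are illustrative assumptions, not the measurement procedure used in the paper.

```python
# Illustrative sketch: reading the two tire-performance indicators off a DMA
# temperature sweep by linear interpolation (input arrays are hypothetical).
def interp(x, xs, ys):
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside sweep range")

def tire_indicators(temps_c, tan_delta, loss_modulus_mpa):
    rolling_resistance_proxy = interp(60.0, temps_c, tan_delta)   # lower -> better fuel economy
    wet_grip_proxy = interp(0.0, temps_c, loss_modulus_mpa)       # higher -> better wet grip
    return rolling_resistance_proxy, wet_grip_proxy
```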
Conclusions. In this study, the effects of the epoxide content on the properties of silica-filled NR vulcanizates containing E-LqIRs as processing aids were investigated. As the epoxide content of the E-LqIRs increased, the interactions between the epoxide groups and the hydroxyl groups of silica increased, confirming that silica dispersion improved with increasing epoxide content. Increasing the epoxide content also increased the resistance to extraction; utilizing this effect can reduce the problem of oil migration in tires. The vulcanizate structure analysis showed that the chemical crosslink density increased with increasing epoxide content, with the E-LqIR acting as a covering agent; however, the filler-rubber interactions decreased because the crosslinking of NR by sulfur was reduced as a result of the smaller number of double bonds in the E-LqIRs. Consequently, the E-06 compound demonstrated the highest filler-rubber interactions and wear resistance. The addition of E-LqIRs increased the Tg, and the corresponding tan δ peaks became broader with increasing epoxide content. As a result, an improvement in wet grip performance is expected because of the increased E″ at 0 °C at high epoxide contents; however, low fuel efficiency is also expected because of the simultaneous increase in tan δ at 60 °C. On the other hand, the effect of the E-LqIRs on the Tg was not significant in the E-06 compound, owing to its low epoxide content, and the higher filler-rubber interactions of this compound reduced the hysteresis of the silica network. As a result, the E-06 compound exhibited a lower tan δ at 60 °C than the TDAE oil compound. Overall, the results showed that a high-epoxide-content E-LqIR can solve the oil migration problem because of the stronger interactions of the E-LqIRs with silica, and superior wet grip performance is also expected because of the increased Tg. However, E-LqIRs that act as covering agents rather than coupling agents decrease the wear resistance, and the high Tg caused by the E-LqIRs leads to low fuel efficiency. Therefore, it was determined that the E-06 compound, with a low epoxide content, would provide the highest wear resistance and fuel efficiency, because it does not affect tan δ at 60 °C and simultaneously exhibits high filler-rubber interactions.
7,693
2021-09-01T00:00:00.000
[ "Materials Science", "Engineering" ]
The Investment Choices to Deal with the Slowdown in Economic Growth — Based on the Analysis of the Effect of Human Capital Investment The Chinese economy was crippled by the financial crisis of 2008, with the growth rate of Gross Domestic Product (GDP) falling to 6.1% in the first quarter of 2009. In the face of the economic downturn, the Chinese government adopted the "Four Trillion Investment Plan" to stimulate the economy, and GDP gradually picked up (see Figure 1). (On November 5, 2008, Premier Wen chaired an executive meeting of the State Council and put forward a series of measures to expand domestic demand and promote economic growth; implementing these measures required about four trillion RMB by the end of 2010.) However, by 2010 GDP growth was declining again. The annual growth rate of GDP was less than 8% in both 2012 and 2013, falling to 7.4% in 2014. In 2015, growth in the third quarter slipped below 7.0%, and the growth rate for the year was 6.9%. Figures 2-4 show that the monthly retail sales of consumer goods, the cumulative investment in fixed assets, and the import and export volumes in 2015 were all lower than in 2014, which means that the growth rates of consumption, investment, and exports all slowed down. Overall, after deducting price factors and compared with 2014, the annual growth rate of retail sales of consumer goods in 2015 held at 10.6%; investment in fixed assets decreased by 2.9%; and export and import volumes fell by 1.8% and 13.2%, respectively [1]. For industry, as shown in Figure 5, the monthly Purchasing Managers' Index (PMI) in 2015 hovered around the 50% line, with fluctuations of about 0.2 percentage points, while the monthly Producer Price Index (PPI) was negative throughout the year and stayed at -5.9% from August to December. Industrial Enterprises above Designated Size realized a total profit of 6,355.4 billion RMB in 2015, 2.3% less than in 2014. Only the profit of private enterprises increased, by 3.7% over 2014, while the profit of state-controlled enterprises declined by 21.9% [1]. GDP, consumption, investment, exports, PMI, PPI, and other data all show that China is in a difficult economic situation. Behind the Economic Data What is hiding behind the data? In fact, many scholars have carried out research on this problem over the past few years. The government has taken different measures to cope with the economic downturn at different stages. We try to classify the following measures and identify the underlying problems. Overcapacity In expanding investment demand, increasing investment in fixed assets such as infrastructure construction and real estate development is the approach the government has commonly used. Facing the serious setback resulting from the financial crisis of 2008, the State Council introduced the "Four Trillion Investment Plan" on November 5, 2008, covering pump-priming, industrial revitalization, science and technology projects, and the strengthening of social security. As shown in Figure 6, most of the four trillion RMB went to infrastructure and other fixed-asset investment. By 2009, excess capacity had appeared in more than 20 industries [2], and by 2013 overcapacity had spread to all walks of life. The steel, cement, coal, shipbuilding, and photovoltaic industries faced extremely serious problems of identical (homogeneous) products and sluggish exports.
Taking the steel industry as an example, at the end of 2012 the excess production capacity of steel was over 200 million tons and the rate of capacity utilization was only 72% [3], the lowest level since 2000. In order to resolve the overcapacity while minimizing its impact on economic growth, the government put forward the "Belt and Road" initiative in September 2013 and made it a new driver of economic growth. (The Silk Road Economic Belt and the 21st-Century Maritime Silk Road, or simply the Belt and Road, is a historical symbol of the ancient Silk Road in China; in 2013, President Xi put forward this initiative with the desire to develop partnerships and promote economic cooperation with neighboring countries.) In the first half of 2015, the total volume of imports and exports between China and the countries along the Belt and Road was nearly 3 trillion RMB, accounting for 25% of China's total foreign trade in the same period [4]. "The Belt and Road" is essentially a combination of investment and exports. By encouraging outbound investment, with capital output driving capacity output, excess capacity can be transferred to foreign countries; railway and port construction in these countries can absorb China's overcapacity in the steel and shipbuilding industries. The problem, however, is whether the expected return on foreign investment can be achieved. Insufficiency of Effective Demand There were two reasons for the insufficiency of effective demand. Firstly, considering their expectations for the future, more than 60% of people cannot, or dare not, consume. Because of the shrinking consumption demand in 2008, the government introduced a series of policies to spur consumption: expanding the scale of fiscal subsidies, improving the social security system, putting forward consumption-related tax policies, and improving the consumption environment. Specific measures included raising residents' minimum living guarantee standard and pensions to improve personal income, and implementing the "Home Appliances Going to the Countryside" and "trade-in" policies to reduce product prices. These measures pushed the contribution of consumption to economic growth to 56.8% in 2009, far above the 45% of 2008, but the effect faded afterwards: the same index dropped to 48.2% in 2013 and 51.6% in 2014. In the structure of the "Four Trillion Investment Plan" (Figure 6), the proportion of government expenditure on medical care, education, and social security was 3.75%; although it boosted consumption demand to some extent, it did not fundamentally change people's purchasing power, owing to the low level of government expenditure. Secondly, people who do have purchasing power have transferred their consumption demand from the domestic market to overseas. Data from the State Administration of Foreign Exchange show that Chinese tourists' overseas spending reached $164.8 billion in 2014, an increase of 28% over 2013 [5]. The reason people are willing to consume overseas is to seek products with low prices and high quality.
Blocked Export In terms of expanding exports, the Chinese government instituted foreign trade policies to reduce the costs of customs and clearance, together with measures such as cooperating with equipment manufacturers and promoting the development of cross-border e-commerce. However, because of the slowdown in external demand, China's declining labor cost advantage, and rising export costs, China's export trade volume decreased year by year, which directly contributed to the economic downturn. In addition, if international crude oil prices remain low in the future and domestic prices cannot match them, the competitiveness of export enterprises will be affected and foreign trade volume will decline further. Supply-Side Reform In the face of the financial crisis of 2008 and the subsequent economic downturn, the government stimulated the economy by increasing fixed investment. This was essentially a Keynesian practice, with an emphasis on demand-side management. Although it achieved some success, it left a series of problems: overcapacity, a distorted structure, environmental pollution, and so on. Without giving up fixed investment, the government put forward supply-side structural reforms in November 2015, which emphasize four elements on the supply side (labor, land, capital, and innovation) and focus on resolving excess capacity, reducing business costs, decreasing real estate inventory, and guarding against financial risks. Specific practices are as follows: improve the quality of products and expand effective supply to adapt to changes in demand; relax the existing Family Planning Policy and pay attention to education so as to enhance the demographic dividend and human capital investment; and build incentive mechanisms and create a relaxed environment to guarantee enterprise innovation activities. But to ensure that these measures achieve the desired effect and that labor, land, capital, innovation, and other factors can play an active role in the market, an important premise is that enterprises and individuals have sufficient funds. However, the current situation is that industrial added value has decreased while the growth rate of broad money (M2) has increased. Specifically, the added value of Industrial Enterprises above the Designated Size increased by 6.1% throughout 2015, and the growth rate of the broad money balance was 13.3% at the end of 2015 [1]; in 2014, the two indices were 8.3% and 12.2%, respectively [6]. For enterprises, this means reduced cash flow in the production process, higher production costs, and declining profits on sales. If enterprises do not have enough funds to support operations, it will become more difficult for them to improve product quality and carry out R&D activities. For individuals, it means that workers' income growth is declining while their money is continuously diluted. When individuals lack funds and disposable income cannot keep up with consumption, they will have neither sufficient purchasing power to consume nor the ability to upgrade consumption and invest in human capital.
In the demand side, the government pays too much attention to the fixed investment, especially investment in infrastructure and real estate, which leads to unsustainable economic growth.In the supply side, the government faces the shortage supply of labor, innovation and effective demand.How to make the development of economy sustainable?The Central Economic Working Conference held in Beijing in December 2014 made it clear that economic growth will rely more on the quality of human capital and technological progress."China's 13th Five Year Plan" also showed that government will emphasize on human development and comprehensively improve the education, medical and health level.This suggests that government has introduced human capital investment into a new round of investment.So, what is fixed investment?What is human capital investment?What's the role of these two investment in economic growth?These questions will be discussed from the perspective of the human capital investment theory as follows. Investment Effect Analysis Capital has two categories: physical capital and human capital.Physical capital are plants, equipment, raw materials and other forms of production goods; human capital is knowledge, skills and health embodied in the human body, through the investment (including education, training, health care, and migration) [7].Human capital is a form of capital relative to physical capital.It has close relationship with the physical capital, but also has its own uniqueness.The reason why knowledge, skills and health embodied in people can be regarded as a kind of capital, is that it has the same basic characteristics as physical capital.For example, they are both the result of investment, indispensable factors of production process, with the characteristic of scarcity, a way to seek economic benefits.However, human capital has some characteristics different from physical capital: physical capital depends on the physical products while human capital is dependent on the human body; the formation of human capital and its efficiency is affected by personal preference. Both human capital investment and physical capital investment are the power to boost economic growth, but the effect of them on economic growth differs. Analysis of Physical Capital Investment Effect In the whole process of physical capital investment, investment in fixed assets occupies a dominant position.Therefore, physical capital investment is commonly referred to fixed investment.According to investment multiplier theory, fixed investment, with the function of multistage transmission, can double and redouble GDP increase: it will generate the need for raw materials, production equipment, labor demand, and then an increase in related industry output and consumption demand follows. 
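For concreteness, the textbook form of the investment multiplier invoked in the preceding sentence can be written as follows; the numerical value of the marginal propensity to consume used below is purely illustrative and is not taken from the paper.

\[ \Delta Y \;=\; \frac{1}{1 - c}\,\Delta I , \qquad c = \text{marginal propensity to consume.} \]

With an illustrative c = 0.6, the multiplier is 1/(1 − 0.6) = 2.5, so in this idealized accounting an initial fixed investment of 4 trillion RMB would raise aggregate output by about 10 trillion RMB across successive rounds of spending.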
Fixed investment generates need for production in the investment process and will increase production capacity at the end of the investment, thus having both demand and supply effects on economic growth.Demand effect is created as investment process begins.With the start of the investment activities, required inputs are needed to buy, leading to a large demand of production goods and development of related industries, thereby stimulating the economic growth.Supply effect is created when the investment ends.As fixed assets are delivered to use or put into production, the supply of production goods and end-product will increase.Therefore, physical capital will increase the demand of production at the beginning of investment, and improve capacity at the end of the investment.But if the increased production capacity cannot be digested, overcapacity will appear, especially in case of excessive investment.Excessive investment has created excess demand before the production is completed, which will accelerate inflation.After completion, the sudden increase in production capacity, due to multistage transmission, will lead to overcapacity in the whole and related industry.On the one hand, overcapacity will decrease return on investment of enterprise.To solve the problem, the factories need to amalgamate or close, which will result in unemployment, then the reduction in household income and consumption expectations.On the other hand, large enterprises with overcapacity rely on credit to survive, however, emerging, small or medium sized enterprises get stuck due to difficulties in financing.This mismatch leads to weak innovation in real economy and difficulty in upgrading industrial structure.So economy will be under more and more serious downward pressure. Analysis of Human Capital Investment Effect In the 1980s, Lucas (1988) introduced the human capital theory into Theory of Neo-Economic Growth, emphasizing that continuing to invest in human capital can improve a country's long-term growth rate sustainably [8].Human capital investment is the same as other forms of investment, which includes costs and benefits.Human capital is an economic engine for backward country to take off [9].Schultz (1961) suggested that the contribution of human knowledge, skills, health and other human capital to economic growth, was far more than physical capital and labor quantity to economic growth [7].Heckman (2004) found that in China, the potential return on investment of human capital was higher than that of physical capital, but China has great political distortions, which cannot achieve the potential return on investment.He also put forward that human capital and physical capital investment should have appropriate proportion [10].Sun and Dong (2007), through the empirical analysis, pointed out that in China, besides maintaining the physical capital accumulation, human capital investment should also be increased, thereby to promote sustainable and stable economic development [11].Wang (2011) analyzed economic data of the year of 1978 to 2009 in China to show that physical capital, human capital and GDP existed a long-term co-integration relationship: in shortterm economic growth depended on physical capital investment, the contribution of human capital to economic growth was relatively small; but in long-term, the contribution of human capital had significant and persistent effect [12]. 
Human capital investment is achieved through education, training, health care and the migration.In the process of human capital investment (the production of human capital), it will generate two aspects of demand: one is the demand for material products and services, such as school and hospital buildings, teaching facilities and medical equipment and other products; the other is the demand for human capital, such as teachers, doctors and the related material production personnel.Not only do these demand promote the development of education, health industries, but also enable the development of construction, high-tech and other related industries, thereby boosting the economic growth.In addition, the products and services required for human capital investment exist in the form of end-product, which has a larger income elasticity and price elasticity.Continuing demand for this kind of products and services will trigger physical capital investment in return. Different from that there is only supply effect when physical capital investment ends, there exists both supply and demand effect when human capital investment ends.The results of human capital investment will lead to a higher level of human capital supply, including mastering more knowledge and skill, having better physical health, so that the marginal productivity of labor can be improved; social production possibility frontier under established resources can move as far as possible; more product and service with high-quality can be provided.This is the supply effect of the human capital.The demand effect of human capital investment has two aspects: Firstly, the improvement of the human capital stock increases personal income; the increase of personal income lead to the growth of consumer demand and the expansion of the consumer market, which brings more opportunities and stronger stimulation of investment.Secondly, as higher human capital stock requires higher physical capital to match, more advanced physical capital will be invested.It is such a recycle that demand and supply effect of human capital investment promotes the sustainable economic growth.Figure 7 compares the different effect of physical capital investment and human capital investment on economic growth. Policy Proposal for Government The problem of overcapacity since 2008 highlights the tough policy issues: if we don't suppress the impulse of fixed investment, the problem of excess capacity will not be solved and will become more serious.Moreover, it may trigger a chain reaction, which eventually leads to decline in economic growth.If we compress the capacity, the economy will encounter downturn immediately. In the face of such a dilemma, this paper compares the different effects of human capital and material capital investment on economic growth, and comes to a conclusion that the government cannot only apply fixed investment policy to boost economy, and should increase human capital investment as well.Fixed investment has stimulated Chinese economy to develop rapidly for decades, and human capital investment will provide Chinese economy with sustainable growth.To this end, the government can make adjustments in the following aspects. 
Increase Public Investment in Human Capital In education, health, social security and other public human capital investment areas, the expenditure level of Chinese government is far less than other countries.In education, China's education investment did not account for 4.4% of GDP until 2012.It was the first time for China to reach the international standard level of 4%.However, in 2001, for United States, Japan and other high-income countries, their public education expenditure had already accounted for 4.8% of GDP.Moreover, for Colombia, Cuba and other low-income countries, the figure was 5.6% [13].In terms of medical treatment, according to the International Statistical Yearbook in 2013, the data showed that government's health expenditure in China occupied 5.15% of GDP, not only lower than the world average level (10.60%), but also lower than the average level of low-income countries (5.28%) [14].China's public social security level was relatively low, too.Taking the pension fund for instance, Report on the Development of China's Pension in 2012 released that China's pension fund reserves only made up 2% of GDP, while the figure was 15% in United States, 25% in Japan, the highest 83% in Norway [15].Therefore, Chinese government should increase public human capital investment in those key areas. Play the Role of Policy Guidance Apart from improving public human capital investment level, government should guide and encourage other subjects of human capital investment (family and individual) to increase investment in human capital.Firstly, government can actively develop diverse investment subjects.As an example, in education, government can introduce social groups and enterprises by the form of joint ventures, cooperation and others, making them become the part of today's education subject; promote the cooperation between colleges and enterprises; encourage industry associations and enterprises to deliver vocational education and training.In Medicare, the government can encourage social capital to establish medical service institutions, in order to ease such problems as shortage supply of current medical service, unreasonable public health system, low efficiency of resource allocation and nervous doctor-patient relationship.Besides, government needs to reform the distribution system to ensure that enterprises, individual families have sufficient funds for human capital investment.1) Setting up a sound social security system.A good social security system can not only decrease personal expenditure when individuals encounter disease, accident, and become old, but also put the family and individual under life and job security.Therefore, family and individuals dare to consume and invest in human capital for peace of mind.2) Decreasing tax to alleviate the burden on businesses and individuals.With the decrease in individual income tax and increase in disposable income, individuals are willing to use surplus funds for human capital investment, such as acquiring more knowledge and improving health level, after they meet the basic needs.Personal consumption in education, health will spur economic growth.With lower corporate taxes and fees, the operation cost of the enterprise will be reduced, and profits retained within the enterprise will be increased.Such profit can be used in human capital investment as well as the improvement of supply quality, innovation and R & D activities to create more corporate value. 
Achievements Firstly, we use data such as GDP, consumption, investment, exports, PMI, and PPI to describe the current economic situation in China, and then explain the reasons for the economic downturn from three aspects: consumption, investment, and exports. Secondly, we compare the different effects of physical capital investment and human capital investment on economic growth and reveal the special role of human capital investment in promoting sustainable economic growth. Finally, we put forward some proposals on investment in human capital for the government. The contribution of this paper lies in the following. The Chinese government usually adopted the policy of increasing fixed investment to stimulate the economy. Since the economy declined again from 2010 and many tough economic issues appeared, we try to analyze these issues and find ways to deal with them. Based on human capital investment theory, we believe that the effect of investment on economic growth depends on the investment purpose and the investment subject, and that human capital investment can boost sustainable economic growth. The good news is that the government has realized the importance of human capital investment for economic growth: the Central Economic Working Conference held in December 2014 and "China's 13th Five Year Plan" suggest that the government has introduced human capital investment into a new round of investment. The purpose of this paper is to provide a theoretical basis for the government policy of increasing investment in human capital. Figure 1. Economic growth by quarter, 2008-2015 (data from the National Bureau of Statistics of the People's Republic of China). Figure 2. Consumer spending by month, 2014-2015 (data from the National Bureau of Statistics of the People's Republic of China). Figure 3. Fixed investment by month, 2014-2015 (data from the National Bureau of Statistics of the People's Republic of China). Figure 4. Total value of imports and exports by month, 2014-2015 (data from the National Bureau of Statistics of the People's Republic of China). Figure 5. Industrial conditions by month in 2015 (data from the National Bureau of Statistics of the People's Republic of China). Figure 6. Structure of the "Four Trillion Investment Plan" in 2008 (data from the website of the National Development and Reform Commission of China). Figure 7. Different effects of physical capital and human capital investment on economic growth.
5,215
2016-08-17T00:00:00.000
[ "Economics" ]
A Survey of Communications and Networking Technologies for EnergyManagement in Buildings and Home Automation With the exploding power consumption in private households and increasing environmental and regulatory restraints, the need to improve the overall efficiency of electrical networks has never been greater. That being said, the most efficient way to minimize the power consumption is by voluntary mitigation of home electric energy consumption, based on energy-awareness and automatic or manual reduction of standby power of idling home appliances. Deploying bi-directional smart meters and home energy management (HEM) agents that provision real-time usage monitoring and remote control, will enable HEM in “smart households.” Furthermore, the traditionally inelastic demand curve has began to change, and these emerging HEM technologies enable consumers (industrial to residential) to respond to the energy market behavior to reduce their consumption at peak prices, to supply reserves on a as-needed basis, and to reduce demand on the electric grid. Because the development of smart gridrelated activities has resulted in an increased interest in demand response (DR) and demand side management (DSM) programs, this paper presents some popular DR and DSM initiatives that include planning, implementation and evaluation techniques for reducing energy consumption and peak electricity demand. The paper then focuses on reviewing and distinguishing the various state-of-the-art HEM control and networking technologies, and outlines directions for promoting the shift towards a society with low energy demand and low greenhouse gas emissions. The paper also surveys the existing software and hardware tools, platforms, and test beds for evaluating the performance of the information and communications technologies that are at the core of future smart grids. It is envisioned that this paper will inspire future research and design efforts in developing standardized and userfriendly smart energy monitoring systems that are suitable for wide scale deployment in homes. 
Introduction Residential energy consumption and the amount of pollution emitted from the electric generators create side effects that are not beneficial to public health and well-being, including increased pollution in the air and water (CO 2 and other greenhouse gases, mercury, and other trace elements and particulate matter), and the depletion of finite resources [1]."Green Smart Home Technologies" are aimed at reducing the footprint of greenhouse gases by efficient energy management in residential buildings.Studies have shown that the display of real-time information on consumption can result in reductions of up to 30% by enabling end users to consume responsibly and manage effectively [2].In recent times, more so than ever, the consumer has become more "green" conscious and therefore is looking for realtime visibility of energy consumption [3].Further, the market for residential energy management is poised to grow dramatically due to increased consumer demand and new government and industry initiatives [4].Smart homes have been studied since 1990s, and their primary focus has been resident comfort [5].They employ energy efficiency by occupancy check or adaptability to outside conditions.However, they are not automatically a component of the smart grid.Their integration to smart grid is an active topic [6][7][8].With this in mind, this paper motivates future research in the area of home area networking by revisiting the concepts of smart grids and smart homes and summarizing the state of the art in home energy management (HEM) communications and control technologies. 1.1.Bringing Smart Grids to Green Smart Homes.Smart grid is accelerating the energy value change transformation, and will enable electricity distribution systems to manage alternative energy sources (e.g., solar and wind), improve reliability, facilitate faster response rates to outages, and manage peak-load demands.Building a smart digital meter, the advanced metering infrastructure (AMI) is a first step and would enable processing and reporting usage data to providers and households via two-way communication with the utility offices [9][10][11].In recent years, there have been a lot of initiatives on the part of the government, utilities companies, and technology groups (e.g., standards committees, industries, alliances, etc.) 
for realizing smart grids for green smart homes [12].Government initiatives include mandating upgrades to the grid and adding intelligence to meters that measure water, gas, and heat.The market for smart home products, such as lighting and HVAC controls, in-home utility monitors, and home security systems, is also on the rise, driven in part by the desire to conserve energy and by the expansion of home automation services and standards-based wireless technologies.Further, energy directives and smart grid initiatives have attracted hundreds of companies with energy management systems including General Electric, Cisco, Google, and Microsoft.Efforts are underway to design new standards, protocols, and optimization methods that efficiently utilize supply resources (i.e., conventional generation, renewable resources, and storage systems) to minimize costs in real time.In other words, smart grid technologies so far focus on integrating the renewable energy resources to the grid to reduce the cost of power generation and integrating these resources requires storage systems.Smart grids can be potent tools in helping consumers reduce their energy costs, but consumers have several concerns that could inhibit rapid adoption.In order to maximize smart grids, utilities and suppliers of energy management solutions must first educate consumers about the benefits of these advanced systems and then package these solutions so that capabilities and advantages are obvious to consumers and easily integrated into their lifestyles. Home Energy Management and Home Area Networks. The term home area networks (HANs) has been used loosely to describe all the intelligence and activity that occurs in HEM systems, and this section describes the concepts of HEM systems and HANs.Stated simply, HANs are extensions of the smart grid and communications frameworks, much like the familiar local area networks (LANs), but within a home [10].Instead of a network of servers, printers, copiers, and computers, the HAN connects devices that are capable of sending and receiving signals from a meter and/or HEMS applications.Wired or wireless, there are tradeoffs that involve power consumption, signaling distance, sensitivity to interference, and security.The main point here is that HANs are not energy management applications, but enable energy management applications to monitor and control the devices on the home network. 
With limited data input and display capabilities, inhome displays (IHDs) function as a visual indicator of the electricity rates at any point in time.Moreover, IHDs are one-way communication devices, meaning the user can only monitor, but not take, real-time actions and provide feedback to the HAN like the HEM systems.So, HANs and IHDs still need an energy management application, an HEM solution [13,14], in order to gain the most benefit from these smart grid components.A web-based portal for an HEM system is the best interface to the utility billing and demand response programs, because it enables the easiest execution and control of intelligent appliances that can be "enrolled" into such programs.A HEM solution would enable the user to recall the optimized presets for sustainable energy-saving, get suggestions on energy efficiency improvements, and see how ones' energy management compares to others in ones' peer group or neighborhood [10].A basic representation of a smart grid-smart home interface that uses a variety of different networking topologies across the different domains and subdomains is illustrated in Figure 1, and the focus of this paper is HEM systems and the HAN technologies. Benefits of HEM (1) Minimize Energy Wastage.Home automation and realtime energy monitoring makes energy savings feasible.For example, lighting control is not about reducing light, but facilitating the correct light when and where required, while reducing wastage.Energy savings can also be realized according to occupancy, light level, time of day, temperature, and demand levels, for example, opening and closing blinds and shutters automatically, based on the time of day and amount of light to optimize the mix of natural light and artificial light or according to the temperature difference between indoor and outdoor to optimize heating, ventilating, and air conditioning (HVAC) power consumption.(2) Peace of Mind.Home energy management is important because it provisions time scheduling and predictive scheduling that ensures peace of mind while yielding energy savings.With preset scheduling, the user does not need to turn them on all the time and thus minimize energy consumption.Further, lights and TV turned on will help to discourage potential intruders while you are away from home.Safety locks and security systems can be enabled as well; lighting and sound/motion sensors can be connected to the HEM that track activity 24 hours a day and alerts the user and the local police or fire department if and when needed. (3) Eco-Friendly.As climate change becomes an increasingly real concern, energy efficiency has become top priority in homes and businesses alike.When describing a green home, energy efficiency refers to every aspect of energy consumption, from the source of electricity to the style of light bulbs.Reducing energy consumption requires a longterm behavioral change, the first step being an investigation of the current carbon footprint of residential and office buildings.HEMs aid in this change by helping the user monitor the usage (e.g., heat, light, and power in homes) and by offering suggestions on how to cut down CO 2 emissions, a primary cause of global climate change.Continuing with the same example from [2], and using the "Terra Pass Carbon Footprint Calculator," a reduction in the CO 2 emissions (from the household) by a factor of more than 60% would be possible. 
(4) Well-Being of Residents.With the average family spending a huge amount annually on gas and electricity supplies alone, it certainly makes sense to do everything possible to reduce household utility bills.Good energy management within the home brings about this reduction, thereby increasing available capital.HEM systems also increase the transparency and improve the billing service.Such systems make life easy by providing the user with control and management, which will help manage ones time better and thus help to reduce stress.Reducing the energy consumption in a household by about 23% cut the monthly bills by over a third [2]. (5) Public Good.In terms of public good, four things can occur simultaneously when homes are energy efficient: (i) finite energy supplies are not depleted as quickly, (ii) emissions are reduced (including all the corresponding benefits associated with reduced emissions), (iii) consumers save money, and (iv) consumers increase net disposable income.With low-to-moderate income residents, saving money on utilities and spending those savings elsewhere can be a significant quality of life factor.An additional public benefit can result from energy-efficient housing.When government agencies serve as the housing provider for low-income residents, energy efficiency can contribute to taxpayer savings.Money can be saved when the government does not have to finance wasteful energy practices with public housing.An example of a governmental agency collaboration designed to reduce energy use in public housing is the partnership between local housing agencies (LHAs) (agencies who receive program funding from the department of housing and urban development (HUD)) and the DOE Rebuild America program.In summary, HEM systems are a step more advanced than previous energy-saving appliances that provides even more eco-friendly performance through the use of sensor technology.HEM systems allow energy monitoring, automation of appliances, and control system settings to respond to demand response levels.Thus, planning advance personal energy consumption plans is encouraged to leverage from rebates/incentives for green homes and to benchmark oneself on a community level.To the best knowledge of the authors, this is the first comprehensive tutorial on the state of the art in home area communications and networking technologies for energy and power management.This paper also presents a classification of the many affordable smart energy products offered by different companies that are available in the market. 
Background on Demand Response (DR) and Demand Side Management (DSM) Programs In support of "smart grid" initiatives, several emerging technologies and techniques have been presented in the past decade.These techniques include, among others, advanced metering infrastructure (AMI) and two-way communication, integration of home area network (HAN) and home automation, and a push to invest in renewable microgeneration.The traditionally inelastic demand curve has began to change, as these technologies enable consumers (industrial to residential) to respond to the energy market behavior, reducing their consumption at peak prices, supplying reserves on a as-needed basis, and reducing demand on the electric grid [15].Therefore, the development of smart grid-related activities has resulted in an increased interest in demand response and demand side management programs.Demand response programs are used to manage and alter electricity consumption based on the supply, for example, during a reliability event (Emergency DR), or based on market price (Economic DR) (e.g., [16,17]).These programs can involve curtailing electric load, as well as utilizing local microgeneration (customer owned Distributed Generation).DR programs can be incentive-based programs (IBPs), classical and market-based, or priced based-programs (PBP) [17]. Demand side management (DSM) refers to planning, implementation, and evaluation techniques, including policies and measures, which are designed to either encourage or mandate customers to modify their electricity consumption, in terms of timing patterns of energy usage as well as level of demand.The main objective is to reduce energy consumption and peak electricity demand. DR and DSM initiatives can benefit customers, utilities, as well as society as a whole.From the customer perspective, these programs can help reduce the electric bill and is possibly incentivized by the utility (e.g., through tax credits).From a utility perspective, in addition to reducing supply costs (generation, transmission, and distribution), benefits also include deferral of capital expenditure on increasing system capacity, improved system operating efficiency and reliability, and better/more data to be used for planning and load forecasting.Society as a whole benefits also through the reduction of greenhouse gas emissions, due to the decrease (or nonincrease) in energy consumption and peak demand and the avoided expansion of grid generation capacity.Major benefits of DSM are summarized in Table 1. As part of DSM initiatives, several objectives are included, mainly load management and energy efficiency [18] (refer to Figure 2).Under the load management objectives, we have peak clipping, valley filling, and load shifting.Energy efficiency, or conservation, involves a reduction in overall electricity usage.Electrification and flexible load shape, also shown in Figure 2, involve, respectively, programs for customer retention and development of new markets and programs that utilitie setup to modify consumption on an as-needed basis (i.e., customers in these programs will be treated as curtailable loads).DSM concepts have been studied since the 1980s and early 1990s; reports and survey on the subject were published by the Electric Power Research Institute (EPRI) and the North American Electric Reliability Corporation (NERC) [19][20][21], among others. 
In the past decade, the focus on smart grid applications and progress in communication protocols and technologies has improved the communication ability between electricity suppliers and end-use consumers, which would allow active deployment of DR at all times (demand dispatch [22]), not just event-based DR.Customers are then able to monitor and control their load in real time and to possibly trade in the energy market.This requires the use of sophisticated energy management system (EMS) to control equipment and appliances [23]. In [24], an optimized operational scheme for household appliances is introduced through the use of a demandside management-(DSM-) based simulation tool.The tool uses a particle swarm optimization algorithm to minimize customers cost and determine a source management technique.In the 1989 paper [25], the authors describe a system used to control electricity usage in homes or small businesses, by shifting some of the load from the peak to the valley and using a real-time variable pricing scheme.The proposed system uses a telephone to power line carrier (PLC) interface, a meter that measures energy with variable pricing, and a controller that adjusts energy utilization based on price.In [26], the authors developed, using mixed integer linear programming, a home energy management system (HEMS), which provides optimum scheduling for operation of electric appliances and controls the amount of power provided back to the grid from the excess local photovoltaic generation.In [27], the authors present a common service architecture developed to allow end-users interaction with other consumers and suppliers in an integrated energy management system.The architecture would facilitate users with renewable micro-generation to integrate with the electric grid, through the use of a central coordinator inside their home gateway.In [28], a multiscale optimization technique for demandside management is presented.A home automation system is proposed, which dynamically takes into account user comfort level as well as limits on power consumption.In [29], a novel strategy for control of appliances is proposed and utilizes a home automation and communication network.The goal of the proposed technique is to provide continued service, at possibly reduced power consumption levels, during power shortages.In [30], communication methodologies amongst control devices in home automation systems are demonstrated.Specifically, communication over a power line is presented to enable control of appliances in building/home energy management systems.In [31], a home automation system which controls household energy consumption is proposed.The system takes into account predicted/anticipated events and uses a tabu search to maximize user comfort and minimize consumption costs.In [32], control mechanisms to optimize electricity consumption within a home and across multiple homes in a neighborhood are presented and evaluated.Energy management controllers (EMCs) are assumed to control appliances operation based on energy prices and consumers preset preferences.The authors first show that a simple optimization model used for determining appliance time of operation purely based on energy price may actually result in higher peak demand.An EMC optimization model, based on dynamic programming, which also accounts for electricity capacity constraints, is then presented.A distributed scheduling mechanism is also proposed to reduce peak demand within a neighborhood. 
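To make the price-responsive scheduling idea surveyed above more tangible, the toy Python sketch below shifts a single deferrable appliance run to the cheapest allowed window under a time-of-use tariff; the tariff, appliance power, and run length are hypothetical, and the sketch is not drawn from any of the cited systems, which use richer formulations such as MILP, dynamic programming, and particle swarm optimization.

```python
# Toy illustration (not from any cited system): pick the cheapest contiguous
# start window for a deferrable appliance given hourly time-of-use prices.
def cheapest_start(prices, allowed_hours, run_length_h, power_kw):
    best = None
    for start in allowed_hours:
        window = range(start, start + run_length_h)
        if not all(h in allowed_hours for h in window):
            continue  # run must stay inside the user's allowed hours
        cost = sum(prices[h % 24] for h in window) * power_kw
        if best is None or cost < best[1]:
            best = (start, cost)
    return best  # (start hour, energy cost in currency units)

# Hypothetical 24-hour tariff ($/kWh) and a 2-hour, 1.2 kW appliance run:
tou = [0.08] * 7 + [0.15] * 4 + [0.25] * 7 + [0.15] * 4 + [0.08] * 2
print(cheapest_start(tou, allowed_hours=range(0, 24), run_length_h=2, power_kw=1.2))
```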
To summarize, DSM refers to planning, implementation, and evaluation techniques, including policies and measures, designed to encourage or mandate customers to modify their electricity consumption, both in the timing of energy usage and in the level of demand. The main objective is to reduce energy consumption and peak electricity demand. Potential research in this area should focus on identifying optimized, system-level, hardware-software co-designed solutions that implement the DSM functionalities in the most energy-efficient manner and respond to the dynamically changing operating environment of the HAN under real-time constraints.

HAN Communications and Network Technologies

The energy management system is at the heart of green buildings and enables home energy control and monitoring, providing benefits to both consumers and utilities. The HEM system intelligently monitors and adjusts energy usage by interfacing with smart meters, intelligent devices, appliances, and smart plugs, thereby providing effective energy and peak load management. The platform for this communication is the HAN, and this section reviews the communications and network technologies that interconnect the HEM with end points and smart meters [33]. The cost associated with HEM applications, as can be seen from Figure 3, is significantly lower than that of other home applications because of the differing functionalities. For example, HANs comprise command-based systems that require very short acquisition times for sending data to multiple destinations, which cuts down the data rate and bandwidth requirements compared to link-based systems (e.g., communication and entertainment systems) that need a reliable point-to-point communication link for longer periods of time.

Internet protocol (IP) is a protocol used for communicating data within a packet-switched internetwork and is responsible for delivering data from source to destination based on an IP address. Being the foundation on which the Internet is built, IP is a single layer within a multilayer suite known as the TCP/IP stack. Due to this abstraction, IP can be used across a number of different heterogeneous network technologies. Because of its ease of interoperability, ubiquitous nature, widespread adoption, and the ongoing work to create a lightweight interface, IP is seen as essential to the success of HAN and smart grid development. As the number of devices communicating within the HAN increases, so does the requirement for usable IP addresses. Very broadly, the different technologies (comprising specifications for the physical and network layers) can be classified by transmission medium into wired and wireless, as shown in Figure 4.
Wired HANs. Power line networking, which uses the existing home electricity wiring to communicate, is widely adopted for high-speed wired communication applications (e.g., high-quality, multistream entertainment networking) and has a mature set of standards. Ethernet is a very common technology and supports a range of data rates using either unshielded twisted pairs (10 Mbps-1 Gbps) or optical fibers (as high as 10 Gbps). It uses a common interface found in many household devices, including computers, laptops, servers, printers, audio-video (AV) equipment, media players, and game consoles. Ethernet may not be appropriate for connecting all devices in the HAN (especially appliances) because of its high cost and power requirements, plus the need for separate cabling back to a central point. X10 is a technology (and an international and open industry standard) that uses power line wiring for signaling and control of home devices, where the signals involve brief radio frequency bursts representing digital information. However, it suffers from issues such as incompatibility with installed wiring and appliances, interference, slow speeds, and a lack of encryption. Insteon addresses these limitations while preserving backward compatibility with X10 and enables the networking of simple devices such as light switches over the power line (and/or radio frequency (RF)). All Insteon devices are peers, meaning each device can transmit, receive, and repeat any message of the Insteon protocol without requiring a master controller or routing software. All the previously described technologies support popular protocols such as IP and hence can easily be integrated with IP-based smart grids. More recently, ITU G.hn has been developed by the International Telecommunication Union (ITU) and promoted by the HomeGrid Forum. It supports networking over power lines, phone lines, and coaxial cables, with expected data rates up to 1 Gbps. ITU G.hn provides secure connections between devices supporting IPv4 and IPv6 and offers advantages such as the ability to connect to any room regardless of wiring type, self-installation by the consumer, built-in diagnostic information, self-management, and multiple equipment suppliers.

Wireless HANs. Next, we discuss wireless networking of low-cost, low-power (battery-operated) control networks for applications such as home automation, security and monitoring, device control, and sensor networks. Low-cost ZigBee-based solutions allow wide deployment in wireless control and monitoring applications; the low power usage allows longer life with smaller batteries (up to 10 years), and the mesh networking provides high reliability and broader range.
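The multi-year battery-life claim above can be sanity-checked with a duty-cycle estimate. The sketch below uses assumed figures for battery capacity, sleep and active currents, and reporting rate (they are not taken from any ZigBee data sheet), and it ignores battery self-discharge, which in practice caps real lifetimes near the quoted ten years.

```python
# Back-of-the-envelope battery-life estimate for a duty-cycled HAN sensor node.
# All currents, durations, and the battery capacity are illustrative assumptions;
# battery self-discharge is ignored.
battery_mAh = 800.0            # small AAA-class or coin-cell capacity
sleep_uA = 1.5                 # deep-sleep current
active_mA = 20.0               # radio and MCU active (transmit/receive)
active_ms_per_report = 30.0    # time awake per report
reports_per_hour = 12

active_s_per_h = reports_per_hour * active_ms_per_report / 1000.0
avg_mA = (active_mA * active_s_per_h
          + (sleep_uA / 1000.0) * (3600.0 - active_s_per_h)) / 3600.0
hours = battery_mAh / avg_mA
print(f"average current: {avg_mA * 1000:.1f} uA -> lifetime ~ {hours / 24 / 365:.1f} years")
```

With these assumptions the average draw is only a few microamps, so the electronics alone would last decades; in practice shelf life and self-discharge dominate, which is why roughly ten years is the figure quoted for such nodes.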
Z-Wave, a proprietary wireless communications technology designed specifically for remote control applications in residential and light commercial environments [34], is popular for the following reasons. Unlike WiFi and other IEEE 802.11-based wireless LAN systems, which are designed primarily for high-bandwidth data flow, the Z-Wave radio frequency (RF) system operates in the sub-gigahertz frequency range (≈900 MHz) and is optimized for low-overhead commands such as on-off-dim (as in a light switch or an appliance), raise-lower (as in a volume control), and cool-warm-temp (as in an HVAC system), with the ability to include device metadata in the communications. As a result of its low power consumption and low manufacturing cost, Z-Wave is easily embedded in consumer electronics products, including battery-operated devices such as remote controls, smoke alarms, and security sensors. More importantly, Z-Wave devices can also be monitored and controlled from outside the home by way of a gateway that combines Z-Wave with broadband Internet access.

WiFi is a popular IP-based wireless technology used in home networks, mobile phones, video games, and other electronic devices. Support is widespread: nearly every modern personal computer, laptop, game console, and peripheral device provides a means to access the network wirelessly via WiFi. Another IP-based wireless technology is ONE-NET, an open-source standard that is not tied to any proprietary hardware or software and can be deployed using a variety of low-cost off-the-shelf radio transceivers and microcontrollers from various manufacturers.

6LoWPAN (also a standard from the Internet Engineering Task Force (IETF)) optimizes IPv6, the next-generation IP communication protocol for internetworks and the Internet [35], for use with low-power communication technologies such as IEEE 802.15.4-based radios [36], enabling the transfer of small packets over low-bandwidth links. IPv6 itself is primarily aimed at replacing the current IPv4 protocol, whose address space was predicted to be exhausted in 2011. 6LoWPAN operation involves compressing 60 bytes of headers down to just 7 bytes. The target of IP networking for low-power radio communication is applications that need wireless Internet connectivity at lower data rates for devices with a very limited form factor. 6LoWPAN allows communication with devices across the Internet without having to go through a ZigBee-to-IP translation.
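To put the 60-byte versus 7-byte header figures in context, the short calculation below compares on-air overhead for a small metering payload; the 12-byte payload size is an assumption chosen only for illustration.

```python
# Rough protocol-overhead comparison using the header sizes quoted above.
# The 12-byte application payload (e.g., one metering sample) is illustrative.
payload = 12
for name, header in [("uncompressed IPv6/UDP", 60), ("6LoWPAN-compressed", 7)]:
    frame = header + payload
    print(f"{name:22s}: {frame:3d} bytes on air, "
          f"{100.0 * header / frame:.0f}% of the frame is header")
```

For short sensor readings the uncompressed headers dominate the frame, which is why header compression matters so much on low-bandwidth 802.15.4 links.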
Finally, EnOcean technology efficiently exploits slight mechanical excitation and other ambient potentials (motion, pressure, light, and temperature), using the principles of energy harvesting to transform such energy fluctuations into usable power for networking self-powered wireless sensors, actuators, and transmitters.

HEM Hardware

The imminent penetration of HEM systems into green homes has created a new market segment (HEM) for embedded hardware providers. In June 2010, Cisco Systems unveiled its home energy controller (HEC), which is part of a much larger smart grid infrastructure that spans solutions for utilities, substation networks, smart meter networks, and the home network. The HEC has a 7-inch user-interface tablet that runs Ubuntu Linux, powered by a 1.1 GHz Intel Atom processor. Supplementing the HEC on the utility side is Cisco's Home Energy Management Solution, which gives utility companies the tools to enhance customer satisfaction and effectively implement demand management, load shedding, and pricing programs for residential deployments. Figure 5 shows Cisco's HEC architecture.

Using the HEC, consumers can take advantage of special energy pricing programs, demand response can be managed, and electric vehicle integration becomes a reality. The HEC provides (1) the user with engaging and easy-to-use energy management applications to monitor and budget energy use and to control thermostats and appliances, (2) the utility with the ability to provision and manage a home area network (HAN) that monitors and controls energy loads, and (3) highly secure end-to-end data communications across wired and wireless media and networking protocols. The HEC is a networking device that coordinates with the networks in the home and the associated security protocols, such as ZigBee (communication with smart appliances), WiFi (communication with the home network), and PLC and ERT (communications with utilities). To monitor and control energy loads such as heating, ventilating, and air conditioning (HVAC) systems, pool pumps, water heaters, TVs, computers, and other devices, consumers will need to wirelessly connect the appropriate compatible, tested peripherals to the HEC. Cisco is currently in trials with utilities for the home energy controller.

To scale and support devices in residential deployments, Cisco's Energy Management Software is deployed in utility facilities, and its hosted services help utilities provide personalization and data to increase customer satisfaction with energy programs. These services include (i) provisioning and management capabilities, (ii) a unique, customized look and feel for devices, (iii) mass firmware updates to thousands of devices, and (iv) integration with utility back-end applications and third-party software.

During the last quarter of 2010, both Freescale Semiconductor and Intel Corp.
announced reference designs targeting the HEM market. Freescale demonstrated its Home Energy Gateway (HEG) reference platform in September 2010 in Europe. Freescale's Home Energy Gateway reference platform is based on the i.MX ARM9 SoC, which is both flexible and scalable, and on a ZigBee Smart Energy 1.0 mesh architecture for bidirectional control (Figure 6). The HEG's controller integration allows for a low bill-of-materials cost. Freescale's HEG includes a central hub that links smart meters, smart appliances, and smart devices in the home area network (HAN) and collects and reports power usage data. The Freescale HEG allows every point of the smart home to be connected and controlled from a central point, enabling power efficiency and energy optimization. The HEG links to a WAN for remote control and monitoring by the utility and the communications service provider.

Functions of the Home Energy Gateway include (1) collecting real-time energy consumption from the smart meter and power consumption data from various in-house objects, (2) controlling the activation/deactivation of home appliances, (3) generating a dashboard to provide feedback about power usage, (4) providing control menus to control appliances, and (5) providing a ubiquitous link to the broadband Internet. A minimal sketch of these duties is given below.

Freescale's reference platform is available now through its systems integrator partner Adeneo Embedded, which provides hardware manufacturing as well as board support package (BSP) customization and support. The HEG uses a four-layer PCB and boasts a low-cost bill of materials.

In Europe, Freescale announced this summer a smart grid demonstration project with the Indesit Company, an Italian maker of smart appliances. Indesit's Smart Washer was equipped with a Freescale ZigBee node that enables it to adjust its cycle starting time according to energy cost and the availability of green power. The washer retrieves this information from the local utility via a ZigBee-enabled Internet connection to the smart grid.

Close on the heels of Freescale, Intel announced its Home Energy Management (HEM) reference design in early October 2010 (Figure 7). Intel's HEM reference design is based on the Atom processor Z6XX series and Intel's Platform Controller Hub MP20. The reference design is manufacturing-ready and supports both WiFi and ZigBee. The processor integrates a DDR2 memory controller that can accommodate up to 2 Gbytes of memory. Intel is marketing the reference design as providing more than just energy management, with the ability to add new applications as they become available. Embedded apps on the dashboard currently include a family message board, weather reports, and home security.

The existing commercial platforms outlined above are the first-generation platforms for HEM. As the standardization of control and communication protocols becomes better defined, and as the penetration of HEM use among consumer households increases rapidly in the near future, research into designing optimally efficient and scalable hardware platforms for the next generation of HEM hardware will be paramount. We believe that next-generation HEM devices will also provide various value-added services to consumers, such as bill payment and security monitoring, besides the expected DSM. Furthermore, these HEM devices will be truly embedded in the HANs, and, as is the case with such platforms, the applications and the operating system (OS) which will run on them should be co-developed and co-optimized with the emerging HEM device architectures.
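As a purely hypothetical sketch of the gateway duties enumerated above (collecting readings, issuing appliance commands, and producing a dashboard), the following toy class shows one way the pieces fit together; the class name, device names, and readings are invented and do not describe Freescale's firmware or APIs.

```python
# Hypothetical sketch of the gateway duties listed above: collect readings,
# expose a dashboard, and issue appliance commands. Names and values are invented.
class HomeEnergyGateway:
    def __init__(self):
        self.readings = {}                       # device -> last power reading (W)
        self.appliances = {"washer": "idle"}     # device -> commanded state

    def collect(self, device, watts):
        # Placeholder for a smart-meter or smart-plug read over the HAN.
        self.readings[device] = watts

    def command(self, device, action):
        # Placeholder for an actuation message (on/off/defer) sent over the HAN.
        self.appliances[device] = action

    def dashboard(self):
        # Summary view such as would be pushed to an in-home display or WAN portal.
        return {"total_W": sum(self.readings.values()), **self.readings}

gw = HomeEnergyGateway()
gw.collect("meter", 3250.0)
gw.collect("washer", 5.0)
gw.command("washer", "defer")    # e.g., shift the cycle to a cheaper tariff slot
print(gw.dashboard(), gw.appliances)
```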
System Architecture and Challenges in Designing Future HEMs

In this section, an architecture for a futuristic HEM system is introduced, and the challenges and solutions facing the design and deployment of this system are presented. Figure 8 shows an architecture for a future HEM system. Going forward, it is envisioned that a HEM system will be based on an open, non-proprietary, standards-based platform. This will facilitate the ability to control and network intelligent appliances manufactured by different vendors. The main HEM system can be divided into three subsystems, namely the sensor and control devices, the monitoring and control system, and the intelligent energy management platform. What follows is a detailed description of these subsystems. It should be noted that, while some of these capabilities are available now from a number of HEMS providers, others are future possibilities, and it will be quite some time before they reach the market.

Sensor and Control Devices. This subsystem concerns the basic devices in an HEM system. It is envisioned that future smart home architectures will comprise self-powered (energy-scavenging) devices that facilitate power generation, energy storage management, and diagnostics at a microscale. Besides the power detector, this subsystem also needs to include environmental sensors: in addition to measuring power usage, it should sense environmental parameters such as temperature, humidity, and whether people are active nearby, and use the HAN deployed in the environment to send this information to the intelligent management platform so that users and other processes can make use of it. The controller is used to receive remote commands to control home appliances.

The main challenges facing the deployment of an HEM system are summarized below.

(1) Accuracy. The power detection device should not merely give an approximation but accurately measure the current drawn by the device, enabling the intelligent management platform to perform effective appliance recognition and to determine, from the appliance power data, whether the appliance is operating efficiently.

(2) Compatibility. Networking ordinary home appliances entails integrating the infrared transfer method into the HAN. For example, one can deploy the bridge device discussed in [37], which encodes the received HAN signal into an infrared signal, making it compatible with most home appliances. This encoding enables a bidirectional control link between the sensor (on the appliance) and the control device.

(3) Low Power Cost. Detecting the power consumption in a house and its surroundings, along with the cost of that consumption, requires many strategically deployed detection devices. It is therefore essential that these detection devices have low power consumption, low cost, and good power management, so that the sensing devices themselves do not escalate the power consumption and cost of the HEM system.

Intelligent Power Management Platform (IPMP).
An intelligent power management platform (IPMP) is at the heart of an HEM system (refer to Figure 9). It exploits the received sensor data and external Internet data (power company information, regional environmental information, and social information, to name a few) and transfers the data to the IHD display for the user. Alternatively, the IPMP automates home control after processing the sensor data against recent historical sensor data or external information. The IPMP provides middleware conversion software and allows upper-level device and service applications to communicate with each other, thereby facilitating the transfer of data and control signals to lower-level devices. The three key services offered by an IPMP are as follows.

(1) Power Management Service. In addition to recording the power usage of each device/appliance, the power management service transmits the power consumption information to the IHD display and provides appliance recognition and self-management functions. Through the power sensing device, the power usage of every appliance during its different states of operation is recorded, generating a personalized power consumption profile for each appliance. This power profile can then be used to track and predict the OFF states of an appliance, which reduces the power consumed by the whole system by cutting off the power to the appliance during its OFF state. Furthermore, using the power profile of an appliance, one can perform fault analysis to detect a broken or malfunctioning appliance and report it to the user via the IHD.

(2) Context-Aware Service. Context awareness enables the procurement of regional environmental information (such as position, climate, and humidity) through the sensor network. This service records sensor readings over time to determine the users' habits and, through further processing and analysis, automatically controls the system under different situations, or uses the state of the user to prevent wasted power.

(3) Social Network Service. A good intelligent management platform should also be equipped with a social networking function that uses the Internet to send the power consumption profiles of a home and to receive information from the accompanying social networks, including power company data, power costs, and the power consumption of each appliance in neighboring homes. Using this information, the user becomes aware of the power usage not only in their own home but also in the neighborhood, and the social network service data can be used to achieve a more detailed power management function. However, there are security concerns about keeping user information private, so designing secure and reliable communication links for metering, pricing, control, and billing purposes is an area for future research.

Monitor and Control System. The main function of this subsystem is to provide a visual interface (such as a display on the IHD) for useful information (e.g., power consumption and costs) so that the user can take timely action and control the HEM system. The design challenge is then to devise a user-friendly, simple, integrated control interface for the numerous networked appliances at home. Even though one would envision that universal (i.e., centralized) control panels could offer a good choice for integrating controls, there are still two key challenges.
(1) Integration. Designing an integrated platform that makes appliances from different vendors, operating under different standards, interoperable is an open research issue. Using universal controllers entails significant dependence on learning, or on letting the user record different sets of control signals from different manufacturers for each function, thereby limiting convenience and making the deployment of new devices (i.e., the scalability of the HAN) harder and more expensive.

(2) User Friendliness. Trying to incorporate a large number of appliance controls and functions on a single control panel may result in a panel with numerous control buttons. This might not be the optimal design even for typical users, let alone senior citizens or children. The simplicity and intuitiveness of the user interface will be of paramount importance to the success of smart grids and HEMs in homes. Further, ease of deployment and of upgrading when necessary will preserve the customer base for smart home technologies.

Conclusions

On a concluding note, the need for smart energy management in the residential sector, for sustainable energy efficiency and monetary savings, was revisited in this paper. As the smart grid extends out to homes and businesses, wireless sensors and mobile control devices become important elements in monitoring and managing energy use. There are several challenges of which smart energy system designers need to be aware. One challenge is the fragmentation of the HAN market. Several wireless standards are currently used in HANs, including WiFi, ZigBee, Z-Wave, and Bluetooth; despite the emergence of many wireless standards for HANs, there is no clear winner at this point, so it is up to system designers to select the wireless technology that best fits their application while addressing the potential problem of interoperability with other HAN devices. A comprehensive summary of the state of the art in home area communications and networking technologies for energy management was provided in the paper, followed by a review of the affordable smart energy products offered by different companies. The paper also shed light on the challenges facing the design of future energy management systems, such as the need for interoperability and network security. Our discussions will hopefully inspire future efforts to develop standardized and more user-friendly smart energy monitoring systems that are suitable for wide-scale deployment in homes.

Figure 1: Realizing smart grids in smart homes.
Figure 3: The cost and use of wired and wireless technologies for different home applications.
Figure 4: Communications and networking possibilities for a home area network.
Table 2: Summary of communications and networking technologies for home area networks.
8,822.8
2012-03-06T00:00:00.000
[ "Engineering" ]
Liquid Crystal Wavefront Correctors

Introduction

Liquid crystal (LC) was first discovered by the Austrian botanical physiologist Friedrich Reinitzer in 1888 [1]. It is a state of matter beyond solid and liquid materials, having properties between those of a conventional liquid and those of a solid crystal. LC molecules usually have a rod-like shape, and the average direction of molecular orientation is given by the director n. When light propagates along the director n, it experiences the ordinary refractive index n_o regardless of the polarization direction (the polarization lies in the plane perpendicular to the long axis). However, when light propagates perpendicular to the director, the refractive index depends on the polarization direction: light polarized along the director experiences the extraordinary index n_e. When an electric field is applied, the LC molecules are rotated so that the director n becomes parallel to the electric field. Under the applied field the LC molecules can be rotated from 0° to 90°, and the effective refractive index changes from n_e to n_o. As a result, the effective refractive index of the LC can be controlled by controlling the strength of the electric field applied to it. The maximum change of the refractive index is the birefringence, Δn = n_e − n_o.

The properties discussed above make LC a potential candidate for optical wavefront correction. A liquid crystal wavefront corrector (LCWFC) modulates the wavefront through the controllable effective refractive index, which depends on the electric field. In contrast to traditional deformable mirrors, the LCWFC has the advantages of no mechanical motion, low cost, high spatial resolution, a short fabrication period, compactness, and a low driving voltage. Therefore, many researchers have investigated LCWFCs for correcting distortions.
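A small numerical sketch of this modulation mechanism is given below: it evaluates the effective extraordinary index as the director is tilted, using the standard index-ellipsoid relation, and the resulting maximum phase modulation. The indices n_e = 1.714 and n_o = 1.516 and the 1.6 μm cell thickness are values quoted later in this chapter; combining them with a 633 nm wavelength here is only an illustration.

```python
import math

# Effective extraordinary index of a nematic LC as the director tilts by theta
# (standard index-ellipsoid relation). n_e, n_o, d and the wavelength are the
# figures quoted later in this chapter, used here purely for illustration.
n_e, n_o = 1.714, 1.516
d_um, lam_um = 1.6, 0.633

def n_eff(theta_deg):
    t = math.radians(theta_deg)
    return n_e * n_o / math.sqrt((n_e * math.sin(t))**2 + (n_o * math.cos(t))**2)

for theta in (0, 30, 60, 90):
    print(f"theta = {theta:2d} deg -> effective index = {n_eff(theta):.3f}")

# Switching the director from in-plane (n_e) to along the field (n_o) gives the
# maximum optical path modulation for a single pass through the cell.
delta_n = n_e - n_o
print(f"max phase modulation = {delta_n * d_um / lam_um:.2f} waves "
      f"({2 * math.pi * delta_n * d_um / lam_um:.2f} rad)")
```

The modulation depth of roughly half a wave to one wave per pass is what motivates the phase-wrapping (kinoform) technique discussed next.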
Initially, a piston-only correction method was used in LC adaptive optics systems (LC AOS) to correct the distortion. The maximum phase modulation equals Δn multiplied by the thickness of the LC layer, and it is about 1 μm. As reported in [2], the pixel size was over 1 mm and the number of pixels was about one hundred at that time. Because of the large pixel size, the LCWFC not only loses the advantage of high spatial resolution but also mismatches the microlens array of the detector, which requires additional spatial filtering to decrease the effect of undetectable pixels on the correction [3]. Moreover, the small modulation amplitude makes it unsuitable for many conditions. The thickness and Δn can be increased in order to increase the modulation amplitude; however, this slows down the response of the LCWFC.

Along with the development of LCWFCs, an increasing number of commercial LC TVs have been used directly for wavefront correction. Thanks to their high pixel density, their capacity for wavefront correction has gradually been understood by researchers, and the use of a kinoform to increase the modulation amplitude has also become possible [4][5][6][7][8]. A kinoform is an early kind of diffractive optical element that can be realized on a high-pixel-density LCWFC. A large-magnitude distorted wavefront can be compressed into one wavelength by taking it modulo 2π, and the modulated wavefront is then quantized according to the pixel positions of the LCWFC. In this way the LCWFC needs only one wavelength of intrinsic modulation amplitude to correct a highly distorted wavefront.

Many domestic and international researchers have devoted themselves to exploring LCWFCs from the 1970s onwards. In 1977, a LCWFC was used for beam shaping by I. N. Kompanets et al. [9]. S. T. Kowel et al. used a parallel-aligned LC cell to fabricate an adaptive-focal-length plano-convex cylindrical lens in 1981 [10], and in 1984 he also realized a spherical lens by using two perpendicularly placed LC cells [11]. A LCWFC with 16 actuators was achieved in 1986 by A. A. Vasilev et al., and a one-dimensional wavefront correction was realized [12]. Three years later, he realized adaptive beam shaping through the 1296 actuators of an optically addressed LCWFC [13].

The basic characteristics of a diffractive LCWFC are introduced in this chapter. The diffraction efficiency and the fitting error of the LCWFC are described first. For practical applications, the effects of tilt incidence and of chromatism on the LCWFC are then expounded. Finally, a fast-response liquid crystal material is demonstrated for obtaining a high correction speed.

Theory

A Fresnel phase lens model is used to approximately calculate the diffraction efficiency of the LCWFC. According to the rotational symmetry and the periodicity along the r² direction, when the Fresnel phase lens is illuminated with a plane wave of unit amplitude, the complex amplitude of the light can be expressed as Eq. (1) [64], where j is an integer and the period is r_p². It can also be expanded as a Fourier series (Eq. (2)), and the distribution of the complex amplitude at diffraction order n is then obtained as Eq. (3) [65]. For the Fresnel phase lens, the light is mainly concentrated in the first order (n = 1). The diffraction efficiency of the Fresnel phase lens is defined as the intensity of the first order at its primary focus (Eq. (4)). If the phase distribution function f(r²) of the Fresnel phase lens is known, the diffraction efficiency can be calculated from Eqs. (3) and (4).
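The efficiency defined through Eqs. (1)-(4) can be checked numerically without reproducing the chapter's notation: the sketch below builds one period of a wrapped (modulo-2π) phase ramp, optionally quantizes it into N levels, and evaluates the first-order Fourier coefficient directly. The closed-form comparison value [sin(π/N)/(π/N)]² is the standard quantized-kinoform result and is used here only as a cross-check.

```python
import cmath, math

# First-order diffraction efficiency of a wrapped (modulo-2pi) phase ramp,
# optionally quantized into N levels, computed from the Fourier coefficient of
# exp(i*phi) over one period. This is a numerical sketch, not the chapter's
# exact notation for Eqs. (1)-(4).
def first_order_efficiency(levels, samples=4096):
    total = 0j
    for k in range(samples):
        x = (k + 0.5) / samples                    # position within one period
        phi = 2 * math.pi * x                      # ideal blazed (wrapped) phase
        if levels:                                 # staircase quantization
            phi = (2 * math.pi / levels) * math.floor(levels * x)
        total += cmath.exp(1j * phi) * cmath.exp(-2j * math.pi * x)
    c1 = total / samples                           # first-order Fourier coefficient
    return abs(c1) ** 2

for N in (2, 4, 8, 16, None):                      # None = continuous blazed profile
    label = "continuous" if N is None else f"N = {N:2d}"
    analytic = 1.0 if N is None else (math.sin(math.pi / N) / (math.pi / N)) ** 2
    print(f"{label}: eta_1 = {first_order_efficiency(N):.4f} (closed form {analytic:.4f})")
```

The numerical coefficient reproduces the closed form, including the roughly 95% efficiency at eight levels referred to later in the chapter.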
To correct a distorted wavefront, the phase distribution is first wrapped into one wavelength by taking it modulo 2π; the modulated wavefront is then quantized. As an example, the wrapped phase distribution of a Fresnel phase lens is shown in Fig. 1(a). For a Fresnel phase lens, the 2π phase is always quantized with equal intervals. Assuming the height before quantization is h and the quantization level is N, the height of each quantized step is h/N. For a quantized Fresnel phase lens, the diffraction efficiency can be expressed as [66]

η₁ = [sin(π/N) / (π/N)]².   (5)

Figure 2 shows the diffraction efficiency as a function of the quantization level for a Fresnel phase lens.

Effects of black matrix

A LCWFC always has a black matrix, which causes a small interval between adjacent pixels, as shown in Fig. 3. In the interval area the liquid crystal molecules cannot be driven, so the phase modulation there differs from that of the adjacent area. This affects the diffraction efficiency of the LCWFC, as shown in Fig. 4. It is seen that the diffraction efficiency decreases by 6.4%, 8.8%, 9.5%, and 9.7% for 4, 8, 16, and 32 levels, respectively, when the pixel interval is 1 μm and the pixel pitch is 20 μm. Consequently, the effect on the diffraction efficiency grows with the number of quantization levels, while the maximum decrease of the diffraction efficiency is about 10%.

Mismatch between the pixel and the period

Because the pixel has a finite size P, the period T of a Fresnel phase lens cannot in general be divided exactly by the pixel, as shown in Fig. 5. This error is similar to the linewidth error caused by the lithography technique. For one period, let n be the integer part and γ the remainder of T modulo P. If γ ≤ 0.5P, there are n pixels in one period; otherwise, there are n + 1 pixels. As such, the maximum error is 0.5P for the first period. According to Eq. (3), the distribution of the complex amplitude of the first order can be obtained from the known phase distribution function in one period, and the diffraction efficiency can then be calculated. As shown in Fig. 6, when the error of the first period changes from 0 to 0.5P, the diffraction efficiency decreases from 81% to 78.3%. The effect of the pixel number on the variation of the diffraction efficiency is also calculated for an error of 0.5P (Fig. 7); the decrease of the diffraction efficiency is 1% when the pixel number is 7. Accordingly, if the pixel number in one period is not less than 7, the effect of the pixel size can be ignored.

Wavefront compensation error

A wavefront compensation error always exists because of the finite number of correction elements used for the correction of atmospheric turbulence. Hudgin gave the relationship between the compensation error and the actuator size as [67]

σ² = α (r_s / r_0)^(5/3),   (6)

where r_s is the actuator spacing, r_0 is the atmospheric coherence length, and α is a constant depending on the response function of the actuator. For continuous-surface deformable mirrors (DMs), the response function of the actuator is a Gaussian function and α ranges from 0.3 to 0.4 [68]. For a piston-only response function, α is 1.26 [69]. Researchers have generally used a piston-only response function to evaluate a LCWFC and have shown that the number of actuators needs to be 4-5 times as large as that of a DM [69,70]. However, the case is totally different when a diffractive LCWFC (DLCWFC) is used, where the kinoform or phase-wrapping technique is employed to expand the correction capability [71,72]. Therefore, Eq. (6) is no longer suitable.
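For a phase-wrapped corrector the residual error is governed by the quantization step rather than the actuator spacing, as the next subsection discusses. The sketch below evaluates the standard staircase-quantization error, RMS = λ/(√12·N), together with the Maréchal approximation for the Strehl ratio; these are textbook relations used as a cross-check, and they reproduce the λ/100 (N = 30) and 0.036λ with Strehl 0.95 (N = 8) figures quoted in this chapter.

```python
import math

# Residual wavefront error of an N-level staircase quantization of a 2*pi ramp
# (standard sawtooth result, RMS = lambda / (sqrt(12) * N)) and the resulting
# Strehl ratio from the Marechal approximation S ~ exp(-(2*pi*sigma/lambda)^2).
def quantization_rms_waves(levels):
    return 1.0 / (math.sqrt(12.0) * levels)

def strehl(rms_waves):
    return math.exp(-(2.0 * math.pi * rms_waves) ** 2)

for N in (4, 8, 16, 30):
    rms = quantization_rms_waves(N)
    print(f"N = {N:2d}: RMS = {rms:.4f} lambda (about 1/{1/rms:.0f} lambda), "
          f"Strehl = {strehl(rms):.3f}")
```

Eight levels already give a Strehl ratio near 0.95, which is the justification given later for choosing N = 8 in practice.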
The effect of quantization on the wavefront error

Firstly, the wavefront error generated by quantization during the phase wrapping is considered. Since a LCWFC is a two-dimensional device, the quantization is performed along the x and y axes, taking the pixel as the unit. According to diffraction theory, the correction precision as a function of the quantization level can be deduced [73]. If the pixel size is not considered, the root mean square (RMS) error of the diffracted wavefront as a function of the quantization level simplifies to [73]

RMS = λ / (√12 · N),   (7)

where N is the quantization level and λ is the wavelength. If N = 30, the RMS error can be as small as λ/100; for N = 8, RMS = 0.036λ and the corresponding Strehl ratio is 0.95. Figure 8 shows the diffracted-wavefront RMS error as a function of the quantization level N. As can be seen, the wavefront RMS error drops sharply at first and then gradually approaches a constant once the quantization level exceeds 10. The wavefront RMS error can thus be calculated for a known quantization level on a wavefront. For a DLCWFC, the wavefront compensation error is directly determined by the quantization level, without any need to consider the pixel number. Therefore, the distribution of the quantization level across the atmospheric turbulence should be calculated first for a given pixel number, telescope aperture, and atmospheric coherence length, and then the wavefront compensation error can be calculated by using Eq. (7).

Zernike polynomials for atmospheric turbulence

Kolmogorov turbulence theory is employed to analyse the distribution of the quantization level across an atmospheric turbulence wavefront. Noll described Kolmogorov turbulence by using Zernike polynomials [74], which he redefined as Eqs. (8) and (9), where the parameters n and m are integers satisfying m ≤ n and n − |m| even. An atmospheric turbulence wavefront can be described by the Kolmogorov phase structure function [74]

D_φ(r) = 6.88 (r / r_0)^(5/3).   (10)

By combining the phase structure function and the Zernike polynomials, the covariance between Zernike polynomials Z_j and Z_j′ with amplitudes a_j and a_j′ can be deduced as Eq. (11) [75], where D is the telescope diameter and δ_mm′ is the Kronecker delta function. By using Eq. (11), the coefficients of the Zernike polynomials can easily be computed. If the first J modes of the Zernike polynomials are selected, the atmospheric turbulence wavefront is represented as

Φ_t = Σ_{j=1}^{J} a_j Z_j.   (12)

Therefore, the atmospheric turbulence wavefront Φ_t can be calculated by using Eqs. (11) and (12). As the phase-wrapping technique is employed, the atmospheric turbulence wavefront can be wrapped into 2π and quantized, and thus the distribution of the quantization level across a telescope aperture D can be determined.

Calculation of the required pixel number of DLCWFCs

In practice, one would like a convenient way to calculate the required pixel number of a DLCWFC for a given telescope aperture D, quantization level N, and atmospheric coherence length r_0. Therefore, it is necessary to deduce the relation between the pixel number of the DLCWFC and D, N, and r_0. As shown in Fig. 9,
the DLCWFC aperture can be represented by the number of pixels across the aperture, which is called P_N. The circle represents the atmospheric turbulence wavefront Φ_t. Since the atmospheric turbulence wavefront is random, the ensemble average ⟨Φ_t⟩ is used in the calculation. The modulated and quantized atmospheric turbulence wavefront can be expressed as Eq. (13), where mod( ) denotes the modulo-2π operation. If ⟨Φ_t⟩ is known, ⟨P_N⟩ can be expressed as a function of the telescope aperture D, the quantization level N, and the atmospheric coherence length r_0. By using Eqs. (11) and (12), ⟨Φ_t⟩ can be calculated; the first 136 modes of the Zernike polynomials are used in the calculation. Because of the randomness of the atmospheric turbulence wavefront, different quantization levels are used during the quantization, depending on the degree of fluctuation of the wavefront. Here, N is defined as the minimum quantization level such that the quantization levels greater than N account for 95% of the quantization levels contained in the atmospheric turbulence wavefront. Fifty atmospheric turbulence wavefronts are used to obtain the statistical results.

First, the relation between the pixel number P_N and the telescope aperture D is calculated for r_0 = 10 cm and N = 16, as shown in Fig. 10. It can be seen that ⟨P_N⟩ is a linear function of D when N and r_0 are fixed. That is to say, the larger the aperture of the telescope, the more pixels are needed if a DLCWFC is used to correct the atmospheric turbulence. Specifically, for a telescope with a diameter of 2 metres the total pixel number will be 96 × 96, while for a telescope with a diameter of 4 metres it will be 168 × 168. P_N as a function of N is also computed for r_0 = 10 cm and D = 2 m, as shown in Fig. 11. It illustrates that, when D and r_0 are fixed, ⟨P_N⟩ is a linear function of N: the more quantization levels are used, the more pixels are needed. The relationship between ⟨P_N⟩ and r_0 is also calculated with N and D as variables. Figure 12 shows three curves for three pairs of fixed N and D; this time the relationship is no longer linear in r_0. With more pairs of N and D fixed, more curves can be obtained, but these are not shown in the figure. The relationship between ⟨P_N⟩ and r_0 can be expressed by the fitted formula (14), where A and B are coefficients. A is related only to N and can be expressed as A = 6.25N. As ⟨P_N⟩ is a linear function of D and N, the coefficient B can be expressed as a linear combination (15) with coefficients a, b, c, and d. By substituting the known values of N and D and the calculated coefficient B, the values of a, b, c, and d are determined by the least-squares method to be 15, −23, −150, and 91, respectively. Thus, ⟨P_N⟩ can be written in closed form (Eq. (16)) as a function of N, D, and r_0^(−6/5), where the units of D and r_0 are centimetres. The total pixel number of the DLCWFC can be calculated as P_N × P_N. By combining Eqs. (7) and (16),
the compensation error of the DLCWFC can be evaluated for the atmospheric turbulence correction. These two formulas are not suitable for modal LCWFCs [76] or other types that do not use a diffractive method to correct the atmospheric turbulence.

Normally, a quantization level of 8 is suitable for atmospheric turbulence correction, for three reasons. Firstly, a high correction accuracy can be obtained: when N = 8, the RMS error is reduced to 0.035λ and the Strehl ratio increases to 95%. Secondly, a high diffraction efficiency can be obtained: according to diffractive optics theory [73], the diffraction efficiency is as large as 95% for N = 8. Finally, the total pixel number can be kept within a reasonable range. Of course, a smaller wavefront RMS error and a higher diffraction efficiency can be achieved with a larger quantization level, but in that case the required pixel number of the DLCWFC increases drastically, which leads to a significantly slower computation and data-transfer rate of the LC AOS. Figure 13 shows the relation between P_N, D, and r_0 for N = 8. As can be seen, the required pixel number increases markedly when the atmospheric coherence length becomes smaller and the telescope aperture becomes larger. For instance, the total pixel number of the DLCWFC is 1700 × 1700 when r_0 = 5 cm and D = 20 m, whereas if r_0 = 10 cm the total pixel number can be reduced to 768 × 768. Therefore, the strength of the atmospheric turbulence is a key factor which must be considered when designing an LC AOS for a ground-based telescope.

Tilt incidence

Currently, reflective LCWFC devices [77][78][79], such as liquid crystal on silicon (LCOS) devices, are especially attractive because of their high fill factor, high reflectivity, and short response time. To separate the incident beam from the reflected beam of a reflective LCWFC, the incident light should reach the LCWFC at a tilt angle. Alternatively, the incident light can be normal to the LCWFC, with a beam splitter placed before the LCWFC to separate the reflected and incident beams. However, the second method results in a 50% loss in each direction, reducing the output power to 25% of the input. To avoid this energy loss, tilt incidence is a suitable method for a LCWFC. However, the tilt incidence affects the phase modulation and the diffraction efficiency of the LCWFC. A reflective LCWFC model is used to perform the analysis, and the results obtained are also applicable to transmissive LCWFCs.

Effect of the tilt incidence on the phase modulation of the LCWFC

In order to simplify the model of the reflective LCWFC, the border effect is neglected and all of the molecules are assumed to have the same tilt angle. The simplified model is shown in Fig. 14, where θ represents the tilt angle of the molecule and n_e is the off-state extraordinary refractive index. Assume that no voltage is applied to the LCWFC and that the polarization direction coincides with the LC director. For the tilt incidence shown in Fig. 14, the situation is equivalent to a rotation of the LC director by an angle θ′. Hence, although the tilt angle of the molecule is zero, the extraordinary refractive index is changed to n_e(θ′) by the tilt incidence. Furthermore, the tilt incidence lengthens the transmission path of the light in the liquid crystal cell by a factor of 1/cos θ′. Consequently, the phase modulation with tilt incidence and no applied voltage can be expressed as Eq. (18).
If the pre-tilt angle of the liquid crystal molecule is considered, Eq. (18) can be rewritten as Eq. (19), where θ_0 is the pre-tilt angle, d is the thickness of the liquid crystal cell, and λ is the relevant wavelength. For n_e = 1.714, n_o = 1.516, λ = 633 nm, and d = 1.6 μm, the phase modulation as a function of the incident angle is shown in Fig. 15. The simulated results show that the phase modulation is reduced by at most 1% for incident angles under 6°. The measured result is also shown in the figure, and the trends of the simulated and measured curves are similar. The difference in phase shift might be caused by the border effect: in the actual liquid crystal cell, a rubbed polyimide (PI) film is used to align the liquid crystal molecules, and the PI layer anchors the molecules at the border, causing the tilt angle of the molecules at the interface to differ from that in the centre. The simulated and measured results indicate that the LCWFC may be used with a small tilt angle.

The effect of pixel crossover on the phase modulation

For tilt incidence, as shown in Fig. 16, the incident light entering one pixel can exit through an adjacent pixel, which is called pixel crossover; the maximum error of the pixel crossover is W. Pixel crossover also affects the phase modulation of the LCWFC: because each pixel is an actuator with a corresponding phase modulation, the light should pass through just one pixel so that the phase modulation is controlled accurately. For a 19 μm pixel size and d = 1.6 μm, W as a function of the incident angle is shown in Fig. 17. The results show that W = 0.33 μm for a tilt incident angle of 6°. For a pixel of size P, the fraction of the light that crosses into adjacent pixels can be expressed as W/P. If this ratio equals zero, the light is at normal incidence and passes through just one pixel. For an incident angle of 6°, the ratio is only 1.77% and may be ignored. As such, the LCWFC may be used under tilt incidence with a small tilt angle.

Diffraction efficiency with tilt incidence

Because the phase of each pixel changes with tilt incidence, the diffraction efficiency decreases [64]. The Fresnel phase lens model [71] is used to calculate the change of the diffraction efficiency, and 16 quantization levels are selected. The simulated results show that at an incident angle of 6° the diffraction efficiency is reduced by 3% (Fig. 18). For incident angles of less than 3°, the reduction in diffraction efficiency is below 1%, a negligible loss for most applications.

Chromatism

The chromatism of the LCWFC includes refractive-index chromatism and quantization chromatism. Refractive-index chromatism is caused by the LC material and is generally called dispersion, while quantization chromatism is caused by the modulo-2π operation of the phase wrapping. Theoretically, because of chromatism the LCWFC is only suitable for wavefront correction at a single wavelength, not over a waveband. However, if a minor error is allowed, a LCWFC can be used to correct distortion within a narrow spectral range.

Effects of chromatism on the diffraction efficiency of LCWFC

The measured birefringence dispersion of a nematic LC material (RDP-92975, DIC) is shown in Fig. 19.
It can be seen that the birefringence Δn depends on the wavelength and that the dispersion of the LC material is particularly severe for wavelengths below 500 nm. Since a phase-wrapping technique is used, the phase distribution must be taken modulo 2π and then quantized [71]. Assume that the quantization wavelength is λ_0, the thickness of the LC layer is d, and V_max denotes the voltage needed to obtain a 2π phase modulation; the maximum phase modulation of the LCWFC can then be expressed as

δ_max(λ_0) = 2π d Δn(λ_0) / λ_0.   (20)

For any other wavelength λ, it can be rewritten as

δ_max(λ) = 2π d Δn(λ) / λ.   (21)

For quantization wavelengths of 550 nm, 633 nm, and 750 nm, the variation of the maximum phase modulation as a function of wavelength is shown in Fig. 20. Assuming that the allowed deviation of the phase modulation is 0.1, the corresponding spectral ranges for λ_0 = 550, 633, and 750 nm are calculated as 520-590 nm, 590-690 nm, and 690-810 nm, respectively. Hence, if a 10% phase modulation error is acceptable, the LCWFC can only be used to correct the distortion over a finite spectral range.

The variation of Δn and λ also affects the diffraction efficiency of the LCWFC. Using the Fresnel phase lens model, the diffraction efficiency at a wavelength λ other than the design wavelength can be described as [80]

η(λ) = {sin[π(α − 1)] / [π(α − 1)]}²,  with α = λ_0 Δn(λ) / [λ Δn(λ_0)].   (22)

The effects of Δn and λ on the diffraction efficiency are shown in Fig. 21. For λ_0 = 550 nm, 633 nm, and 750 nm, and their respective wavebands of 520-590 nm, 590-690 nm, and 690-810 nm, the maximum energy loss is 3%, which is acceptable for the LC AOS. Although only one kind of LC material was measured and analysed, the results are helpful for the use of LCWFCs in general, because almost all nematic LC materials have similar dispersion characteristics.

Broadband correction with multi-LCWFCs

The results calculated above show that it is only possible to correct the distortion in a narrow waveband with a single LCWFC. Therefore, to realize distortion correction over a broad band, such as 520-810 nm, multiple LCWFCs are necessary: each LCWFC is responsible for the correction of a different waveband, and the corrected beams are then combined to realize the correction over the whole waveband. The proposed optical set-up is shown in Fig. 22,
where a polarized beam splitter (PBS) is used to split the unpolarized light into two linearly polarized beams. Unpolarized light can be regarded as two beams with orthogonal polarization states. Because the LCWFC can only correct linearly polarized light, an unpolarized incident beam can be corrected in only one polarization direction, while the other polarized beam remains uncorrected. Therefore, if a PBS is placed after the LCWFC, the light is split into two linearly polarized beams: the corrected beam goes to a camera, and the uncorrected beam is used to measure the distorted wavefront with a wavefront sensor (WFS). This optical set-up looks like a closed-loop AOS, but it is actually an open-loop layout, and the LC adaptive optics system must therefore be controlled by the open-loop method [31,81]. Three dichroic beam splitters (DBSs) are used to obtain the different wavebands. A 520-810 nm waveband is obtained with a band-pass filter (DBS1). This broadband beam is then divided into two beams by a long-wave-pass filter (DBS2); since DBS2 has a cut-off at 590 nm, the reflected and transmitted beams have wavebands of 520-590 nm and 590-810 nm, respectively. The transmitted beam is then split once more by another long-wave-pass filter (DBS3), whose cut-off is at 690 nm; through DBS3, the reflected and transmitted beams acquire wavebands of 590-690 nm and 690-810 nm, respectively. Thus, the broadband beam of 520-810 nm is divided into three sub-wavebands, each of which can be corrected by one LCWFC. After the correction, the three beams are reflected back and received by a camera as a combined beam. Using this method, light in the 520-810 nm waveband can be corrected over the whole spectral range with multiple LCWFCs.

The broadband correction experimental results are shown in Fig. 23. A US Air Force (USAF) resolution target is used to evaluate the correction over a broad waveband. Firstly, the 520-590 nm waveband is selected for the adaptive correction. After the correction, the second element of the fifth group of the USAF target is resolved, corresponding to a resolution of 27.9 μm (Fig. 23(b)). Considering that the entrance pupil of the optical set-up is 7.7 mm, the diffraction-limited resolution is 26.4 μm for a wavelength of 550 nm; thus, a near diffraction-limited resolution has been achieved. Figure 23(c) shows the resolving ability for the 590-690 nm waveband: the first element of the fifth group is resolved, giving a resolution of 31.25 μm, which is near the diffraction-limited resolution of 30.4 μm for a 633 nm wavelength. The corrected result for 520-690 nm is shown in Fig. 23(d), where the first element of the fifth group can also be resolved. These results show that a near diffraction-limited resolution of an optical system can be obtained by using multiple LCWFCs.
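The resolutions quoted for the USAF target follow from the standard group/element formula, in which group g, element e corresponds to 2^(g + (e-1)/6) line pairs per millimetre; the short check below reproduces the 27.9 μm and 31.25 μm line-pair widths mentioned above.

```python
# USAF-1951 target: spatial frequency of group g, element e is
# 2**(g + (e - 1) / 6) line pairs per millimetre; the resolution quoted in the
# text corresponds to the width of one line pair.
def usaf_line_pair_um(group, element):
    lp_per_mm = 2.0 ** (group + (element - 1) / 6.0)
    return 1000.0 / lp_per_mm

print("group 5, element 2:", round(usaf_line_pair_um(5, 2), 1), "um")   # ~27.9 um
print("group 5, element 1:", round(usaf_line_pair_um(5, 1), 2), "um")   # 31.25 um
```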
Fast response liquid crystal material

In applications of LCWFCs, the response speed is a key parameter: a slow response significantly decreases the bandwidth of the LC AOS. To improve the response speed, dual-frequency and ferroelectric LCs have been utilized to fabricate LCWFCs [82,83]. However, these fast materials have shortcomings. The driving voltage of the dual-frequency LCWFC is high, which makes it incompatible with very-large-scale integrated circuits, while the phase modulation of the ferroelectric LCWFC is very small, which makes it hard to correct the distortions. Nematic LCs have no such problems, but their response speed is slow. In this section, we describe how to improve the response speed of nematic LCs.

For a nematic LC device in parallel-aligned mode, the response times can be described by the following equations [84]:

τ_rise = γ_1 d² / {K_11 π² [(V/V_th)² − 1]},   (23)
τ_decay = γ_1 d² / (K_11 π²),   (24)

where γ_1 is the rotational viscosity, V and V_th are the turn-on driving and threshold voltages, K_11 is the splay elastic constant, and d is the thickness of the LC cell. Generally, the rise time can be decreased by the overdriving method, but the decay time depends mainly on the intrinsic parameters of the LC device, which are therefore the key factors for improving the response. From Eq. (24), the smaller the visco-elastic coefficient (γ_1/K_11) and the cell gap d are, the shorter the response time is. However, the phase retardation (d × Δn) of a LCWFC must remain at least one wavelength, so for a given birefringence (Δn) the cell gap can only be reduced to a limited value. A higher birefringence enables a thinner cell gap to be used while keeping the same phase retardation, and thus improves the response performance of the LCWFC. Therefore, LC materials with high Δn and low γ_1/K_11 are required for a fast response.

In the study of fast-response LC materials, a figure of merit (FoM) is adopted to evaluate different LC compounds [85]; an LC material with a high FoM value provides a short response time:

FoM = K_11 Δn² / γ_1.   (25)

Nematic liquid crystal molecular design

In practice, simple empirical rules together with trial mixing are usually used to guide molecular design, for example choosing LC compounds with tolane and biphenyl groups, which have a large Δn and a moderate γ_1. Recently, computer-simulation-based theoretical studies have been performed to shed light on the connections between macroscopic properties and molecular structure. A notable advantage of simulation is that it can predict the properties of a nematic LC material with an optimal molecular configuration instead of requiring costly and time-consuming experimental synthesis. In the study of fast-response LCs, theoretical methods are used to analyse the rotational viscosity and Δn of a specific chemical structure.

In the study of the rotational viscosity of nematic liquid crystals, Zhang et al. [86] adopted two statistical-mechanical approaches proposed by Nemtsov-Zakharov (NZ) [87] and Fialkowski (F) [88]. The NZ approach is based on random walk theory; it is a correction of its predecessor in that it considers, in addition to the autocorrelation of the microscopic stress tensor, the correlation of the stress tensors with the director and of the fluxes with the order-parameter tensor. In Fig. 24, the rotational viscosity of the nematic liquid crystal E7 is shown as a function of temperature.
The experimental rotational viscosity decreases with temperature, and similar trends are obtained from the NZ and F theoretical methods. The calculated NZ and F rotational viscosities are of the same order of magnitude as the experimental values. Larger numbers of molecules, longer simulation times, and a revised force field for liquid crystals are expected to help improve this prediction.

Figure 24. Temperature dependence of the rotational viscosity for E7: ■, the NZ method; •, the F method; ▲, the experiment.

The birefringence and dielectric anisotropy can be calculated by the Vuks equation and the Maier-Meier theory, respectively, and the calculated values correlate well with the experimental data in Ref. [89]. Together, these approaches constitute a molecular design method for fast response LCs.

Chemosynthesis of fast response LC materials

In order to achieve fast LC materials, researchers have synthesized a series of high-birefringence LCs with a linear shape and a long conjugated group. Gauza et al. first synthesized and reported biphenyl and cyclohexyl-biphenyl isothiocyanato (NCS) LC materials with a delta_n of 0.2-0.4 and a viscosity factor (gamma_1/K_11) of about 10 ms um^-2; the chemical structures are shown in Fig. 25. Moreover, they performed a comparison with the commercial E7 mixture: at 70°C, the FoM of the NCS mixture is about a factor of ten higher than that of E7 at 48°C [90]. In 2006, Gauza [91] reported a type of NCS LC material with unsaturated groups; the chemical structures are shown in Fig. 26. The final two NCS LC mixtures show delta_n values of 0.25 and 0.35, a viscosity factor of about 6 ms um^-2, and FoM values of 10.1 and 18.7 um^2 s^-1. The response time of such an LC material can be as short as 640 us with an LC thickness of 2 um at 35°C.

High-birefringence isothiocyanato LCs with a tolane or terphenyl group can usually be synthesized via a coupling reaction; the chemical reaction route is shown in Fig. 27 [92]. In subsequent research, Gauza et al. developed a series of fluoro-substituted NCS LC materials with delta_n up to 0.5 at room temperature, some of which show better response performance [93]; the chemical structures are shown in Fig. 28. In the research on isothiocyanato tolane LC compounds, Peng et al. prepared an NCS LC compound via an electronation reaction; the reaction route is shown in Fig. 29. Compared to the conventional coupling-reaction method, this synthesis route improves the total reaction yield [94].

It has rarely been reported that LCs with a very low rotational viscosity are mixed into high-delta_n LCs in order to improve the response performance. However, Peng et al. introduced a type of difluorooxymethylene-bridged (CF2O) LC with a very low rotational viscosity to improve the response performance of NCS LCs; the chemical structure is shown in Fig. 30. When this material was mixed into NCS LCs with a high delta_n, the visco-elastic coefficient of the mixture decreased noticeably, the mixture approximately maintained its high birefringence, and the FoM value increased from 14.8 to 16.9 um^2 s^-1.

Figure 1(b) is a Fresnel phase lens quantized with 8 levels.
Figure 2. Diffraction efficiency as a function of the quantization level.
Figure 3. A Fresnel phase lens quantized by pixels with a black matrix.
Figure 4. The diffraction efficiency as a function of the pixel interval for different quantization levels.
Figure 5. The mismatch between the pixel and the period of the Fresnel lens.
Figure 6. The diffraction efficiency as a function of the period error.
Figure 8. The wavefront RMS error as a function of the quantization level.
Figure 9. The field of the DLCWFC - the circle represents the wavefront of atmospheric turbulence and P1…PN are the pixel numbers of the DLCWFC.
Figure 10. … as a function of the telescope aperture D - ■ represents the calculated data for r0 = 10 cm and N = 16, and the solid curve represents the fitted data.
Figure 11. … as a function of the quantization level N - ▲ represents the calculated data for D = 2 m and r0 = 10 cm, and the solid curve represents the fitted data.
Figure 12. … as a function of the atmosphere coherence length r0 - the line is the fitted curve and ■, • and * represent the computed data with N = 16 and D = 4 m, N = 8 and D = 4 m, and N = 8 and D = 2 m, respectively.
Figure 13. … as functions of the atmosphere coherence length r0 and the telescope aperture D for N = 8.
Figure 14. Simplified model of the reflective LCWFC with tilt incidence. The front board is glass and the back one is silicon. The liquid crystal molecules are aligned parallel to the boards. The tilt incident angle is θ′. The liquid crystal is a uniaxial birefringent material - it has an ordinary index n_o and an extraordinary index n_e(θ); n_e(θ) can be obtained from the index ellipsoid equation [41], in its standard form 1/n_e(θ)^2 = cos^2(θ)/n_o^2 + sin^2(θ)/n_e^2.
Figure 15. Phase shift as a function of the incident angle - -•- is the measured curve and -*- represents the simulated data.
Figure 16. Illustration of the pixels - P1, P2 and P3 are pixels, d is the thickness of the cell.
Figure 17. The pixel crossover W as a function of the incident angle.
Figure 18. Diffraction efficiency as a function of the incident angle.
Figure 19. The birefringence delta_n as a function of the wavelength.
Figure 20. The phase modulation as a function of the wavelength for λ0 = 550 nm, 633 nm and 750 nm, respectively - the two horizontal dashed lines indicate the phase deviation range while the four vertical dashed lines illustrate three sub-wavebands of 520-590 nm, 590-690 nm and 690-820 nm, respectively.
Figure 21. The diffraction efficiency as a function of wavelength for λ0 = 550 nm, 633 nm and 750 nm, respectively.
Figure 22. Optical set-up for a broadband correction - PLS represents a point light source, PBS is a polarized beam splitter, DBS means dichroic beam splitter, DBS1 is a band-pass filter, and DBS2 and DBS3 are long-pass filters.
Figure 25. Chemical structures of biphenyl and cyclohexyl-biphenyl isothiocyanato LC materials.
Figure 26. Chemical structures of NCS LC materials with unsaturated groups.
Figure 27. The synthesis of isothiocyanato compounds using Suzuki coupling.
Figure 28. Chemical structures of NCS LC materials with high birefringence.
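As a quick consistency check of the mixture data quoted above (delta_n of 0.25 and 0.35, a viscosity factor of about 6 ms um^-2, and FoM values of 10.1 and 18.7 um^2 s^-1), the sketch below evaluates the figure-of-merit in the standard form FoM = K_11 delta_n^2 / gamma_1 = delta_n^2 / (gamma_1/K_11) assumed earlier; the agreement with the quoted values is within the precision of the "about 6" viscosity factor.

```python
def fom_um2_per_s(delta_n, visco_factor_ms_per_um2):
    """FoM = dn^2 / (gamma1/K11); with the viscosity factor in ms/um^2 the result is
    in um^2/ms, so multiply by 1000 to express it in um^2/s."""
    return delta_n**2 / visco_factor_ms_per_um2 * 1e3

for dn in (0.25, 0.35):
    print(f"dn = {dn}: FoM ~ {fom_um2_per_s(dn, 6.0):.1f} um^2/s")
# ~10.4 and ~20.4 um^2/s, in line with the quoted 10.1 and 18.7 um^2/s.
```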
8,785
2012-12-18T00:00:00.000
[ "Physics" ]
Clamping system for series-connected IGBTs to avoid transient breakdown voltages

IGBTs (Insulated Gate Bipolar Transistors) can be used for power DC-DC converters at higher voltages. A series connection is needed because the blocking capability of a single device is limited to 6500 V. The voltage must be shared among the IGBTs both dynamically during switching and in blocking mode. Non-synchronized switching creates transient voltages that might damage an IGBT. The non-synchronized switching comes from different delays in the gate drivers and from deviations in the IGBT parameters. This paper investigates, in theory and practice, a solution consisting of a clamp circuit together with dynamic and static circuits. The turn-on process of a stack of three IGBTs in a buck converter design is the focus. An operating voltage close to the IGBT limit is used in order to stress the system. For the simulations (PSpice) and experimental tests, fast IGBTs (1200 V / 30 A IXYH40N120C3) are used to demonstrate the good dynamics of the chosen solution. The lab tests also show that, because of the deviation in parameters, pre-adjustment of the gate signals is needed. The experimental tests have been carried out at different clamp voltages, ending with a clamp voltage close to the blocking voltage of the IGBTs and a maximum output current of 2 A. Examples of voltage waveforms are given and discussed.

Introduction

Series connection of IGBTs is used to avoid costly and complicated methods, such as transformers and multilevel topologies, for increasing the voltage rating of converters [1]. A circuit should protect the string of IGBTs against overvoltage and ensure voltage sharing between the IGBTs. Active gate drivers [2] and [3] can be a solution. In this paper, the method in [4] is investigated for three high-speed, high-voltage IGBTs in series. In contrast to [4], three fast IGBTs are tested with a clamp voltage close to the blocking voltage of the IGBTs. The dynamic and static sub-circuits, consisting of an RCD network and a resistor respectively, are on the collector-emitter side, and the clamp sub-circuit is on the collector-gate side. The combination of the dynamic sub-circuit and the clamp sub-circuit minimizes the disadvantages of both. The dynamic sub-circuit can reduce the voltage oscillations during the switching transitions and thereby decreases the switching losses of the clamp sub-circuit. Protection of the IGBTs against overvoltage is provided by the clamp sub-circuit, so a large snubber circuit can be avoided [4]. The formulas in [4] were derived for three IGBTs in series, and the component values of the dynamic and static sub-circuits were calculated from them. The clamp circuit can clamp the voltage across the fastest IGBT(s) and prevent the overvoltage that results from different delays in the switching times of the IGBTs and from deviations in the IGBT parameters. Figure 1 presents three IGBTs in series protected by the clamp sub-circuit on the collector-gate side and the static and dynamic sub-circuits on the collector-emitter side.
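To illustrate the role of the static sub-circuit described above, the sketch below models static (blocking-mode) voltage sharing with a balancing resistor in parallel with each IGBT: the same string current flows through all stages, so mismatched leakage currents translate into different device voltages. The resistor value and leakage currents are assumed, illustrative numbers, not measurements from this paper.

```python
# Minimal sketch of static voltage sharing in a series IGBT string.

def static_sharing(v_dc, r_balance, leakage_currents):
    """Each blocking IGBT is modelled as a leakage-current source in parallel with its
    balancing resistor R. The same string current I flows through all stages, so
    V_k = R * (I - I_leak_k) and sum(V_k) = V_dc fixes I."""
    n = len(leakage_currents)
    i_string = v_dc / (n * r_balance) + sum(leakage_currents) / n
    return [r_balance * (i_string - i_leak) for i_leak in leakage_currents]

voltages = static_sharing(v_dc=3000.0, r_balance=100e3,
                          leakage_currents=[1.0e-3, 1.2e-3, 0.8e-3])
print([f"{v:.0f} V" for v in voltages])   # e.g. 1000 V, 980 V, 1020 V
```

The spread shrinks as the balancing resistance is made smaller, at the cost of higher steady-state losses, which is the usual trade-off when dimensioning the static sub-circuit.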
The resistor in the clamp circuit can limit the current and protect the zener diodes. A diode in series with the zener diodes avoids creating a low-impedance path in the clamp string when the IGBT is on [5]. The current passing through the gate resistance must pass through the clamp circuit during the switching process. Therefore, the zener diodes should be able to carry the same surge reverse current during this time; in other words, zener diodes with a surge reverse current rating at least equal to the maximum current passing through the gate resistance during switching are needed. In this paper, three series IGBTs are used in a buck converter and tested at different voltage levels. Finally, they are tested with a clamp voltage close to the blocking voltage of the IGBTs. The maximum output current of the converter is 2 A. Both the switching-off and switching-on processes are in focus. Simulation and experimental results are compared, but the stray inductances of the circuit and the parasitic capacitances of the components influence the experiments. The results of both the simulations and the experiments follow below.

Designed buck converter

A buck converter that can handle around 3 kV was built with three series IGBTs (1.2 kV). Figures 2 and 3 show the designed converter in PSpice and the laboratory set-up. A semi-active gate driver circuit was used in the laboratory design. Three 1.2 kV silicon diodes are used as free-wheeling diodes in the converter structure. The clamp circuit clamps each IGBT at around 1 kV. The lab set-up was designed to be compact and to have a low-inductance layout.

Simulations

Different delays in the switching times of the IGBTs lead to transient breakdown voltages across the fastest IGBT(s) and the slowest IGBT(s) during the switching-off and switching-on processes, respectively. The simulations show that the method in [4] can prevent overvoltage across the fastest IGBT(s) during the switching-off process and protect them when three IGBTs are connected in series with an input voltage of 3 kV and a clamping voltage of 1 kV.

Switching off process

Non-synchronized switching leads to overvoltage across at least one IGBT in the series connection. Therefore, using the clamp circuit is an easy and reliable way to protect the IGBTs against overvoltage during the switching-off process. Figure 5 shows the voltages across the IGBTs at a clamping voltage of 200 V (discussed below), while Figures 6 and 7 show the voltages at clamping voltages of around 400 V and 800 V, respectively, during the switching-off process. In all cases there are delays between the switching times of the IGBTs. The fastest IGBT is clamped at the desired voltage and protected from overvoltage. The role of the zener diode is very important, and it is essential to pay attention to the power loss limitation of the zener diode. If there is a large delay between the switching times of the IGBTs, the zener diodes have to conduct for a longer time; their losses then increase and may damage them. In addition to the losses, the zener diodes should be fast enough compared to the IGBTs to clamp them without delay.
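A rough way to size the zener string is to estimate the energy it absorbs while it carries the gate-resistor current during the inter-IGBT delay. The sketch below uses the roughly 1 A gate-resistor current mentioned later in the text; the clamp voltage, delay times and switching frequency are assumed values for illustration only.

```python
# Rough stress estimate for the clamp zener string during a switching-off delay.

def zener_stress(clamp_voltage, gate_current, delay_s, switching_freq):
    """Energy per clamping event and average power dissipated in the zener string
    while it conducts the gate-resistor current during the delay."""
    energy_per_event = clamp_voltage * gate_current * delay_s   # joules
    avg_power = energy_per_event * switching_freq               # watts
    return energy_per_event, avg_power

for delay in (0.5e-6, 2e-6, 10e-6):
    e, p = zener_stress(clamp_voltage=800.0, gate_current=1.0,
                        delay_s=delay, switching_freq=5e3)
    print(f"delay {delay*1e6:4.1f} us: {e*1e3:.2f} mJ/event, {p:.1f} W average")
```

The quadratic-looking growth of the average power with longer delays (0.4 mJ and 2 W at 0.5 us versus 8 mJ and 40 W at 10 us in this example) shows why large switching delays quickly become dangerous for the zener diodes.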
Switching on process

The switching-on process is as important as the switching-off process in a series connection of IGBTs in a converter design. As in the simulations, the fastest IGBT turns on first during the switching-on process and the other IGBTs have to share the voltage during this interval. Therefore, the clamp circuits of these IGBTs start to act and carry current. Compared to the simulations, the stray inductances and parasitic capacitances of the circuit influence the behaviour of the converter. Interaction between the inductive and capacitive parts of the buck converter structure leads to high-frequency oscillations on the voltages of the IGBTs and creates transient breakdowns. Figures 8, 9 and 10 show the voltages across the three IGBTs for different orderings of the switching delays when the clamp voltage is 200 V. The voltage across the IGBT module nearest to the inductor and the free-wheeling diodes is the green curve. High-frequency oscillations appear on this voltage curve when this IGBT has the second or third fastest turn-on. In fact, the interaction between the inductor and the parasitic capacitances of the free-wheeling diodes and this IGBT creates the transients. These transients are dangerous for the system at higher voltages, and a solution must be considered. The conduction loss of this IGBT can damp the oscillations when it is the fastest to turn on during the switching-on process, see Figure 10. Therefore, as a solution, the IGBT nearest to the free-wheeling diodes and the inductor should always be the fastest to turn on in a converter design; in other words, pre-adjustment of the switching delays is needed.

Figures 11 and 12 show the voltages across the IGBTs during the switching-on process at clamping voltages of 400 V and 800 V, respectively. The switching-on process is normal at all voltage levels when the pre-adjustment is applied. In addition, Figure 11 shows how the second and third fastest IGBTs share the voltage during the switching-on process after the fastest IGBT has turned on. These two IGBTs try to share the voltage, but the input voltage is not large enough for their clamp circuits to start acting. After a while these IGBTs turn on and the voltages drop.

Conclusion

In this paper, experimental tests of a high-voltage converter with three IGBTs in series, protected by the method in [4], show that high-voltage converters can be built at different voltage levels with several dozen IGBTs in series. Pre-adjustment of the switching delays can protect the IGBTs against transient breakdown during the switching-on process. In addition, the power loss and speed of the zener diodes are significant factors to consider in order to obtain a fast clamp circuit that can protect the IGBTs against transient breakdown from the beginning of the switching-off process and handle longer switching delays during this time.

Figure 1. Three IGBTs in series protected by the method in reference [1]. The clamp sub-circuit is on the collector-gate side and the dynamic and static sub-circuits are on the collector-emitter side.

Figure 2. Buck converter circuit in PSpice. Three IGBTs in series allow the converter to handle around 3 kV at the input. The clamp sub-circuit and the static and dynamic sub-circuits protect the string of IGBTs.

Figure 3. Buck converter set-up with three IGBTs in series in the laboratory. The compact structure leads to a low-inductance circuit.
Figure 4 shows the voltages across the IGBTs in the string. One of the IGBTs turns off and on 2 us faster than the others. During the switching-off delay between the IGBTs, the clamp circuit of the fastest IGBT acts and protects it. During the switching-on delay, the fastest IGBT turns on first and the two other IGBTs have to share the voltage during this delay; therefore their clamp sub-circuits act and clamp these IGBTs at 1 kV.

Figure 4. Voltages across the IGBTs with a 2 us delay between the fastest IGBT and the others, in PSpice.

Figure 5 shows the voltages across the IGBTs during the switching-off process when the clamping voltage is 200 V. There are delays between the IGBTs, which lead to different voltage levels across them during the switching-off process. The fastest IGBT turns off first and its clamp circuit starts to act and clamps the voltage across that IGBT. Therefore, the maximum current passing through the gate resistance during the switching-off process (1 A in this design) starts to flow through the clamp circuit string until the other IGBTs turn off. During this time there is a small spike on the voltage curve, which is the voltage drop across the clamp resistor. After a while the second fastest IGBT turns off and its clamp circuit starts to act and carries the gate-resistance current. Finally, the third IGBT turns off and the IGBTs try to share the voltage. The static resistors force the IGBTs to the same voltages by charging and discharging their parasitic capacitances.

Figure 5. Voltage across the IGBTs in series connection with a clamp voltage of 200 V during the switching-off process in the lab set-up of the buck converter design. The fastest and second fastest IGBTs have been clamped at 200 V and the clamp circuits protect them at the desired voltage. *IGBT3 is the IGBT nearest to the inductor and the free-wheeling diodes; IGBT1 is the IGBT nearest to the input.

Figure 6. Voltage across the IGBTs in series connection with a clamp voltage of 400 V during the switching-off process in the lab set-up of the buck converter design. The fastest and second fastest IGBTs have been clamped at 400 V.

Figure 7. Voltage across the IGBTs in series connection with a clamp voltage of 800 V during the switching-off process in the lab set-up of the buck converter design. The fastest IGBT has been clamped at 800 V and protected against overvoltage.

Figure 8. Voltages across the IGBTs in series connection during the switching-on process when the clamp voltage is 200 V. Here IGBT3 is the second fastest IGBT during the switching-on process. High-frequency oscillations are created because of the interaction between the inductor and the parasitic capacitances of the diodes and IGBT3. *IGBT3 is the IGBT nearest to the inductor and the free-wheeling diodes; IGBT1 is the IGBT nearest to the input.

Figure 12. Voltages across the IGBTs in series connection during the switching-on process when the clamp voltage is 800 V. IGBT3 is the fastest IGBT during the switching-on process.
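The turn-on ringing discussed above can be estimated with a simple series-RLC picture of the loop formed by the inductor (plus stray inductance) and the parasitic capacitances of the free-wheeling diodes and the nearest IGBT. All component values in the sketch below are assumed, illustrative numbers, not taken from this paper.

```python
import math

def ringing(l_stray_h, c_parasitic_f, r_damp_ohm):
    """Resonant frequency, characteristic impedance and damping ratio of a series RLC loop."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(l_stray_h * c_parasitic_f))
    z0 = math.sqrt(l_stray_h / c_parasitic_f)
    zeta = r_damp_ohm / (2.0 * z0)
    return f0, z0, zeta

f0, z0, zeta = ringing(l_stray_h=200e-9, c_parasitic_f=150e-12, r_damp_ohm=5.0)
print(f"f0 = {f0/1e6:.1f} MHz, Z0 = {z0:.0f} ohm, damping ratio = {zeta:.3f}")
# A damping ratio well below 1 means the turn-on transient rings for many cycles,
# consistent with letting the conducting IGBT's loss provide the damping.
```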
2,786
2017-09-04T00:00:00.000
[ "Engineering", "Physics" ]
Accretion of primordial H-He atmospheres in mini-Neptunes: the importance of envelope enrichment Out of the more than 5,000 detected exoplanets a considerable number belongs to a category called 'mini-Neptunes'. Interior models of these planets suggest that they have some primordial, H-He dominated atmosphere. As this type of planet does not occur in the solar system, understanding their formation is a key challenge in planet formation theory. Unfortunately, quantifying the H-He, based on their observed mass and radius, is impossible due to the degeneracy of interior models. We explore the effects that different assumptions on planet formation have on the nebular gas accretion rate, particularly by exploring the way in which solid material interacts with the envelope. This allows us to estimate the range of possible post-formation primordial envelopes. Thereby we demonstrate the importance of envelope enrichment on the initial primordial envelope which can be used in evolution models. We apply formation models that include different solid accretion rate prescriptions. Our assumption is that mini-Neptunes form beyond the ice-line and migrate inward after formation, thus we form planets in-situ at 3 and 5 au. We consider that the envelope can be enriched by the accreted solids in the form of water. We study how different assumptions and parameters influence the ratio between the planet's total mass and the fraction of primordial gas. The primordial envelope fractions for small- and intermediate-mass planets (total mass below 15 M$_{\oplus}$) can range from 0.1% to 50%. Envelope enrichment can lead to higher primordial mass fractions. We find that the solid accretion rate timescale has the largest influence on the primordial envelope size. Primordial gas accretion rates can span many orders of magnitude. Planet formation models need to use a self-consistent gas accretion prescription. Introduction Currently, more than 5,000 exoplanets have been detected.Many of these planets have sizes larger than Earth but smaller than Neptune (Howard et al. 2012;Fressin et al. 2013;Fulton et al. 2017), and are commonly referred to as mini-Neptunes.Despite the degeneracy in exoplanetary characterization, interior models indicate that mini-Neptunes consist of non-negligible hydrogen and helium (H-He) envelopes (Weiss & Marcy 2014;Rogers 2015;Wolfgang & Lopez 2015;Jin & Mordasini 2018;Otegi et al. 2020a;Bean et al. 2021).These H-He envelopes are thought to be accreted from the protoplanetary disk during the planetary growth.These atmosphere are then retained despite evolutionary atmosphere loss processes such as photoevaporation, and therefore can be considered as primordial envelopes.Constraining the initial mass of primordial envelopes of intermediate-mass exoplanets is a key objective in exoplanet science.For example, constraining the initial mass of the envelopes could provide a solution to the conundrum of the 'radius valley', which is the lack of observed planets with radii between 1.5 R ⊕ and 2 R ⊕ (Fulton et al. 2017).In addition, planets with primordial envelopes could be habitable (e.g.Madhusudhan et al. 2021;Mol Lous et al. 2022), but one of the major concerns is that a planet must accrete a specific amount of a primordial envelope. 
Calculating the primordial envelope mass for a given exoplanet is extremely challenging.The prevailing exoplanet measuring techniques only yield radii and masses, through transit measurements and radial velocity detection, respectively.Solving the interior composition of a planet knowing only the mean density and irradiation temperature is a highly degenerate problem (Dorn et al. 2015;Shah et al. 2021;Haldemann et al. 2023).Additionally there are large errors in the measurements of radii and masses of exoplanets as these are derived in relation to stellar radii and masses, of which the values are not always well constrained (Otegi et al. 2020b).It is likewise difficult to constrain the size of primordial envelopes from planet formation models.The standard model for planet formation is core accretion (Mizuno 1980;Pollack et al. 1996;Alibert et al. 2005;Helled et al. 2014).In this scenario, planet formation begins with a solid (heavy-element) core and once this reaches ∼0.1 M ⊕ the planet starts to accrete a gaseous envelope.Often, planet formation models predict larger (i.e., more massive) envelopes than the ones inferred for the observed planetary population (e.g., Rogers & Owen 2021).There are several possible explanations, including the large uncertainty in the opacities of planetary envelopes (Ormel 2014;Mordasini 2014), underestimating the role of collisions in atmosphere removal (Denman et al. 2020) or a boil-off phase during disk dispersal (Rogers et al. 2023).Interestingly, three-dimensional models which include gas-exchange with the surrounding disk predict smaller accreted envelopes than one-dimensional models at an orbital distance of 0.1 AU around a sun-like star (Ormel et al. 2015;Cimerman et al. 2017;Moldenhauer et al. 2021) and it is still unknown whether this inefficiency remains significant at further radial distances such as 3 or 5 au.Another physical mechanisms that can greatly alter the accretion of primordial gas in (1D) formation models are the solidenvelope interactions in the planetary envelope during the planetary growth.As they grow, protoplanets can accrete solid material and gas simultaneously.Solid material, in the form of planetesimals or pebbles, travels through the envelope and can fragment or ablate.This can enrich the envelope in heavy elements instead of simply being added to the core (Pollack et al. 1986;Podolak et al. 1988).This process is sometimes referred to as envelope pollution.This heavy-element enrichment can have two competing consequences on the planetary growth timescale and the planetary composition.On the one hand, enrichment can increase the envelope's opacity, and therefore delay the planetary contraction and inhibiting the further accretion of nebular gas.On the other hand, heavy-element enrichment increases the mean molecular weight of the envelope, which enhances the gas accretion rate.A schematic overview of envelope accretion with and without the consideration of solid-envelope interactions is shown in Figure 1.Many previous studies have already demonstrated that envelope enrichment plays an important role in planet formation.Specifically pebbles are quick to ablate and fragment (Ormel & Klahr 2010;Lambrechts et al. 2014;Alibert 2017;Chambers 2017;Brouwers et al. 2018;Valletta & Helled 2019;Brouwers & Ormel 2020), but planetesimals have been shown to interact with the envelope and alter the formation process as well (Stevenson 1982;Hori & Ikoma 2011;Pinhas et al. 
2016).Estimates of the maximum core mass that a planet can grow range from ∼ 0.1 M ⊕ to ∼ 5 M ⊕ (Pollack et al. 1986;Mordasini et al. 2006;Lozovsky et al. 2017;Alibert 2017;Brouwers et al. 2018;Steinmeyer et al. 2023).This range is a result of the different assumptions on planetesimal or pebble sizes, composition, and material strength.Valletta & Helled (2020) simulated the formation of Jupiter and Saturn, accounting for envelope enrichment where the heavy elements were represented by water.It was found that including envelope enrichment in a self-consistent way (equation of state and opacity calculation) decreases the growth timescale of Jupiter and Saturn.This result is in line with previous studies focusing on giant planet formation (Stevenson 1982;Hori & Ikoma 2011;Venturini et al. 2015Venturini et al. , 2016;;Venturini & Helled 2017), but it is not entirely clear if this can be accepted as a general result.It remains a possibility that in some cases the planet cannot cool efficiently enough to trigger runaway gas accretion (see Figure 1).For example, Wang et al. (2023) showed that if icy pebbles sublimate outside of the accretion radius and enrich the local gas, this decreases the nebular gas accretion efficiency.Furthermore assumptions on mixing efficiency, the composition of the accreted solid material and the strength of the grain opacity can steer the outcome of a one-dimensional planet formation simulation.The objective of this work is to investigate how the accretion rates of gas depend on the model assumptions when envelope enrichment is considered.We follow Valletta & Helled (2020) and employ a 1-dimensional planet formation model that considers the ablation and fragmentation of the solid material (represented by water ice).We focus on the investigation of the envelope's composition of the forming planets before they reach runaway gas accretion.We consider various formation locations, protoplanetary disk properties, and solid accretion rates.We also investigate the formation timescales to assess whether the planet is expected to reach the runaway gas accretion and become a gas giant planet.Our paper is organized as follows.In Section 2 we present our model setup.In Section 3 we present our results for the gas accretion rates for different planets.We distinguish between gas accretion with and without the enrichment of solid materials.The distribution of possible H-He envelope masses within the explored parameter space is given.In this section we also demonstrate the importance of basic assumptions, such as the mixing of supercritical water with H-He, on our results.In Section 4 we further test assumptions on mixing and the smoothing of the deposition profile.In this section we also address the likelihood that planets form with the required amount of H-He to allow for surface liquid water.The limitations to our model are discussed in Section 5. Finally in Section 6 we summarize our findings. Methods The formation simulations are based on a modified version of the MESA code1 (Paxton et al. 2011(Paxton et al. , 2013(Paxton et al. , 2015(Paxton et al. , 2018(Paxton et al. , 2019) ) which was properly adapted to simulate planet formation and evolution (Valletta & Helled 2020;Müller et al. 2020a,b).The formation model is similar to the one used in Valletta & Helled (2020), with some modifications as discussed below. 
The initial model has a core mass of 0.1 M⊕ and an envelope of 10−6 M⊕. The initial envelope metallicity is 0.03, but it drops to zero at the beginning of the evolution when pure H-He is accreted and envelope enrichment is not yet significant. Three different solid accretion rate prescriptions are considered to compute the solid accretion rate ṀZ. Based on planetesimal accretion we use rapid growth (Pollack et al. 1996) and oligarchic growth (Fortier et al. 2013). We also simulate pebble accretion (Lambrechts & Johansen 2014). A summary of these accretion rates is given in Appendix A. For planetesimals we assume a radius of 100 km and for the pebbles one of 10 cm. In the case of planetesimal accretion the simulation starts at 10 kyr. In the pebble accretion case we also assume that the solid accretion starts at 10 kyr, but with a smaller initial model, namely 0.01 M⊕. Integrating the solid accretion rate of pebbles (see Appendix A.4) from a mass of 0.01 M⊕ at 10 kyr to 0.1 M⊕ gives the starting times of our planetary embryo. These starting times (t0, peb) depend on the disk conditions and are given in Table 1. In the nominal case we set the lifetime of the disk to 10 Myr. As solar-like stars should form within 5-10 Myr, this is a long but not unlikely formation time (Pfalzner et al. 2022). We also consider a shorter formation time of 3 Myr. We stop our simulations before runaway gas accretion starts, namely when the crossover mass is reached, where the envelope and core are of equal mass (Bodenheimer & Pollack 1986; Pollack et al. 1996).

Fig. 1: A schematic overview of pre-runaway envelope accretion. When the solid material does not interact with the envelope, the gas accretion is initially determined by the size of the core and the strength of the accretion luminosity (Phase I). The core stops growing when there is no more solid material available, so that the envelope accretion rate is determined by the cooling timescale of the protoplanet (Phase II). If Phase II is efficient, the planet can reach the critical mass within the lifetime of the protoplanetary disk; it will then go into runaway accretion and become a gas giant. Gas accretion from the nebula is different when the interaction of the solid material with the envelope is considered. Part of the ice and/or silicates will vaporize rather than reaching the core in solid form, which increases the envelope metallicity. The increased metallicity can, on the one hand, inhibit further gas accretion through increased opacities in the envelope, which hinder cooling. On the other hand, the increased mean molecular weight increases the mean density of the envelope, which promotes gas accretion.

Boundary conditions and disk assumptions

The outer boundary conditions of our model planetary envelope (Pout and Tout) are set equal to the pressure and temperature in the disk. Following Piso & Youdin (2014), these are given by power-law scaling relations in the orbital distance a of the protoplanet.
The normalization factors that we apply are higher than the fiducial MMSN values (which would be 0.0085 and 60 for pressure and temperature, respectively). While this is a significant increase, we find that it does not influence the envelope masses when they are above ∼0.01 M⊕, and it saves computation time. These boundary conditions do play a significant role in the early stages of the protoplanet and could in theory alter the formation path, for example through the onset of fragmentation. Similarly, Pout and Tout are assumed to remain constant in time in this work. More accurate gas accretion simulations would thus require an improved disk model, especially for the cases presented with envelope masses below 0.01 M⊕ after formation. In this work, however, these simplifications suffice to demonstrate the importance of envelope enrichment on gas accretion.

The planetesimal accretion rates scale linearly with the solid surface density, ΣZ, at the location of formation (see Appendix A.1 and A.2). The initial solid surface density ΣZ,0 is set by a scaling relation normalized by the free parameter C1 (see below). The solid surface density decreases as solids get accreted onto the planet. The pebble accretion rate depends linearly on the gas surface density at 1 au, β (see Appendix A.4). β is set by an initial gas surface density β0 and decreases exponentially in time t,

β(t) = β0 exp(−t / τdisk),

where τdisk is the gas disk lifetime, which we fix to 3 Myr. C1 and β0 are used as free parameters to account for disks of different masses. We set C1 to 5, 7.5 or 10 g cm−2 for a light, medium or heavy disk, respectively. The values of β0 for these three disk types are set to 250, 500 or 750 g cm−2. The corresponding values of ΣZ,0 and Σg,0 at 3 or 5 au are listed in Table 1.

The inner boundary of the envelope model is the core. The luminosity at the core-envelope interface is given by the accretion luminosity of the material that reaches the core,

Lcore = G Mc (1 − fabl) ṀZ / Rc,

where G is the gravitational constant and fabl is the fraction of the solid material that is ablated or fragmented in the envelope. Mc and Rc are the core mass and radius, respectively, where the value of Rc is calculated assuming a constant core density of 3.2 g cm−3, regardless of the composition of the accreted material. The significance of this simplification is considered in Appendix C.

Enrichment from planetesimals or pebbles

The interaction between the accreted solids and the envelope is considered via the fragmentation and/or ablation of the solids (planetesimals or pebbles). The calculation of the value of fabl is given in Appendix B. This method also gives the deposition profile mdep(r) at radius r. The amount of water vapor added to the envelope is the product of fabl and the solid accretion rate. We consider two deposition methods. The first is direct deposit: the mass is deposited at the radial locations where ablation and fragmentation occur. For example, if a planetesimal fragments at radius r and there is no prior ablation, the water mass in layer m(r) is enhanced by the solid accretion rate, which increases the metallicity there.
The second method, homogeneous deposit, is the default in this work. It assumes that the mass deposition is completely smoothed over the envelope, which has total mass Menv. This means that the amount of added heavy material is distributed over all layers in proportion to each layer's mass, i.e., the water mass added to a layer scales with that layer's fraction of the total envelope mass.

As an illustration, Figure 2 shows the difference between direct deposit and homogeneous deposit for a planet growing by oligarchic growth at 5 au after 47 kyr. The core mass is still the initial 0.1 M⊕ and the envelope mass is 1.5 × 10−5 M⊕. The envelope is too small to cause fragmentation of the planetesimals and thus there is only ablation. The fraction of ablated material increases towards the interior of the envelope. The homogeneous deposit is completely smoothed over all layers, but the total deposited material adds up to the same amount as for direct deposit.

Fig. 2: The difference between the two deposit models: direct deposit (blue) and homogeneous deposit (orange). The x-axis gives the normalized mass coordinate of the envelope q. This figure specifically shows the deposit models for oligarchic growth at 47 kyr, when the envelope mass is 1.5 × 10−5 M⊕. There is no fragmentation yet. The ablation enriches the envelope metallicity up to 10% in the innermost region. The homogeneous deposit has the same total deposited mass, but smoothed. However, the actual enrichment is not homogeneous, as shown by the orange dashed line, due to some layers already being saturated. For different formation conditions and at different times, the difference between the deposition and the actual enrichment changes.

While at every timestep the deposition of heavy material is done homogeneously, this does not necessarily mean that the composition of the envelope is homogeneous, for two reasons. First, gas accretion adds pure H-He with zero metallicity to the outer layers of the envelope. The inner layers, which are older, have been exposed to envelope enrichment for longer and thus have a higher metallicity. This will create a compositional gradient unless the Ledoux criterion is met, in which case the convective region becomes homogeneously mixed. Secondly, we consider a maximum metallicity in each layer, and if this is already reached there is no further enrichment. An example of this is shown in Figure 2: the orange solid line shows the deposit profile, while the dashed orange line shows the actual enrichment. The difference is due to some layers in the envelope already being saturated, or close to saturation, so that it is not possible to deposit all the mass without condensation.

There are two criteria that can limit the amount of water deposited in a given layer. First, we check the material state of H2O based on the temperature of the layer and from there calculate the maximum number density of water in layer r (Eq. 7), where P(r) and T(r) are the pressure and temperature at radius r and Tcrit is the critical temperature of 647.096 K. In the cases where the temperature is below 647 K, we apply the vapor-liquid phase boundary from Wagner & Pruß (2002) to calculate the saturation pressure of water Psat as follows:

ln(Psat / Pcrit) = (Tcrit / T) (a1 ν + a2 ν^1.5 + a3 ν^3 + a4 ν^3.5 + a5 ν^4 + a6 ν^7.5).
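The short sketch below illustrates the homogeneous-deposit bookkeeping described above: the deposited water is spread over the layers in proportion to their mass, each layer is capped at a maximum water mass fraction, and whatever cannot be stored is handed to the core (as described in the following paragraphs). The layer masses and per-layer caps are illustrative assumptions, not model output.

```python
def deposit_homogeneously(layer_masses, layer_water, z_max, m_water_deposit):
    """Return updated per-layer water masses and the leftover mass sent to the core."""
    m_env = sum(layer_masses)
    leftover = 0.0
    new_water = list(layer_water)
    for i, (m_layer, m_w) in enumerate(zip(layer_masses, layer_water)):
        added = m_water_deposit * m_layer / m_env          # proportional to layer mass
        room = max(z_max[i] * m_layer - m_w, 0.0)          # capacity before saturation
        new_water[i] = m_w + min(added, room)
        leftover += max(added - room, 0.0)
    return new_water, leftover

layers = [1.0, 2.0, 3.0]            # arbitrary mass units
water  = [0.05, 0.4, 0.1]
caps   = [0.9, 0.2, 0.9]            # e.g. a cooler, nearly saturated middle layer
updated, to_core = deposit_homogeneously(layers, water, caps, m_water_deposit=0.6)
print(updated, to_core)             # the saturated middle layer passes its share to the core
```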
Here Pcrit is the critical pressure, 220.64 bar, and ν = 1 − T/Tcrit. The other variables are a1 = −7.85951783, a2 = 1.84408259, a3 = −11.7866497, a4 = 22.6807411, a5 = −15.9618719 and a6 = 1.80122502.

Table 1: The assumed initial solid surface density and initial gas surface density in the heavy, medium and light disks for rapid accretion, oligarchic accretion and pebble accretion. ΣZ,0 is used for the planetesimal accretion, while Σg,0 and t0, peb are used for the pebble accretion.

If the temperature exceeds the supercritical temperature of water, we impose a limit on the water enhancement, Zmax. Supercritical water and H-He are expected to be highly miscible (Soubiran & Militzer 2015), suggesting that Zmax = 1. However, in our nominal model we set this value lower, to 0.9, to ensure that we do not artificially create a loss of H-He. This would happen if too much H-He inside the envelope were replaced by water without a sufficiently high primordial gas accretion rate. To investigate the significance of this miscibility, we additionally consider a maximum metallicity of 0.5. The second criterion for water deposition is that the deposited material can only be as massive as the shell in which it is deposited. As MESA uses mass coordinates, this criterion ensures that no Rayleigh-Taylor instability is created through the deposition. While this is an artificial limit, we argue that the deposited material we calculate in a given layer with our one-dimensional model underestimates the smoothing over different layers. Thus, allowing this smoothing of the water deposition profile should better represent the three-dimensional structure. When it is not possible to deposit part of the heavy material in the envelope, we transfer the leftover water mass to the core. Thus the enrichment is only equal to the initial deposition if a layer is not yet saturated, as is demonstrated in Figure 2. In this specific case, for the inner 10% of the envelope mass (q < 0.1) the critical temperature is exceeded, so that the maximum metallicity is much higher than for q > 0.1. Furthermore, the outer envelope (q > 0.4) contains newer gas which has not been exposed to as much enrichment, hence the enrichment increases towards the outside of the envelope. If the envelope is not convective, a compositional gradient can be created.

Finally, we define the metallicity of the envelope at location r as the local water mass fraction. The change in the envelope's metallicity alters the opacities and the equation of state of the envelope. The total envelope metallicity is referred to as Zenv and is defined as the total water mass fraction in the envelope. The opacities are calculated by adding the molecular opacities from Freedman et al. (2014) and the grain opacities from Alexander & Ferguson (1994), following Valencia et al. (2013).

In addition, the heavy-element deposition in layer r influences the energy in two ways. First, accretion luminosity is added in proportion to the deposited mass, with M(r) the cumulative mass at radius r. Secondly, the vaporization of water decreases the energy by an amount per unit deposited mass of cp ∆T + E0 (Pollack et al. 1986), where cp = 4.2 × 10^7 erg g−1 K−1 is the specific heat of water, E0 = 2.8 × 10^10 erg g−1 is the latent heat of vaporization, and ∆T is the temperature change needed to reach vaporization, which we set to 373 K assuming that the incoming pebble/planetesimal has a temperature of 0 K.
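A minimal sketch of the saturation criterion is given below: it implements Eq. (8) with the coefficients and critical constants quoted above (the standard Wagner & Pruß 2002 vapor-pressure correlation), which is the quantity used to decide whether a layer below the critical temperature can absorb more water vapor. The sample temperatures are arbitrary.

```python
import math

T_CRIT = 647.096      # K
P_CRIT = 220.64       # bar
A = (-7.85951783, 1.84408259, -11.7866497, 22.6807411, -15.9618719, 1.80122502)
EXPONENTS = (1.0, 1.5, 3.0, 3.5, 4.0, 7.5)

def p_sat_water(T):
    """Saturation (vapor-liquid) pressure of water in bar for T < T_crit, Eq. (8)."""
    if T >= T_CRIT:
        raise ValueError("Above the critical temperature the Z_max limit applies instead.")
    nu = 1.0 - T / T_CRIT
    s = sum(a * nu**e for a, e in zip(A, EXPONENTS))
    return P_CRIT * math.exp((T_CRIT / T) * s)

for T in (300.0, 373.15, 500.0, 600.0):
    print(f"T = {T:6.1f} K  ->  P_sat = {p_sat_water(T):10.4f} bar")
# At 373.15 K this returns ~1.01 bar, as expected for boiling water at 1 atm.
```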
In our simulations we distinguish between five types of solid-envelope interactions, which are presented in Table 2. For simplicity, we neglect the thermal ablation or fragmentation of silicates and focus only on water. Therefore, Case-1 is a reference case without any solid-envelope interactions: all solid material directly reaches the core-envelope boundary and the envelope never increases in metallicity. We consider the other extreme in Case-2, where we assume that all the solid material is ice and can enrich the envelope. With Case-1 and Case-2 as the two extremes, we use Case-3 and Case-4 to investigate other aspects of our fragmentation and ablation model. Case-3 is a hybrid between Case-1 and Case-2: half of the solid material consists of icy planetesimals/pebbles which can enrich the envelope, while the other 50% of the solid accretion rate consists of rocky material that directly reaches the core and does not interact. Finally, in Case-4 we limit the maximum allowed metallicity Zmax (see Eq. 7) to 0.5 if the temperature exceeds the critical temperature. Figure 3 visualizes the effects of these cases on the planet's interior and envelope.

Fig. 3: Core and envelope compositions under the different solid-envelope interaction models presented in Table 2. The dashed black line represents the outer boundary of the envelope and the solid black line the inner boundary; everything interior to the solid black line is considered as the core. The composition of the core, and whether it is mixed, is not considered in this work; rather, a constant core density of 3.2 g cm−3 is used. When envelope enrichment is considered, this can either create a compositional gradient or there can be (a) mixed convective zone(s), as shown by the two halves.

Gas accretion

Gas accretion can occur at every timestep. Following Valletta & Helled (2019), gas is added to the planet until the outer radius of the envelope is within a factor of 1.1 of the accretion radius. We use the accretion radius as in Lissauer et al. (2009). This formulation is based on the common assumption that the planet's accretion radius is the smaller of the Bondi radius and the Hill radius, where Mp is the mass of the protoplanet, cs is the speed of sound at the location of formation, RHill is the protoplanet's Hill radius, and k1 and k2 are reduction factors that account for the limited availability of gas at the formation location of the planet. For the small protoplanets considered in this study, k1 and k2 can be set to 1. The first 10 kyr are used to relax the envelope mass. The initial model does not automatically satisfy the criterion that the accretion radius equals the radius of the initial model; how much these two values deviate depends on the orbital distance. We smooth this transition by finding k1 and k2 values such that the initial model radius is close to the accretion radius, and then increase k1 and k2 linearly in time until they are both 1 at 20 kyr.

Results

We perform a grid of simulations with the following variations: the solid accretion rate is rapid, oligarchic or pebble-based; the formation location is either 3 or 5 au; and the disk is either heavy, medium or light as defined in Table 1. An overview of all these results is given in Appendix D.

Solid-envelope interaction affecting the H-He gas accretion

This subsection highlights the effect of all four solid-envelope interaction models on individual formation cases.
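For orientation, the sketch below evaluates the accretion-radius criterion just described in its simplest form, the smaller of the (reduced) Bondi and Hill radii with the factors k1 and k2. This is not the exact expression of Lissauer et al. (2009), and the stellar mass and disk sound speed are illustrative assumptions.

```python
import math

G = 6.674e-8            # cgs
M_SUN = 1.989e33        # g
M_EARTH = 5.972e27      # g
AU = 1.496e13           # cm

def accretion_radius(m_planet_earth, a_au, c_sound_cm_s, k1=1.0, k2=1.0, m_star=M_SUN):
    """Smaller of the reduced Bondi and Hill radii, in cm (simplified criterion)."""
    m_p = m_planet_earth * M_EARTH
    r_bondi = G * m_p / c_sound_cm_s**2
    r_hill = a_au * AU * (m_p / (3.0 * m_star)) ** (1.0 / 3.0)
    return min(k1 * r_bondi, k2 * r_hill)

# Example: a 1 M_Earth protoplanet at 5 au with an assumed disk sound speed of 0.5 km/s.
r_acc = accretion_radius(1.0, 5.0, c_sound_cm_s=5e4)
print(f"R_acc = {r_acc:.3e} cm = {r_acc/6.371e8:.1f} Earth radii")
```

For the small protoplanets considered here the Bondi radius is the limiting one, which is why the accretion radius, and with it the bound envelope, grows steeply with planet mass during Phase I.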
Rapid Growth Figure 4 shows the in-situ formation of a planet by rapid growth at 3 au.The initial solid surface density is 17.33 g cm −2 (heavy disk).The upper panel shows the masses of the core (solid line) and envelope (dashed line) as time progresses for the various cases.We find that all the cases include both Phase I and Phase II of gas accretion, where the transition between them occurs after ∼ 10 5 yrs at a core mass between 4.4 -5.2 M ⊕ .For Case-1 and Case-3 there is still a small increase in core mass during Phase II of gas accretion.At this stage the planet grows through envelope accretion which extends the planetary feeding zone and provides more solid material that can be accreted by the growing planet. In Case-2 and Case-4 on the other hand, the maximum core mass is reached, as any newly accreted planetesimals fragment and only add water vapor to the envelope.Another distinction is that Case-2 and Case-4 reach a crossover mass after 3.81 Myr and 2.5 Myr, respectively, while Case-1 and Case-3 do not reach crossover mass within 10 Myr.This is because in the former two cases the ablation-fragmentation transition occurs before the feeding zone is depleted and solid accretion is high.This promotes the gas accretion for several reasons.First, the total accretion luminosity is reduced, as a large fraction of the mass is deposited at larger radii and meanwhile the evaporation of water decreases energy locally.Second, the mean molecular weight of the envelope increases so that a more massive envelope can be bound.Similar to previous work we find that the increased opacities due to an increased envelope metallicity do not counteract the mechanisms promoting gas accretion.As such envelope enrichment promotes total envelope accretion.The lower panel of Figure 4 shows the envelope's growth, where the contributions of H-He are separated from the water vapor.Since Case-3 has a low-metallicity envelope, the total H-He mass in the envelope is similar to that of Case-1.Note that there would be a larger difference between Case-1 and Case-3 if fragmentation occurred before the solid accretion rate decreases.Case-2 and Case-4 have significantly more massive H-He envelopes at a given time.After ∼ 10 5 years it remains a factor 3 higher than Case-1 and Case-3.We find that for a short time the water mass in the envelope exceeds the H-He mass.This occurs during the transition between Phase I and Phase II.However, since subsequently mostly H-He is accreted, the The grey dashes lines indicate where primordial envelope mass fractions would be 0.1%, 1%, and 10% of the total mass. envelopes final atmospheric composition is dominated by H-He. 
During Phase II accretion we find that small amounts of envelope mass can be lost.For Case-1 and Case-3 this concerns small oscillations in the envelope mass which are a result of an oscillating solid accretion rate.These are in turn due to the changes in capture radius, which depends on the internal structure of the envelope (see Appendix A.1).In other words, when the gas accretion rate is large, the capture radius also increases, promoting a higher solid accretion rate.However, the increase in luminosity from the gas accretion and the solid accretion expand the envelope and increase the radius beyond the accretion radius, which leads to mass loss.While the solid accretion rate remains small (between 10 −8 M ⊕ yr −1 and 0) this is sufficient to influence the envelope.Nonetheless, we do find that smoothing the change in capture radius during Phase II gas accretion (by only allowing it to change with 0.1% every timestep) eliminates these oscillations without altering the final outcome.In Case-2 we find a single instance of mass loss at 1.42 Myr. Similarly to Case-1 and Case-3 this mass loss proceeds from an increase in the capture radius.However, in this case the increased capture radius is due to a change in the internal structure of the envelope, as the size of the convective zone increases. The top panel of Figure 5 shows the envelope's metallicity as a function of the total planetary mass.For all cases the envelope metallicity peaks when the feeding zone is depleted.The maximum envelope metallicity in Case-2 and Case-3 peaks at ∼0.8.This is expected from the maximum metallicity in supercritical states, Z max , set to 0.9.Colder outer layers where water can condense have even lower metallicities which decreases the total envelope metallicity from Z max . The lower panel of Figure 5 shows H-He envelope mass as a function of the total planetary mass.Grey dashed lines give the reference fractions of f H-He = 0.1%, 1% and 10%, where f H-He = M env, H-He / M p .When the planet is smaller than ∼ 2 M ⊕ the primordial envelope masses of all cases are similar.At higher masses the solid accretion rate increases and fragmentation occurs, so that the envelope has a significant amount of water vapor which influences the H-He accretion.At a total mass of 5 M ⊕ , Case-2 has a factor 2 higher M env, H-He than Case-1 and Case-4 has a factor 5 higher than Case-1.However, at masses above 6 M ⊕ the primordial envelope mass of Case-1, Case-2 and Case-4 converge, as Phase II of gas accretion sets in.Interestingly, Case-3 has the transition into Phase II of gas accretion for a lower mass than the other three cases.Compared to Case-1, Case-3 has a lower core mass, as there is always a fraction between 0 and 0.5 of solid material evaporating in the envelope.Also compared to Case-2 and Case-4 the maximum core mass is smaller.This is because Case-2 and Case-4 are more efficient at enhancing the envelope and they reach a stage where f abl equals 1 before the feeding zone is depleted.As a result, envelope accretion accelerates and in this larger envelope planetesimals are captured more easily (i.e. the capture radius as defined in Appendix A.1 increases).Thus, as the solid accretion rate increases, the envelope becomes saturated with water vapor which then allows the core to grow more rapidly as well.This acceleration of core and envelope formation is not evoked in Case-3 because of the later onset of fragmentation. 
Oligarchic Growth Figures 6 and 7 demonstrate the effect of envelope enrichment on the planetary mass and bulk composition as well as the formation timescale for oligarchic growth at 3 au.Similar to the previously presented rapid growth, the initial solid surface density is 17.33 g cm −2 , corresponding to a heavy disk. The upper panel in Figure 6 shows that in Case-2 the core reaches a maximum of 2.7 M ⊕ .This is notably lower than the core of the planet formed by rapid planetesimal accretion at the same location in a heavy disk.The reason for this difference is that the rapid formation has a high solid accretion rate with a large accretion luminosity.This makes the total envelope mass smaller for a given core mass, so that complete fragmentation is reached for a higher core mass in rapid growth.However, the core mass presented in these results also contain evaporated water that could not be held in the saturated envelope layers.Further discussion on the impact of our core model assumptions on the accretion of H-He is presented in Section C. Case-4 has a larger core accretion rate than any of the other models in the last 3 Myr.This is because Case-4 has a more massive envelope and thus a larger capture radius.Furthermore, because Z max in Case-4 is lower than those in Case-2 and Case-3, the envelope becomes saturated earlier. Since oligarchic growth is much slower compared to rapid growth, it takes longer before the core is massive enough to accrete an envelope with which the solids will interact.As shown in the lower panel of Figure 6, the envelopes in Case-2 and Case-3 become water dominated after 2.7 and 1.8 Myr, respectively, and this compostion persists during the remaining planetary growth.The H-He mass is unchanged for Case-1, Case-2, Case-3 until 7 Myr, while Case-4 always has more H-He.The upper panel in Figure 7 confirms that the envelope metallicity increases at a smaller core mass for oligarchic growth compared to rapid growth.The maximum metallicities also occur at smaller core masses than for rapid growth because there is not enough time to grow larger cores.The lower panel shows that Case-1 and Case-2 have very similar H-He envelope fractions until Case-2 reaches its maximum core mass.Also Case-3 and Case-4 have similar H-He envelope mass fractions.Overall, we find that the H-He mass fractions are larger in oligarchic growth than in rapid growth, since the envelopes for a given core mass are larger due to the slower formation. Pebbles Since pebble accretion is more efficient than planetesimal accretion, we find that most of our pebble simulations reach crossover mass before 3 Myr even when using a later starting time than for the planetesimal accretion.Only in the case of a light disk at 5 au we find planets in a pre-runaway state after 10 Myr.As a result, this is the formation scenario we highlight in Figures 8 and 9. Due to the small size of pebbles, the value of f abl reaches 1 already at the beginning of the simulation.We use the first 3 kyr of the simulation to smooth f abl from 0 to 1 linearly in time. Figure 8 shows that Case-1 and Case-3 do not reach a crossover mass while Case-2 and Case-3 reach it after 7.7 and 9.3 Myr, respectively.In Case-2 the maximum core mass is 3.9 M ⊕ and for Case-4 it is 5.2 M ⊕ .Case-1 leads to a core mass of 6.3 M ⊕ and an envelope of 0.88 M ⊕ after 10 Myr while Case-3 ends with a 5.2 M ⊕ core and an envelope of 2.6 M ⊕ . 
There are instances of mass loss in Case-2 and Case-3.Contrary to the rapid cases, this is not linked to the coupling between the solid accretion rate and the envelope structure.Instead, this is due to the small size of the pebbles and their immediate fragmentation.In combination with our model set-up, which allows the envelope to be considered 'full' and adds additional water to the core, this can cause the value of f abl to drop when the metallicity is close to saturation.This allows temporary oscillations in the accretion luminosity and possibly, mass loss.These changes in f abl are unphysical and should be modeled more self-consistently in future work.It must be noted, however, that the interaction between icy pebbles and nebular gas can already enhance metallicities at distances further away from the protoplanet than where the gas is bound.In Section 5.3 we discuss this point and argue that this interaction needs to be well understood before it can be incorporated in one-dimensional models.The envelope's metallicity and primordial envelope mass for the pebble cases are shown in Figure 9.The metallicities peak at masses of 2 -6 M ⊕ .We find that all the enrichment models follow roughly the same relation between H-He mass fraction and total mass, as shown in the lower figure.In comparison to the planetesimal accretion models in Figures 5 and 7 however these are less smooth.This is because the instant ablation of the pebbles.The left panels show f H-He when the planet forms at 3 au.All the planetesimal cases (rapid and oligarchic) remain pre-runaway up to 10 Myr with the exception of Case-2 and Case-4 of rapid growth in a heavy disk.Rapid growth leads to larger masses and larger values for f H-He than oligarchic growth.This difference in composition between the two formation models is most visible at 3 Myr.If formation times are longer and there is strong envelope enrichment (Case-2) then oligarchic growth can create planets with total masses and H-He mass fractions that overlap those of rapid growth.Overall rapid growth can create planets of masses 2 -9 M ⊕ with H-He envelope fractions of 0.03 to 0.5 at 3 au.Oligarchic growth creates planets with total masses of 0.5 to 4 M ⊕ with f H-He values of 5×10 −4 to 0.1.At 5 au there are not as many datapoints for rapid growth as the planets are likely to reach the crossover mass quickly.None of the heavy disk cases remain.The planet forming in a medium disk under Case-1 remains pre-runaway at 5 Myr and for Case-3 this is past 3 Myr.Planets forming by rapid growth with Case-2 can only do so in a light disk in 3 Myr.The oligarchic cases all remain pre-runaway and have smaller H-He mass fractions than at 3 au.This is because the planets forming at 5 au have smaller accretion radii due to a smaller Bondi radius.At some point the accretion radius becomes dominated by the Hill radius, which increases with distance.This will lead to planets at 5 au holding more massive envelopes than those at 3 au for the same total planetary mass.However for the oligarchic growth cases this transition happens too late to see reflected in H-He mass fractions at 3, 5 or 10 Myr. 
Pebble accretion only forms mini-Neptune planets when there is a light disk, which is assumed to coincide with a late formation time (relative to a heavier disk).Furthermore at 3 au a mini-Neptune can only form when Case-3 enrichment applies if formation lasts longer than 5 Myr.In Case-3 the total mass and f H-He stay within the same region as the planetesimal accretion models.Within 3 Myr a 13 M ⊕ planet can also form by pebbles with so that f H-He =0.1 assuming Case-1 or Case-4.In Case-2 there are no pre-runaway planets even after 3 Myr.At 5 au it is easier to form small planets by pebbles, although still exclusively for the light disk.This is contrary to planetesimal formation which favours smaller planets at 3 au.After 3 Myr there is not yet a distinction between any of the enrichment models for pebble formation as they all lead to a planet of 2.5 M ⊕ and a f H-He of 0.004.After 10 Myr only Case-1 and Case-3 have remained pre-runaway. Envelope metallicities after formation In Figure 11 and 12 the maximum envelope metallicities (Z env, max ) are shown for the same set of models as in Figure 10, with the exception of Case-1 which by definition evolves to a zero metallicity envelope.Horizontal lines indicate the maximum imposed metallicity for supercritical layers, Z max .The rapid growth always has the maximum metallicity occuring before 400 kyr, which is significantly shorter than the shortest considered formation time of 3 Myr.It is therefore unlikely that rapid growth at 3 or 5 au can create mini-Neptune planets with very high metallicity envelopes, i.e. envelope metallicities that are close to Z max . On the other hand, oligarchic growth has maximum metallicities occuring very late, between 4.6 and 10 Myr.These correspond to total masses of 0.5 to 3 M ⊕ .The oligarchic cases never reach saturation where the envelope metallicity is that of Z max . The pebble cases show a wider spread in times when Z env, max is reached.For most of the pebble cases, we find that the maximum metallicity occurs before 3 Myr, but could be delayed to 4 Myr or even 7 Myr if the disk is light. The effect of mixing and location of deposit The results presented above assumed a homogeneous composition of the envelope due to convective mixing at layers where 2) the Ledoux criterium is met.While it is expected that there is some mixing in protoplanetary envelopes, it is unclear how efficient mixing is.Simulating the planetary formation with MESA allows to include mixing via the mixing length theory (mlt).This, however, requires the knowledge of a dimensionless parameter α mlt (see Section 2. The effect of mixing on the protoplanets core and envelope mass and composition are shown in Figure 13 for the oligarchic growth at 3 au for Case-2.The initial solid supply is heavy. When mixing is included this increases the efficiency of the deposit of heavy elements.The mixing distributes the H-He through the envelope, so that there are more layers where the maximum metallicity is not met.This allows for a larger overall deposit of heavy elements.As a consequence the mixing model Fig. 12: Same data as in Figure 11 but instead of the time, the total mass of the protoplanet is shown when the maximum envelope metallicity is reached. reaches a point where f abl reaches 1 after 7 Myr.The accretion luminosity becomes zero and gas accretion increases.The final H-He envelop mass is an order of magnitude larger and the core mass is 0.5 M ⊕ smaller. 
We also investigate the effect of our assumption of a homogeneous deposition of heavy elements in the envelope. We compare the homogeneous deposit to the direct deposit as defined in Section 2.2. Figure 14 shows the differences between these deposit models for rapid growth at 3 au, Case-2. The nominal, homogeneous deposit is the same as shown in Figure 4. In the case of direct deposit, it takes longer for the envelope to start growing significantly. This is because initially only ablation occurs, which means that in the direct deposit case there are only heavy elements in the lower layers. In the homogeneous deposit case, water is added to the outer layers as well, so that the density increases, and that causes fragmentation to occur more quickly (see Equations B.4 and B.5). Due to this fragmentation almost no solids reach the core, so that the accretion luminosity from solid accretion decreases and gas accretion increases. The model with a direct deposit of solids reaches fragmentation later. Nevertheless, once both models have reached fragmentation, the gas accretion is more efficient in the direct deposit case. In that case the mass can be deposited high in the envelope and 'trickle down' to the lower layers. The crossover mass is then reached after 1.39 Myr instead of 3.81 Myr. A realistic deposition profile of heavy elements should lie somewhere in between the two extreme cases that we considered in this work. Assessing the physical importance of mixing on planet formation, in combination with envelope enrichment, would require the following improvements. First, the one-dimensional deposit profile needs to be smoothed appropriately to account for the three-dimensional process (Mordasini et al. 2017). In the case of planetesimal accretion this initial deposit profile would also need to be improved upon by using a more realistic size distribution (Kaufmann & Alibert 2023). Second, the treatment of envelope metallicity should be improved. In this work, the accreted heavy-element mass was added by increasing the metallicity and changing the energy in the relevant layers. However, mass in every layer was conserved during enrichment. Future work should treat mass deposition and envelope enrichment self-consistently by allowing this process to directly change the mass of the relevant layers.
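The practical difference between the two deposit treatments can be seen in a toy bookkeeping sketch. The layer masses, starting metallicity, accreted water mass, and Z_max below are illustrative numbers only, and the function is a schematic stand-in rather than the MESA-based scheme used in this work: it simply shows where accreted water ends up under a fully mixed (homogeneous) treatment versus a bottom-up (direct) one, with any excess handed to the core.

```python
# Schematic comparison of two ways to deposit accreted water in a layered envelope.
# Toy bookkeeping only, not the paper's actual scheme; all numbers are illustrative.
import numpy as np

def deposit(layer_mass, z, m_acc, z_max, mode="homogeneous"):
    """Return updated layer metallicities and the excess water handed to the core."""
    z = z.copy()
    if mode == "homogeneous":
        # convective mixing: the whole envelope shares one metallicity, capped at z_max
        metal = np.sum(z * layer_mass) + m_acc
        z_mix = min(z_max, metal / layer_mass.sum())
        excess = max(0.0, metal - z_mix * layer_mass.sum())
        z[:] = z_mix
        return z, excess
    # direct deposit: fill layers from the innermost outwards until each saturates
    remaining = m_acc
    for i in range(len(z)):
        take = min(remaining, max(z_max - z[i], 0.0) * layer_mass[i])
        z[i] += take / layer_mass[i]
        remaining -= take
    return z, remaining

layers = np.array([1.0, 0.8, 0.5, 0.2])   # arbitrary layer masses, innermost first
z0 = np.full(4, 0.02)                     # initial metal mass fraction per layer
for mode in ("homogeneous", "direct"):
    z_new, to_core = deposit(layers, z0, m_acc=0.6, z_max=0.6, mode=mode)
    print(mode, np.round(z_new, 3), "excess to core:", round(float(to_core), 3))
```

The homogeneous case ends with one uniform metallicity throughout, whereas the direct case saturates the innermost layers first and leaves the outer layers untouched, which is the qualitative difference driving the two growth histories compared in Figure 14.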
Primordial envelopes and habitability

Planets with a primordial, H-He dominated envelope have received increased attention as potentially habitable candidates. The collision-induced absorption of hydrogen can act as a greenhouse effect and thereby create temperate surface conditions (Stevenson 1982; Pierrehumbert & Gaidos 2011; Madhusudhan et al. 2021; Mol Lous et al. 2022). Madhusudhan et al. (2021) coined the term 'Hycean planets', which host liquid water underneath a hydrogen-dominated atmosphere. It remains uncertain, however, whether any of the currently observed transiting exoplanets orbit in what can be considered the 'Hycean habitable zone'. The role of a runaway greenhouse effect (Pierrehumbert 2023; Innes et al. 2023) and atmospheric escape (Wordsworth 2012; Mol Lous et al. 2022) could move the inner habitable zone boundary in comparison to an Earth-like planet. Another open question regarding Hycean planets is whether such planets can accrete the required amount of H-He to enable these temperate surface conditions in the first place. In Mol Lous et al. (2022) we showed that planets of sizes 1-10 M⊕ that orbit around a Sun-like star could host temperate conditions if they are beyond 2 au. Their primordial, pure H-He envelope could be of masses 10⁻⁴-10⁻⁵ M⊕ at this distance, but more massive when further out. The results presented in Figure 10 show that H-He envelope mass fractions below 10⁻³ are generally difficult to form, and most of the formed planets have much larger H-He mass fractions. The smallest H-He envelopes are formed by oligarchic growth at 5 au. After 3 Myr these planets have values of M_env,H-He ranging from 2.7×10⁻⁵ M⊕ (f_H-He = 10⁻⁴ when the total mass is 0.27 M⊕) up to 2.24×10⁻⁴ M⊕ (f_H-He = 4×10⁻⁴ when the total mass is 0.56 M⊕). These planets are smaller than those considered in Mol Lous et al. (2022), but still massive enough to hold onto a H-He envelope at 5 au (Mordasini 2020). We therefore conclude that the formation of H-He envelopes which provide temperate surface conditions is probably rare but possible when the planet forms beyond the ice-line and the formation timescale is long.

Envelope-core interactions and outgassing

Our model does not include interactions between the envelope and the core. We assume that all the nebular hydrogen and helium remain in the envelope and that none is sequestered in the core, to be outgassed at later stages. For (super-)Earths this can play an important role in the development of the atmosphere after the gas disk has disappeared (Elkins-Tanton & Seager 2008; Schaefer & Fegley 2010). As silicates in the core are expected to be in the magma phase, there should be a high solubility of hydrogen (Hirschmann et al. 2012). That hydrogen would over time be outgassed and replenish the envelope (Chachan & Stevenson 2018), but that would be accompanied by the atmospheric escape of mostly hydrogen. There could also be a later increase in atmospheric hydrogen if metal-rich impactors oxidize (Genda et al. 2017). Thus, some of the H-He mass calculated in this work could be stored in the core and released gradually. The envelope-core interactions for H2O, not considered in this work, should also be mentioned. While we focus on predicting the mass fraction of H-He after disk accretion, the treatment of water can be improved, which could lead to different results. Water can also be stored efficiently in a magma ocean and outgassed later (e.g., Dorn & Lichtenberg 2021; Bower et al. 2022; Sossi et al. 2023), which can increase the envelope's metallicity after formation.

Ablation and fragmentation of silicates

In this work we only modelled the effect of water enhancement on the envelope. It is clear that rocky material can also ablate and fragment. This is especially the case for pebbles (Brouwers et al. 2018; Brouwers & Ormel 2020; Steinmeyer et al. 2023), but also for planetesimals (e.g., Bodenheimer et al. 2018).
Similar to water enrichment, the enrichment with silicates on the one hand increases the mean molecular weight and promotes gas accretion, and on the other hand enhances the opacities in the envelope (Ormel 2014; Mordasini 2014; Menou & Zhang 2023). The enrichment of silicates alone can create a composition gradient which inhibits convection (Ormel et al. 2021; Markham et al. 2022). This silicate enrichment has an effect on the long-term evolution of mini-Neptunes, and not considering this effect can lead to overpredictions of H-He mass fractions in observed planets (Misener & Schlichting 2022; Vazan & Ormel 2023). Future work should consider the enrichment of both water and silicates in the envelopes of protoplanets. However, accounting for both species will introduce more free parameters concerning the mixture of ice and silicates in the solid accretion rate.

Limitations of a one-dimensional model

The assumption of spherically symmetric gas and solid accretion that comes with a one-dimensional model does not accurately reflect reality, and thus there are some limitations. First, there is the recycling of gas that occurs in the outer regions of the accreting envelope. The nebular gas from the disk has a higher entropy than the already accreted gas in the envelope and will mix (Ormel et al. 2015). This delays the cooling and contraction, thus prolonging Phase II of gas accretion. Recycling is an important aspect of planet formation. It can significantly delay the formation timescale, which can help to explain the presence of super-Earths and mini-Neptunes where one-dimensional models would have predicted a transition into runaway gas accretion. This could also possibly help with the formation of Uranus and Neptune (Eriksson et al. 2023). Three-dimensional gas accretion simulations remain computationally expensive. Recently, Bailey & Zhu (2023) found a more optimistic comparison between three- and one-dimensional models. They suggest that one-dimensional models can improve their accuracy by reducing the accretion radius to 0.4 times the Bondi radius and considering two distinct outer recycling layers. A second limitation revolves around the deposition profile of heavy elements. In this work we have considered two extremes. In the nominal case we deposited the heavy elements homogeneously; alternatively, we solved the deposition profile in the one-dimensional case and deposited the solids accordingly. The latter is realistic if the solid accretion rate is isentropic and the timescales of the impacts are shorter than the azimuthal mixing (Mordasini et al. 2017).

In-situ formation

The effect of migration is neglected in this work, although in-situ formation is rather unrealistic. One origin theory of mini-Neptunes is that they form around the ice-line and migrate inwards once they reach a critical mass (Kuchner 2003; Venturini et al. 2020; Huang & Ormel 2022; Burn et al. 2024). The precise value of this critical mass remains uncertain (McNally et al. 2019; Paardekooper et al. 2023), though derivations of it can be found, such as in Emsenhuber et al. (2023). This formation scenario would naturally lead to a diversity in mini-Neptunes (Bean et al. 2021). In the rapid growth case some planets in our simulations deplete their feeding zone and enter Phase II gas accretion. We predict that if migration is included this will lead to larger planets. However, our conclusion that different treatments of solid-envelope interactions can heavily influence the outcome of planet formation is robust.
Summary and Conclusions We simulated planet formation assuming different solid accretion rates, and calculated the corresponding gas accretion rate self-consistently.The planetary formation locations were set to be outside of the ice-line, where the observed mini-Neptunes could have formed before migrating to shorter orbital distances.Our study clearly shows that the assumptions used by planet formation models play a key role in determining the planetary growth history and therefore also the planetary mass and composition.Our key conclusions can be summarized as follows: -The assumptions of the interaction between solids and the planetary envelope strongly affect the planetary growth and can change the primordial gas mass fraction by up to a factor of 10.Nevertheless, we have also identified cases where this interaction has no or only little influence on the forming planet.In the case of oligarchic growth, the envelope remains small and, although it is metal-rich, it does not alter the rest of the formation process.-Forming mini-Neptunes at 3 au is challenging with pebble accretion due to the high accretion rates.Their formation via pebble accretion becomes more likely at 5 au when the initial disk is relatively light.However, we place more caveats to our pebble result compared to the planetesimals because (1) pebble growth strongly depends on the start of the growth, and (2) pebbles could sublimate before reaching the accretion radius, which might influence the results.-The impact of envelope pollution is complex.The most extreme solid-envelope interaction cases (Case-1 and Case-2) do not automatically lead to the most extreme outcomes.On the contrary, we find that our Case-4, which considers half of the solids to interact, does not necessarily lead to planets in between the more extreme cases.For example, with Rapid growth at 5 au we find that Case-4 leads to the smallest planets for a given time.-Envelopes of protoplanets during Phase I and II of gas accretion can be dominated by heavy elements.-Our results are consistent with the observed diversity of exoplanets (e.g., Jontof-Hutter 2019).Variations in the formation location, solid material in the protoplanetary disk, the composition of the solid material, and the formation timescale determine the final mass and composition of the forming planets.We find that f abl can span a range of several order-of-magnitudes and that the envelope's metallicity can range from 0 to full saturation.Our results clearly imply that small-to intermediate-mass planets should be diverse in terms of mass and composition, depending on the exact formation conditions and growth history. We find that gas accretion models that include envelope-solids interactions can significantly influence planet formation even for small planetary masses.This has important effects on our understanding of the formation of mini-Neptunes.We note that assuming pebble or planetesimal accretion exclusively could lead to an overestimation of cases which enter runaway gas accretion.Kessler & Alibert (2023) showed that giant planet formation can be suppressed when both pebbles and planetesimals are considered. Although the topic is still being investigated, it is often assumed that the observed mini-Neptunes and super-Earths have a large water mass fraction (Venturini et al. 2020;Luque & Pallé 2022).This is further supported by the observation of volatile-rich planets, e.g. in Kepler 138-c and -d (Piaulet et al. 
2023).Furthermore, there are observations indicating the presence of atmospheric water vapor (e.g., Mikal-Evans et al. 2023).However, detecting atmospheric water is challenging due to the possible formation of clouds.Water signatures can also overlap with those of methane (Bézard et al. 2022).For K2-18b, which became notorious as the first mini-Neptune detected with water vapor in the atmosphere (Benneke et al. 2019), JWST data have confirmed that the measured signature was due to methane, and not water (Madhusudhan et al. 2023).For highly-radiated planets JWST should be able to better constrain the volatile abundances (Acuña et al. 2023;Piette et al. 2023).If future observations can confirm that mini-Neptunes and super-Earths have water-rich atmosphere, it would support the idea that they have formed beyond the iceline and migrated inward, as we assumed here.Other explanations for water-rich atmospheres of small planets could be a late volatile delivery (Elkins-Tanton & Seager 2008) or in-situ water formation (Kite & Schaefer 2021). Our results demonstrate that primordial gas accretion rates are not simple.Assumptions in the solid-envelope interaction, the solid accretion rate and formation location can greatly influence the fraction of H-He after formation.These assumptions as well as aspects not considered in this work (migration, grain opacities) will need to be included in order to explain the observed mini-Neptune and super-Earth population. We neglect the envelope's mass beyond the location r since the mass is negligible compared to the core mass and lower part of the envelope.The mass loss at location r is calculated using Equation 14 in Valletta & Helled (2019): where m pl is the mass of the planetesimal, A is the area of the planetesimal which naturally decreases as the planetesimal loses mass and ρ(r) is the density at r. C h and ϵ are efficiency factors for which the appropriate values are uncertain.C h is the fraction of kinetic energy transferred to the planetesimal.ϵ is the product of the emissivity of the gas and the planetesimal's impact coefficient.We set both to 0.01.In Valletta & Helled (2019), C h and ϵ were left as free parameters and while their value affects the planetary growth, its effect is smaller in comparison to other assumptions considered in this work, such as the solid accretion rate and envelope mixing.Unlike in Valletta & Helled (2019), here we simplify the calculation of the planetesimal's trajectory by assuming that it moves straight to the core.In other words, we assume an impact parameter of 0 and no angular contributions to the velocity.Q is the latent heat caused upon vaporization and is given by: Here C p is the specific heat of water in the liquid phase, set to 4.2×10 7 erg g −1 K −1 .E 0 = 2.8 × 10 10 erg g −1 is the latent heat of vaporisation in the solid phase.The values are taken from Pollack et al. (1986) (Table 1).T f is the difference between the initial temperature and the present temperature, which is 373 Kelvin.The planetesimal can be completely destroyed when two conditions are met (Pollack et al. 1986).First, if the pressure gradient in the envelope surrounding the planetesimal is larger than the material strength: with S being the strength of the compressive material, set to 1×10 6 Ba (0.1 Mpa) for ice (Pollack et al. 1979).Second, if the planetesimal is sufficiently small so that its self-gravity can not prevent fragmentation. If both these criteria are met, fragmentation occurs and f abl is set to 1. 
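To put rough numbers on the quantities quoted above, the short check below evaluates the sensible-heating term C_p T_f against the latent heat E_0 and converts the assumed ice strength to SI units. Only the values stated in the text are used; the full mass-loss and fragmentation expressions of Valletta & Helled (2019) and Pollack et al. (1986) are not reproduced here.

```python
# Order-of-magnitude check of the ablation quantities quoted above (cgs values from the text).
C_p = 4.2e7      # erg g^-1 K^-1, specific heat of liquid water
E_0 = 2.8e10     # erg g^-1, latent heat of vaporization
T_f = 373.0      # K, temperature difference quoted in the text
S   = 1.0e6      # Ba, assumed compressive strength of ice

heating = C_p * T_f                                   # energy to warm the material through T_f
print(f"C_p*T_f = {heating:.2e} erg/g (vs E_0 = {E_0:.2e} erg/g)")
print(f"ratio   = {heating / E_0:.2f}")               # ~0.56: comparable to the latent heat
print(f"S = {S * 0.1:.1e} Pa = {S * 1e-7:.2f} MPa")   # 1 Ba = 0.1 Pa
```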
Appendix C: Importance of the core's mass-radius (M-R) relation

In our current model, changes in the core's composition are not considered. All the material that reaches the core, whether it is ice or rock, adds to the core mass in the same way. Using the total accreted mass and assuming a constant core density of 3.2 g cm⁻³, we calculate the core radius, and this core radius sets the lower boundary in our atmosphere model. A more realistic model would need to infer the core's radius with an interior model based on the assumed accreted material. We evaluate the influence of the core density on our results by using a gradually decreasing or increasing core density. We do not consider the interaction of rocky material, and for simplicity the rock fraction in the core follows directly from the solid accretion rate. The water/ice mass fraction can have two different sources. First, it can be directly accreted, which happens when there is no fragmentation. Second, there can be a water excess, since not all the water could be deposited in the envelope. We also consider this to be part of the core, although in reality it should be added to the envelope mass at the lower layers, creating an ocean. The upper panel in Figure C.1 shows the core's composition for oligarchic growth at 3 au with Case-2. Because in Case-2 there is no rock in the solid accretion rate, the rock fraction of the core only decreases. The lower panel shows the core composition for Case-3, which by definition always has at least 50% of the accreted material reaching the core directly as rock. The dark blue represents ice that is directly accreted to the core. The light blue is also water added to the core, but water that did not fit in the envelope and was thus moved to lower layers. For both of these water contributions to the core it is unknown what their thermodynamic properties would be. While the directly accreted water would reach the core in the solid phase, a subsequent impact might still vaporize it, thereby also adding water vapor to the envelope. The water which did not fit in the envelope was left over either because the outer layers were too cold for water vapor (small envelope), so that it condensed, or, more commonly, because all layers had reached the maximum metallicity for supercritical water. Figure C.2 shows the M-R relation of the core with a constant density of 3.2 g cm⁻³. For comparison, we also show the mass-radius relationships of planets taken from Zeng et al. (2019).
These M-R relations are for planets including their atmospheres and should thus not be directly compared to the M-R of the core. Regardless, we apply another core density prescription based on a purely rocky planet, as our aim is merely to determine whether the core's radius can affect the results of our formation model. We find that scaling the core density as ρ_c = 3.4 + 2 √(M_c/M⊕) gives a similar M-R relation to the Earth-like rocky composition of Zeng et al. (2019). Changing the core's density has two competing effects. On the one hand, a higher core density leads to a smaller core radius, and this core radius is used as a lower boundary in the atmosphere model. Meanwhile the accretion radius stays the same, so that the volume which the envelope occupies is slightly larger, and thus the envelope can be more massive. This effect, however, is very small, since the core radius is about two orders of magnitude smaller than the accretion radius. On the other hand, a higher core density increases the accretion luminosity, which decreases the amount of gas which can fit within the accretion radius. We apply this increasing core density to the formation with oligarchic growth at 3 au under Case-1, as this is the case that assumes the solid accretion is of pure rock. The final envelope is indeed negligibly larger when using the higher-density core, namely 0.082 M⊕ rather than 0.079 M⊕. We do a similar study for the water-rich core in Case-2, but using an arbitrary scaling of the core's density. Rather than trying to simulate the core's M-R realistically, we simply decrease the density until we see a significant effect in our results. This is achieved when we decrease the density as ρ_c = 3.4 − 2 (M_c/M⊕)^{1/3}. As shown in Figure C.2, this leads to core radii that are more than 1 R⊕ larger than those of pure water planets. Applying such a low core density to the oligarchic formation at 3 au with Case-2 leads to a core mass of 3.45 M⊕ rather than 2.71 M⊕ and an envelope of 0.3 M⊕ instead of 1.36 M⊕. The primordial envelope mass, M_env,H-He, also decreases, from 0.43 M⊕ to 0.07 M⊕. This occurs because a low-density core eventually leads to a slightly smaller envelope, and the fraction of ablation always remains low because there is not enough envelope to replace with water. However, the nominal core density model, with a slightly larger envelope, does reach a point where most of the solids can enrich the envelope, and this causes a great reduction in the accretion luminosity, promoting gas accretion for the final 3 Myr. The mean density that we have applied to a water-dominated core is notably lower than interior models predict (see Figure C.2 and, e.g., Haldemann et al. 2020). We therefore conclude that an extremely low core density can lead to significantly different primordial envelopes. However, we note that our variations in the core's density are merely a parameter study and are not based on realistic interior models. A more realistic core model could be implemented in future work, where it would also need to be considered that rock and water would be mixed in this core (e.g., Vazan et al. 2022). At the same time, it should be noted that the core's M-R relationship is likely to be less important in comparison to the effect of the chemical interactions between the core and envelope (see Subsection 5.1). This would play an important role in determining how the primordial gas is distributed within the planet.
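The density prescriptions above map onto core radii through R_c = (3 M_c / 4π ρ_c)^{1/3}. The sketch below evaluates this for the constant, increasing, and decreasing prescriptions quoted in the text; the comparison is purely numerical and is not the interior modelling of Zeng et al. (2019).

```python
# Core radius for the constant, increasing, and decreasing density prescriptions
# quoted in Appendix C (densities in g cm^-3, core masses in Earth masses).
import numpy as np

M_earth_g = 5.972e27
R_earth_cm = 6.371e8

def core_radius(M_c, rho):
    """Radius of a homogeneous sphere of mass M_c [M_earth] and density rho [g/cm^3], in R_earth."""
    r_cm = (3.0 * M_c * M_earth_g / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    return r_cm / R_earth_cm

prescriptions = {
    "constant":   lambda M: 3.2,
    "increasing": lambda M: 3.4 + 2.0 * np.sqrt(M),        # rock-like fit
    "decreasing": lambda M: 3.4 - 2.0 * M ** (1.0 / 3.0),  # artificially low density
}

for M_c in [0.5, 1.0, 2.0, 3.0]:
    row = {name: core_radius(M_c, f(M_c)) for name, f in prescriptions.items()}
    print(f"M_c = {M_c:3.1f} M_earth: " +
          ", ".join(f"{k} -> {v:.2f} R_earth" for k, v in row.items()))
```

At 3 M⊕ the decreasing prescription gives a density of roughly 0.5 g cm⁻³ and a core radius above 3 R⊕, consistent with the statement above that this choice places the core well above the pure-water mass-radius curve.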
Fig. 4: In-situ formation of a planet at 3 au with rapid growth. The simulations are run until the envelope and core are of equal mass or until 10 Myr. The initial solid surface density is 17.33 g cm⁻² (heavy disk). The upper panels show the mass of the core and the mass of the envelope over time. The lower panel shows the same simulations as in the upper panel, but only the envelope masses over time. Solid lines are the mass from primordial H-He. The dashed lines are the envelope mass from water vapor. In Case-1 there is no water vapor, as only hydrogen and helium are accreted from the nebula.

Fig. 5: The same simulations presented in Figure 4, but showing the heavy-element fraction of the envelope (upper panel) and the primordial envelope mass from pure H-He (lower panel). Both are shown as a function of the total mass (M_core + M_env). The grey dashed lines indicate where primordial envelope mass fractions would be 0.1%, 1%, and 10% of the total mass.

Fig. 6: Same as Figure 4, but for a planet forming in-situ at 3 au by oligarchic growth in a heavy disk.

Fig. 7: Same as Figure 5, but for a planet forming in-situ at 3 au by oligarchic growth in a heavy disk.

Fig. 8: In-situ formation of a planet at 5 au by pebbles. The initial disk conditions are light. The upper left panel shows how the core mass and envelope grow in time. The lower left panel shows the envelope mass, with the separated mass contributions of H-He (solid lines) and H2O (dashed lines).

Fig. 9: Same simulations as shown in Figure 8. Dashed grey lines show H-He envelope mass fractions of 0.1, 0.01, and 0.001.

Fig. 10: H-He mass fraction after 3 Myr (upper panels), 5 Myr (middle panels), and 10 Myr (lower panels). The left panels show in-situ formation at 3 au and the right panels at 5 au. The colors indicate the heavy-element interaction models: Case-1, Case-2, Case-3, and Case-4 are given in red, blue, purple, and yellow, respectively. The three different solid accretion rates are distinguished by different symbols. We also show the light initial disk results by a transparent marker, the medium disk results with a 0.5-opacity marker, and the heavy disk results with a full-opacity marker. The light, medium, and heavy results of the same model are connected by a line, as we would expect that an intermediate disk would produce a final H-He fraction approximately along this line. The total masses in the figure are limited to below 15 M⊕, focusing on the distribution for mini-Neptune-type planets. Planets which reached the crossover mass (M_env = M_core) are not shown, even if their total mass is below 15 M⊕.

Fig. 12: Same data as in Figure 11, but instead of the time, the total mass of the protoplanet is shown when the maximum envelope metallicity is reached.

Fig. 13: Oligarchic growth at 3 au in a heavy disk. The light blue line indicates Case-2 without mixing and the dark blue line is for the same conditions but with mixing (α_mlt = 0.1). When mixing is included, it distributes the H-He through the envelope. As our method replaces H-He with water, mixing increases the efficiency of water deposition. As a result, in the mixing runs there are cases where all the water can be deposited in the envelope. This reduces the accretion luminosity, which leads to an acceleration of the envelope's growth.

Fig. 14: Rapid growth at 3 au in a heavy disk. The light blue line shows a homogeneous deposit of heavy elements, which is the default in our model. The purple line shows how the results change when the heavy elements are directly deposited at the relevant radial distance in the envelope.

Fig. C.1: Core composition during oligarchic growth at 3 au with a heavy initial disk. The upper panel shows Case-2, where only water is accreted. As a result, the rock mass fraction, which comes from the initial model, decreases. The lower panel shows Case-3, where 50% water and 50% rock is accreted. As all the rock directly reaches the core, but the water can evaporate in the envelope, the core always has a rock mass fraction above 50%. The water which is added to the core can either directly reach the core (dark blue regions) or be added because the envelope was saturated (light blue).

Fig. C.2: The orange, light blue, and dark blue lines are mass-radius relationships taken from Zeng et al. (2019), which are (a) 100% rock of Earth-like composition (i.e. 32.5% iron and 67.5% MgSiO3), (b) 50% an Earth-like rocky core and 50% H2O, or (c) 100% pure H2O. The purple line shows the M-R relationship for a constant density of 3.2 g cm⁻³, the default used in this work. The increasing-ρ model is a fit to the rocky core composition. The decreasing-ρ model is the minimum decrease in core density which affects the gas accretion rate. Since the gas accretion rate is only altered when the core's density is significantly lower than interior models predict, we conclude that our assumption of a constant core density does not significantly influence the results.

Table 2: Solid-envelope interaction models considered in this work. All models assume that silicates do not react with the envelope and directly reach the core. The equation of state that mixes water with hydrogen and helium is taken from Müller et al. (2020b) (see their Appendix A for details).

Table D.1: Properties of planets grown by rapid growth at 3 or 5 au (values of M_core, x_water, M_env, and Z_env after 3 Myr and after 10 Myr, or at M_core = M_env). Heavy, Medium, and Light refer to the disk models presented in Table 1.

Table D.2: Same as Table D.1, but for oligarchic growth.

Table D.3: Same as Table D.1, but for pebble growth.
16,573.2
2024-02-16T00:00:00.000
[ "Physics", "Environmental Science" ]
Quantum Gravity, Constant Negative Curvatures, and Black Holes For purposes of quantization, classical gravity is normally expressed by canonical variables, namely the metric $g_{ab}(x)$ and the momentum $\pi^{cd}(x)$. Canonical quantization requires a proper promotion of these classical variables to quantum operators, which, according to Dirac, the favored operators should be those arising from classical variables that formed Cartesian coordinates; sadly, in this case, that is not possible. However, an affine quantization features promoting the metric $g_{ab}(x)$ and the momentric $\pi^c_d(x)\;[\equiv \pi^{ce}(x) \,g_{de}(x)]$ to operators. Instead of these classical variables belonging to a constant zero curvature space (i.e., instead of a flat space), they belong to a space of constant negative curvatures. This feature may even have its appearance in black holes, which could strongly point toward an affine quantization approach to quantize gravity. 1 Basic Canonical and Affine Quantization The essentials of affine quantization A single classical momentum p and coordinate q, with −∞ < p < ∞ but now 0 < q < ∞, and for which the Poisson bracket is still {q, p} = 1, are promoted to operators, p → P and q → Q, where 0 < Q < ∞ is self adjoint, but P can not be self adjoint, i.e., P † = P . Instead of p, we choose to promote pq → (P † Q + QP )/2 ≡ D, a basic operator that is self adjoint, i.e., D † = D, and which satisfies [Q, D] = ih Q. The affine coherent states, chosen for simplicity with q and Q as dimensionless, are given by Stationary variations of normalized vectors |ψ(t) of the quantum action lead to Schrödinger's equation. The enhanced, withh > 0, classical action is given by The connection of H ′ with H ′ is now given by which, whenh → 0, means that H ′ (pq, q) = H ′ (pq, q). Finally, we observe that which does not lead to Cartesian coordinates. Nevertheless. this metric is that of a constant negative curvature, whose value is −2/bh; see, e.g., [3,4]. That means that the affine metric in (8) is just as unique as the constant zero curvature (i.e., flat) metric of canonical quantization (4)! 1 Comparison of canonical and affine quantization The two versions of quantization described above apply to different problems. For example, assuming that −∞ < p < ∞, the Hamiltonian of the harmonic oscillator, e.g., H = (p 2 + q 2 )/2, and −∞ < q < ∞, canonical quantization works and affine quantization fails, while when 0 < q < ∞, affine quantization works and canonical quantization fails [2]. The canonical story involves a constant zero curvature (i.e., a flat space), while the affine story involves a constant negative curvature, whose curvature value is −2/bh. Such spaces are humanly visible only at one point, namely the 'center point' where q = 1 [3,4]. However, when a field is involved, it appears that a visible behavior can occur. The favorable Cartesian coordinates of canonical quantization are different from the favorable affine coordinates of affine quantization. However, extremely close to q = 1 the two versions are effectively the same. For a simple scalar field model, the metrics would be for canonical, while that for affine is given by These curves have the center-point curvature (where −2/B(x) denotes the value of the curvature). It is noteworthy that each point x has a visible center point where ϕ(x) = 1, wherein its negative curvature at that x is given by −2/B(x). Thus, if x is 1 dimensional, this leads to visible spots, or perhaps a line. 
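The statement that the affine phase-space metric has constant negative curvature −2/(bħ) can be checked symbolically. The sketch below assumes the metric takes the form dσ² = (bħ)⁻¹ q² dp² + (bħ) q⁻² dq², which is an assumption based on Klauder's affine coherent states since the explicit expression is not reproduced in the text above; for an orthogonal two-dimensional metric whose coefficients depend only on q, the Gaussian curvature K follows from the standard formula and the scalar curvature is R = 2K.

```python
# Symbolic check (sympy) that the assumed affine metric
#   dsigma^2 = q^2/(b*hbar) dp^2 + (b*hbar)/q^2 dq^2
# has constant scalar curvature R = -2/(b*hbar).
import sympy as sp

p, q, b, hbar = sp.symbols('p q b hbar', positive=True)

E = q**2 / (b * hbar)   # coefficient of dp^2
G = b * hbar / q**2     # coefficient of dq^2

# Gaussian curvature for an orthogonal metric E(q) dp^2 + G(q) dq^2
# (both coefficients independent of p):
K = sp.simplify(-1 / (2 * sp.sqrt(E * G)) * sp.diff(sp.diff(E, q) / sp.sqrt(E * G), q))
R = sp.simplify(2 * K)   # scalar curvature in two dimensions

print("Gaussian curvature K =", K)   # -1/(b*hbar)
print("Scalar curvature   R =", R)   # -2/(b*hbar)
```

With this assumed form the curvature is independent of p and q, consistent with the constant negative curvature quoted in the text.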
If x = (x 1 , x 2 ) is 2 dimensional, this leads to a visible set of those points, or a region where ϕ(x 1 , x 2 ) = 1 with curvatures given by This expression captures the central point of the constant negative curvature where it appears nearly flat, which in fact is the only point that can be realized in our sight of the negative curvature. This behavior holds for each point x in space, thus enabling a field whose visible character is determined by the behavior of the non-dynamical term, B(x). Finally, observe that the canonical coherent states involve a single additional term of q in the expression (3), while, for the affine coherent states, the expression for q involves an exponential term in (7), passing from ln(q) to q, which entails a complete series. Thus, q enters as a product term due to D acting to 'dilate' the result (hence the choice of symbol D). A similar exponential expression also emerges in the gravity coherent states; see [5]. An Affine Quantization of Gravity The basic story of affine quantization, and how it applies to gravity, has been presented in [2] and several references therein; but if the reader wants additional foundations, we can recommend that particular article. Here we sketch an affine point of view for gravity. The classical momentric field, π a b (x) [≡ π ac (x) g bc (x)], and the metric field, g ab (x), become the new basic variables, and these two variables have a joint set of Poisson brackets given by Passing to operator commutations, we are led by suitable coherent states [5] to promote the Poisson brackets to the operators [ĝ ab (x),ĝ cd (x ′ )] = 0 . There are two irreducible representations of the metric tensor operator consistent with these commutations: one where the matrix {ĝ ab (x)} > 0, which we accept, and one where the matrix {ĝ ab (x)} < 0, which we reject. The classical Hamiltonian for our models is given [6] by where (3) R(x) is the 3-dimensional Ricci scalar. For the quantum operators we adopt a Schrödinger representation for the basic operators: specificallŷ g ab (x) = g ab (x) and It follows that the Schrödinger equation is given by where {g} represents the g ab (x) matrix. A closer look at the affine gravity metric Based on the affine gravity coherent states [5], the metric of favorable classical variables is given by which involves constant negative curvatures at every point x. To make that point more clearly, we can arrange to rephrase (16) as follows. Let us introduce a strictly diagonal spatial metric given by where C a (x) ≡ cos(θ a (x)), S a (x) ≡ sin(θ a (x)), and 0 ≤ θ a (x) < 2π. The connection now is given by where T denotes transpose, as well as its complete interchange given by exploiting the fact that 0 a (x)0 a (x) T = 1 = 0 a (x) T 0 a (x) for each a. This leads us to an alternative expression for (16) given by dσ a (π, g) 2 (23) We have chosen a common factor b(x) and its inverse for all three terms. This is proper because the difference among each g [aa] (x) is simply their position along the matrix diagonal, which is a distinction with no physical significance. Black Holes Black holes are scattered around the universe and, loosely speaking, they tend to appear roughly similar. The appearance that pictures show is a cylindrical tube capped by a sprouting growth toward a flat topping; for sample pictures, see [7]. They can be visible due to stars being trapped in their presence. This section attempts to show that their configuration can be approximately seen via special points of their constant negative curvatures. 
Special points on constant negative curvatures Equation (23) involves a wealth of potential constant negative curvatures as part of an affine gravity metric. They appear naturally different than Cartesian variables except at special points where g [aa] (x) = 1 = g [aa] (x) for appropriate x = (x 1 , x 2 , x 3 ). So far we have exploited orthogonal matrices in order to present a more orderly expression for dσ(π, g) 2 a . However, we have not yet explored changes of the underlying space parameterized by x. To consider a change of coordinates, we first suggest a new set of special coordinates, y = (y 1 , y 2 , y 3 ), that approximate a black hole in the equation This equation illustrates a cylindrical tube of approximate radius 1 when, e.g., −6 < y 3 < 0, but this tube blossoms out as y 3 passes through zero and rapidly requires an ever expanding cylindrical radius of (y 2 1 + y 2 2 ) 1/2 as y 3 > 0. 2 We next develop a connection between the x and the y variables. We seek to haveg [aa] (y) = 1 =g [aa] (y), for all [aa], when the three y coordinates satisfy (24). This requirement suggests adopting the three diagonal metric terms, i.e., g [aa] (x) andg [aa] (y), and letting them act as 'three-element vectors' such thatg [aa] (y) = (∂y a /∂x b ) g [bb] (x), with a sum of b with [bb]. Additionally, we change π [aa] to obtainπ [aa] (y) = (∂x b /∂y a ) π [bb] (x), with a sum of b with bb. Likewise, we introduceg [aa] (y) = (∂x b /∂y a ) g [bb] (x). It follows that properly summed. A similar relation is all of which leads to dσ a (π, g) 2 (27) The equation y 2 1 + y 2 2 − e y 3 = C, where 1 ≤ C and −6 < y3, covers more of an idealization of a black hole and its surroundings. We next exam the expressionb(y) ≡ b(x) for which the constant negative curvature is given by −2/b(y)h. Assuming that one point of space, either denoted by x or by y, is the same as any other point of space, we are led to treat this term as similar, therefore, just as a constant, to be called b. Therefore, our final expression for the affine quantum metric is given by dσ a (π, g) 2 (28) The choice of y for the background space has been so as to secure that the values ofg [aa] (y) = 1 =g [aa] (y) when the three coordinate values obey the idealized black hole behavior y 2 1 + y 2 2 − e y 3 = 1, or a portion thereof. If that is so, then the results for (28) have the appearance of being Cartesian coordinates, and thus may be seen as if they were actual Cartesian coordinates. Thus there would seem to be a sliver of nature that permits one to see starlight emitted along this 'crack' in a black hole. Summary The quantization of Einstein's version of classical gravity is possible using affine quantization instead of canonical quantization. Canonical quantization can not employ the promotion of classical Cartesian variables, as Dirac requires [1], because the classical gravity variables of phase space do not contain such variables [8]. Fortunately, affine quantization lends itself toward fundamental affine variables that instead of a constant zero curvature (i.e., a flat space), have a constant negative curvature. An affine quantization of gravity has been partially developed (see [2,5,9]) and it is ripe for additional analysis. In the present paper we have tried to bring the affine gravity metric -Eqs. (16), (23), (27), and (28) -into relation with physical expressions that are principally aimed at assigning such relations to black holes. While that aspect has been modestly treated, our analysis is open for further development.
2,675.4
2020-04-16T00:00:00.000
[ "Physics" ]
The Biosynthesis of Lipooligosaccharide from Bacteroides thetaiotaomicron ABSTRACT Lipopolysaccharide (LPS), a cell-associated glycolipid that makes up the outer leaflet of the outer membrane of Gram-negative bacteria, is a canonical mediator of microbe-host interactions. The most prevalent Gram-negative gut bacterial taxon, Bacteroides, makes up around 50% of the cells in a typical Western gut; these cells harbor ~300 mg of LPS, making it one of the highest-abundance molecules in the intestine. As a starting point for understanding the biological function of Bacteroides LPS, we have identified genes in Bacteroides thetaiotaomicron VPI 5482 involved in the biosynthesis of its lipid A core and glycan, generated mutants that elaborate altered forms of LPS, and used matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) mass spectrometry to interrogate the molecular features of these variants. We demonstrate, inter alia, that the glycan does not appear to have a repeating unit, and so this strain produces lipooligosaccharide (LOS) rather than LPS. This result contrasts with Bacteroides vulgatus ATCC 8482, which by SDS-PAGE analysis appears to produce LPS with a repeating unit. Additionally, our identification of the B. thetaiotaomicron LOS oligosaccharide gene cluster allowed us to identify similar clusters in other Bacteroides species. Our work lays the foundation for developing a structure-function relationship for Bacteroides LPS/LOS in the context of host colonization. or cell wall that are architecturally similar although chemically different among bacterial taxa-lipopolysaccharides (LPSs), lipoteichoic and wall teichoic acids, mycolic acids, and muramyl dipeptides are key examples. Their ubiquity on the cell surface makes them excellent targets for bacterial detection by innate immune receptors, including Toll-like receptors and NOD proteins (2). However, a longstanding question remains: how do innate immune cells "know" whether the bacterial cell that they encounter is a mutualist or a pathogen and "decide" how to respond appropriately? Part of the answer likely involves unique strain-specific chemical signatures within these cellassociated molecules. Lipopolysaccharide (LPS) is a canonical cell-associated glycolipid. The interaction between LPS and host Toll-like receptor 4 (TLR4) is a paradigm for immunologic sensing of Gram-negative bacteria. LPS is generally composed of a lipid anchor (termed lipid A), a core oligosaccharide region, and a polysaccharide repeating unit called the O antigen. The core oligosaccharide and O antigen are typically biosynthesized from separate gene clusters, while lipid A biosynthetic genes are distributed throughout the genome (3). The chemical structure of LPS varies considerably among species, and these differences in structure are relevant to function. For example, Yersinia pestis deacylates its lipid A when infecting humans, thus avoiding detection by TLR4 (4). Helicobacter pylori elaborates its O antigen with Lewis antigens to mimic host glycans (5,6). More drastic changes in overall structure have also been observed. Species of Neisseria produce an LPS variant, known as lipooligosaccharide (LOS), which has a more elaborate core oligosaccharide in place of the conventional O antigen (7,8). Notably, almost everything known about the biosynthesis, structure, and function of LPS comes from studies of "conventional" pathogens. Remarkably little is known about LPS from commensal organisms and its importance to host innate immunity. 
Among the glycolipids found in the gut microbiome, Bacteroides LPS is of particular interest. Bacteroides and, in ~10% of humans, its relative Prevotella are the only high-abundance Gram-negative bacterial genera in the gut. Bacteroides as a genus makes up ~50% of the typical Western gut community (9). Notably, the species distribution within that 50% is highly variable between individuals (10). Different Bacteroides species have been reported to produce LPS molecules with distinct architectures based on their banding pattern on an SDS-PAGE gel, suggesting that each Bacteroides species has the potential to influence innate immunity in its own way (11, 12). Bacteroides LPS is already known to have a different lipid A structure than "pathogenic" LPS: Bacteroides thetaiotaomicron, Bacteroides fragilis, and Bacteroides dorei produce penta-acylated, monophosphorylated lipid A, in contrast to the hexa-acylated, diphosphorylated lipid A from Escherichia coli (13-16). With a recent exception reporting a B. thetaiotaomicron lipid A phosphatase, very little is known about the biosynthetic genes involved in Bacteroides LPS biogenesis (17). It takes as little as 50 ng of E. coli LPS injected intravenously into a mouse to cause septic shock (18). In contrast, given our laboratory purification yield of approximately 10 mg B. thetaiotaomicron LPS per 1 liter of confluent culture and assuming ~7 × 10^11 bacteria per liter in vitro and 20 trillion Bacteroides cells per individual, we estimate that a typical Western human gut contains ~300 mg of Bacteroides LPS, likely making it one of the highest-abundance bacterially derived molecules present (19). We set out to better define the biosynthesis and structure of Bacteroides LPS as a starting point for understanding and manipulating the interaction between Bacteroides and the mammalian immune system.

RESULTS AND DISCUSSION

Characterization of the Bacteroides lipid A core. In order to identify candidate biosynthetic genes for Bacteroides lipid A, we performed BLAST searches against Bacteroides genomes using, as queries, the E. coli MG1655 lipid A biosynthesis genes (20). E. coli normally produces a lipid A molecule that has six acyl chains and two phosphate groups, as shown in Fig. 1A, labeled "Kdo2-lipid A." As expected, orthologs of each Raetz pathway enzyme were identified, except that the Bacteroides species had only one ortholog of the acyltransferases LpxL and LpxM; we refer to this ortholog as LpxL for simplicity. LpxL and LpxM are responsible for adding the fifth and sixth acyl chains to E. coli lipid A, so the presence of only one of these acyltransferases in Bacteroides genomes is consistent with published reports that B. thetaiotaomicron, B. fragilis, and Bacteroides dorei lipid A is penta-acylated rather than hexa-acylated (13-15). Bacteroides vulgatus was the only surveyed species to have a second LpxL/LpxM homolog, BVU_1014. Previous work indicates that this gene is part of an aryl polyene gene cluster, indicating that it likely transfers an acyl chain to a non-LPS substrate (21).

Fig. 1: (A) The Raetz pathway, where starting material UDP-GlcNAc is acylated, glycosylated, and phosphorylated by a series of nine biosynthetic enzymes to produce 3-deoxy-D-manno-octulosonic acid 2 (Kdo2)-lipid A. E. coli produces lipid A that is phosphorylated at both the 1 and 4′ positions on the diglucosamine backbone, but the lipid A 1- and 4′-phosphatases LpxE and LpxF have been identified in other bacteria and are thought to act after biosynthesis of Kdo2-lipid A is complete. (B) Locus tags of homologs of the Raetz pathway genes in E. coli MG1655 and of lpxE and lpxF in Porphyromonas gingivalis W83 from a selection of Bacteroides species. Bacteroides has homologs for every gene in the pathway except that it has only one secondary acyltransferase, suggesting that its lipid A is predominantly penta-acylated. The species vary more in their putative homologs of the P. gingivalis phosphatases.
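An ortholog search of this kind can be reproduced with standard tools. The sketch below is illustrative only; the file names and the protein database built from a Bacteroides proteome are placeholders and not the authors' exact pipeline. It runs blastp with E. coli MG1655 Lpx protein sequences as queries and keeps the best hit per query.

```python
# Illustrative ortholog search: E. coli lipid A (Raetz pathway) proteins vs. a
# Bacteroides proteome, using NCBI BLAST+ (blastp must be installed and on PATH).
# File and database names below are placeholders, not the authors' actual inputs.
import subprocess

QUERIES = "ecoli_MG1655_lpx_proteins.faa"     # LpxA, LpxC, LpxD, ... as FASTA
SUBJECT_DB = "b_thetaiotaomicron_VPI5482"     # made with: makeblastdb -dbtype prot -in ...

result = subprocess.run(
    [
        "blastp",
        "-query", QUERIES,
        "-db", SUBJECT_DB,
        "-evalue", "1e-5",                    # permissive enough for distant homologs
        "-max_target_seqs", "5",
        "-outfmt", "6 qseqid sseqid pident length evalue bitscore",
    ],
    capture_output=True, text=True, check=True,
)

# Keep the single best hit (highest bit score) per query protein.
best = {}
for line in result.stdout.splitlines():
    qseqid, sseqid, pident, length, evalue, bitscore = line.split("\t")
    if qseqid not in best or float(bitscore) > float(best[qseqid][-1]):
        best[qseqid] = (sseqid, pident, length, evalue, bitscore)

for query, hit in sorted(best.items()):
    print(query, "->", hit)
```

Candidates recovered this way would still need the experimental validation described below before any functional assignment is made.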
We next used BLAST to predict lipid A phosphorylation by sequence homology to the lipid A 1- and 4′-phosphatases discovered in Porphyromonas gingivalis (22, 23) (Fig. 1). While this search resulted in only one candidate for some species, like B. thetaiotaomicron, for others there were multiple candidates, and experimental validation will be necessary to conclude which, if any, perform the predicted function. In order to determine the lipid A profile of each species, we isolated lipid A from five common Bacteroides species using the TRI reagent method and characterized their lipid A profile by matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS) (24). Consistent with previous reports, the structures of B. thetaiotaomicron VPI 5482 and B. fragilis NCTC 9343 lipid A are penta-acylated and monophosphorylated, with their MALDI spectra showing a cluster of peaks around 1,688 m/z (13, 14). Moreover, Bacteroides uniformis ATCC 8492, B. vulgatus ATCC 8482, and Bacteroides ovatus ATCC 8483 produce lipid A with virtually identical mass spectra (Fig. 2). Because the bacteria were grown in rich medium under normal anaerobic growth conditions, we cannot be certain that the structure of their lipid A is the same under conditions of host colonization, nor do we know whether it can change in response to stresses encountered in the host.

Characterization of late biosynthetic genes in B. thetaiotaomicron lipid A biosynthesis: acylation and dephosphorylation. Because lipid A is typically an essential component of the outer membrane of Gram-negative bacteria, deletion of genes in the lipid A biosynthetic pathway is frequently lethal to bacteria (25-30). Interestingly, the later biosynthetic genes, such as those for the acyltransferases (lpxL and lpxM) and phosphatases (lpxE and lpxF), can often be deleted (31). Working in B. thetaiotaomicron VPI 5482 Δtdk, our background strain for genetic knockouts lacking the thymidine kinase gene tdk, we made scarless single deletions of the putative lpxL and lpxF orthologs that we had identified by BLAST search (BT2152 and BT1854, respectively) and isolated lipid A from the resulting mutants. MALDI-TOF analysis showed a loss of 224 Daltons in the ΔlpxL mutant, consistent with the loss of a 15-carbon acyl chain, and a gain of 80 Daltons in the ΔlpxF mutant, indicating the addition of a phosphate group (Fig. 3). The assignment of BT1854 as the B. thetaiotaomicron lipid A 4′-phosphatase represents independent confirmation of a result first reported by Goodman and coworkers (17). Lipid A acylation and phosphorylation are important both for bacterial membrane physiology and for the interaction between LPS and host TLR4. A ΔlpxL ΔlpxM double mutant of E. coli MG1655, which lacks the fifth and sixth acyl chains on lipid A and yields a tetra-acylated, diphosphorylated molecule referred to as lipid IVA, cannot grow above 32°C (Fig. 1A) (32).
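The 224-Da and 80-Da shifts reported above can be checked with simple average-mass bookkeeping. In the sketch below the ΔlpxL loss is modeled as an ester-bound C15:0 fatty acyl chain (pentadecanoic acid minus water) and the ΔlpxF gain as one HPO3 group; the precise chemical identity of the lost acyl chain is an assumption made only for illustration, and the predicted mutant m/z values are derived from the ~1,688 m/z wild-type cluster quoted in the text.

```python
# Average-mass bookkeeping for the lipid A MALDI shifts described above.
# Assumes the ΔlpxL loss corresponds to an ester-bound C15:0 acyl chain
# (C15H30O2 minus H2O) and the ΔlpxF gain to one HPO3 group.
AVG = {"C": 12.011, "H": 1.008, "O": 15.999, "P": 30.974}

def mass(formula):
    """Average mass from a dict of element counts."""
    return sum(AVG[el] * n for el, n in formula.items())

c15_acid  = mass({"C": 15, "H": 30, "O": 2})   # pentadecanoic acid
water     = mass({"H": 2, "O": 1})
acyl_loss = c15_acid - water                    # mass removed from the glycolipid (~224 Da)
phosphate = mass({"H": 1, "P": 1, "O": 3})      # HPO3 (~80 Da)

wild_type = 1688.0                              # approx. wild-type [M-H]- peak cluster
print(f"C15 acyl loss: {acyl_loss:6.1f} Da -> ~{wild_type - acyl_loss:.0f} m/z expected for the lpxL mutant")
print(f"HPO3 gain    : {phosphate:6.1f} Da -> ~{wild_type + phosphate:.0f} m/z expected for the lpxF mutant")
```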
The crystal structure of the TLR4/MD-2 complex with E. coli lipid A suggests that the number of acyl chains and the number and position of phosphate groups on the molecule may affect binding affinity to the receptor and possibly receptor dimerization (33). E. coli lipid IV A is capable of inhibiting TLR4 activation by wild-type E. coli lipid A (34). This tetra-acylated diphosphorylated scaffold has been used in the design of eritoran, a lipid A mimic developed as a TLR4 antagonist as a potential treatment for sepsis (35). Additionally, lipid A mutants in E. coli differentially stimulate NF-B production in a THP-1 reporter cell line (36). We hypothesize that identifying lipid A biosynthesis genes in Bacteroides will allow us to make mutants that may have different immunostimulatory abilities and could be used to control innate immune responses in a host. Toward this goal, we have generated a B. thetaiotaomicron ΔlpxL ΔlpxF double mutant that elaborates tetraacylated, diphosphorylated lipid A, which we anticipate will be a TLR4 antagonist (see Bacteroides Lipooligosaccharide Biosynthesis ® O55:B5 and E. coli MG1655 LPS to confirm the previous observation that B. vulgatus produces LPS exhibiting a "laddered" pattern on a gel like E. coli O55:B5 does, whereas B. thetaiotaomicron LPS does not (11,12). The laddering pattern is of note because it indicates the presence of an O antigen; the number of repeating units added to the core oligosaccharide is variable, so the result is a population of LPS molecules of different sizes. As shown in Fig. 4, B. vulgatus LPS appears to have an O antigen based on the laddered pattern of its LPS, but B. thetaiotaomicron instead appears to synthesize a small number of structures that we propose are more likely to be lipooligosaccharides (LOSs) due to their apparent lack of an O antigen. Here, we will refer to the B. thetaiotaomicron outer membrane glycolipid as a LOS rather than LPS. Recent work on the human-associated bacteria H. pylori and Hafnia alvei indicates that the glycan portion of LPS from these strains can interact with C-type lectin receptors DC-SIGN and dectin-2, respectively, to influence a dendritic cell's cytokine output (37,38). Engagement of this class of receptors by a human-associated bacterium lacking an O antigen has not been investigated, however, and so we sought to identify the biosynthetic route for the oligosaccharide component of B. thetaiotaomicron LOS to explore its apparent lack of an O antigen and provide tools for manipulating the glycan structure. Identification of the gene cluster for LOS oligosaccharide biosynthesis in B. thetaiotaomicron. Previous work analyzing biosynthetic gene clusters from the NIH Human Microbiome Project data indicated that the phylum Bacteroidetes contains the largest number of predicted saccharide-producing gene clusters (39). B. thetaiotaomicron alone is known to harbor eight gene clusters responsible for making different capsular polysaccharides (CPSs) (40,41). We first wanted to determine whether any of these CPS gene clusters influenced the assembly of LOS. We isolated LOS from a B. thetaiotaomicron mutant constructed by Martens and coworkers in which all eight of its CPS clusters have been deleted (labeled the ΔCPS strain), as well as eight additional strains that each possess only one CPS cluster (CPS1-only, CPS2-only, and so on) (42). By SDS-PAGE analysis, we determined that neither deletion nor expression of CPS clusters affects the banding pattern of B. 
thetaiotaomicron LOS, indicating that these gene clusters do not encode the biosynthetic machinery for synthesis of B. thetaiotaomicron LOS (Fig. S2). An independent lead came from a recently published report: by screening a B. thetaiotaomicron transposon library using an antibody that binds the bacterial cell surface of B. thetaiotaomicron, Peterson et al. identified a gene cluster that they predicted might be responsible for the biosynthesis of the B. thetaiotaomicron LPS O antigen, due to a lack of antibody binding when the cluster was disrupted and the annotated functions of several of the genes within the cluster (43). Surprisingly, nine out of the 13 transposon mutants that did not bind the antibody had insertions in genes in the same gene cluster, BT3362 to BT3380 (Fig. 5A). No transposon insertions were obtained in the first three genes of the cluster, indicating that these genes might be essential or their deletion might lead to the accumulation of a toxic intermediate. Intrigued, we obtained a subset of these transposon mutants and analyzed LOS isolated Bacteroides Lipooligosaccharide Biosynthesis from each mutant by SDS-PAGE (Fig. 5B). Each mutant produced LOS with a banding pattern that appeared different from that of the wild type, except for the mutant with an insertion in the final gene in the cluster, BT3380. These data suggest that BT3362 to BT3380 encode the biosynthesis of the B. thetaiotaomicron LOS oligosaccharide. Additionally, they support our hypothesis that bands observed on the SDS-PAGE gel are glycans that do not have a single polymerized repeating unit but rather are variants of a heterogeneous oligosaccharide. Furthermore, the predicted function of the genes within the BT3362 to BT3380 cluster also supports this conclusion. This cluster bears some resemblance to the waa core oligosaccharide gene clusters characterized in E. coli, with the first gene, BT3362, sharing homology with the genes for heptosyltransferases WaaC and WaaQ (44). Overall, the cluster possesses 13 predicted glycosyltransferases (BT3362-BT3363, BT3365 to BT3372, BT3377, and BT3379-BT3380), a putative LPS kinase (BT3363), four tailoring enzymes (BT3373 to BT3376), and a GtrA-like protein (BT3378). It is unclear what specific structural modifications BT3373 to BT3376 might make based solely on sequence homology. Proteins in the GtrA-like family are typically integral membrane proteins that are thought to play a role in the transport of cell surface polysaccharides (45,46). Intact LPS MALDI-TOF MS as a diagnostic tool for LOS mutants. While the SDS-PAGE analysis of LOS from the transposon mutants implicates BT3362 to BT3380 in B. thetaiotaomicron LOS oligosaccharide biosynthesis, we wanted to increase the resolution of our analysis using mass spectrometry (MS). The LPS/LOS bands on an SDS-PAGE gel are approximations of the sizes of molecules in a sample, and using mass spectrometry would allow us to gain a clearer picture of the molecules made by the transposon mutants. Although it is easier to analyze LPS/LOS by MALDI-TOF after removing the O-and/or N-linked acyl chains via hydrazine or hydrogen fluoride treatment, we chose to analyze intact LOS molecules that were not subjected to chemical degradation or derivatization. We reasoned that LOS from B. thetaiotaomicron may contain important functional groups in the oligosaccharide chain that could be removed by these treatments, further complicating our efforts to elucidate detailed structural information about B. thetaiotaomicron LOS. 
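When intact-LOS spectra of truncation mutants are compared, most of the interpretation comes down to matching the spacings between peaks to the average masses of common glycan building blocks. The helper below sketches that bookkeeping; the residue masses are standard average values, and the example peak list uses the kind of values discussed in the following paragraphs rather than a complete data set.

```python
# Helper for interpreting spacings between intact-LOS MALDI peaks: compare observed
# mass differences with average residue masses of common LOS building blocks.
# Residue masses are standard average values; the peak list is only an example.
from itertools import combinations

RESIDUES = {               # average masses of residues / adducts, in Da
    "hexose": 162.1,
    "heptose": 192.2,
    "Kdo": 220.2,
    "phosphate (HPO3)": 80.0,
    "acetyl": 42.0,
}

def assign_differences(peaks, tol=1.5):
    """Yield (low, high, diff, label) for peak pairs whose spacing matches
    an integer number (1-4) of a single residue type within `tol` Da."""
    for lo, hi in combinations(sorted(peaks), 2):
        diff = hi - lo
        for name, m in RESIDUES.items():
            for n in range(1, 5):
                if abs(diff - n * m) <= tol:
                    yield lo, hi, diff, f"{n} x {name}"

example_peaks = [2961.0, 3284.0, 3608.0]   # example truncation-mutant peak values
for lo, hi, diff, label in assign_differences(example_peaks):
    print(f"{lo:.0f} -> {hi:.0f}: delta = {diff:.0f} Da ~ {label}")
```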
We adapted a previously published strategy for analyzing intact LPS/LOS by MALDI-TOF (47,48). LPS/LOS analysis is typically challenging due to difficulties in inducing the glycolipid to ionize because of its size and polarity. We chose three transposon mutants that appeared by SDS-PAGE analysis to be truncated to various degrees (B. thetaiotaomicron tn3365, B. thetaiotaomicron tn3368, and B. thetaiotaomicron tn3376), along with LOS isolated from the B. thetaiotaomicron ΔCPS strain. The ΔCPS strain has wild-type LOS biosynthetic genes, but its lack of CPS yielded mass spectra with a cleaner background. The transposon mutants were created in the background of wild-type B. thetaiotaomicron, and so CPS is present in those preparations. In LOS from the ΔCPS strain, we observed a cluster of peaks around 5,209 m/z, the largest mass detected for any of the samples (Fig. 6). Although instrument-specific limitations prevented us from obtaining a resolution as high as others have observed, the limited degree of resolution that we achieved was sufficient for confirming the approximate masses of LOS molecules in the sample and comparing them to those of the truncated mutants (8,48). In addition to the peak that we propose corresponds to full-length LOS at 5,209 m/z, the ΔCPS sample has additional peaks at 3,284 and 3,017 m/z, which are likely intermediate species created by partial completion of the biosynthetic pathway. The most truncated transposon mutant in our set, tn3365, has a single predominant peak at 2,961 m/z. LOS from tn3368 does not have this 2,961 m/z peak but instead has peaks at 3,284 m/z (as in the ΔCPS sample) and 3,608 m/z. Interestingly, 2,961, 3,284, and 3,608 m/z are separated from one another by ~324 Daltons. The expected mass of a hexose is ~162 Daltons, and so we predict that tn3368 makes two LOS molecules that are two and four hexose units longer than the molecule made by tn3365. Finally, LOS from the least truncated transposon mutant that we assayed, tn3376, has its largest peaks around 4,497 and 4,295 m/z, as well as the 3,284 m/z peak that is common to both the tn3368 and the ΔCPS samples. The mass differences between tn3368 and tn3376, as well as between tn3376 and ΔCPS, do not suggest a structural difference as straightforward as the addition of hexoses between tn3365 and tn3368. Given the presence of genes encoding tailoring enzymes in the LOS biosynthetic cluster, we expect that the longer LOS species have modifications like phosphorylation, acetylation, or carbamoylation. All of the transposon mutants have additional peaks between 1,500 and 3,000 m/z that presumably derive from CPS, since these peaks are absent in the ΔCPS sample but present in LOS isolated from the B. thetaiotaomicron Δtdk mutant (Fig. S3). Additionally, every sample has a cluster of peaks around 1,688 m/z representing lipid A, which likely derives from in-source fragmentation (48). With the goal of confirming these results using clean deletion mutants rather than the transposon mutants, we deleted as much of the cluster as we could: BT3363 and BT3365 to BT3380 (we could not obtain mutants of BT3362 and BT3364, consistent with the lack of transposon insertions in these genes) (43). Surprisingly, when we compared LOSs purified from this strain and from the tn3365 mutant on an SDS-PAGE gel, the banding pattern of B. thetaiotaomicron ΔBT3363 ΔBT3365-BT3380 LOS appeared to resemble that of wild-type LOS. However, when we subjected the B. thetaiotaomicron ΔBT3363 ΔBT3365-BT3380 LOS to MALDI-TOF analysis, the mass of the intact glycolipid was around 4,870 Daltons, smaller than the 5,209 Daltons seen in wild-type LOS (Fig. S4). These data suggest that there might be a compensatory mechanism in which the bacterium is able to glycosylate truncated forms of the LOS molecule. (Fig. 6 legend, in part: LOS from the B. thetaiotaomicron ΔCPS, tnBT3365, tnBT3368, and tnBT3376 strains was isolated using the large-scale LPS/LOS extraction method. The resulting material was desalted and spotted on a THAP-nitrocellulose matrix for analysis on a Shimadzu Axima Performance MALDI-TOF mass spectrometer in linear negative-ion mode. LOS peaks of interest are colored, with each color representing a different LOS species present in the different strains. Peaks colored gray are likely derived from capsular polysaccharide, in the case of those that do not appear in the B. thetaiotaomicron ΔCPS spectrum, or from lipid A, in the case of the peak clusters around 1,650 to 1,700 m/z.) Peterson et al. similarly predicted that B. thetaiotaomicron may have the ability to produce an O antigen when this gene cluster is disrupted because the colony morphology of their transposon mutants did not differ from that of wild-type B. thetaiotaomicron (43). However, by our intact LOS MALDI-TOF analysis, we see evidence for new oligosaccharide production only when the cluster is cleanly deleted, rather than when single genes are disrupted by transposon insertion. Our result highlights the limitations of SDS-PAGE analysis in determining structural differences between LOS samples (Fig. S5). Further studies need to be conducted to understand whether B. thetaiotaomicron has an alternative lipid A glycosylation pathway that is unmasked in the absence of most of the LOS oligosaccharide gene cluster. Predicting other Bacteroides LOS oligosaccharide gene clusters. Having identified the probable B. thetaiotaomicron LOS biosynthetic gene cluster, we hypothesized that it could be used to identify candidate LOS and LPS gene clusters in other Bacteroides species. We used the two essential genes in the cluster, BT3362 (a putative heptosyltransferase) and BT3364 (a putative LPS kinase), as queries in BLAST searches against other Bacteroides genomes and were able to identify similar clusters in many Bacteroides species (Fig. 7). Given that a homologous cluster is found in B. vulgatus, which elaborates a laddered LPS, we expect that B. vulgatus harbors an additional cluster encoding the biosynthesis of the O antigen repeating unit. This would likely be attached to the product of the B. thetaiotaomicron-like core oligosaccharide. Our results are a first step in characterizing what we expect will be a large amount of biosynthetic and structural heterogeneity among Bacteroides LPS or LOS molecules. Given our understanding of the privileged role that glycolipids play in communicating with the mammalian immune system and the sheer quantity of LPS in the gut, Bacteroides LPS/LOS molecules are likely to be critical mediators in the interaction between commensal microbes and the host. By understanding how these molecules are made, we gain the possibility of manipulating their structure and by extension the host's immune response. MATERIALS AND METHODS The calibration standards angiotensin II, renin substrate, insulin chain B, and bovine insulin have [M-H]− ion masses of 1,044.5267, 1,756.9175, 3,492.6357, and 5,728.5931 m/z, respectively (51). The standards were dissolved together in a solution of 0.1% trifluoroacetic acid in water. 
Because the standard mixture was not in the same solvent as the CMBT matrix mentioned above, 1 µl of CMBT matrix was spotted on the target and allowed to dry before 1 µl of the standard mixture was spotted on top of the matrix. MS data were collected between 400 and 5,000 m/z, and the resulting spectra were smoothed and baseline corrected using MassLynx software. Large-scale LPS/LOS extraction. LPS or LOS was isolated from whole bacteria using the hot phenol-water method (52). Briefly, bacteria were grown overnight in a 10-ml culture and then expanded to 2 liters. Cells were harvested once cultures reached an optical density (OD) of at least 0.7 and pelleted by centrifugation at 6,000 × g for 30 min at 4°C. The entire wet cell pellet from the 2-liter culture was suspended in 20 ml water. Separately, the cell suspension and 20 ml of 90% phenol solution in water were each brought up to 68°C with stirring. Once at temperature, the phenol solution was slowly added to the cell suspension. The mixture was stirred vigorously for 30 min at 68°C and then cooled rapidly in an ice water bath for 10 min. The sample was centrifuged at 15,000 × g for 45 min, and the upper aqueous layer was transferred into 1,000-molecular-weight-cutoff (MWCO) dialysis tubing. The sample was dialyzed against 4 liters of water for 4 days, changing the water twice per day. LPS/LOS was pelleted out of the dialysate by ultracentrifugation at 105,000 × g for 4 h. The pellet was resuspended in water and treated with RNase A (Thermo Fisher), DNase I (New England Biolabs), and proteinase K (Thermo Fisher) before repeating the ultracentrifugation step. The pellet was resuspended in water, lyophilized, and stored at −20°C. LPS/LOS samples that were prepared by this method include the B. thetaiotaomicron Δtdk strain in Fig. 4, all samples in Fig. 6, and all samples in Fig. S3, S4, and S5. Microscale LPS/LOS extraction. The microscale LPS/LOS extraction was used when a large number of samples was needed for SDS-PAGE analysis. In this method, adapted from the work of Marolda and coworkers, bacteria were grown to mid-log phase in 5 ml of medium and pelleted (53). Cell pellets were resuspended in 150 µl lysis buffer (0.5 M Tris-hydrochloride, pH 6.8, 2% SDS, 4% β-mercaptoethanol) and boiled at 100°C for 10 min. Proteinase K was added to each sample before incubation at 60°C for 1 h. The sample temperature was raised to 70°C, and 150 µl prewarmed 90% phenol in water was added. Samples were vortexed three times at 5-min intervals during a 15-min incubation. The samples were immediately cooled on ice for 10 min and centrifuged at 10,000 × g for 1 min. The aqueous layer (~100 µl) was pipetted into a clean tube, and 5 volumes of ethyl ether saturated with 10 mM Tris-hydrochloride (pH 8.0) and 1 mM EDTA was added. The samples were vortexed and centrifuged, and the aqueous layer was removed to a clean tube. An appropriate amount of 3× loading dye (0.187 M Tris-hydrochloride, pH 6.8, 6% SDS, 30% glycerol, 0.03% bromophenol blue, 15% β-mercaptoethanol) was added, and the samples were stored at −20°C. LPS/LOS samples that were prepared by this method include B. vulgatus in Fig. 4, all samples in Fig. 5, and all samples in Fig. S2. SDS-PAGE analysis of LPS. To visualize LPS/LOS on an SDS-PAGE gel, we used Novex 16% Tricine protein gels (1.0 mm, 12 wells) and 10× Novex Tricine SDS running buffer (Thermo Fisher) (12). 
For samples prepared by the LPS/LOS microscale extraction, 15 µl of the resulting aqueous layer mixed with 3× loading dye was added to each lane. For samples prepared by LPS/LOS large-scale extraction or purchased from InvivoGen, 2.5 µg of material was resuspended in 15 µl 1× loading dye and the whole volume was added to a lane. Gels were run at 125 V for 90 min at room temperature, stained with Pro-Q Emerald 300 lipopolysaccharide gel stain (Thermo Fisher) per the manufacturer's instructions, and imaged on a Bio-Rad Gel Doc EZ Imager using the SYBR green filter. MALDI-TOF mass spectrometry analysis of intact LOS. (i) Sample and matrix preparation. To detect intact LPS by MALDI-TOF MS, we closely followed the technique developed by Phillips et al. adapted to study Neisseria lipooligosaccharides (48). One milligram of lyophilized LOS was dissolved in 100 µl 1:3 methanol-water with 5 mM EDTA. Cation exchange beads (Dowex 50WX8, 200 to 400 mesh) were converted to the ammonium form and deposited into 1.5-ml tubes before desalting the LOS. Each sample suspension was added to the beads, vortexed, and centrifuged briefly to pellet the beads. The sample was removed to a clean tube and mixed 9:1 with 100 mM dibasic ammonium citrate before spotting on the target. The matrix was made by mixing a 15-mg/ml solution of nitrocellulose membrane in 1:1 isopropanol-acetone with a 200-mg/ml solution of 2′,4′,6′-trihydroxyacetophenone (THAP) in methanol in a 1:3 ratio. The matrix was deposited by pipetting 1 µl within an inscribed circle on the target (Shimadzu) and allowed to dry. Once the matrix had dried completely, 1 µl of the sample preparation was added on top of the matrix and allowed to dry. (ii) Negative-ion MALDI-TOF MS. MALDI-TOF MS analysis was performed on a Shimadzu Axima Performance mass spectrometer with an N2 laser in linear negative-ion mode. It was calibrated using the same solution of four standards that was used to calibrate the Waters Synapt G2 for lipid A analysis (angiotensin II, renin substrate, insulin chain B, and bovine insulin in 0.1% trifluoroacetic acid), and the standards were spotted as described above except on the THAP-nitrocellulose matrix. MS data were collected between 700 and 7,000 m/z, and the resulting spectra were smoothed and baseline corrected using Shimadzu Biotech Launchpad software. ACKNOWLEDGMENTS We are deeply indebted to Colleen O'Loughlin and members of the Fischbach group for helpful comments on the manuscript, Daniel Peterson for providing the B. thetaiotaomicron transposon mutants, and Nancy Phillips and Constance John for their guidance on mass spectrometry analysis of lipid A and LOS. Intact LOS mass spectrometry analysis was completed using the Shimadzu Axima Performance MALDI-TOF instrument in the laboratory of William DeGrado at the University of California, San Francisco. Lipid A mass spectrometry analysis on the Waters Synapt G2 HDMS 32k instrument was performed at the University of California, San Francisco Sandler-Moore Mass Spectrometry Core Facility.
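The hexose bookkeeping used above to interpret the intact-LOS spectra (the roughly 324 Da spacing between the tn3365 and tn3368 peaks read as additions of two roughly 162 Da hexose units) can be reproduced in a few lines. The following sketch is illustrative only; the 162.14 Da residue mass and the 2% tolerance are assumptions chosen for low-resolution linear-mode spectra, not values taken from the study.

# Interpret mass differences between intact-LOS MALDI peaks as hexose additions.
HEXOSE = 162.14              # assumed average mass of one dehydrated hexose unit (Da)
TOLERANCE_FRACTION = 0.02    # accept deviations up to 2% of the expected spacing

def hexose_steps(reference_mz, peak_mz):
    """Return the integer number of hexose units separating two peaks,
    or None if the spacing is not close to a whole number of hexoses."""
    delta = peak_mz - reference_mz
    n = round(delta / HEXOSE)
    if n == 0:
        return 0
    if abs(delta - n * HEXOSE) <= abs(n * HEXOSE) * TOLERANCE_FRACTION:
        return n
    return None

# Peaks reported in the text (m/z): tn3365 at 2,961; tn3368 at 3,284 and 3,608.
for mz in (3284.0, 3608.0):
    print(mz, hexose_steps(2961.0, mz))   # -> 2 and 4 hexose units, as in the text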
6,711.6
2018-03-13T00:00:00.000
[ "Biology", "Chemistry" ]
Nature's role in sustaining economic development In this paper, I formalize the idea of sustainable development in terms of intergenerational well-being. I then sketch an argument that has recently been put forward formally to demonstrate that intergenerational well-being increases over time if and only if a comprehensive measure of wealth per capita increases. The measure of wealth includes not only manufactured capital, knowledge and human capital (education and health), but also natural capital (e.g. ecosystems). I show that a country's comprehensive wealth per capita can decline even while gross domestic product (GDP) per capita increases and the UN Human Development Index records an improvement. I then use some rough and ready data from the world's poorest countries and regions to show that during the period 1970–2000 wealth per capita declined in South Asia and sub-Saharan Africa, even though the Human Development Index (HDI) showed an improvement everywhere and GDP per capita increased in all places (except in sub-Saharan Africa, where there was a slight decline). I conclude that, as none of the development indicators currently in use is able to reveal whether development has been, or is expected to be, sustainable, national statistical offices and international organizations should now routinely estimate the (comprehensive) wealth of nations. QUESTIONS AND RESPONSES Are humanity's dealings with nature sustainable? Can we expect world economic growth to continue in the foreseeable future? Should we be confident that knowledge and skills will increase in such ways as to lessen our reliance on nature in relation to humanity's growing numbers and rising economic activity? Contemporary discussions on these questions are now several decades old. If they have remained alive and continue to be shrill, it is because two opposing empirical perspectives shape them. On the one hand, if we look at specific examples of what economists call natural capital (aquifers, ocean fisheries, tropical forests, estuaries, the atmosphere as a carbon sink-ecosystems, generally), there is convincing evidence that at the rates at which we currently exploit them they are very likely to change character dramatically for the worse, with little advance notice. Indeed, many ecosystems have already collapsed, with short notice (M.E.A. 2003;Hassan et al. 2005). On the other hand, if we study historical trends in the prices of marketed resources (e.g. minerals and ores), or improvements in life expectancy, or growth in recorded incomes in regions that are currently rich and in those that are on the way to becoming rich, resource scarcities would not appear to have bitten. Suppose you were to point to the troubled nations of sub-Saharan Africa and suggest that resource scarcities are acute there today. Those with the former perspective (ecologists generally) will tell you that it is because people in the world's poorest regions face acute resource scarcities relative to their numbers that they are so poor, while those with the latter perspective (economists usually) will inform you that people there experience serious resource scarcities because they are poor. When experts disagree over such a fundamental matter as the direction of causation, there is little to go on. 
Those conflicting intuitions are also not unrelated to an intellectual tension between the concerns people share about carbon emissions and acid rains that sweep across regions, nations and continents and about declines in the availability of firewood, fresh water, coastal resources and forest products in as small a locality as a village in a poor country. That is why 'environmental problems' present themselves in different ways to different people. Some identify environmental problems with population growth, while others identify them with wrong sorts of economic growth. There are those who identify environmental problems with urban pollution in emerging economies, while others view them through the spectacle of poverty. Each of those visions is correct. There is not just one environmental problem. There is a large collection of them, and they manifest themselves at different spatial scales and operate at different speeds (Ehrlich & Ehrlich 1981, 1990; Dasgupta 1993, 2001; Sachs 2008). In this reckoning, environmental pollutants are the reverse of natural resources. Roughly speaking, 'resources' are 'goods' (many being sinks into which pollutants are discharged); while 'pollutants' (the degrader of resources) are 'bads'. Pollution is the other side of conservation. That is why pollution and conservation can be studied in a unified way (Dasgupta 1982). Despite the conflicting intuitions, most economists would appear to be convinced that scientific and technological advances, the accumulation of reproducible capital (machinery, equipment, buildings and roads), growth in human capital (health, education and skills) and improvements in the economy's institutions (which are also capital assets) can overcome diminutions in natural capital. Otherwise, it is hard to explain why twentieth-century economics has been so detached from the environmental sciences. Judging by the profession's writings, we economists see nature, when we see it at all, as a backdrop from which resources and services can be drawn in isolation. Macroeconomic forecasts routinely exclude natural capital. Accounting for nature, if it comes into the calculus at all, is usually an afterthought to the real business of 'doing economics'. We economists have been so successful in this enterprise, that if someone exclaims, 'Economic growth!', no one needs to ask, 'Growth in what?', for we all know they mean growth in gross domestic product (GDP). The rogue word in GDP is 'gross'. Since GDP is the total value of the final goods and services an economy produces, it does not deduct the depreciation of capital that accompanies production; in particular, it does not deduct the depreciation of natural capital. In the quantitative models that appear in leading economics journals and textbooks, nature is taken to be a fixed, indestructible factor of production. The problem with the assumption is that it is wrong: nature consists of degradable resources. Agricultural land, forests, watersheds, fisheries, fresh water sources, river estuaries and the atmosphere are capital assets that are self-regenerative, but suffer from depletion or deterioration when they are over-used. (I am excluding oil and natural gas, which are at the limiting end of self-regenerative resources.) To assume away the physical depreciation of capital assets is to draw a wrong picture of future production and consumption possibilities that are open to a society. Here is an illustration of what goes wrong in economic accounts when depreciation is ignored. Repetto et al. 
(1989) and Vincent et al. (1997) estimated the decline in forest cover in Indonesia and Malaysia, respectively. They found that when depreciation is included, national accounts look quite different: net domestic saving rates are some 20-30% lower than recorded saving rates. In their work on the depreciation of natural resources in Costa Rica, Solorzano et al. (1991) found that the depreciation of three resources (forests, soil and fisheries) amounted to about 10 per cent of GDP and over one-third of domestic saving. PLAN OF THE PAPER In this paper, I want to give you a sense of how economics can be reconstructed to include natural capital in a seamless way. I shall do that in three stages. In §3, I show that property rights to natural capital are frequently unprotected or ill-specified. I argue that this typically leads to their overexploitation, and so to waste and inequity. In §4, I illustrate overexploitation in the context of a 'small' problem: the economic failure that can accompany deforestation in a small region. It will not require any stretch of imagination to recognize that every economy faces innumerable such 'small' problems. The performance of the macro-economy depends, of course, on how each of those small problems is tackled there. If good polices are in place to reduce the economic losses that are generated by the small problems, the macro-economy can be expected to function well; but not otherwise. So in §5, I demonstrate that when natural capital is included in economic statistics, the recent economic history of nations looks very different from what we are led to believe when conventional economic indicators, such as GDP per head or the United Nations' Human Development Index (HDI), 1 are used to judge the performance of economies. A LACK OF PROPERTY RIGHTS TO NATURAL CAPITAL Why do not market prices reflect nature's scarcity value? If natural capital really is becoming scarcer, would not their prices have risen, signalling that all is not well? The problem is that if prices are to reveal social scarcities, markets must function well. For many types of natural capital, though, most especially ecological resources, markets not only do not function well, often they do not even exist. In some cases, they do not exist because relevant economic interactions take place over large distances, making the costs of negotiation too high (e.g. the effects of upland deforestation on downstream farming and fishing activities; §4); in other cases, they do not exist because the interactions are separated by large temporal distances (e.g. the effect of carbon emission on climate in the distant future, in a world where forward markets do not exist because future generations are not present today to negotiate with us). Then there are cases (the atmosphere, aquifers, the open seas) where the migratory nature of the resource keeps markets from existing-they are called 'open-access resources', and they experience the tragedy of the commons. Each of the above examples points to a failure to have secure property rights to natural capital. We can state the problem thus: ill-specified or unprotected property rights prevent markets from forming or make markets function wrongly when they do form. By 'property rights', I do not only mean private property rights, I include communal property rights (e.g. over common property resources, such as woodlands, in South Asia and sub-Saharan Africa) and state property rights. 
At an extreme end are 'global property rights', a concept that is implicit in current discussions on climate change. But the concept is not new. That humanity has collective responsibility over the state of the world's oceans used to be explicit in the 1970s, when politicians claimed that the oceans are a 'common heritage of mankind'. The failure to establish secure property rights to natural capital typically means that the services natural capital offers us are underpriced in the market, which is another way of saying that the use of nature's services is implicitly subsidized. At the global level, what is the annual subsidy? One calculation suggested that it is 10 per cent of annual global income (Myers & Kent 2000). My reading is that the margin of error in that estimate is very large. But it is the only global estimate I have come across. Hassan et al. (2005) contains quantitative information that could be used to generate more reliable estimates of nature's subsidies. International organizations such as the World Bank have the resources to undertake that work. But they appear to be reluctant to do so. NATURE'S SUBSIDIES Being underpriced, nature is overexploited. So, an economy could enjoy growth in real GDP and improvements in HDI for a long spell even while its overall productive base shrinks. As proposals for estimating the social scarcity prices of natural resources remain contentious, economic accountants ignore them and governments remain wary of doing anything about them. Here is an example of how the use of nature is subsidized. An easy way for governments to earn revenue in countries that are rich in forests is to issue timber concessions to private firms. Imagine that concessions are awarded in the upland forests of a watershed. Forests stabilize both soil and water flow. So deforestation gives rise to soil erosion and increases fluctuations in water supply downstream. If the law recognizes the rights of those who suffer damage from deforestation, the timber firm would be required to compensate downstream farmers. But compensation is unlikely when (i) the cause of damage is many miles away, (ii) the concession has been awarded by the state, 2 and (iii) the victims are scattered groups of farmers. Problems are compounded because damages are not uniform across farms: location matters. It can also be that those who are harmed by deforestation do not know the underlying cause of their deteriorating circumstances. As the timber firm is not required to compensate farmers, its operating cost is less than the social cost of deforestation, the latter being the firm's logging costs and the damage suffered by all who are adversely affected. So if the timber is exported abroad, the export contains an implicit subsidy, paid for by people downstream. And I have not included forest inhabitants, who now live under even more straightened circumstances or, worse, are evicted without compensation. The subsidy is hidden from public scrutiny, but it amounts to a transfer of wealth from the exporting to the importing country. Some of the poorest people in a poor country subsidize the incomes of the average importer in what could well be a rich country. That does not feel right. (a) Quantifying economic failure The spatial character of nature's hidden subsidies is self-evident, but getting a quantitative feel involves hard work. So the literature is sparse. As in many other scientific fields, some of the best advances have been made in studies of localized problems. 
Basing their estimate on a formal hydrological model, Pattanayak & Kramer (2001) reported that the drought mitigation benefits farmers enjoy from upstream forests in a group of Indonesian watersheds are 1-10% of average agricultural incomes. In another paper, Pattanayak & Butry (2005) studied the extent to which upstream forests stabilize soil and water flow in Flores, Indonesia. Downstream benefits were found to be 2-3% of average agricultural incomes. In a study in Costa Rica on pollination services, Ricketts et al. (2004) discovered that forest-based pollinators increase the annual yield in nearby coffee plantations by as much as 20 per cent. Subsequently, Ricketts et al. (2008) analysed the results of some two dozen studies, involving 16 crops in five continents, and discovered that the density of pollinators and the rate at which a site is visited by them declines at rapid exponential rates with the site's distance from the pollinators' habitat. At 0.6 km (respectively, 1.5 km) from the pollinators' habitat, for example, the visitation rate (respectively, pollinator density) drops to 50 per cent of its maximum. (b) Eliminating nature's subsidies How should societies eliminate nature's subsidies? In the case of the upstream firm and downstream farmers, the state could tax the firm for felling trees. The firm in this case would be the 'polluter', the farmers the 'pollutees'. Pollution taxes are known today as 'green taxes'. They invoke the polluter-pays-principle (PPP). The efficient rate of taxation would be the damage suffered by farmers. What the state does with the tax revenue is a distributional matter, to which I shall return presently. But there is also a 'market-friendly' way to eliminate the subsidies. Lindahl (1958) suggested that the state (or the community) could introduce private property rights to natural capital, the thought being that markets would emerge to price nature's services appropriately. A problem with the proposal, at least as I have presented it here, is that it is not clear who should be awarded property rights. In our example of the upstream firm and downstream farmers, the sense of natural justice might suggest that the rights should be assigned to farmers. Under a system of 'pollutees-rights', the timber firm would be required to compensate farmers for the damage it inflicts on them. Such a property-rights regime also invokes PPP. But the rights could be awarded to the timber firm instead. In that case it would be the farmers who would have to compensate the firm for not felling trees! The latter system of property rights invokes the pollutee-pays-principle (a reverse PPP, as it were), which, in the example we are studying, would seem repellent. But it has been argued by proponents that from the efficiency point of view it is a matter of indifference which system of private property rights is introduced. Market-based systems have attracted much attention among ecologists and development experts in recent years, under the label payment for ecosystem services or PES (see Daily & Ellison (2002) and Pagiola et al. (2002) for sympathetic reviews of a market-based PES). The ethics underlying PES are seemingly attractive. If decision makers in Brazil believe that decimating the Amazon forests is the true path to economic progress there, should not the rest of the world pay Brazil not to raze them to the ground? 
If the lake on my farm is a sanctuary for migratory birds, should not bird lovers pay me not to drain it for conversion into farm land? Never mind that the market for ecosystem services could be hard to institute, if a system involving PES were put in place, owners of ecological capital and beneficiaries of ecological services would be forced to negotiate. The former group would then have an incentive to conserve their assets. Hundreds of new PES schemes have been established round the globe. China, Costa Rica and Mexico, for example, have initiated large-scale programmes in which landowners receive payment for increasing biodiversity conservation, expanding carbon sequestration and improving hydrological services. But although PES may be good for conservation, one can imagine situations where the system would be bad for poverty reduction and distributive justice. Many of the rural poor in poor countries enjoy nature's services from assets they do not own. Even though they may be willing to participate in a system of property rights in which they are required to pay for ecological services (Pagiola et al. (2008) report in their careful study of a silvo-pastoral project in Nicaragua that they do), it could be that in the world we have come to know, the weaker among the farmers are made to pay a disproportionate amount. Some may even become worse off than they were earlier. One could argue that in those situations the state should pay the resource owner instead, using funds obtained from general taxation. Who should pay depends on the context (Bulte et al. 2008). A PES system in which the state plays an active role is attractive for wildlife conservation and habitat preservation. In poor countries, property rights to grasslands, tropical forests, coastal wetlands, mangroves and coral reefs are often ambiguous. The state may lay claim to the assets ('public' property being the customary euphemism), but if the terrain is difficult to monitor, inhabitants will continue to reside there and live off its products. Inhabitants are therefore key stakeholders. Without their engagement, the ecosystems could not be protected. Meanwhile flocks of tourists visit the sites on a regular basis. An obvious thing for the state to do is to tax tourists and use the revenue to pay local inhabitants for protecting their site from poaching and free-riding. Local inhabitants would then have an incentive to develop rules and regulations to protect the site. MEASURING SUSTAINABLE DEVELOPMENT Whenever economists have probed the matter, they have found that all economies subsidize large numbers of economic transactions with nature. Some of those transactions are large (construction of large dams that alter ecosystems), but mostly they are small. How do those subsidies affect overall economic performance? More fundamentally, how should economic performance be measured? A famous 1987 report by an international commission (widely known as the Brundtland Commission Report) defined sustainable development as ' . . . development that meets the needs of the present without compromising the ability of future generations to meet their own needs' (World Commission for Environment and Development 1987). In this reckoning, sustainable development requires that relative to their populations each generation should bequeath to its successor at least as large a productive base as it had itself inherited. Notice that the requirement is derived from a relatively weak notion of justice among the generations. 
Sustainable development demands that, relative to population numbers, future generations have no less of the means to meet their needs than we do ourselves; it demands nothing more. But how is a generation to judge whether it is leaving behind an adequate productive base for its successor? (a) Shadow prices as social scarcities We noted earlier that neither GDP nor HDI is of help, because neither is a measure of a country's productive base. So, what does measure the productive base? A society's productive base is the stock of all its capital assets, including its institutions. As we are interested in estimating the change in an economy's productive base over a period of time, we need to know how to combine the changes that take place in its capital stocks. Intuitively, it is clear that we have to do more than just keep a score of capital assets (so many additional pieces of machinery and equipment, so many more miles of roads, so many fewer square miles of forest cover and so forth). An economy's productive base declines if the decumulation of assets is not compensated by the accumulation of other assets. Contrary-wise, the productive base expands if the decumulation of assets is more than compensated by the accumulation of other assets. The ability of an asset to compensate for the decline in some other asset depends on technological knowledge (e.g. double glazing can substitute for central heating up to a point, but only up to a point) and on the quantities of assets the economy happens to have in stock (e.g. the protection trees provide against soil erosion depends on the existing grass cover). The values to be imputed to assets are known as their shadow prices. Formally, by an asset's shadow price, we mean the net increase in societal well-being that would be enjoyed if an additional unit of that asset were made available, other things being equal. As shadow prices reflect the social scarcities of capital assets, it is only in exceptional circumstances that they equal market prices. We are trying to make operational sense here of the concept of sustainable development. So we must include in the concept of 'social well-being' not only the well-being of those who are alive today, but also of those who will be here in the future. There are ethical theories that go beyond a purely anthropocentric view of nature, by insisting that certain aspects of nature have intrinsic value. The concept of social well-being I am invoking here includes intrinsic values, if that is demanded. However, an ethical theory on its own will not be enough to determine shadow prices, because there would be nothing for the theory to act upon. We need descriptions of states of affairs too. To add a unit of a capital asset to an economy is to perturb that economy. In order to estimate the contribution of that additional unit to societal well-being, we need a description of the state of affairs both before and after the addition has been made, now and in the future. In short, estimating shadow prices involves both evaluation and description. It should not surprise you that estimating shadow prices is a formidable problem. There are ethical values we hold that are probably impossible to commensurate when they come up against other values that we also hold. That does not mean ethical values do not impose bounds on shadow prices; they do. That is why the language of shadow prices is essential if we wish to avoid making sombre pronouncements about sustainable development that amount to saying nothing. 
Most methods that are currently deployed to estimate the shadow prices of ecosystem services are crude, but deploying them is a lot better than doing nothing to value them. (b) The wealth of nations The value of an economy's entire stock of capital assets measured in terms of their shadow prices is its wealth. Sometimes, we call it comprehensive wealth, to remind ourselves that the measure is to include all capital assets (building and machinery, roads and rail tracks; health and skills; natural capital and knowledge and institutions), not just reproducible capital such as buildings and machinery, roads and rail tracks. Comprehensive wealth (henceforth, wealth) is a number; expressed, say, in international dollars. It can be shown that an economy's wealth measures its overall productive base (Hamilton & Clemens 1999;Dasgupta & Mäler 2000;Dasgupta 2001). So, if we wish to determine whether a country's economic development has been sustainable over a period of time, we have to estimate the changes that took place over that period in its wealth relative to growth in population. The theoretical result I am alluding to gives meaning to the title of perhaps the most famous book ever written on economics, namely, An inquiry into the nature and causes of the wealth of nations. Observe that Adam Smith did not write about the GDP of nations, nor of the HDI of nations; he wrote about the wealth of nations. It would seem we have come full circle, by identifying sustainable development with the accumulation of (comprehensive) wealth. (c) An empirical exercise In an important paper, Hamilton & Clemens (1999) estimated the change in the wealth of 120 nations during the period 1970 -1996 by defining an economy's wealth as the value of its reproducible capital assets and three classes of natural capital assets (commercial forests, oil and minerals and the quality of the atmosphere in terms of its carbon dioxide content). The shadow prices of oil and minerals were taken to be their market prices minus extraction costs. The shadow price of global carbon emission into the atmosphere is the damage caused by bringing about climate change. That damage was taken to be $20 per tonne, which is in all probability a serious underestimate. Forests were valued in terms of their market price minus logging costs. Contributions of forests to ecosystem functions were ignored. As you can see, the list of natural resources Hamilton and Clemens considered was very incomplete. It did not include water resources, fisheries, air and water pollutants, soil and ecosystems. The authors also ignored improvements in human health and skills, and they did not consider increases in knowledge, nor improvements or deteriorations in the countries' institutions. Moreover, their estimates of shadow prices were very, very approximate. Nevertheless, one has to start somewhere, and theirs was a first pass at what is an enormously messy enterprise. In table 1, I offer an assessment of the character of economic development from 1970 to 2000 that is a lot more comprehensive than the one in Hamilton & Clemens (1999). I consider only the poorest regions in the world. I restrict myself to poor countries because I have studied poor countries more than rich countries. I consider Bangladesh, China (a poor country during much of that period), India, Nepal, Pakistan and sub-Saharan Africa. Economists have discovered ingenious ways to estimate the accumulation of knowledge and changes in the effectiveness of an economy's institutions. 
Those estimates are published regularly by such international organizations as the World Bank. The first column of figures in the table presents my estimates of the average annual percentage rate of change in wealth in each of the regions in the period 1970-2000. My estimates are a refinement of those published by Arrow et al. (2004), which in turn were an improvement on those of Hamilton and Clemens: I have added to the Hamilton-Clemens estimates for each region the average annual public expenditure on health and education, the average annual rate of growth in knowledge and changes in the effectiveness of their institutions. Notice that, excepting sub-Saharan Africa, wealth increased in every country in my sample. But in judging whether an economy has experienced sustainable development during a period, we have to discover whether wealth has increased relative to population growth. The simplest thing to do is to ask whether wealth per head has increased. In order to estimate movements in wealth per head, I have collated figures for the average annual population growth rate in each region during the period 1970-2000. They are given in the second column of figures in the table. And in the third column, I present the difference between the figures in the first and second columns, which gives us estimates of the change in wealth per head in each of the regions. Before summarizing the findings, it will be useful to get a feel for what the table is telling us. Consider Pakistan: during the period 1970-2000, (comprehensive) wealth increased at an average annual rate of 1.3 per cent. But take a look at Pakistan's population, which grew at 2.7 per cent annually. The third column shows that Pakistan's per capita wealth declined in consequence, at an annual rate of 1.4 per cent, implying that in year 2000 the average Pakistani was a lot poorer than in 1970. Interestingly, if we were to judge Pakistan's economic performance in terms of growth in GDP per capita, we would obtain a different picture. As the fourth column of the table shows, Pakistan grew at a respectable 2.2 per cent a year. If we now look at the fifth column, we find that the United Nations' HDI for Pakistan improved during the period. From 1970 to 2000, Pakistan enjoyed growth in GDP per capita and an improvement in HDI by running down its natural capital assets. Movements in GDP per capita and HDI tell us nothing about sustainable development. The striking message of the table is that during the period 1970-2000 economic development in all the countries on our list other than China was 'negative'. To be sure, sub-Saharan Africa offers no surprise. Wealth, not just wealth per head, declined at an annual rate of 0.1 per cent. Population grew at 2.7 per cent a year. Even without performing any calculation, we would have known that the productive base in sub-Saharan Africa declined relative to its population. The table confirms that it did, at 2.8 per cent each year. If we now look at the fourth column of numbers in the table, we discover that GDP per capita in sub-Saharan Africa declined at 0.1 per cent annually. But the region's HDI showed an improvement, confirming once again that studying movements in HDI enables us to say nothing about sustainable development. The table shows that Pakistan is the worst performer in the Indian subcontinent. But the remaining countries in South Asia also did not make it. 
Admittedly, each country became wealthier, but population growth was sufficiently high to more than neutralize the growth in wealth. Relative to their populations, the productive base in each economy declined. Economic development in South Asia was not sustained. China was the single exception in my sample. The country invested so much in reproducible capital assets that its wealth grew at an annual rate of 5.9 per cent. Population grew at a relatively low rate: 1.4 per cent per year, which is why China's wealth per capita expanded at an annual rate of 4.5 per cent. Per capita GDP also grew, at an annual rate of 7.8 per cent, and HDI improved. In China, GDP per capita, HDI and wealth per head moved parallel to one another. The figures we have just studied are all very rough and ready, but they show how accounting for natural capital can make a substantial difference to our conception of the development process. We should remember that the figures for several shadow prices I used to arrive at the table are conservative. For example, a price of $20 per tonne of carbon in the atmosphere is almost certainly a good deal below its true global social cost. And the methods I have used to value improvements in health and education are almost certainly defective, but in the opposite direction: I have underestimated them. So one of the most important problems we economists face today is to find more effective ways to quantify the progress and regress of nations. So long as we rely on GDP and HDI and the many other ad hoc measures of human well-being, we will continue to paint a misleading picture of economic performance. Because of their imperfections, the figures in the third column of the table are not to be taken literally. Nevertheless, with all the above caveats (and more!) in mind, the overarching moral that emerges from it is salutary: Development policies that ignore our reliance on natural capital are seriously harmful: they do not pass the mildest test for equity among contemporaries, nor among people separated by time and uncertain contingencies. ENDNOTES 1 HDI is a composite measure of GDP per head, life expectancy at birth and education. 2 Colchester (1995) has recounted that political representatives of forest dwellers in Sarawak, Malaysia, have routinely given logging licenses to members of the state legislature.
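The arithmetic behind the third column of the table is simply the growth rate of wealth minus the growth rate of population, since the growth rate of wealth per head is, to a first approximation, the difference of the two. The short sketch below reproduces the three cases quoted in the text (Pakistan, sub-Saharan Africa and China) from the figures given above; the 'sustainable'/'not sustainable' labels are shorthand for the sign of the per capita change, not terms used in the table.

# Growth in wealth per head = growth in wealth - growth in population
# (average annual percentage rates, as quoted in the text).
regions = {
    # region: (annual % change in comprehensive wealth, annual % population growth)
    "Pakistan":           (1.3, 2.7),
    "sub-Saharan Africa": (-0.1, 2.7),
    "China":              (5.9, 1.4),
}

for region, (g_wealth, g_pop) in regions.items():
    g_per_capita = g_wealth - g_pop
    verdict = "sustainable" if g_per_capita > 0 else "not sustainable"
    print(f"{region}: wealth per head {g_per_capita:+.1f}% per year -> {verdict}")
# Pakistan: -1.4%, sub-Saharan Africa: -2.8%, China: +4.5%, matching the text.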
7,581
2010-01-12T00:00:00.000
[ "Economics", "Environmental Science" ]
$p\Xi^- $ Correlation in Relativistic Heavy Ion Collisions with Nucleon-Hyperon Interaction from Lattice QCD On the basis of the $p\Xi^-$ interaction extracted from (2+1)-flavor lattice QCD simulations at the physical point, the momentum correlation of $p$ and $\Xi^-$ produced in relativistic heavy ion collisions is evaluated. $C_{\rm SL}(Q)$ defined by a ratio of the momentum correlations between the systems with different source sizes is shown to be largely enhanced at low momentum due to the strong attraction between $p$ and $\Xi^-$ in the $I=J=0$ channel. Thus, measuring this ratio at RHIC and LHC and its comparison to the theoretical analysis will give a useful constraint on the $p\Xi^-$ interaction. Introduction The coupled-channel Nambu-Bethe-Salpeter (NBS) wave function measured in lattice QCD [1,2] can now provide "theoretical" information of hyperon-nucleon and hyperon-hyperon interactions through the HAL QCD method [3,4,5,6]. The energy-independent non-local potentials U(r, r ′ ) obtained by the method allow us to calculate the scattering phase shifts and binding energies of two baryons. These potentials are also useful for analyzing the two-particle momentum correlations in relativistic heavy ion collisions [7]. It was recently studied in [8] that the possible spin-2 pΩ − dibaryon state suggested by lattice QCD [9] can be probed by the pΩ − momentum correlation at RHIC and LHC. In particular, the ratio of correlation functions between small and large collision systems, C SL (Q), is shown to be a good measure to extract the strong interaction effect without much contamination from the Coulomb effect [8]. In the present paper, we extend the analysis to the pΞ − system in I = J = 0 channel which was recently predicted to have large attraction by the lattice QCD simulations at physical quark masses [4]. Lattice QCD formulation We start with the normalized four-point function R in channel α defined by where B α 1 ( x, t) and B α 2 ( x, t) are the sink operators for octet baryons. Z α 1 Z α 2 are the corresponding wave-function renormalization factors, and J(0) is a source operator at zero initial-time to create two baryons. The coupled channel potential is obtained through the linear partial differential equation [2]; . D α t is a time-derivative operator whose leading-order term reads −∂/∂t. We introduce a derivative expansion to treat the non-local potential as In the following, we truncate the expansion at the leading order. We employ (2 + 1)-flavor QCD configurations on the L 4 = 96 4 lattice with the lattice spacing a ≃ 0.085fm. This corresponds to the physical size, La = 8.1fm, which guarantees that the finite volume effect on U αβ ( r, r ′ ) is negligible. The quark masses are chosen for the system to be almost at the physical point; m π ≃ 146 MeV and m K ≃ 525 MeV [4]. The total number of configurations is 414 × 4 space-time rotations × 48 wall sources. The baryon masses measured in this setup are listed below. The S = −2 baryon-baryon interactions including the I=0 ΛΛ − NΞ − ΣΣ coupled-channel system have been recently reported in [4]. In particular, one of the diagonal components V NΞ,NΞ (r) in the (I, J) = (0, 0) channel ( 1 S 0 ) was shown to have large attractive well at intermediate distance and relatively weak repulsive core at short distance, while V NΞ,NΞ (r) in the (I, J) = (0, 1) channel ( 3 S 1 ) has weaker attractive well and stronger repulsive core. Also, V NΞ,NΞ (r) in the I = 1 channels do not have appreciable attraction. 
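As a rough illustration of the time-dependent extraction just described, at leading order of the derivative expansion the single-channel potential can be read off from the normalized correlator as V(r) ≈ [∇²R/(2µ) − ∂R/∂t]/R. The finite-difference sketch below is schematic only: it assumes a spherically symmetric R(r, t) sampled on a radial grid, takes the reduced mass mu as an input in matching units, and omits the coupled-channel structure and the higher-order time derivatives that the full analysis keeps.

import numpy as np

def hal_qcd_potential_LO(R, r, t_step, mu):
    """Leading-order, single-channel potential from a normalized correlator
    R[t, i] sampled at radii r[i] (r > 0) and equally spaced time slices.
    V(r) ~ (laplacian(R)/(2*mu) - dR/dt) / R, a schematic reading of the
    method described above."""
    R = np.asarray(R, dtype=float)          # shape (n_t, n_r)
    dRdt = np.gradient(R, t_step, axis=0)   # leading-order D_t term is -d/dt
    dRdr = np.gradient(R, r, axis=1)
    d2Rdr2 = np.gradient(dRdr, r, axis=1)
    lap = d2Rdr2 + 2.0 * dRdr / r           # radial Laplacian for the S wave
    return (lap / (2.0 * mu) - dRdt) / R    # V evaluated on every (t, r) point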
Motivated by these observations, we parametrize the lattice results of V_NΞ,NΞ(r) in the I = 0 channels by a combination of the Gauss and Yukawa functions as shown in Fig.1. Curves with different t correspond to the potentials obtained from R(x, t) for different t, so that the t dependence of V(r) reflects the typical magnitude of the systematic error of the lattice data. We found that the strong QCD attraction in Fig.1(Left) together with the Coulomb attraction brings the 1S0 system close to the unitary region, where the inverse of the scattering length is close to zero. On the other hand, the 3S1 system described by Fig.1(Right) has strong repulsion even with the Coulomb attraction. pΞ− momentum correlation The correlation function of a non-identical pair such as pΞ− is given in terms of the two-particle distribution N_pΞ(k_p, k_Ξ) normalized by a product of the single-particle distributions, where the relative and total momenta are defined as Q = (m_p k_Ξ − m_Ξ k_p)/M and K = k_p + k_Ξ, respectively, with M = m_p + m_Ξ; the single-particle source functions correspond to the phase space distributions of p and Ξ at freeze-out. The final state interaction after the freeze-out is described by the two-particle wave function Ψ_pΞ with a shifted relative coordinate r′ = x_Ξ − x_p − K(t_p − t_Ξ)/M. Here we consider a static source function with spherical symmetry, S_i(x) ∝ exp(−x^2/2R_i^2), to extract the essential part of the physics, where R_i is a source size parameter. Assuming equal-time emission t_p = t_Ξ, we obtain the correlation function as a source-weighted integral over the pair wave function, with weight [dr] = dr r^2 exp(−r^2/4R^2)/(2√π R^3) and R^2 = (R_p^2 + R_Ξ^2)/2 defining the effective size parameter. dΩ is the integration over the solid angle between Q and r. Note that ψ^C(r) is the Coulomb wave function characterized by the reduced mass and the Bohr radius of the pΞ− system. Its S-wave component is denoted by ψ^C_0(r). The scattering wave functions obtained by solving the Schrödinger equation with both the strong interaction and the Coulomb interaction are denoted by χ^{J=0}_sc(r) and χ^{J=1}_sc(r) for the 1S0 channel and 3S1 channel, respectively. We assume that the I = 1 sector does not contribute substantially to C(Q), which is supported by the fact that the I = 1 pΞ− potential has only short-range repulsion [4]. The factors 1/8 = 1/2 × 1/4 and 3/8 = 1/2 × 3/4 originate from the isospin and spin multiplicities. Also, we assume that the absorptive contribution by the coupling to the ΛΛ channel is negligible since it is reported to be weak due to its short range nature [4]. In [8], the "SL (small-to-large) ratio" was introduced: it is defined as the ratio of C(Q) between systems with different source sizes, C_SL(Q) = C(Q)|_{R=R_small}/C(Q)|_{R=R_large}, which has good sensitivity to the strong interaction without much contamination from the Coulomb effect [8]. Shown in Fig.2 is C_SL(Q) of the pΞ− system with the Coulomb interaction under the assumption of the static source given in Eq.(4). The large enhancement of this ratio at small Q originates from the fact that the pΞ− system in the 1S0 channel is close to the unitary region. The result has a rather weak dependence on t, which indicates that the systematic errors of the lattice data do not affect the final results significantly. We have also checked that taking the expanding source as discussed in [8] does not change the present result. Summary The momentum correlation of the pΞ− system was presented by employing the pΞ− potential extracted from the coupled channel analysis of the (2+1)-flavor lattice QCD data at the physical point. 
The so-called SL ratio of the momentum correlation, C_SL(Q), was calculated and shown to have a large enhancement at small Q due to the strong attraction between p and Ξ− in the 1S0 channel. Measuring this ratio at RHIC and the LHC and comparing it with the present theoretical analysis will give a useful constraint on the pΞ− interaction. Such information is particularly important not only for the nature of the possible H-dibaryon coupled to pΞ− [4] but also for the properties of Ξ hypernuclei [10] and for Ξ− in the central core of neutron stars [11].
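To make the construction of C_SL(Q) concrete, the sketch below evaluates a Koonin-Pratt-type correlation function for the static Gaussian source and takes its ratio between a small and a large source radius. It is deliberately simplified and is not the calculation performed in the study: Coulomb wave functions are replaced by plane waves, only the S wave is modified by the interaction, the user must supply the 1S0 and 3S1 scattering wave functions (chi0, chi1), and the radii of 1.2 fm and 3.0 fm are placeholder values. The 1/8 and 3/8 weights follow the spin-isospin counting quoted above.

import numpy as np
from scipy.integrate import quad

def correlation(Q, R, chi0, chi1):
    """C(Q) for a static Gaussian source of size R (fm), Coulomb neglected:
    the free S wave j0(Qr) is replaced by the scattering waves chi0/chi1
    with weights 1/8 (1S0) and 3/8 (3S1)."""
    j0 = lambda x: np.sinc(x / np.pi)                       # spherical Bessel j0
    source = lambda r: r**2 * np.exp(-r**2 / (4 * R**2)) / (2 * np.sqrt(np.pi) * R**3)
    def integrand(r):
        free = abs(j0(Q * r))**2
        strong = 0.125 * abs(chi0(Q, r))**2 + 0.375 * abs(chi1(Q, r))**2
        return source(r) * (strong - 0.5 * free)            # 1/8 + 3/8 = 1/2 of the pairs
    return 1.0 + quad(integrand, 0.0, 12.0 * R)[0]

def c_sl(Q, chi0, chi1, r_small=1.2, r_large=3.0):
    """Small-to-large ratio C_SL(Q) = C(Q; R_small) / C(Q; R_large)."""
    return correlation(Q, r_small, chi0, chi1) / correlation(Q, r_large, chi0, chi1)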
1,892.8
2017-04-18T00:00:00.000
[ "Physics" ]
An Experimental Study on Field Spectral Measurements to Determine Appropriate Daily Time for Distinguishing Fractional Vegetation Cover: Remote sensing technology has been widely used to estimate fractional vegetation cover (FVC) at global and regional scales. Accurate and consistent field spectral measurements are required to develop and validate spectral indices for FVC estimation. However, there are rarely any experimental studies to determine the appropriate times for field spectral measurements; the existing guidelines and references are rather general or inconsistent, the appropriate time window is still not agreed upon, and detailed experiments are missing at the local scale. In this experiment, five groundcover objects were measured continuously from 07:30 a.m. to 17:30 p.m. local time on three consecutive sunny days using a portable spectrometer. The coefficients of variation (CV) were applied to investigate the reflectance variation at wavelengths corresponding to MODIS satellite channels and in the derived spectral indices used to estimate FVC, including photosynthetic vegetation (PV) and non-photosynthetic vegetation (NPV). The results reveal little variation in the reflectance measured between 10:00 a.m. and 16:00 p.m., with CV values generally less than 10%. The CV values of FVC spectral indices for estimating PV, NPV and bare soil (BS) are generally less than 3%. While more experiments are yet to be carried out at different locations and in different seasons, the findings so far imply that in situ spectra measured between 9:00 a.m. and 17:00 p.m. local time would be useful to discriminate FVC objects and to validate satellite index-based estimates using visible, near-infrared and shortwave infrared channels. Introduction As an integral part of ecosystems, vegetation, including photosynthetic (PV, green leaves) and non-photosynthetic (NPV, aboveground dead biomass, litter and wood), plays an important role in climate regulation, the geochemical cycle, and soil and water conservation [1][2][3]. Fractional vegetation cover (FVC) is the ratio of the vertical projection area of vegetation to the total ground area, which is usually used to evaluate the degree of land degradation [4] and the function of soil and water conservation [5,6], and is widely used in various soil erosion prediction models, such as USLE [7], RUSLE [8] and CSLE. To reduce the effect of the water content of the leaf surface and soil on the spectral reflectance, the leaves and soil were left in a dry, cool place for a week before being tested. Measurements were conducted on December 8, 9 and 10, 2019. The sky in these three days was clear and sunny, and wind speed was less than 4.5 m/s. The observation was carried out from 7:30 a.m. to 17:30 p.m. local time, and the hourly weather information is shown in Table 1. The measurement site was located in Yangling District, Shaanxi Province (108°4′33″ E; 34°16′33″ N) at 482 m a.s.l. Spectral measurements of PV, NPV and BS samples were performed with an SVC HR-1024i portable hyperspectrometer (USA) with a spectral range of 350-2500 nm; the spectral resolution was 3.5 nm for 350-1000 nm, 9.5 nm for 1000-1850 nm and 6.5 nm for 1850-2500 nm. The field of view of the probe was 25°. To reduce the impact of shading, the probe was kept vertically downward, 50 cm above the sample center. The target and a 95% reflectance standard white panel were measured in turn. 
Each object was measured 4 times at each time point, and 3 consecutive spectral curves were recorded each time, giving a total of 12 curves as replicates per day; the average of the 36 repetitions over the three days was taken as the reflectance spectrum of each object at that time point. Bands Identification and Spectral Indices Calculation All ground objects reflect electromagnetic radiation, and most of the radiation reflected by objects at the Earth's surface originates from solar energy. The reflected radiation energy of an object, expressed as a percentage of the total incident radiation energy, is known as reflectance and is dimensionless. The variation of the reflectance of an object with the incident wavelength is called the reflection spectrum of the object, and is commonly expressed as a reflectance curve [20]. The reflectance curves of PV, NPV and BS from our experiment are shown in Figure 2; the horizontal axis represents the wavelength and the vertical axis represents reflectance. Wavelengths heavily affected by atmospheric and water-vapor absorption were excluded, and the retained wavelengths are 350-1300, 1450-1750, and 2000-2300 nm. The spectral characteristics of PV, NPV and BS were analyzed here using the measured spectral curves of the five objects at 12:00 p.m. (noon) as an example (Figure 2). Due to the influence of chlorophyll, PV had the typical spectral characteristics of green vegetation. At the green band (560 nm) of the VIR (visible band, 400-700 nm) there was a small reflection peak, and near the red band (670 nm) there was an absorption valley. In addition, PV showed markedly high reflectance in the NIR (near infrared, 700-1100 nm). Therefore, PV could easily be distinguished from NPV and BS in the VIR-NIR (400-1100 nm). However, NPV and BS not only lacked distinctive spectral features in the VIR-NIR but also had similar reflectance curves, so it was impossible to distinguish NPV from BS using the VIR-NIR alone. In the SWIR (shortwave infrared, 1100-2400 nm), the reflectance of NPV and BS differed considerably: NPV had an obvious reflection peak at 1700 nm and, at the same time, diagnostic absorption features caused by cellulose near 2100 nm, while BS had another absorption feature at 2200 nm related mainly to the lattice of clay minerals. Therefore, it was possible to estimate fNPV using the spectral characteristics of the SWIR. This study selected the most widely used NDVI index for estimating fPV, as well as the SWIR32 index for estimating fNPV, based on MODIS multispectral imagery. Both indices were calculated after resampling the measured data according to the band ranges of the MODIS sensor (Figure 2). Following the MODIS channels, the combination of absorption band 1 (red band, 640-670 nm) and high-reflectance band 2 (NIR, 700-800 nm) for PV could be used to derive NDVI.
The reflectance peak of band 6 (SWIR2, 1628-1652 nm) and the absorption valley of band 7 (SWIR3, 2105-2135 nm) could be combined (SWIR32) for NPV estimation. In our experiment, the 36 reflectance curves of PV at each time point were used to calculate NDVI following Equation (1), and the 36 reflectance curves of the four objects NPV1, NPV2, NPV3 and BS at each time point were used to calculate SWIR32 following Equation (2). The index value for each time point was calculated by averaging the 36 replications. The calculation formula of each index follows Guerschman et al. [21]; MODIS_X indicates the band number of the MODIS satellite sensor. Variation Test In order to determine the appropriate daily time for field spectral acquisition, the variation over time of the reflectance values of bands 1, 2, 6 and 7 and of the derived indices (NDVI, SWIR32) was investigated and compared over the measurement period. First, the coefficients of variation (CV, %) of the 36 repetitions of both the reflectance values and the spectral indices at each time point were investigated. Then, the 12 time points during the period 7:30 a.m.-17:30 p.m. were divided into 55 time periods containing 3 or more consecutive time points, and the most appropriate daily time period was determined from the CV analysis of the averaged reflectance or the averaged indices for each period. A CV of less than 10% was considered acceptable, as suggested by Duggin [22]. Finally, one-way ANOVA was applied to further analyze the significance of differences in the spectral indices across the 12 time points from 7:30 a.m. to 17:30 p.m., using the 36 replicate values at each time point and a 95% confidence level. Reflectance Variation of Characteristic Bands over Time This study began with a qualitative analysis of the reflectance, over time, of the characteristic bands that best distinguish PV, NPV and BS. The measured spectral data were resampled according to the corresponding channels 1, 2, 6 and 7 of the MODIS sensor. As shown in Figure 3a, the two gray lines show the reflectance of PV in bands 1 and 2 corresponding to MODIS. It can be seen that the reflectance of band 1 is small and varies little with time, while the reflectance of band 2 varies widely and is relatively stable between 11:00 a.m. and 16:00 p.m.
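Equations (1) and (2) are referenced but not reproduced in this excerpt. Before continuing with the band-level results, the sketch below assumes the usual forms, NDVI as the band 2/band 1 normalized difference and SWIR32 as the band 7/band 6 ratio following Guerschman et al. [21], and includes the coefficient of variation (standard deviation/mean) used as the stability criterion; band-averaging limits and names are illustrative.

```python
import numpy as np

def band_mean(wavelengths_nm, reflectance, lo, hi):
    """Resample a field spectrum to a MODIS band by averaging reflectance within [lo, hi] nm."""
    mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
    return reflectance[mask].mean()

def ndvi(b1_red, b2_nir):
    # Equation (1), assumed here in the standard normalized-difference form.
    return (b2_nir - b1_red) / (b2_nir + b1_red)

def swir32(b6_swir2, b7_swir3):
    # Equation (2), assumed here as the band-7 / band-6 ratio of Guerschman et al.
    return b7_swir3 / b6_swir2

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate values, as used in Table 2."""
    x = np.asarray(replicates, float)
    return 100.0 * x.std(ddof=1) / x.mean()
```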
Similarly, the gray lines in Figure 3b show the reflectance of the remaining characteristic bands over time. In order to further verify the degree of variation over time of these reflectance values and spectral indices shown in Figure 3, a mathematical analysis was used to calculate the CV. As described in Section 2.3, the first method used the 36 spectral curves at each time point to extract the reflectance of the characteristic bands corresponding to MODIS, and the 36 replicate values of each band were used as a sample to calculate the CV; the results are shown in Table 2. Note: CV, coefficient of variation (standard deviation/mean). (The specific information about PV, NPV1, NPV2, NPV3 and BS is consistent with Figure 1. Band(X) represents the reflectance of a band obtained by resampling the field spectra according to the band range of the MODIS sensor.) It reveals that the reflectance of band 1 of PV varies greatly among the 36 repetitions; the CV of the 36 repetitions of band 2 is around 10% in the period from 10:00 a.m. to 16:00 p.m., with an average CV over that period of 9.97%. The CV of bands 6 and 7 for NPV1 is around 10% at 10:00 a.m.-16:00 p.m., with averaged CVs of 11.17% and 14.37%, respectively. The CV of bands 6 and 7 for NPV2 is below 10% from 10:00 a.m. to 16:00 p.m., with averaged CVs of 7.15% and 8.42%, respectively. The CV of bands 6 and 7 for NPV3 is less than 10% from 10:00 a.m. to 16:00 p.m., with averaged CVs of 6.71% and 7.49%, respectively. The CV of bands 6 and 7 of BS is less than 10% between 9:00 a.m. and 16:00 p.m., with averaged CVs of 8.37% and 8.34%, respectively. In general, the reflectance of these FVC objects was relatively stable from 10:00 a.m. to 16:00 p.m., with CVs of less than 10%, except for bands 6 and 7 of NPV1 and band 1 of PV. In particular, the reflectance of bands 6 and 7 of BS was stable even from 9:00 a.m. to 16:00 p.m. The second method was used to check the variability by selecting different samples for the CV calculation. Figure 4 shows 15 plots indicating the CV of the characteristic bands of PV, NPV1-3 and BS, as well as of the calculated spectral indices (NDVI or SWIR32), for each time period. The vertical axis represents the start time point and the horizontal axis the end time point. Each grid cell in the triangular matrix represents the CV value of the period from its corresponding start time to end time, giving a total of 55 time periods in each plot. The lightest gray cells in the figure represent the time periods in which the CV is in the acceptable 0-10% range. According to the CV of the reflectance values over the 55 time periods (Figure 4), the reflectance of band 1 of PV was relatively stable from 10:00 a.m., and bands 6 and 7 of NPV3 were stable from 9:00 a.m. to 16:00 p.m. with CVs of 6.03% and 5.91%, respectively.
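The second CV method splits the 12 time points into every contiguous run of at least 3 points (55 periods) and checks the CV of the period means against the 10% threshold. A minimal sketch, with illustrative names:

```python
import numpy as np

def contiguous_periods(n_time_points=12, min_len=3):
    """All (start, end) index pairs covering >= min_len consecutive time points (55 for 12 points)."""
    return [(i, j) for i in range(n_time_points)
            for j in range(i + min_len - 1, n_time_points)]

def period_cv(mean_value_by_time, start, end):
    """CV (%) of the per-time-point mean values within the period [start, end]."""
    x = np.asarray(mean_value_by_time[start:end + 1], float)
    return 100.0 * x.std(ddof=1) / x.mean()

periods = contiguous_periods()
assert len(periods) == 55  # the 55 candidate periods per plot in Figure 4
```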
Similarly, bands 6 and 7 of BS were also stable from 9:00 a.m. to 16:00 p.m., with CVs of 7.15% and 6.42%, respectively. From Table 2 and Figure 4, the appropriate daily time identified by the two methods differed, with the window given by the first method being slightly stricter. In general, the CV of reflectance in the period 10:00 a.m.-16:00 p.m. was acceptable for all five FVC objects. Effectiveness of Spectral Indices for Distinguishing FVC Objects Based on the experimental reflectance, the effectiveness of NDVI and SWIR32 in differentiating PV, NPV and BS was tested following Equations (1) and (2). The hourly variation of NDVI among the five FVC objects is shown in Figure 5a; Figure 5b represents the hourly change in the SWIR32 index for the five objects. From Figure 5a it can be seen that the NDVI of PV was distinctly high, ranging from 0.921 to 0.935 over the period 7:30 a.m.-17:30 p.m. The NDVI of NPV fluctuated from 0.242 to 0.447, and the NDVI of BS varied from about 0.104 to 0.177 over time. The NDVI values of PV were much easier to differentiate than those of NPV and BS, which makes the index effective for detecting green vegetation against other objects. Since the reflectance of bands 6 and 7 of the five FVC objects had abnormal values at 7:30 a.m., 8:00 a.m. and 17:30 p.m., the SWIR32 index for these three time points was excluded from Figure 5b. Figure 5b shows a very clear pattern in the distribution of SWIR32 for PV, NPV and BS: the SWIR32 values of PV, NPV and BS were 0.267-0.317, 0.661-0.764, and 1.0-1.116, respectively. It is evident that SWIR32 is an effective index to distinguish NPV from BS. CV of the Spectral Indices over Time This study used three methods to evaluate the variation of the spectral indices over time, namely two kinds of CV analysis and ANOVA. The first method is consistent with the method used to calculate the CV of reflectance in Table 2. The CV of NDVI and SWIR32 at different hours was calculated using their 36 repetitions at each time point, and the results are shown in Table 2. Compared with the large variation of bands 1 and 2 of PV, the NDVI of PV was much more stable, with a CV of no more than 2% over the 36 replicates at all time points from 7:30 a.m. to 17:30 p.m. (Table 2). Similarly, the CV of the 36 repetitions of the SWIR32 index of the NPVs and BS was about 10% from 9:00 a.m. to 17:00 p.m., more stable over time than the reflectance of their characteristic bands. The results of the second method are shown in Figure 4. In addition to the CV of the reflectance analyzed in Section 3.1, Figure 4 also shows that the CV of the FVC indices (NDVI and SWIR32) over the 55 time periods is stable from 7:30 a.m. to 17:00 p.m.
for the NDVI of PV, with a value of 0.75%; the SWIR32 of NPV1 and NPV2 remained stable from 9:00 a.m. to 17:00 p.m., with CVs of 2.09% and 2.34%, respectively. For NPV3, the SWIR32 was stable from 8:00 a.m. to 17:00 p.m. with a CV of 3.08%, and the SWIR32 of BS was stable from 7:30 a.m. to 17:00 p.m. with a CV of 4.22%. From Table 2 and Figure 4, in general, compared with the absolute reflectance values of the ground objects, the derived spectral indices NDVI and SWIR32 remained largely stable over the time points from 9:00 a.m. to 17:00 p.m. During this period, the CVs are 0.18% for NDVI and 2.09%, 2.34%, 3.1% and 3.09% for the SWIR32 of NPV1, NPV2, NPV3 and BS, respectively. Significance Test for Spectral Indices over Time One-way ANOVA was carried out to further test the variation over time of each index used to estimate fPV or fNPV (Figure 5). The results showed that the variation of the PV NDVI between 9:00 a.m. and 17:00 p.m. was not significant (p > 0.05), and the differences in SWIR32 for NPV1, NPV2, NPV3 and BS were likewise not significant from 9:00 a.m. to 17:00 p.m. (p > 0.05). Table 3 lists the appropriate time periods (I and II) identified by a CV of less than 10% from the results of Table 2 and Figure 4, and the suitable time period (III) obtained by one-way ANOVA (Figure 5). The combination of the three methods showed that both NDVI and SWIR32 were stable from 9:00 a.m. to 17:00 p.m., indicating that measuring spectra in this time period would satisfy the requirements of vegetation cover estimation using remote sensing. It also implies that the derived spectral indices obtained in this period are comparable to those extracted from MODIS Terra images.
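A sketch of the third method, one-way ANOVA across the 12 time points using the 36 replicate index values at each point (scipy is assumed to be available); a p-value above 0.05 is read as "no significant change over the tested window".

```python
from scipy import stats

def index_stable_over_time(replicates_by_time, alpha=0.05):
    """One-way ANOVA over time points.

    replicates_by_time: one array per time point, each holding the 36
    replicate NDVI or SWIR32 values measured at that time point.
    """
    f_stat, p_value = stats.f_oneway(*replicates_by_time)
    return f_stat, p_value, p_value > alpha  # stable if p > alpha
```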
Table 3. Appropriate daily time periods identified for each spectral index (appropriate time I / appropriate time II / appropriate time III):
PV-NDVI: … / … / 9:00-17:00
NPV1-SWIR32: 9:00-17:00 / 9:00-17:00 / 9:00-17:00
NPV2-SWIR32: 9:00-17:00 / 9:00-17:00 / 9:00-17:00
NPV3-SWIR32: 8:00-17:00 / 8:00-17:00 / 9:00-17:00
BS-SWIR32: 8:00-17:00 / 7:30-17:00 / 9:00-17:00
Note: The specific information about PV, NPV1, NPV2, NPV3 and BS is consistent with Figure 1. Appropriate times I and II are the daily time periods detected by the mathematical methods defined in Section 2.3; appropriate time III is the daily time period obtained from the one-way ANOVA test in this section. In addition, different spectral indices appear to have different requirements for the spectrum measurement time. The NDVI index of PV performed more stably, so the time range suitable for spectrum acquisition was longer, whereas the suitable window for the SWIR32 index of NPV and BS was shorter. Although the characteristic band ranges were the same, the SWIR32 of NPV was still stricter than that of BS in terms of time. In summary, 10:00 a.m.-16:00 p.m. was a strict time period when the variation of the reflectance of each characteristic band was considered, while 9:00 a.m.-17:00 p.m. was also acceptable when only the derived spectral indices were considered over time. Variation of Reflectance over Time Reflectance is influenced by ancillary factors including atmospheric scattering and absorption, topography, slope and aspect, solar zenith angle, and even the Earth-Sun distance [23]. Jackson et al. reported that the percentage error in the reference panel irradiance increased with increasing solar zenith angle [24]. Chang et al. argued that, because atmospheric conditions are variable, calculating reflectance without measuring the irradiance of both the target and the reference plate introduces errors, and that this error increases with cloud cover [25]. Kimes et al. (1983) reported errors in spectral radiation due to nearby objects in field studies: people holding sensors, backgrounds, buildings, or trees [26]. Some studies have pointed out that the quality of the reference panels at the time of measurement can also introduce errors, as it is impossible to construct a perfect Lambertian surface [27,28]. Duggin compared two different methods of measuring reflectance, one sequential and the other simultaneous, showing that sequential measurements introduce more error into the reflectance calculations [29]. These errors can be minimized by creating similar conditions in the same fields at the same time [25,30]. To obtain the reflectance of the target, the irradiance of both the standard reflector and the target must be measured; the reflectance of the target is the ratio of the irradiance of the target to that of the standard reflector. This is based on the assumption that the intensity and distribution of the irradiance are invariant between the readings of the target and the standard reflector [31]. Under natural light, the radiation received by ground objects in the visible and infrared bands comes mainly from direct sunlight. When the sun rises or sets at a very low solar altitude, the energy of direct sunlight is weak and is easily influenced by temperature and humidity, which attenuate the radiation intensity through the combined effects of reflection, absorption and scattering [32].
The resulting measurement error in radiation intensity between the standard reflector and the target is likely the dominant reason why the CV of the reflectance of the objects in the early morning and at dusk was very large, both for the 36 repetitions at a given time point and in comparison across the 12 time points. Additionally, the characteristics of the object itself and the uniformity of its surface can also have an impact on the results. In our experiment, the degree of dispersion of reflectance over time varied across wavelengths and objects. Compared with NPV1 and NPV2, the CV among the 36 repetitions of reflectance in bands 6 and 7 of NPV3 and BS was smaller. This was probably related to the surface homogeneity of the different objects, which affects the radiation intensity. NPV1 and NPV2 were large broad leaves, and the shadows generated by leaf folding would affect the measurements, whereas NPV3, consisting of tiny grass leaves, and BS, a sifted uniform soil, were evenly distributed within the field of view. Due to additional reflectance, multiple leaves can produce higher reflectance in the NIR band than a single leaf [33]. In our experiment, although the NDVI of PV was very stable, the reflectance in band 2 (NIR) of PV had a larger error than the other characteristic bands, which may be caused by the uneven layering of the wheat leaves during the measurements. The higher CV of band 1 (red) is probably due to its small mean reflectance. An Acceptable Error from the Point of View of Reflectance If the measured reflectance values that distinguish FVC objects are stable over time, then the spectral indices derived from the characteristic bands should also be stable over that time, and the estimated FVC values can be assumed to be stable. In this study, the reflectance values of the objects fluctuated more over time than the vegetation indices did; the indices removed part of the variation in the original reflectance values through the ratio operation. Duggin, analyzing differences in surface reflectance under different irradiance conditions on sunny and cloudy days, suggested that measurement results are acceptable if the error in the reflectance coefficient is within 10% [22,34]. In our experiment, the CV of reflectance in the characteristic spectral bands during the period 10:00 a.m.-16:00 p.m. was less than or about 10%, and the CV of the 36 repetitions at each time point during this period was approximately 10%, consistent with those previous results. The CV of the spectral indices derived from the characteristic bands in our study was stable between 9:00 a.m. and 17:00 p.m., and the ANOVA test further confirmed this result. Limitations of the Experiment From the above results, it follows that when field spectral measurements are made to verify FVC estimation by remote sensing, measurements can be scheduled from one and a half hours after sunrise to one and a half hours before sunset on a clear winter day. This is a broader time frame than other studies and national standards have suggested. For example, when Cao and Wang collected spectra in the field to distinguish FVC objects, only the four hours between 10:00 a.m. and 14:00 p.m. were used [13,35]; the national standard for measuring spectra of objects specifies 10:00 a.m. to 15:00 p.m., leaving only five hours for fieldwork [11]. Even the longest time period reported is only 6 h, from 10:00 a.m. to 16:00 p.m.
in Guerschman's field experiment [12]. In this study, however, based on the analysis of the stability of the spectral indices over time, the recommended total of 8 h between 9:00 a.m. and 17:00 p.m. is longer than other sampling schedules. Collecting field spectra within this wider recommended time frame makes building a field spectral library, further image analysis, and spectral index development more feasible. It is worth noting that the experiment was carried out close to the time of the lowest solar altitude angle in the northern hemisphere (22nd December), so it can serve as a reference for areas above 34 degrees north latitude. Readers should check the data in this study and understand the limitations of its results. It is reasonable to expect that measurements carried out when the solar altitude angle is higher (e.g., in summer) would return even better results. Conclusions In order to determine and assess the appropriate time for FVC estimation using satellite remote sensing, an innovative field experiment was carried out to continuously measure PV, NPV and BS targets over a three-day period. The spectral curves of the five objects were measured between 7:30 a.m. and 17:30 p.m. using a spectrometer; the variation of reflectance and FVC spectral indices over time was analyzed, and the following conclusions were obtained. • NDVI and SWIR32 can potentially be applied to distinguish PV, NPV and BS objects. • The degree of stability of the reflectance over time varies for different FVC objects and bands. Generally, the appropriate time to obtain relatively stable reflectance is from 10:00 a.m. to 16:00 p.m., with CVs for the different bands ranging from 5.01% to 9.53%.
6,976.2
2020-09-10T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Comparative transcriptome analyses of a late-maturing mandarin mutant and its original cultivar reveal gene expression profiling associated with citrus fruit maturation Late maturity in fruit is a good agronomic trait for extending the harvest period and marketing time. However, the underlying molecular basis of the late-maturing mechanism in fruit is largely unknown. In this study, RNA sequencing (RNA-Seq) technology was used to identify differentially expressed genes (DEGs) related to late-maturing characteristics from a late-maturing mutant 'Huawan Wuzishatangju' (HWWZSTJ) (Citrus reticulata Blanco) and its original line 'Wuzishatangju' (WZSTJ). A total of approximately 17.0 Gb of data and 84.2 M paired-end reads were obtained. DEGs were significantly enriched in the pathways of photosynthesis, phenylpropanoid biosynthesis, carotenoid biosynthesis, and chlorophyll and abscisic acid (ABA) metabolism. Thirteen candidate transcripts related to chlorophyll metabolism, carotenoid biosynthesis and ABA metabolism were analyzed using real-time quantitative PCR (qPCR) at all fruit maturing stages of HWWZSTJ and WZSTJ. Chlorophyllase (CLH) and divinyl reductase (DVR) from chlorophyll metabolism, phytoene synthase (PSY) and capsanthin/capsorubin synthase (CCS) from carotenoid biosynthesis, and abscisic acid 8′-hydroxylase (AB1) and 9-cis-epoxycarotenoid dioxygenase (NCED1) from ABA metabolism were cloned and analyzed. The expression pattern of NCED1 indicated its role in the late-maturing characteristics of HWWZSTJ. There were 270 consecutive bases missing in HWWZSTJ in comparison with the full-length NCED1 cDNA sequence from WZSTJ. These results suggested that NCED1 might play an important role in the late maturity of HWWZSTJ. This study provides new information on the complex process that results in the late maturity of Citrus fruit at the transcriptional level. INTRODUCTION Fruit maturity date is an important economic trait, and selection of varieties with different harvest times would be advantageous to extend their storage period and market share. Citrus, one of the most important fruit crops, is produced on a large commercial scale in tropical and subtropical regions of the world. The total harvested area of citrus exceeds 8.8 million ha, with an annual yield of over 130 million tons in 2015 (Food and Agricultural Organization of the United Nations, 2014). Currently, the harvest time for most citrus falls mainly in November to December, resulting in huge market pressure. Therefore, breeding of early- and late-maturing citrus varieties is essential to extend the marketing season, meet the needs of consumers, and ensure optimal adaptation to climatic and geographic conditions. Plant hormones play important roles in the regulation of fruit development and ripening (Kumar, Khurana & Sharma, 2014). Ethylene is known to be the major hormonal regulator of climacteric fruit ripening. In addition to ethylene, abscisic acid (ABA), auxin, gibberellin (GA) and brassinosteroid are involved in regulating fruit ripening. ABA plays an important role as an inducer, along with ethylene signaling, of the onset of fruit degreening and carotenoid biosynthesis during the development and ripening process in climacteric and non-climacteric fruits (Leng et al., 2009; Sun et al., 2010; Jia et al., 2011; Romero, Lafuente & Rodrigo, 2012; Soto et al., 2013; Wang et al., 2016).
ABA treatment can rapidly induce flavonol and anthocyanin accumulation in berry skins of the Cabernet Sauvignon grape, suggesting that ABA could stimulate berry ripening and ripening-related gene expression (Koyama, Sadamatsu & Goto-Yamamoto, 2010). ABA also participates in the regulation of fruit development and ripening of tomato (Zhang, Yuan & Leng, 2009; Sun et al., 2011), cucumber (Wang et al., 2013), strawberry (Jia et al., 2011), bilberry (Karppinen et al., 2013), citrus (Zhang et al., 2014) and grape (Nicolas et al., 2014). Recent studies showed that ABA is a positive regulator of ripening and that exogenous ABA application could effectively regulate citrus fruit maturation (Wang et al., 2016). These results suggest that ABA metabolism plays a crucial role in the regulation of fruit development and ripening. In addition, fruit deterioration and post-harvest processes might influence fruit quality and the ripening process; however, there are few reports on these processes. α-Mannosidase (α-Man) and β-D-N-acetylhexosaminidase (β-Hex) are two ripening-specific N-glycan processing enzymes whose transcripts have been shown to increase with non-climacteric fruit ripening and softening (Ghosh et al., 2011). Genetic results have proved that 9-cis-epoxycarotenoid dioxygenase (NCED) is the key enzyme in ABA metabolism in plants (Liotenberg, North & Marion-Poll, 1999; Luchi et al., 2001). NCED1 could initiate ABA biosynthesis at the beginning of fruit ripening in both peach and grape fruits (Zhang et al., 2009). Silencing of FaNCED1 (encoding a key ABA synthesis enzyme) in strawberry fruit significantly decreased ABA levels and produced uncolored fruits, a phenotype that could be rescued by application of exogenous ABA (Jia et al., 2011). Suppression of the expression of SlNCED1 resulted in delayed fruit softening and maturation in tomato (Sun et al., 2012). Overexpression of an ABA-response element binding factor (SlAREB1) in tomato could regulate organic acid and sugar contents during tomato fruit development: higher levels of organic acids, sugars and related-gene expression were detected in the fruit pericarp of mature tomato in SlAREB1-overexpressing lines (Bastías et al., 2011). However, there is little information available about the role of NCED1 genes in citrus fruit maturation (Zhang et al., 2014). Bud mutant selection is the most common method for creating novel cultivars in Citrus. The 'Huawan Wuzishatangju' (HWWZSTJ) mandarin is an excellent cultivar derived from a bud sport of the seedless cultivar 'Wuzishatangju' (WZSTJ). Fruits of HWWZSTJ mature in late January to early February of the following year, approximately 30 d later than WZSTJ (Qin et al., 2013; Qin et al., 2015). Therefore, the late-maturing mutant and its original cultivar are excellent materials to identify and describe the molecular mechanism involved in citrus fruit maturation. In this study, highly efficient RNA-Seq technology was used to identify differentially expressed genes (DEGs) between the late-maturing mutant HWWZSTJ and its original line WZSTJ. DEGs involved in carotenoid biosynthesis, chlorophyll degradation and ABA metabolism were characterized. The present work could help to reveal the molecular mechanism of the late-maturing characteristics of citrus fruit at the transcriptional level.
Plant materials The late-maturing mutant 'Huawan Wuzishatangju' (HWWZSTJ) (Citrus reticulata Blanco) and its original cultivar 'Wuzishatangju' (WZSTJ) were planted in the same orchard at South China Agricultural University (23°09′38″ N, 113°21′13″ E). Ten six-year-old trees of each cultivar were used in this experiment. Peels (including albedo and flavedo fractions) from fifteen uniform-sized fresh fruits were collected on the 275th (color-break stage, i.e., the peel turns from green to orange) and 320th (maturing stage) days after flowering (DAF) of HWWZSTJ and on the 275th (maturing stage) DAF of WZSTJ (Fig. S1) in 2012, and the pools were named T3, T1 and T2, respectively. Peels from fifteen uniform-sized fresh fruits of HWWZSTJ and WZSTJ were collected on the 255th, 265th, 275th, 285th, 295th, 305th, 315th and 320th DAF in 2012 and used for expression analyses of candidate transcripts associated with chlorophyll metabolism, carotenoid biosynthesis and ABA metabolism. All samples were immediately frozen in liquid nitrogen and stored at −80 °C until use. RNA extraction, library construction and RNA-Seq Total RNA was extracted from peels according to the protocol of the RNAout kit (Tiandz, Beijing, China), and genomic DNA was removed with DNase I (TaKaRa, Dalian, China). RNA quality was analyzed on a 1.0% agarose gel and concentration was quantified with a NanoDrop ND1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). RNA integrity number (RIN) values (>7.0) were assessed using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). Construction of the RNA-Seq libraries was performed by the Biomarker Biotechnology Corporation (Beijing, China). mRNA was enriched and purified with oligo(dT)-rich magnetic beads and then broken into short fragments. The cleaved RNA fragments were reverse transcribed into first-strand cDNA using random hexamer primers, and second-strand cDNA was synthesized using RNase H and DNA polymerase I. The cDNA fragments were purified, end blunted, 'A' tailed, and adaptor ligated. The size distribution of the cDNA in the three libraries was monitored using an Agilent 2100 Bioanalyzer. Finally, the three libraries were sequenced on an Illumina HiSeq 2500 platform. Transcriptome assembly and annotation Sequences obtained in this study were annotated against the genome sequence of Citrus sinensis (Xu et al., 2013; Wang et al., 2014) using the TopHat program (Trapnell, Pachter & Salzberg, 2009). Functional annotation of the unigenes was performed using BLASTx (Altschul et al., 1997) and classified with Swiss-Prot (SWISS-PROT downloaded from the European Bioinformatics Institute, Jan. 2013), the Clusters of Orthologous Groups of Proteins database (COG) (Tatusov et al., 2000), the Kyoto Encyclopedia of Genes and Genomes database (KEGG, release 58) (Kanehisa et al., 2004), the non-redundant database (nr) (Deng et al., 2006) and Gene Ontology (GO) (Harris et al., 2004). The number of mapped and filtered reads for each unigene was calculated and normalized, giving the corresponding Reads Per Kilobase per Million mapped reads (RPKM) values. DEGs between two samples were determined according to a false discovery rate (FDR) threshold of <0.01, an absolute log2 fold change value of ≥1 and a P-value <0.01. Gene validation and expression analysis Data from RNA-Seq were validated using qPCR.
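A minimal sketch of the normalization and DEG screening just described, assuming a pandas table of per-gene statistics; the RPKM formula is the standard one and the thresholds follow the text (FDR < 0.01, p < 0.01, |log2 fold change| ≥ 1). Column names and the example values are hypothetical.

```python
import pandas as pd

def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase per Million mapped reads for one gene in one library."""
    return read_count * 1e9 / (gene_length_bp * total_mapped_reads)

def call_degs(stats_df, p_col="p_value", fdr_col="fdr", lfc_col="log2_fc"):
    """Keep genes passing the thresholds used in this study."""
    keep = (stats_df[fdr_col] < 0.01) & (stats_df[p_col] < 0.01) & (stats_df[lfc_col].abs() >= 1)
    return stats_df[keep]

# Hypothetical two-gene example
genes = pd.DataFrame({
    "gene": ["geneA", "geneB"],
    "p_value": [0.0002, 0.20],
    "fdr": [0.004, 0.50],
    "log2_fc": [2.3, 0.1],
})
print(call_degs(genes))  # only geneA passes all three thresholds
```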
All pigment-related (chlorophyll metabolism, carotenoid biosynthesis and ABA metabolism) uni-transcripts were selected to elucidate their expression patterns at all peel coloration stages of HWWZSTJ and WZSTJ with specific primers (Table S1). The citrus actin gene (accession No. GU911361.1) was used as an internal standard for the normalization of gene expression. Expression levels of all pigment-related uni-transcripts were determined by qPCR in an Applied Biosystems 7500 real-time PCR system (Applied Biosystems, CA, USA). The 20.0 µl reaction volume contained 10.0 µl THUNDERBIRD SYBR qPCR Mix (TOYOBO Co., Ltd.), 50× ROX Reference Dye, 2.0 µl Primer Mix (5.0 µM), 6.0 µl ddH2O, and 2.0 µl cDNA (40 ng). The qPCR parameters were: 94 °C for 60 s, then 40 cycles of 95 °C for 15 s, 55 °C for 15 s and 72 °C for 30 s. All experiments were performed three times with three biological replicates. Relative expression levels of selected transcripts were calculated by the 2^−ΔΔCT method (Livak & Schmittgen, 2011). RNA-Seq analyses To obtain differentially expressed genes (DEGs) between HWWZSTJ and WZSTJ, three libraries (T1, T2 and T3) were designed for RNA-Seq. As shown in Table 1, 26,403,257, 29,163,126, and 28,606,868 raw reads were obtained from the three libraries, respectively. After removing low-quality bases and reads, a total of approximately 17.0 Gb of clean reads was obtained. The GC contents for T1, T2 and T3 were 44.27%, 44.62% and 44.20%, respectively (Table 1). Most transcripts were 100-200 bp in length (Fig. S2). The Q30 percentage (percentage of bases with a sequencing error rate lower than 0.1%) for each sample was over 90% (Table 1). A total of 44,664,047, 49,507,338 and 48,492,905 reads were mapped, accounting for 84.58%, 84.88% and 84.76% of the total reads, respectively (Table 2). Uniquely mapped reads accounted for 97.14% (T1), 97.25% (T2) and 97.19% (T3) of the total mapped reads, compared with 2.86% (T1), 2.75% (T2) and 2.81% (T3) for multiply mapped reads. These results suggested that the throughput and sequencing quality were high enough for further analyses. Analyses of differentially expressed genes (DEGs) DEGs were screened by comparison between any two of the three libraries using p < 0.01, FDR < 0.01 and fold change ≥ 2 as thresholds. A total of 2,687, 3,002 and 1,834 DEGs were obtained between the T1 and T3, T2 and T1, and T2 and T3 libraries, respectively (Fig. 1A). Among those DEGs, 1,162, 1,567 and 770 were up-regulated and 1,525, 1,435 and 1,064 were down-regulated (Fig. 1B). Functional annotation of transcripts A total of 299 new transcripts were annotated using five public databases (Nr, Swiss-Prot, KEGG, COG and GO). A summary of the annotations is shown in Table S3. The maximum number of annotated differentially expressed transcripts (2,954) across the T1-vs-T3, T2-vs-T1 and T2-vs-T3 comparisons was in the Nr database, followed by the GO database (2,648) (Table S4). The differentially expressed transcripts were classified into three GO categories: cellular component, molecular function and biological process. DEGs between T1 and T3, T2 and T1, and T2 and T3 were all significantly enriched in the pigmentation, signaling and growth biological processes (Fig. S3A). Based on COG classifications, the differentially expressed transcripts were divided into 25 different functional groups (Fig. S3B).
DEGs between any two of the three libraries (T1-vs-T3, T2-vs-T1, T2-vs-T3) were assigned to 91, 100 and 91 KEGG pathways, respectively (File S1), and phenylalanine metabolism, porphyrin and chlorophyll metabolism, and flavonoid biosynthesis were the three significantly enriched biological processes (Table 3). Verification of the accuracy of the RNA-Seq data using qPCR Twelve DEGs with significant differences among the three libraries were selected to verify the RNA-Seq data by qPCR. Linear regression analysis showed an overall correlation coefficient of 0.828, indicating a good correlation between the qPCR results and the normalized expression values (reads per kilobase per million) from the RNA-Seq data (Fig. S4). Expression analyses of candidate transcripts Expression patterns of candidate transcripts associated with chlorophyll metabolism were analyzed in WZSTJ and HWWZSTJ at all fruit maturation stages (Fig. 3). Compared with WZSTJ, lower expression levels of ALAD1 and CLH were detected in HWWZSTJ at all fruit maturation stages. Expression of ALAD1 and CLH increased before fruit maturation and decreased thereafter in both WZSTJ and HWWZSTJ. The highest expression level of CLH was detected on the 295th DAF in HWWZSTJ, 20 d later than in WZSTJ. Expression levels of CAO1 and PAO in HWWZSTJ were higher than those in WZSTJ. FC1 showed a decreasing trend during fruit maturation of WZSTJ and HWWZSTJ. GluRS, HEMF1, HEMG and CHLM showed irregular expression patterns in WZSTJ and HWWZSTJ (Fig. 3). Six carotenoid biosynthesis-related transcripts first rose and then declined across the fruit maturation stages of WZSTJ and HWWZSTJ (Fig. 4). The highest expression level of CCS was detected on the 295th DAF in HWWZSTJ, 20 d later than in WZSTJ. Expression levels of PDS1, PSY3, PSY5, PSY6 and PSY7 in WZSTJ were higher than those in HWWZSTJ. PDS1 showed an increasing trend during fruit maturation of WZSTJ and HWWZSTJ and reached its maximum expression on the 295th DAF. PSY5 showed its highest expression level on the 275th DAF, whereas PSY3, PSY6 and PSY7 peaked on the 265th DAF in both WZSTJ and HWWZSTJ. Expression levels of PSY5 increased before the 275th DAF and decreased thereafter; PSY3, PSY6 and PSY7 were up-regulated before the 265th DAF and decreased gradually thereafter (Fig. 4). Expression patterns of the two candidate transcripts related to ABA metabolism, i.e., AB1 and NCED1, were analyzed at all fruit maturation stages of WZSTJ and HWWZSTJ (Fig. 5). AB1 first rose and then declined during the fruit maturation stages of WZSTJ and HWWZSTJ. The highest expression level of AB1 was obtained on the 295th DAF in HWWZSTJ, 20 d later than in WZSTJ. Similar expression patterns of NCED1 were observed before the 295th DAF in HWWZSTJ and WZSTJ (Fig. 5). The expression level of NCED1 in HWWZSTJ was lower than that of WZSTJ from the 275th to the 305th DAF. The highest expression level of NCED1 was detected on the 305th DAF in WZSTJ and decreased significantly thereafter (Fig. 5), whereas the highest expression of NCED1 in HWWZSTJ was on the 295th DAF. The results of the expression analyses of the candidate genes suggested that NCED1 might play a leading role in the late-maturing characteristics of HWWZSTJ.
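The candidate-transcript expression levels above were obtained with the 2^−ΔΔCT method against the actin reference described earlier. A minimal sketch of that calculation, with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """2^-ddCt relative expression (Livak & Schmittgen).

    ct_target / ct_actin: mean Ct of the gene of interest and the actin
    reference in the test sample; *_cal: the same Ct values in the calibrator
    sample (e.g. the earliest DAF stage).
    """
    dd_ct = (ct_target - ct_actin) - (ct_target_cal - ct_actin_cal)
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for one transcript at one maturation stage
print(relative_expression(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold relative to the calibrator
```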
Cloning and phylogenetic analyses of candidate genes Full-length cDNA sequences of CLH and DVR from chlorophyll metabolism, PSY3, PSY5, PSY6, PSY7 and CCS from carotenoid biosynthesis, and AB1 and NCED1 from ABA metabolism were cloned from HWWZSTJ and WZSTJ mandarins. There was a single base-pair difference in each of the CLH, PSY3 and PSY5 cDNA sequences between HWWZSTJ and WZSTJ (Figs. S5-S7); however, the amino acid sequences of CLH, PSY3 and PSY5 from HWWZSTJ were 100% identical to those from WZSTJ. There were 4, 6, 4, 3 and 17 bp differences between the sequences of DVR, CCS, PSY6, PSY7 and AB1 derived from HWWZSTJ and WZSTJ, resulting in 2, 3, 3, 1 and 8 differences, respectively, in the amino acids that would have been incorporated during translation of these transcripts (Figs. S8-S12). Compared with WZSTJ, 270 consecutive bases were missing from the NCED1 cDNA sequence of HWWZSTJ (Fig. 6). Phylogenetic analysis showed that CLH, DVR, PSY and NCED1 each belonged to the same cluster as their homologs, and their homology with similar sequences from other species is depicted in Figs. S13-S16. The sequence analyses suggested that the deletion of 270 nucleotides in NCED1 may result in the late-maturing characteristics of HWWZSTJ. DISCUSSION Chlorophyll degradation, carotenoid biosynthesis and ABA metabolism play important roles in regulating citrus fruit maturation through a series of related genes and specific signaling networks (Zhang et al., 2014). In this study, RNA-Seq technology was used to screen DEGs between the late-maturing mandarin mutant HWWZSTJ and its wild type WZSTJ during fruit maturation. DEGs between any two of the three libraries were significantly enriched in biological processes such as photosynthesis, phenylpropanoid biosynthesis, carotenoid biosynthesis, chlorophyll metabolism, ABA metabolism, and starch and sucrose metabolism (Table 3). Thirteen maturation-related transcripts involved in carotenoid biosynthesis, chlorophyll degradation and ABA metabolism were selected for further analysis. CLH is the key enzyme catalyzing the first step of chlorophyll degradation: it catalyzes the hydrolysis of the ester bond to yield chlorophyllide and phytol in the chlorophyll breakdown pathway (Jacob-Wilk et al., 1999; Tsuchiya & Takamiya, 1999). Jacob-Wilk et al. (1999) isolated a CLH encoding an active chlorophyllase enzyme and verified the role of CLH in chlorophyll dephytylation by in vitro recombinant enzyme assays. The expression level of CLH in Valencia orange peel was low and constitutive and did not significantly increase during fruit development and ripening (Jacob-Wilk et al., 1999). In the present study, a CLH was obtained from the transcriptome dataset, and no difference was detected in the amino acid sequences of CLH between HWWZSTJ and WZSTJ. Expression levels of CLH increased prior to fruit maturation and decreased thereafter in both WZSTJ and HWWZSTJ. The highest expression level of CLH was detected on the 295th DAF in HWWZSTJ, 20 d later than in WZSTJ (Fig. 3). Similar results were also observed in peels of the late-maturing mutant of Fengjie72-1 navel orange (Liu et al., 2006) and of Tardivo clementine mandarin (Distefano et al., 2009). These results suggested that CLH may balance chlorophyll synthesis and breakdown (Jacob-Wilk et al., 1999). Citrus has one of the most complex carotenoid profiles, with the largest number of carotenoids (Kato et al., 2004). Carotenoid contents and compositions are the main factors affecting the peel color of most citrus fruits (Tadeo et al., 2008).
PSY is a regulatory enzyme in carotenoid biosynthesis (Welsch et al., 2000). PSY is present at a low expression level in unripe (green) melon fruit, reaches its highest levels when the fruit turns from green to orange, and persists at lower levels during later ripening stages (Karvouni et al., 1995). Liu et al. (2006) studied the mechanism underlying the difference between Fengwan (a late-maturing mutant) navel orange and its original cultivar (Fengjie72-1): the highest expression levels of some carotenoid biosynthetic enzymes in the peels of the late-maturing mutant occurred 30 d later than in the original cultivar (Liu et al., 2006). In this work, PSY first rose and then declined across the fruit maturation stages of the late-maturing mutant HWWZSTJ and its original line WZSTJ, and the expression levels of PSY3, PSY5, PSY6 and PSY7 in HWWZSTJ were lower than those in WZSTJ. These results demonstrated that the mutation in HWWZSTJ influenced the transcriptional activation of PSY. ABA can be considered a ripening regulator during fruit maturation and ripening. NCED, a key enzyme involved in ABA biosynthesis, plays an important role in the fruit ripening of avocado (Persea americana) (Chernys & Zeevaart, 2007), orange (Citrus sinensis) (Rodrigo, Alquezar & Zacarías, 2006), tomato (Solanum lycopersicum) (Nitsch et al., 2009; Zhang, Yuan & Leng, 2009), grape (Vitis vinifera) and peach (Prunus persica) (Zhang et al., 2009). NCED1 was expressed only at the onset stage of ripening in peach and grape, when the ABA content became high (Zhang et al., 2009). Zhang et al. (2014) studied the mechanism of a spontaneous late-maturing mutant of 'Jincheng' sweet orange and its wild type through comparative analysis; the highest expression of CsNCED1 was at 215 DAA in the WT. In our study, expression levels of NCED1 increased prior to fruit maturation and decreased significantly thereafter in both HWWZSTJ and WZSTJ, and the highest expression level of NCED1 was detected on the 305th DAF in the WT (WZSTJ). Our results are consistent with previous findings that NCED1 plays the most important role in the ABA biosynthesis pathway during the fruit maturation process (Zhang et al., 2014). Deletion of nucleotides can cause a shift of the reading frame and a truncated protein, which can give rise to natural mutants. Compared with the cDNA sequence of NCED1 from WZSTJ, 270 consecutive bases were missing in HWWZSTJ (Fig. 6). These results suggested that NCED1 might play an important role in the late maturity of HWWZSTJ. A highly efficient regeneration system for WZSTJ has been established (Wang et al., 2015), and a further study on the role of NCED1 in citrus is being carried out through genetic engineering. CONCLUSION RNA-Seq technology was used to identify pigment-related genes from the late-maturing mandarin mutant HWWZSTJ and its original cultivar WZSTJ. Thirteen candidate transcripts related to chlorophyll metabolism, carotenoid biosynthesis and ABA metabolism were obtained. NCED1, a gene involved in ABA metabolism, is probably involved in the formation of late maturity in HWWZSTJ based on the sequence and expression analyses. The present study opens up a new perspective for studying the formation of late maturity in citrus fruit.
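The 270-base block reported missing from the HWWZSTJ NCED1 cDNA can be located programmatically by aligning the two cloned sequences. A minimal sketch using Python's standard difflib (the sequences shown are short placeholders, not the actual NCED1 sequences):

```python
from difflib import SequenceMatcher

def find_deletions(reference_cdna, mutant_cdna):
    """Return (start, length) of blocks present in the reference but absent from the mutant."""
    matcher = SequenceMatcher(None, reference_cdna, mutant_cdna, autojunk=False)
    return [(i1, i2 - i1) for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag == "delete"]

# Toy example: a 6-base block missing from the "mutant" sequence
wz_cdna = "ATGGCTTCAGGTACCGATCTTGCAAAG"
hw_cdna = "ATGGCTTCAGATCTTGCAAAG"
print(find_deletions(wz_cdna, hw_cdna))  # [(10, 6)]; exact boundaries may shift for repeated bases
```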
5,090.4
2017-05-18T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
Synthesis of silver nanoparticles using white-rot fungus Anamorphous Bjerkandera sp. R1: influence of silver nitrate concentration and fungus growth time Currently, silver nanoparticles (AgNPs) constitute an interesting field of study in medicine, catalysis and optics, among other areas. For this reason, it has been necessary to develop new methodologies that allow a more efficient production of AgNPs with better antimicrobial and biological properties. In this research, the effects of the growth time of Anamorphous Bjerkandera sp. R1 and of the silver nitrate (AgNO3) concentration on AgNP synthesis were studied. Through the protocol used in this work, it was found that the action of the capping proteins on the surface of the mycelium played a determining role in the reduction of Ag+ ions to Ag0 nanoparticles, producing particle sizes that ranged between 10 and 100 nm. The progress of the reaction was monitored using UV-Vis spectroscopy, and the synthesized AgNPs were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM) and Fourier transform infrared (FTIR) spectroscopy. The best synthesis conditions were found at an AgNO3 concentration of 1 mM, a growth time of 8 days, and a reaction time of 144 h. Obtaining metal nanoparticles from microorganisms can be considered a new synthesis method, owing to their ability to reduce metal ions through their enzymatic systems, and represents a low-cost synthesis that reduces the generation of harmful toxic waste. Silver nanoparticles (AgNPs) have recently attracted considerable attention in the development of applications due to their excellent physical and chemical properties, such as their high thermal stability and low toxicity 1. Studies have shown that they can overcome pathologies previously treated with conventional antibiotics, due to their strong and broad-spectrum antimicrobial characteristics 2,3. One of the challenges in the synthesis process is to obtain nanoparticles with specific characteristics, such as size distribution, shape and surface charge, among others, which in turn determine their physical and chemical properties 4. The standardization of the nanoparticle synthesis process is very important, since the antibacterial properties are highly related to particle size and surface charge; if these properties are adequately controlled, silver nanoparticles could have enormous potential as antibacterial agents 5,6. The methods most commonly used for the synthesis of nanoparticles have been physical and chemical 7. Conventional physical methods tend to produce low nanoparticle quantities, while chemical methods consume too much energy and require the use of stabilizing agents that are often toxic, such as sodium dodecyl benzene sulfonate or polyvinylpyrrolidone (PVP), which are used to avoid nanoparticle agglomeration 8,9. Therefore, there is a need to implement green and/or biological synthesis methods to reduce hazardous and toxic waste, while still obtaining particles on the nanometric scale 10. In the biological synthesis of AgNPs, the reducing agents and toxic stabilizing agents are replaced by nontoxic molecules (proteins, carbohydrates, antioxidants, etc.) produced by living organisms such as bacteria, fungi, yeasts and plants 7,11. For example, the use of fungi is considered an important synthesis route, due to their high binding capacity and intracellular metal uptake.
It has been reported that fungal material is more advantageous than bacteria and plants, since the mesh-like fungal mycelium can withstand flow pressures, agitation and adverse conditions in processes that require the use of bioreactors and chambers 12. Furthermore, fungi secrete significantly higher quantities of proteins than bacteria, which would amplify the productivity of nanoparticle synthesis 13. Different fungal strains have been studied for the synthesis of silver nanoparticles, such as Aspergillus niger, Aspergillus flavus, Alternaria alternata, Cladosporium cladosporioides, Fusarium solani, Fusarium oxysporum, Penicillium brevicompactum, Trichoderma asperellum and Verticillium 14,15. Azmath et al., for example, found that culture filtrates from various Colletotrichum sp. synthesized AgNPs with sizes between 5 and 60 nm, and that the biomolecules secreted by the fungus possibly functioned as stabilizing agents preventing agglomeration in the aqueous medium 16. With respect to white-rot fungi [17][18][19][20], their use in biosynthesis has been reported due to their high tolerance to metals and their powerful enzyme system (protein release); this last property gives them a great capacity for adsorption of Ag+ ions on the walls of the mycelium 21. Some white-rot fungi, such as Phanerochaete chrysosporium 19, Trametes ljubarskyi, Ganoderma enigmaticum 18 and Trametes trogii 22, have been reported to produce stable silver nanoparticles when silver nitrate (AgNO3) is used as the metallic precursor in an aqueous medium, showing that fungal biomolecules under different experimental conditions play an important role in the production of AgNPs. Although many studies attest to the importance of fungal material for obtaining nanoparticles, it is still necessary to evaluate additional fungi as particle synthesizers and to verify how their growth process affects the synthesis. The fungus used in this study has been reported as an anamorphous of Bjerkandera adusta. These anamorphous fungi are characterized by asexual spores called conidia, whose purpose is rapid reproduction and survival; this implies great potential for various biotechnological and biomedical applications due to their high nutritional and organoleptic quality and the ease of growing them on agro-industrial by-products 23. The objective of this work was to evaluate the effect of silver nitrate concentration and fungus growth time on the synthesis of silver nanoparticles (AgNPs) from the white-rot fungus anamorphous Bjerkandera sp. R1. The formation of silver nanoparticles was monitored using UV-Vis spectrophotometry and complemented with morphological characterization through scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Methodology Microorganisms and culture media. The white-rot fungus strain Bjerkandera sp. R1 was used and cryopreserved in pinewood splinters and bagasse. All fungi were donated by the Group of Environmental Biotechnology of the department of chemical engineering at Universidad de Santiago de Compostela (Spain) 24. The reagents necessary for the preparation of the culture media were donated by the bioprocess group of the department of chemical engineering at Universidad de Antioquia (Colombia). Cultures were made every month in Petri dishes with solid Kimura medium [agar (15 g/L), glucose (20 g/L), peptone (5 g/L), yeast extract (2 g/L), KH2PO4 (1 g/L), MgSO4·5H2O (0.5 g/L)] at pH 5.5 25.
The inoculum necessary to start all the assays was prepared by transferring 4 pieces of colonized agar to a Fernbach flask with liquid Kimura culture medium [glucose (20 g/L), peptone (5 g/L), yeast extract (2 g/L), KH2PO4 (1 g/L), MgSO4·5H2O (0.5 g/L)] at pH 5.5 25. Subsequently, the mycelium layer formed was homogenized in a blender for 20 s for the different tests. The crushed mycelium was mixed with Tween 80 and aseptically transferred to the liquid Kimura culture medium. The sample was incubated in a shaking incubator (JEIO TECH SI-300) at 30 °C and 200 rpm to favor pellet formation; it was then centrifuged at 4500 rpm for 20 min to obtain two fractions: pellets and supernatant. Each of these fractions was then used to determine the effect of silver nitrate (AgNO3) concentration and growth time on the synthesis of silver nanoparticles. Evaluation of the operational conditions for the production of silver nanoparticles (AgNPs). The production of AgNPs was carried out using two reduction methods. 1. Reduction of silver ions in the fungal filtrate (CS sample): 1% v/v solutions were prepared with the fungal filtrate obtained from the different fungus growth times and the corresponding concentrations of AgNO3, and mixed for 144 h. The control for this sample consisted of the fungal filtrate alone. 2. Reduction of silver ions from the mycelium pellets (MP sample): for this method, the pellets at a 1% w/v concentration were mixed with the AgNO3 solutions and incubated for 144 h; the solution was then centrifuged and the pellets were separated by membrane filtration. Finally, they were re-suspended in deionized water and homogenized using a probe. The samples were monitored with a UV-Vis spectrophotometer (Helios-α Thermo Spectronic) by scanning the absorbance spectra in the 350-800 nm wavelength range. The resulting spectra helped identify the absorption band of silver (Ag). Table 1 shows the AgNO3 concentration values and fungus growth times studied, with the area under the curve as the response variable. The area under the curve of the UV-Vis spectra (AUC) was used because a quantitative variable was required to associate with the presence or absence of AgNPs. Significance was determined using analysis of variance (ANOVA), and the statistical program Statgraphics Centurion® was used for the response surface analysis. Evaluation of the effect of fungus growth time on the synthesis of AgNPs. The most suitable AgNO3 concentration found in Sect. "Determination of a suitable AgNO3 concentration for the synthesis of AgNPs" was used to evaluate the effect of fungus growth time. Six different growth times were evaluated for the fungus anamorphous Bjerkandera sp. R1 (3, 4, 5, 6, 7 and 8 days of culture). In this case, the CS samples were incubated in the dark (30 °C) in a shaking incubator (JEIO TECH SI-300) for 144 h at 200 rpm. These tests were carried out in triplicate. Additionally, a control was performed using only fungal filtrate, with the purpose of having a reference for the spectral analysis. Small aliquots of the CS samples were monitored every 24 h, for a total of 144 h, by scanning the absorbance spectra with a UV-Vis spectrophotometer (Helios-α Thermo Spectronic) under the same conditions as previously described. Table 2 presents the studied values of fungus growth time and incubation time, with the area under the curve (AUC) as the response variable.
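The response variable for the ANOVA and response-surface analysis is the area under the UV-Vis curve. A minimal sketch of that AUC calculation over the 350-800 nm scan range, integrating numerically (names are illustrative):

```python
import numpy as np

def uv_vis_auc(wavelengths_nm, absorbance, lo=350.0, hi=800.0):
    """Area under the UV-Vis absorbance curve, the AUC response variable."""
    wl = np.asarray(wavelengths_nm, float)
    ab = np.asarray(absorbance, float)
    mask = (wl >= lo) & (wl <= hi)
    return np.trapz(ab[mask], wl[mask])
```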
The significance was determined using an analysis of variance (ANOVA), and the response surface analysis was done using the statistical program Statgraphics Centurion®. Characterization of the AgNPs. The evaluation of the size distribution of the AgNPs was performed by transmission electron microscopy (TEM) and scanning electron microscopy (SEM). For the SEM evaluation, the fungus was lyophilized, and small MP samples were fixed on graphite tape. Additionally, a thin gold (Au) coating was applied (DENTON VACUUM Desk IV equipment) and the samples were analyzed in the scanning electron microscope (JEOL-JSM 6490 LV) at an accelerating voltage of 20 kV. Semi-quantitative chemical composition of the sample was measured by energy-dispersive X-ray spectroscopy (EDX; INCA PentaFETx3, Oxford Instruments) using the system coupled to the SEM equipment. For the TEM evaluation, a Tecnai F20 Super Twin TMP instrument with an accelerating voltage of 200 kV and 0.1 nm resolution was used. For this, a drop of CS sample containing AgNPs was placed on a carbon-coated copper grid, and the samples were dried under an infrared (IR) lamp. Chemical compositional analysis of the colloidal suspension was carried out with an Oxford Instruments XMAX EDX detector. The reported size distributions were obtained from calculated averages (10-20 measurements) over specific regions of the TEM and SEM micrographs. The sizes of the nanoparticles, both on the surface of the mycelium and in the colloidal suspension (CS samples), were measured using the Scandium software of the SEM equipment and ImageJ for the TEM. Finally, in order to determine the possible biomolecules responsible for the reduction of silver ions and to confirm the capping agents on the AgNPs, Fourier-transform infrared (FTIR) spectroscopy was performed (Nicolet iS50 FTIR). All measurements were carried out in the range of 400-4000 cm−1 at a resolution of 2 cm−1. Results and discussion Determination and influence of silver nitrate (AgNO3) concentration on the synthesis of AgNPs from the anamorph Bjerkandera sp. R1. Synthesis of silver nanoparticles (AgNPs) in the CS samples of the anamorph Bjerkandera sp. R1. The reduction of silver ions to silver nanoparticles (AgNPs) in the fungal filtrate obtained from the white-rot fungus anamorph Bjerkandera sp. R1 was first examined through a qualitative analysis. A yellow to brown color change was observed after 48 h of reaction when the fungal filtrate was reacted at final silver nitrate (AgNO3) concentrations of 1 and 1.5 mM, respectively (Fig. 1). The color change indicates the presence of AgNPs, owing to the surface plasmon resonance (SPR) exhibited by the synthesized AgNPs 26,27. With respect to the color change in these solutions, the size and shape of the synthesized AgNPs were initially corroborated through the absorption changes observed in the UV-Vis spectra (maximum wavelength) as a function of time. A strong SPR band at 430 nm was observed in the spectra, which increased in intensity with time and reached a stabilization point after 120 h for the fungal filtrate obtained after 5 days of fungal growth and concentrations of 1 and 1.5 mM AgNO3 (Fig. 2b,c). A broad SPR band was also observed after 144 h for the fungal filtrate obtained after 7 days of fungal growth and a concentration of 1.0 mM AgNO3 (Fig. 2e); in contrast, at a concentration of 1.5 mM AgNO3 only a tenuous SPR band was observed (Fig. 2f).
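The time-course behaviour just described (an SPR band near 430 nm that grows and then stabilizes) can be summarized directly from the recorded spectra. The snippet below is a schematic example with invented 24 h readings; it simply extracts the peak wavelength and peak absorbance at each time point and flags when the band stops growing.

```python
# Schematic SPR tracking for a series of UV-Vis scans taken every 24 h.
# 'spectra' maps incubation time (h) to an absorbance array; values are invented.
import numpy as np

wavelengths = np.arange(350, 801, 2)  # nm

def spr_summary(spectra, plateau_tol=0.02):
    """Return (time, peak wavelength, peak absorbance) rows and the first time
    point at which the peak absorbance grows by no more than plateau_tol."""
    rows, plateau_at, prev_max = [], None, None
    for t in sorted(spectra):
        a = np.asarray(spectra[t])
        i = int(np.argmax(a))
        rows.append((t, wavelengths[i], float(a[i])))
        if prev_max is not None and a[i] - prev_max <= plateau_tol and plateau_at is None:
            plateau_at = t
        prev_max = float(a[i])
    return rows, plateau_at

# Invented example: a band at 430 nm that grows until roughly 120 h.
spectra = {t: min(t, 120) / 120 * np.exp(-((wavelengths - 430) / 55) ** 2)
           for t in range(24, 168, 24)}
rows, plateau = spr_summary(spectra)
for t, wl, ab in rows:
    print(f"{t:>3} h  peak {wl:.0f} nm  A = {ab:.2f}")
print("plateau reached by", plateau, "h")
```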
These changes in coloration, together with the appearance of these bands, were evidence of the presence of AgNPs in the solution. Table 3 presents the ANOVA for incubation times of 120 h and 144 h. According to the values obtained (p ≤ 0.05) with a confidence level of 95%, it was established that neither the fungal growth time nor the AgNO3 concentration had a significant effect on the area under the curve of the UV-Vis spectra (AUC) for an incubation time of 144 h. In contrast, a significant effect was seen when the solution was incubated for 120 h. Figure 3 shows the response surface graphs for the AUC variable after 120 and 144 h of incubation. A higher growth 'rate' favored an increase in AUC and therefore the extracellular synthesis of AgNPs 28. From these results, it was found that working with an AgNO3 concentration of 1 mM for 144 h was adequate for the synthesis of AgNPs using the anamorph Bjerkandera sp. R1. It was also found that the absorbance of these spectra increased and was higher than that of the solution prepared at 1.5 mM (Fig. 4b). The absorbance of AgNPs (at the wavelength of maximal absorbance) is proportional to the concentration of AgNPs, and these results indicated the formation of a greater number of AgNPs within the fungal extract (CS sample) 29. They are in accordance with the research conducted by Gudikandula et al. 18 and Saravanan et al. 19, who reported that working at a final concentration of 1 mM AgNO3 facilitates the stable formation of AgNPs from white-rot fungi. Effect of silver nitrate (AgNO3) concentration on the biosynthesis of AgNPs using the anamorph Bjerkandera sp. R1. To observe the effect of AgNO3 on the synthesis on the surface of the mycelium, measurements were taken using a scanning electron microscope (SEM). The SEM images show micrographs of the lyophilized fungus (MP samples) from day 7 of fungal growth (the best result found according to Table 3, Fig. 3 and Fig. 4b for the CS sample), incubated for 144 h with different AgNO3 concentrations. Low accumulation of silver residues (macroparticles; Fig. 5a, red circle) and well-defined particle distributions were observed for 1 mM AgNO3, with spherical shapes and a size distribution of 70-90 nm (Fig. 5a, black circle). Regarding the other AgNO3 concentrations, the synthesis of AgNPs was much more limited at a final concentration of 0.5 mM AgNO3 (Fig. 5b). In this case there was little formation of silver residues (Ag macroparticles) 17, since the ions released in the solution were not adsorbed on the surface of the mycelium. From this analysis, this substrate concentration was not sufficient for some biomolecules to act appropriately as stabilizing and reducing agents. On the other hand, for a final concentration of 1.5 mM AgNO3 (Fig. 5c), the reduction of silver ions could have occurred intracellularly, but the combinations and interactions of the functional groups present in the wall of the fungus were affected by the use of higher levels of AgNO3; under these conditions the nucleation of the Ag+ species became slower, which caused excessive accumulation of macroparticles on the surface of the mycelium (Fig. 5c, red circle). Regarding the ideal concentration for intracellular synthesis, the results found here differ from those reported by Kobashigawa et al., who found that 5 mM AgNO3 favors both intra- and extracellular synthesis of AgNPs from the white-rot fungus Trametes trogii 22.
For the fungus used in this research, in contrast, the EDX spectra confirmed the synthesis of AgNPs through the presence of a peak at approximately 3 keV that corresponds to pure silver 14,30, indicating that a final concentration of 1 mM is adequate for carrying out the reduction of the Ag+ ion to Ag0 by the anamorph Bjerkandera sp. R1. In the EDX spectra, carbon and oxygen peaks can also be observed; these peaks could indicate the presence of proteins and/or fungal filtrate remains that were retained in the interstitial spaces of the fungus. As described in the methodology, the culture medium is rich in carbon, an essential element that the fungus requires to fulfill its metabolic functions 24,31. Color changes of differing intensity were observed in the CS samples prepared from the different growth times of the fungus (Fig. 6); a sharper color contrast indicates a higher proportion of AgNPs due to the surface plasmon resonance (SPR) 26,27. The changes in color (CS samples in triplicate) were also verified with the UV-Vis spectra. The higher absorbance peaks were seen at 430 nm for the fungal extracts obtained from the following growth days: 4, 5, 6, 7 and 8 (Fig. 7). For the latter growth day, the highest absorbance band was seen after 144 h of reaction with 1 mM AgNO3 (CS sample) (Fig. 7f). With respect to day 7 of growth, the filtered fungal extract from this day showed much less color than those from days 6 and 8 when reacted with 1 mM AgNO3. This probably occurred because under these conditions the anamorph Bjerkandera sp. R1 had finished its stationary growth phase and entered the death phase. In this case, the secretion of proteins involved in the stabilization and reduction of Ag+ ions could be affected, causing a low synthesis rate in the fungal extract and an adverse effect on the dispersion of the synthesized AgNPs 23. For the growth days evaluated, no shifts to the left (blue) or to the right (red) of the maximum wavelength of the SPR peak were observed; according to Mie's theory 32, this indicates that the anisotropy of the AgNPs decreased considerably and that the size could possibly have been controlled (Fig. 7a-e) 32-34. In this study, the SPR bands found suggested that the nanoparticles synthesized were spherical 32. Table 4 presents the ANOVA for the effect of fungal growth time. According to the values obtained (p ≤ 0.05) with a confidence level of 95%, it was established that the growth time of the fungus had a significant effect on the response variable, the area under the curve of the UV-Vis spectra (AUC). According to the response surface graph, the best result was obtained for the fungal extract from day 8 of growth, incubated for 144 h with 1 mM AgNO3 (Fig. 8). With regard to this result, the optimal time for the fungus to release more of the biomolecules in charge of the reduction process is the 8th day, and a greater AgNP production is more feasible with longer reaction times with 1 mM AgNO3. These findings can be compared with the research carried out by Birla et al., who reported that an increase in absorbance peak intensity over time indicates the continuous reduction of silver ions and an increase in the concentration of AgNPs 35. Effects of the growth time of the anamorph Bjerkandera sp. R1 on the synthesis of AgNPs.
Considering the previously cited results, the TEM micrographs (Fig. 9) show the differences in size and shape of the AgNPs obtained once the fungal filtrate had been prepared under the different conditions tested. Most of the particles observed were spherical and separated from each other, with little agglomeration and a size distribution between 10 and 30 nm. These results suggest that in this process biological residues (capping agents) may have performed the function of reduction and stabilization of the AgNPs 36,37, as reported by Seetharaman et al. 14, Saravanan et al. 19 and Balakumaran et al. 38 in studies using different types of fungi. Analysis by energy-dispersive X-ray spectroscopy (EDX) confirmed the presence of an elemental silver signal (Fig. 8). Identification lines for the major emission energies of silver (Ag) are displayed in a range between 2.8 and 3.4 keV, confirming the presence of AgNPs in the fungal filtrate. Other peaks also appear in the EDX spectra; this indicates that biomolecules were bound to the surface of the AgNPs during the process 39. In the FTIR spectra, a band was also observed at 667 cm−1; this band corresponds to C-S stretching vibrations and possibly to heterocyclic compounds that can be found in the fungal filtrate 41. From these results it appears that strong functional groups such as carbonyl (-C=O) present in the fungal filtrate had a greater participation in the synthesis. In this context, the FTIR study suggested that the adsorptive carbonyl groups of amino acid residues and peptides of proteins had the strongest ability to bind silver ions (Ag+) in the mycelium 21; the bioreduction of these ions, both on the wall of the fungus and in the CS sample, was then perhaps due to the release of some proteins that managed the nucleation and subsequent synthesis of AgNPs 15,40,42,43. Under these conditions, an adequate affinity between the substrate and the biomolecules responsible for the reduction process may have been achieved 26. These findings are in agreement with several investigations arguing that functional groups present in extracellular substances work as capping agents and adsorb better onto the particles located on the surface of the mycelium, sealing the AgNPs and forming a coating that prevents them from agglomerating when the sample is agitated 17,40,44. For the growth days evaluated, the SEM images show that most of the biosynthesized AgNPs present on the cell surface are spherical, with an approximate size of 30-100 nm. This result indicated that the reduction process occurred on the surface of the mycelium and that the AgNPs were uniformly distributed, forming small groups on the fungal material at all growth times (Fig. 11a-e), whereas agglomerate formation was inhibited for the fungus that grew for 8 days (Fig. 11f, black circles). Since the best synthesis results were seen for the biomass obtained from day 8 of fungal growth (Fig. 11f, black circles), it is likely that the long exposure time caused the depletion of the carbon and nitrogen sources present in the fungal filtrate, thereby facilitating the lysis of the fungal mycelium. In this context, the membrane becomes more permeable; therefore, more extracellular substance (proteins) is retained in the interstices of the mycelium (Fig. 12b). Under these conditions, it is possible that at the time of reaction with 1 mM AgNO3 there was a greater adsorption of AgNPs on the surface of the mycelium, causing a better coating and greater stabilization of particle size.
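Size distributions such as the 10-30 nm (TEM) and 30-100 nm (SEM) ranges quoted above are typically obtained by exporting 10-20 manual diameter measurements per micrograph (for example from ImageJ) and summarizing them. The sketch below assumes a hypothetical CSV of measured diameters; the file name and column name are illustrative, not taken from the study.

```python
# Summarize nanoparticle diameters measured on TEM/SEM micrographs.
# 'diameters.csv' and its 'diameter_nm' column are hypothetical examples
# of an ImageJ measurement export.
import numpy as np
import pandas as pd

measurements = pd.read_csv("diameters.csv")          # one row per measured particle
d = measurements["diameter_nm"].to_numpy()

print(f"n = {d.size}")
print(f"mean ± sd : {d.mean():.1f} ± {d.std(ddof=1):.1f} nm")
print(f"range     : {d.min():.1f} - {d.max():.1f} nm")

# Simple histogram (counts per 10 nm bin), e.g. to compare CS vs MP samples.
bins = np.arange(0, d.max() + 10, 10)
counts, edges = np.histogram(d, bins=bins)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:>3.0f}-{hi:<3.0f} nm : {'#' * int(c)}")
```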
The opposite occurred when using biomass from shorter growth times (for example, 5 days of fungal growth), since little extracellular substance accumulated in the interstitial spaces (under these conditions the fungal mycelium is not smoothed; Fig. 12a) and the processes of stabilization and reduction of the AgNPs were possibly affected, generating agglomerates and the formation of macroparticles. The EDX spectra showed strong signals of Ag0 14,30 and of other elements such as carbon and oxygen. Regarding these results, and considering the report by Taboada-Puig et al. 24 that long growth periods of the same fungus increase the production of proteins, it could be stated that a higher concentration of carbonyl groups (Figs. 10 and 12b) acts better as a capping agent by arresting the nucleation growth of the AgNPs during formation, giving rise to small-sized particles (Fig. 11f, black circles). Conclusions The effect of silver nitrate (AgNO3) concentration on silver nanoparticle (AgNP) synthesis was evaluated, and it was established that this factor significantly affected the behavior of the anamorph Bjerkandera sp. R1 with respect to ion reduction, both in the fungal extract and on the mycelium surface. The best synthesis behavior was observed for an incubation time of 144 h using a 1 mM AgNO3 concentration. When evaluating the effect of fungal growth time on the synthesis of AgNPs using this concentration, it was possible to corroborate that proteins or chemical functional groups were released more easily from the surface of the mycelium into the fungal extract from day 8, thus reducing most of the Ag+ in the 1 mM AgNO3 solution (CS sample) to Ag0 nanoparticles. Finally, it was found that the enlargement of the interstitial spaces of the mycelium was favored when the fungus grew for this period. This condition triggered greater adsorption of the silver ions on the surface of the mycelium. In this context, and under prolonged reaction times with 1 mM AgNO3 (144 h), the greatest in situ reduction of Ag+ ions to Ag0 (MP sample) occurred.
6,017.2
2021-02-15T00:00:00.000
[ "Environmental Science", "Chemistry", "Biology", "Materials Science" ]
Surface Waters and Urban Brown Rats as Potential Sources of Human-Infective Cryptosporidium and Giardia in Vienna, Austria Cryptosporidium and Giardia are waterborne protozoa that cause intestinal infections in a wide range of warm-blooded animals. Human infections vary from asymptomatic to life-threatening in immunocompromised people, and can cause growth retardation in children. The aim of our study was to assess the prevalence and diversity of Cryptosporidium and Giardia in urban surface water and in brown rats trapped in the center of Vienna, Austria, using molecular methods, and to subsequently identify their source and potential transmission pathways. Out of 15 water samples taken from a side arm of the River Danube, Cryptosporidium and Giardia (oo)cysts were detected in 60% and 73% of them, with concentrations ranging between 0.3–4 oocysts/L and 0.6–96 cysts/L, respectively. Cryptosporidium and Giardia were identified in 13 and 16 out of 50 rats, respectively. Eimeria, a parasite of high veterinary importance, was also identified in seven rats. Parasite co-occurrence was detected in nine rats. Rat-associated genotypes did not match those found in water, but matched Giardia previously isolated from patients with diarrhea in Austria, bringing up a potential role of rats as sources or reservoirs of zoonotic pathogenic Giardia. Following a One Health approach, molecular typing across potential animal and environmental reservoirs and human cases gives an insight into environmental transmission pathways and therefore helps design efficient surveillance strategies and relevant outbreak responses. Introduction Even though the Danube River is the second largest river in Europe, information regarding the prevalence of protozoan enteric pathogens in its water is scarce. In a comprehensive study, Kirschner et al. [1] reported that human-associated faecal pollution is a crucial problem throughout the Danube River basin, posing a threat to all types of water uses. Cryptosporidium and Giardia are parasitic protozoa responsible for diarrheal diseases in humans and other animals worldwide [2]. Although infections caused by the two parasites are underreported, the prevalence of Cryptosporidium among the world's population is estimated to range between 3-5%, while the prevalence of Giardia is approximately 10% [3]. In the USA these parasites are responsible for 30,000 cases of diarrhea every year [3]. The two parasites have similar life cycles, characterized by an environment-resistant infective stage, the Cryptosporidium oocysts and Giardia cysts, which initiate infection. Brown rats have been reported to host several assemblages of G. duodenalis (assemblages A, B and the rodent-exclusive G) [25], with a prevalence ranging from 22.2-100% [26]. The dynamics of infections at the animal-human interface are determined by changes in the host populations, such as rat and pathogen prevalence and diversity, abundance, spatial distribution and contact rates within the rats-humans-pathogens system. Moreover, site-specific abiotic factors [19] as well as factors related to anthropogenic activities can also impact infection dynamics. Therefore, local studies are needed in order to accurately assess the risk and adapt relevant preventive strategies.
Using a One Health approach, this study aimed to (i) assess for the first time the prevalence of parasitic protozoa such as Cryptosporidium and Giardia in urban surface water such as the Danube Canal, a side-arm of the River Danube that flows through the city center of Vienna, Austria, and in urban brown rats trapped in the city center of Vienna, (ii) determine the diversity and zoonotic potential of the aforementioned parasites by identifying species, genotypes, and assemblages, and (iii) assess the potential role of the River Danube and urban brown rats as sources and/or reservoirs of Cryptosporidium and Giardia in the study area. Trapping R. norvegicus were trapped between March and June 2017 at two sites in the city of Vienna, Austria, highly frequented by humans: at a promenade along the Danube Canal (mean coordinates of the trapped rats: 16.36540, 48.22633 decimal degrees (D.D.)) and at Karlsplatz (16.37044, 48.20363 D.D.), a tourist attraction in the city. Detailed information on the trapping method used can be found elsewhere [27]. Rats were identified at the species level based on morphological characteristics and named with codes AD31 to AD84. No feces samples were obtained for rats AD53, AD60, AD65 and AD75. For each animal, morphological data were recorded such as sex, body mass (g), body length (nose to anus, mm) and sexual maturity. Sexual maturity was assessed for males when rats developed seminal vesicles and had scrotal (vs. inguinal) testes. According to Vadell et al. [28], females were assessed as sexually mature when showing a distinct uterus blood supply, placental scars or presence of embryos. Feces were collected from the rectum and stored in 96% ethanol until DNA extraction. Cryptosporidium and Giardia Quantification in Water Samples Surface water samples from the Danube Canal (48.211826, 16.383592), a side arm of the River Danube that flows through the city center of Vienna, were analyzed monthly for the presence of Giardia and Cryptosporidium between May 2019 and September 2020, except during March and April due to SARS-CoV-2 derived lockdown (n = 15). Water volumes of 5-15 L were enumerated following the flat membrane method described in ISO 15553 [29]. Briefly, after filtration, 142 mm cellulose acetate membranes with pore size 1.2 µm were placed in stomacher bags and transported to the lab. Particles on the membranes were scraped with a cell scraper and recovered using 50 mL of Glycine 1M (Sigma Aldrich, Steinheim, Germany) buffer at pH 5.5, followed by an incubation of 10 min in the Stomacher Lab-Blender 400 and 5 min in an ultrasound bath. The contents of the bags were then placed into 50 mL tubes and centrifuged at 1550× g for 15 min. Supernatants were discarded and pellets were resuspended in 2 mL of ultrapure water. One mL of the suspension was used for identification of the parasites by molecular methods (explained below). The remaining 1 mL was then used for immunomagnetic separation of Giardia and Cryptosporidium using the Dynabeads GC Combo kit (Life Technologies, Oslo, Norway). Concentrates were stained with the EasyStain kit (Biopoint Pty. Ltd., Belrose, Australia) and quantified as described in [30]. After implementation, tests were performed to determine the recovery efficiency of the used enumeration method when spiking surface water samples. For that purpose, reference materials G. muris H3 and C. parvum (Waterborne Inc, New Orleans, LA, USA) were used. 
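Converting raw microscope counts from the enumeration method above into concentrations per litre, and deriving a volume-dependent limit of detection, is simple arithmetic. The helper below is only a sketch: the analysed-fraction parameter (the share of the concentrate actually examined) and all example numbers are assumptions for illustration, not values stated in the paper.

```python
# Sketch: per-litre concentration and sample-specific LOD for (oo)cyst counts.
# 'fraction_analysed' is an assumed parameter (portion of the concentrate
# examined after IMS and staining); it is not quoted in the paper.
def concentration_per_litre(count, volume_filtered_l, fraction_analysed,
                            recovery=None):
    """Concentration in (oo)cysts/L; optionally corrected for recovery."""
    effective_volume = volume_filtered_l * fraction_analysed
    c = count / effective_volume
    if recovery:                       # e.g. 0.47 for Cryptosporidium oocysts
        c /= recovery
    return c

def limit_of_detection(volume_filtered_l, fraction_analysed):
    """LOD = concentration corresponding to a single detected (oo)cyst."""
    return 1.0 / (volume_filtered_l * fraction_analysed)

# Illustrative use only (numbers are made up):
print(concentration_per_litre(count=6, volume_filtered_l=10, fraction_analysed=0.5))
print(limit_of_detection(volume_filtered_l=10, fraction_analysed=0.5))   # 0.2 / L
```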
For these recovery tests, we used as water matrices several surface water samples with turbidities ranging from 1.7 to 70 NTU (Supplementary Material Table S2). Recovery efficiencies were 47 ± 27% and 58 ± 28% for Cryptosporidium oocysts and Giardia cysts, respectively. The theoretical limit of detection (LOD) of the flat membrane method varied according to the volume of water analyzed. For a 10 L water sample, the LOD was 0.4 (oo)cyst/L. DNA Extraction Approximately 200 mg of rat stool was dried using silica for 48 h. DNA extraction was performed using the QIAamp® Fast DNA Stool Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. DNA extraction from water samples was carried out with the DNeasy® Power Soil Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions, using 200 µL of concentrated water samples. Identification of Cryptosporidium and Giardia Species and Genotypes For identification below the genus level, all samples were subjected to two independent nested PCRs, specific to Cryptosporidium and Giardia respectively. For identification of Cryptosporidium, the nested PCR described by [31] was used, targeting a fragment of the 18S rDNA gene. The PCR consisted of 45 cycles at 94 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s. The conditions for the secondary nested PCR were identical to those of the primary PCR. Giardia was identified by nested PCR targeting a fragment of the triosephosphate isomerase (tpi) gene [9]. Conditions for the primary and secondary nested PCRs were identical and consisted of 35 cycles at 94 °C for 45 s, 50 °C for 45 s and 72 °C for 1 min. All PCRs were run on an Eppendorf Mastercycler (Eppendorf AG, Hamburg, Germany) with an initial hot start at 94 °C for 10 min and a final extension at 72 °C for 7 min. Amplicons were visualized by 2% agarose gel electrophoresis stained with GelRed™ stain (BioTrend, Cologne, Germany), cut from the gel with a sterile scalpel and purified using the PCR and Gel Band Purification kit (illustra GFX, GE Healthcare, Austria). Sanger sequencing was performed directly from the PCR products with a Thermo Fisher Scientific SeqStudio (Thermo Fisher Scientific, MA, USA). Sequences were obtained from both strands in two independent setups, aligned to obtain a consensus sequence using ClustalW in BioEdit Sequence Alignment Editor [32] and compared using NCBI BLAST to reference sequences of Cryptosporidium, Giardia and Eimeria species and genotypes from GenBank [33]. Statistical Analysis and Mapping Maps of the capture sites were built using QGIS 3.4.15 (QGIS Development Team, 2018); the raster dataset "Orthofoto 2016 Wien" (Open Data Österreich, https://www.data.gv.at/, 31 March 2019) was used as base map. Statistical analysis was performed as described in [27]. Briefly, the spatial autocorrelation between parasite-positive rats was assessed using a non-parametric spatial covariance function. We investigated the impact of place of capture, body mass, sex, sexual maturity and land-use variables (within a 200 m radius buffer zone around the place of capture, considered as a proxy for home range) on the individual infection status of rats for the three parasites. We computed a logistic regression model (generalized linear mixed-effect model under binomial distribution) using the glmer function in the lme4 library while controlling for clustering by site of capture (random effect).
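A rough, general-purpose sketch of this modelling step is given below: a binomial mixed model for individual infection status with site of capture as a random effect, followed by the AICc-based Akaike weights and relative variable importance (RVI) calculation described in the next paragraph. The original analysis used R (lme4::glmer and AICc model averaging); the Python stand-ins and the column names in the example table are assumptions for illustration.

```python
# Sketch only: Python stand-in for the glmer-based analysis of rat infection
# status. Column names in 'rats.csv' are assumed, not taken from the paper.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rats = pd.read_csv("rats.csv")   # assumed columns: giardia (0/1), body_mass,
                                 # sex, mature, green_area, transport_area, site

# Binomial mixed model with a random intercept per capture site
# (BinomialBayesMixedGLM is a variational-Bayes stand-in for lme4::glmer).
model = BinomialBayesMixedGLM.from_formula(
    "giardia ~ body_mass + C(sex) + C(mature) + green_area + transport_area",
    {"site": "0 + C(site)"},
    rats)
print(model.fit_vb().summary())

# AICc-based model averaging (see next paragraph): Akaike weights across a
# candidate model set, and relative variable importance (RVI) of a covariate
# as the summed weight of the candidate models that include it.
def aicc(loglik, k, n):
    aic = -2.0 * loglik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_weights(aicc_values):
    delta = np.asarray(aicc_values) - np.min(aicc_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Made-up candidate models: (log-likelihood, n parameters, includes green_area?)
candidates = [(-30.2, 3, True), (-31.5, 2, False), (-29.8, 4, True)]
weights = akaike_weights([aicc(ll, k, n=len(rats)) for ll, k, _ in candidates])
rvi_green = weights[[has for *_, has in candidates]].sum()
print("RVI(green_area) =", round(float(rvi_green), 2))
```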
The conditional model average estimates were calculated and weighted according to the Akaike Information Criterion corrected for small sample size (AICc). For each covariate, the relative variable importance (RVI) was computed from model-averaged parameter estimate weights to determine the probability that each variable might contribute to the model for these data. Ethical Statement This study followed institutional and national standards for the care and use of animals in research. It was approved by the institutional ethics and animal welfare committee and the national authority (GZ 68.205/0196-WF/V/3b/2016). Trapping Traps were set on 15 nights in two locations in Vienna (10 at Danube Canal, five at Karlsplatz) and 50 brown rats (R. norvegicus) were captured (Danube Canal 39; Karlsplatz 11) (Figures 1 and 2). Twenty-eight (56%) of the captured rats were male (Danube Canal 23; Karlsplatz 5) and 22 (44%) were female (Danube Canal 16; Karlsplatz 6). Among them, 24 (48%) were sexually mature (15 male and 9 female). The median body mass and length (nose tip to anus) were 129.5 g and 167.5 mm for rats caught at Danube Canal, and 91.4 g and 153 mm at Karlsplatz. The median body mass of sexually mature rats was 176.8 g versus 121.1 g for immature rats (Table 1, Table S1). Prevalence and Identification of Protozoa in Rat Faeces Overall, 74% (n = 34) of the stool samples from the 50 urban brown rats analyzed by independent nested PCR provided Giardia and/or Cryptosporidium sequences. Interestingly, Eimeria was also detected by the Cryptosporidium nested PCR. Co-occurrence of Cryptosporidium and Giardia was observed in six rats and co-occurrence of Eimeria and Giardia was observed in four rats (Figure 1, Figure 2, Table 1, Table S1). We obtained a limited amount of DNA from the rat feces; thus, we aimed to sequence every sample at least twice in independent setups to obtain reliable consensus sequences, and we concentrated our efforts on identifying to the species/assemblage level the three protozoa shed by rats in the present study rather than exploring in depth the phylogeny of only one of them. Thus, no further testing regarding Cryptosporidium and Eimeria species as well as Giardia sub-assemblages was possible. Cryptosporidium Of the 50 rat fecal samples analyzed, 20 (40%) revealed an amplicon in the 18S rDNA nested PCR for Cryptosporidium.
Seventeen of these 20 samples were successfully sequenced. The sequence analysis identified 10 of them belonging to the genus Cryptosporidium and seven belonging to the genus Eimeria. Among the Cryptosporidium sequences, we identified five sequences (MZ314966-70) with 100% similarity to Cryptosporidium rat genotypes I and IV available in GenBank and previously isolated from rodents, specifically rats. The fragment of the 18S rDNA analysed here shows no nucleotide differences between the two genotypes to further distinguish between them. We also identified four sequences (MZ314971-74) with 99.81-100% similarity to others found in GenBank identified as Cryptosporidium environmental spp. and isolated from wastewater (KY483983), storm water (AY737582, AY7375824 and AY7375825) and brown rats (MT56130, MG917671, MT504540) from all over the world. Moreover, we identified one sequence (MZ314975) with 100% sequence identity to those reported in [34] and in recently named C. occultus n. sp. Eimeria The analysis of the 18S rDNA amplicons revealed that seven sequences belonged to the genus Eimeria. Among the seven sequences, five of them (MZ314986-90) are 99.82-100% identical to the sequences of E. alorani (MK625209, KU192965) and E. caviae (JQ993649) available in GenBank and isolated from other rodents. No further species distinction was possible due to the low sequence diversity of these two species in the amplified region (1 bp) and there being only one 18S rDNA E. caviae sequence available in GenBank. The other two sequences obtained (MZ314991-92) show 100% identity to E. ferrissi sequences available in GenBank and isolated from mice (MH752036, MH751925) and squirrels (KT360995). Giardia Of the 50 rat fecal samples analyzed by nested PCR targeting the tpi gene locus of Giardia, 17 (34%) gave amplicons of the correct length. Of these 17 amplicons, 12 were successfully sequenced, revealing two Giardia assemblages, namely assemblage G (n = 3) and assemblage A (n = 9). Two of the Giardia assemblage G sequences obtained (MZ322740-41) were 100% identical to others available in GenBank isolated from rats (MT114179, EU781013), whereas sequence MZ322742 showed one nucleotide difference. Among the nine sequences belonging to assemblage A, eight sequences MZ322743-50 showed 100% identity to sequences from GenBank classified as assemblage AI and isolated from cattle (EF654693) and sheep (MK639171). Moreover, sequence MZ322751 showed a 99.8% similarity with other sequences available in GenBank classified as assemblage AII and isolated from donkeys (MN704937), dogs (KY608997) and cats (LC341572) from all over the world. Land-Use Data For each investigated site, the surface occupied by each land-use category in a 200-m radius buffer area (proxy for home range) of the captured rats is summarized in Table 2. Predictors of Cryptosporidium, Eimeria and Giardia Shedding in Urban Rats The non-parametric spatial correlation did not reveal a spatial correlation between rats trapped and the shedding of the three investigated protozoan parasites (Table S2, Supplementary Material). Statistical analyses showed that the investigated variables (place of capture, body mass, sex, sexual maturity and land-use) were low to moderate predictors of Cryptosporidium, Eimeria or Giardia shedding. 
The variable that contributed the most to predicting the shedding of Cryptosporidium by brown rats was the presence of green infrastructure in the rats' home range (RVI 0.60); the most important predictor of Eimeria shedding was sexual maturity, although the RVI was low (0.31), and the presence of transport infrastructure in the rat home range was found to be the most important predictor of Giardia shedding (RVI 0.83) (Supplementary Material, Table S3). Prevalence and Identification of Cryptosporidium and Giardia in Surface Water Samples Cryptosporidium was detected in nine of the 15 water samples analysed between May 2019 and September 2020, in concentrations ranging from 0.3-4 oocysts/L. Giardia was detected in 11 samples, in concentrations ranging from 0.6-96 cysts/L (Table 3). The highest Cryptosporidium and Giardia concentrations, recorded in November 2019, occurred on a rainy day; however, we have no explanation for the unusually high values observed in May 2020. Table 3. Sampling date, volume filtered, turbidity and Cryptosporidium and Giardia (oo)cyst concentrations/L at the Danube Canal from May 2019 until September 2020. The sample limit of detection (<) is given when no Cryptosporidium or Giardia (oo)cysts were observed. Volumes of surface water filtered were determined by the turbidity of the water samples. Samples with high turbidity limited the filtration to 5-7 L in some cases. Due to the variability of volumes filtered, sample-specific limits of detection were calculated (Table 3). Out of 15 samples, 12 revealed an amplicon in the 18S rDNA nested PCR for Cryptosporidium. Of these 12 samples, only six generated a good consensus sequence; however, none showed >97% sequence similarity with any Cryptosporidium species available in GenBank. Four of them were identified as a Perkinsea species known to be a frog parasite, and one as Peridinopsis penardii, a dinoflagellate. Both of these genera also belong to the alveolates, as Cryptosporidium does, and both are known to be abundant in fresh water samples. One other sample showed the highest sequence identity to an unidentified stramenopile, the stramenopiles being the sister taxon of the alveolates. Regarding Giardia, out of 15 samples we successfully sequenced one amplicon of the tpi gene locus of Giardia spp. The 295 bp long sequence (MZ393409) had a 100% similarity with various Giardia assemblage C sequences found in GenBank. Discussion The present study reports for the first time the prevalence of Cryptosporidium, Eimeria, and Giardia at the wildlife-water interface ecosystem in two densely populated sites located in the city center of Vienna, the capital of Austria. Cryptosporidium rat genotypes I/IV and Cryptosporidium environmental species identified in our study have been previously isolated from brown rat feces worldwide [35], showing a good capacity of these Cryptosporidium genotypes for infecting rats [35]. C. occultus has also been previously isolated in domesticated animals, brown rats, and once in humans; however, it failed to infect calves under controlled experimental conditions [34]. Our results showed that at the time they were trapped, urban brown rats did not shed zoonotic Cryptosporidium species. Although the prevalence of Cryptosporidium was very similar in rats trapped at the Danube Canal and Karlsplatz, at the latter only Cryptosporidium rat genotype I/IV sequences were identified.
Differential pathogen prevalence and the existence of isolated patches among rats within the same city has also been reported in other studies [36][37][38]. Such differences have mostly been attributed to the fragmentation of the urban environment. The nested PCR used to sequence Cryptosporidium also detected Eimeria sequences. That fact can be explained by the high similarity of the two genomes, classified within the same taxonomical Order Eucoccidioida. Coccidians of the genus Eimeria have been described as host-specific intracellular parasites [39]. Several members of the genus cause considerable morbidity and mortality in livestock and wildlife and are thus of veterinary importance [40]. Eimeria species associated with rodents show a degree of host specificity, but individual isolates can experimentally infect different species and even genera of rodents [39]. Recent studies on rural rodents in Central Europe reported an Eimeria prevalence of 32.7% combining coprological investigations with molecular methods [41]. In our study, without using specific primers for Eimeria spp., we detected the parasite in 14% of rats trapped. Out of the seven amplicons obtained, five showed a 99-100% identity with the 18S rDNA sequences of E. alorani and E. caviae isolated from other rodents and the other two amplicons obtained were 100% identical with sequences of E. ferrissi isolated from mice [41]. In fact, all molecular studies on the prevalence of Eimeria in rodents from urban or rural areas performed in the past 20-25 years have focused on mice. The diversity of Eimeria, as observed for Cryptosporidium, differed according to the site, being lower at the Karlsplatz than the Danube Canal, supporting the hypothesis of the existence of isolated patches due to the fragmentation of the urban environment [36,37]. Regardless of that, our results suggest that Viennese brown rats are infected by rodent-specific Eimeria species, thus spillover to other domestic animals such as dogs and cats seems unlikely. Among the 50 rat feces samples analysed for Giardia in our study, the tpi locus was successfully sequenced for 12 samples revealing two Giardia assemblages, namely the typical rodent assemblage G (n = 3) and the typically human but zoonotic assemblage A (n = 9). Giardia assemblage G sequences obtained in our study had 99.8-100% similarity to other sequences isolated from rats [42,43], which is not surprising since rats are considered the major hosts of that assemblage [44]. Among the nine sequences belonging to assemblage A, eight showed 100% identity to sequences classified as assemblage AI and isolated from sheep [45], cattle [46] and humans [47]. The other isolate belonging to assemblage A showed a 99.8% similarity with sequences classified as assemblage AII, previously isolated from other mammals [48] and, as observed with assemblage AI sequences, humans [49]. Among the sequences available in GenBank that had a 100% similarity with the latter assemblage A sequence obtained in our study, only one had been previously isolated from rodents, specifically from prairie dogs [50]. Thus, our study is the first one reporting identical Giardia assemblage A isolate in rats that has elsewhere caused symptomatic infections in humans, suggesting that Giardia shed by rats in Vienna may pose a risk for public health. Co-occurrence of two protozoa parasites was observed in ten rats, with six of them shedding Cryptosporidium and Giardia and four of them shedding Eimeria and Giardia. 
Both protozoa were successfully sequenced in five rats. Most of the Cryptosporidium and Eimeria species as well as Giardia assemblage G isolated from urban brown rats' feces in the city center of Vienna are known to infect rats or other rodents. However, Giardia assemblage A, shed by the majority of the rats in our study, has the broadest host range, infecting all kinds of mammals including livestock, cats, dogs, rodents, marsupials, non-human primates and humans [8]. Thus, although human-to-human infections are common, assemblage A-related infections can have a zoonotic origin. In our study, the prevalence of rats shedding Giardia was higher at the Danube Canal (38% vs. 18%). As rats were trapped within a <200 m radius of the Danube Canal, we hypothesized that Giardia assemblage A sequences may have been transmitted via water. Cryptosporidium and Giardia (oo)cysts were detected in 60% and 73% of water samples taken <1 km downstream from where the rats were trapped at the Danube Canal, with concentrations ranging between 0.3-4 oocysts/L and 0.6-96 cysts/L (median 1 (oo)cyst/L), respectively. The occurrence of Cryptosporidium and Giardia in surface waters has traditionally been linked to seasonality and more specifically to rainfall events worldwide [51]. This information needs to be taken into account when comparing studies conducted during different seasons and years. From the most upstream point of the Danube basin to its mouth, Cryptosporidium oocysts have been detected in 40% of the samples taken monthly over 2004-2005 downstream from Budapest (Hungary) before the implementation of a wastewater treatment plant, in concentrations ranging from 0-0.5 oocysts/L [52], detected but not quantified in Serbia [53], and detected from June to September at the river mouth in Romania in 2010 with concentrations ranging from 10-65 oocysts/L [54]. Giardia cysts were frequently detected with concentrations ranging from 1.35-3 cysts/L at the same spot in Hungary [52] over 2004-2005, detected but not quantified in two different spots within Serbia without co-occurrence with Cryptosporidium [53], and also detected from June to September at the river mouth in Romania in 2010 with concentrations ranging from 4-45 cysts/L [54]. As in our results, Giardia concentrations reported by the aforementioned studies were usually higher than those of Cryptosporidium, except at the river mouth. Although the concentrations of both parasites may seem low in comparison to the traditional standard fecal indicator bacteria used to assess the quality of recreational waters, note that the infective stages of both parasites, the cysts and oocysts, are robust: they survive longer than bacteria under harsh environmental conditions and withstand disinfectants commonly used for the production of drinking water, such as chlorine, as well as wastewater treatments [55]. Moreover, they have a very low infectious dose: as few as ten Giardia cysts or one Cryptosporidium oocyst may be enough to cause infection [3,55]. Despite the fact that the infectivity of the parasites was not tested in any of the aforementioned studies, including the present one, our results highlight the need for more environmental studies for a correct assessment of the risk of protozoan infections while using the River Danube for recreational purposes or for drinking water production.
The low concentration of (oo)cysts detected in most of the water samples, the presence of numerous species/genotypes co-occurring in the same sample thus making identification by direct sequencing without cloning impossible, and the rich diversity and partly high abundance of other, related protists in freshwater (e.g., other alveolates or excavates), hindered the identification below the genus level of many samples. Moreover, Cryptosporidium oocysts may remain in the environment as empty shells, so-called "ghosts", after losing their nuclei and with them their genetic information. Nonetheless, they can still be detected and quantified by the flat membrane method used in this study. Regarding Giardia taxonomy, there are also several hurdles one has to overcome when identifying isolates, among the most important ones being co-occurrence of more than one genotype, potential recombination within a population, allelic sequence heterozygosity (ASH), specificity of the primers used which bind preferentially to certain genotypes, and non-concordance between loci [5,15]. Thus, the identification of Cryptosporidium and Giardia species and assemblages from (oo)cysts found in environmental sources pose a real challenge. During the Joint Danube Survey of 2013, a six week monitoring campaign on Danube water quality, Kirschner et al. [1] demonstrated that the major contributors to the microbial fecal pollution of the Danube were humans. Other contributors such as ruminants and pigs were detected in <10% of the samples with low concentrations, despite animal farming and pastureland along the river [1]. According to these findings, a majority of Cryptosporidium and Giardia species and assemblages typically infecting humans such as C. hominis and C. parvum from both human and zoonotic origin, as well as Giardia assemblages A and B, would be expected. In our study we tried to identify species, genotypes and assemblages of the two parasites; however, all sequences obtained with the primers specific for Cryptosporidium were in fact from other alveolates, indicating less than 100% specificity for these primers and a higher density of these other alveolates in the water samples investigated. Similarly, for Giardia, only one isolate was sequenced successfully, revealing Giardia assemblage C, the "dog" genotype, the source of which could be close as a result of urban run-off or much further upstream. In Austria, there is no information on the number of infections caused by Cryptosporidium and Giardia, since they are both not reportable diseases. Germany, with a population 10 times higher, reported 1974 Cryptosporidium and 3296 Giardia infections in 2019 [56]. Lee et al. [57] determined the assemblages of Giardia causing diarrhea in Austrian patients in 2015. Interestingly, 65.4% of the infections were caused by assemblage B, whereas 34.6% of them by assemblage A, among which 25% were classified as sub-assemblage AII and 9.6% of them as sub-assemblage AI. Authors suggested that the high diversity found among Giardia isolates could be explained by travelers returning from various endemic areas worldwide [57]. The urban brown rats trapped in Vienna in 2017 and part of the Austrian patients with diarrhea in 2015 shed the same zoonotic Giardia genotypes. Our study did not reveal proximity to water as a predictor for rats shedding Giardia. Moreover, the Giardia assemblage identified at the Danube Canal was assemblage C. 
The number of rats trapped in the present study does not provide strong statistical power to compute a robust model, and rats were trapped in 2017 whereas water samples were taken over 2019-2020. Moreover, Cryptosporidium and Giardia are known to seasonally infect humans and thus, their presence in wastewater and surface water also varies over the year [51]. Therefore, given the presence of Giardia assemblages infecting urban brown rats and humans within the same urban environment, it is crucial to monitor the prevalence and species diversity of protozoan parasites in humans and their reservoirs. Moreover, elucidating potential transmission pathways such as contact with wastewater or fecally polluted surface waters will help in detecting potential emerging threats for public health and designing effective preventive strategies within a One Health approach. Such an approach may include an integrated pest management program taking into account the ecology of urban rats [17,27].
6,839
2021-07-27T00:00:00.000
[ "Biology" ]
Genetic Knock-Down of HDAC7 Does Not Ameliorate Disease Pathogenesis in the R6/2 Mouse Model of Huntington's Disease Huntington's disease (HD) is an inherited, progressive neurological disorder caused by a CAG/polyglutamine repeat expansion, for which there is no effective disease modifying therapy. In recent years, transcriptional dysregulation has emerged as a pathogenic process that appears early in disease progression. Administration of histone deacetylase (HDAC) inhibitors such as suberoylanilide hydroxamic acid (SAHA) have consistently shown therapeutic potential in models of HD, at least partly through increasing the association of acetylated histones with down-regulated genes and by correcting mRNA abnormalities. The HDAC enzyme through which SAHA mediates its beneficial effects in the R6/2 mouse model of HD is not known. Therefore, we have embarked on a series of genetic studies to uncover the HDAC target that is relevant to therapeutic development for HD. HDAC7 is of interest in this context because SAHA has been shown to decrease HDAC7 expression in cell culture systems in addition to inhibiting enzyme activity. After confirming that expression levels of Hdac7 are decreased in the brains of wild type and R6/2 mice after SAHA administration, we performed a genetic cross to determine whether genetic reduction of Hdac7 would alleviate phenotypes in the R6/2 mice. We found no improvement in a number of physiological or behavioral phenotypes. Similarly, the dysregulated expression levels of a number of genes of interest were not improved suggesting that reduction in Hdac7 does not alleviate the R6/2 HD-related transcriptional dysregulation. Therefore, we conclude that the beneficial effects of HDAC inhibitors are not predominantly mediated through the inhibition of HDAC7. Introduction Huntington's disease (HD) is an autosomal dominant late-onset progressive neurodegenerative disorder with a mean age of onset of 40 years. Symptoms include psychiatric disturbances, motor disorders, cognitive decline and weight loss, disease duration is 15-20 years and there are no effective disease modifying treatments [1]. The HD mutation is an expanded CAG trinucleotide repeat in the HD gene that is translated into a polyglutamine (polyQ) repeat in the huntingtin (Htt) protein [2]. Neuropathologically, the disease is characterized by neuronal cell loss in the striatum, cortex and other brain regions and the deposition of nuclear and cytoplasmic polyQ aggregates [3,4]. The R6/2 mouse model expresses exon 1 of the human HD gene with more than 150 CAG repeats [5,6]. The R6/2 phenotype has an early onset and rapid and reproducible phenotype progression that recapitulates many features of the human disease. Motor and cognitive abnormalities can be detected before 6 weeks of age [7,8], and mice are rarely kept beyond 15 weeks. PolyQ aggregates are clearly apparent in some brain regions from 3 to 4 weeks of age and striatal cell loss has been documented at later stages [9]. This suggests that the mouse phenotype is predominantly caused by neuronal dysfunction. Transcriptional dysregulation occurs early in the molecular pathology of HD and has been recapitulated across multiple HD model systems (reviewed in [10]). RNA Affymetrix expression profiles of brain regions and muscle from both the R6/2 transgenic mouse and knock-in mouse models of HD show high correlation to expression profiles from HD post-mortem tissue [11][12][13]. 
The molecular mechanisms that underlie these selective transcriptional disturbances are unknown and remain the subject of investigation. The control of eukaryotic gene expression depends in part on the modification of histone proteins associated with specific genes, with the acetylation and deacetylation of histones playing a critical role in gene expression [14][15][16]. Studies in numerous HD models have shown that mutant huntingtin expression leads to a change in histone acetyltransferase (HAT) activity and suggest that aberrant HAT activity may contribute to transcriptional dysregulation in HD [17][18][19]. Supporting this view, administration of histone deacetylase (HDAC) inhibitors such as suberoylanilide hydroxamic acid (SAHA) consistently shows therapeutic potential in HD models [20][21][22][23][24][25][26][27][28], at least partly through increasing the association of acetylated histones with down-regulated genes and correcting mRNA abnormalities [29]. There are three major classes of mammalian HDACs, based on their structural homology to the three Saccharomyces cerevisiae HDACs: rpd3 (class I), hda1 (class II) and sir2 (class III). Class I comprises HDAC1, -2, -3 and -8; class IIa comprises HDAC4, -5, -7 and -9; class IIb comprises HDAC6 and -10; and HDAC11 (class IV) shows homology to both rpd3 and hda1 [30]. Pan-HDAC inhibitors such as SAHA target the zinc-dependent HDACs 1-11 and not the NAD+ dependent class III HDACs (the seven sirtuins, SIRT1-7) [31]. In order to gain insight into which HDACs must be inhibited in order to alleviate HD-related phenotypes, genetic approaches have been used to complement pharmacology in Drosophila melanogaster and Caenorhabditis elegans HD models [19,32]. However, as HDACs show differential levels of evolutionary conservation [33], the extent to which these studies will inform drug development in man has yet to be demonstrated. The molecular mechanisms by which SAHA exerts its beneficial and toxic effects are currently not clear, nor is it clear whether the beneficial and toxic effects can be dissociated. We therefore need to systematically interrogate known targets of SAHA genetically in order to identify the HDAC(s) that present therapeutic targets relevant to HD. As a consequence, we have embarked on a series of genetic crosses between the R6/2 mouse and specific HDAC knock-out mouse lines. It has previously been shown that, in addition to enzyme inhibition, SAHA treatment selectively suppresses expression of HDAC7 in vitro [34], thus representing a potential molecular avenue by which pan-HDAC inhibition exerts a beneficial effect. We confirmed that chronic administration of SAHA decreases Hdac7 mRNA expression levels in mouse brain irrespective of the HD genotype. Hdac7 knockout mice were previously generated through a targeted inactivation of the endogenous murine Hdac7 gene [35]. Hdac7 null mice are embryonic lethal and die at E11.0. However, Hdac7 heterozygote knockout mice were found to be viable and fertile, with no overt phenotype. We demonstrate that Hdac7 mRNA and protein levels are reduced in Hdac7+/- heterozygous knock-out mice and that Hdac7 expression is not altered by the presence of the R6/2 transgene. We performed a genetic cross between R6/2 mice and Hdac7+/- heterozygotes and found that genetic knock-down of Hdac7 fails to confer any improvement to a number of physiological, behavioral and transcriptional phenotypes.
We conclude that neither inhibition of HDAC7 nor its downregulation contributed significantly to the beneficial effects that we observed upon administration of SAHA. Results The expression of Hdac7 is decreased in R6/2 brain in response to chronic treatment with SAHA SAHA has been described as a pan-HDAC enzyme inhibitor [36,37]. In addition, it was recently shown that treatment of a number of cell lines with SAHA resulted in the specific downregulation of HDAC7 at the mRNA level [34]. To determine whether the expression of Hdac7 might be similarly altered by the administration of SAHA in vivo, we established a quantitative real time PCR assay (RT-qPCR) for murine Hdac7. We had previously conducted an efficacy trial to assess the effects of the administration of SAHA, when complexed with hydroxypropyl-b-cyclodextrin at a concentration of 0.67 g/L in the drinking water, and observed a considerable improvement in RotaRod performance [23]. The WT and R6/2 mice that had been treated with SAHA or vehicle in this experiment had been sacrificed at 13 weeks of age, and we still had access to cerebellar cDNA that had been prepared from the brains of these mice. We were able to show that SAHA significantly decreased the expression of Hdac7 in the cerebellum of both wild type (WT) and R6/2 mice (Figure 1A). Therefore, SAHA has a similar effect on the expression of Hdac7 in vivo as has been described in cell culture, and this could contribute to the beneficial effects of SAHA administration. Hdac7 expression level is unaffected by the expression of the R6/2 transgene We found heterozygous Hdac7 knockout mice to be viable and fertile, with no overt phenotype. Therefore, prior to interpreting the results of a genetic cross to determine whether heterozygous knock-down of Hdac7 has a beneficial effect on R6/2 HD-related phenotypes, it was important to show (1) that Hdac7 expression is reduced in Hdac7+/- knock-out mice, i.e. that Hdac7 is not autoregulated to wild type levels as is the case for Hdac1 [38], and (2) that the presence of the R6/2 transgene does not alter Hdac7 expression levels. RT-qPCR was performed on cDNA prepared from the striatum and cerebellum of 15-week-old wild type (WT), Hdac7 heterozygote (Hdac7+/-), R6/2 transgenic (R6/2) and R6/2 mice heterozygous for Hdac7 (R6/2-Hdac7+/-). We found that Hdac7 expression levels were significantly decreased in the striatum and cerebellum of Hdac7+/- heterozygote mice, irrespective of the presence of the R6/2 transgene (Figure 1B and C). Furthermore, we saw no difference in Hdac7 expression levels between WT and R6/2 mice in either brain region. Western blotting was performed on whole cell lysates from cortices of 15-week-old mice. We found HDAC7 protein levels to be in agreement with the RNA expression profiling (Figure 1D and E). Hdac7 is expressed in neuronal populations in the mouse brain An important feature of the class IIa HDACs is their ability to shuttle between the nucleus and the cytoplasm. Precise regulation of the subcellular distribution of class IIa HDACs is intimately linked to the control of their activity and plays a pivotal role in cellular processes and organ development [39]. Previous observations have highlighted a differential subcellular localization of HDAC7 in different cell lines and body tissues, suggesting that the control of subcellular localization and function of HDAC7 differs with respect to the cell type.
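Relative expression levels from RT-qPCR assays such as the Hdac7 assay described above are commonly derived by normalizing to a reference gene, for example with the 2^-ΔΔCt method. The sketch below is a generic illustration with hypothetical Ct values; the paper does not state which reference gene or quantification model was used.

```python
# Sketch of relative expression by the 2^-ΔΔCt method for an RT-qPCR assay.
# Ct values and the reference gene are hypothetical.
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of the target gene versus a calibrator group (e.g. WT),
    normalized to a reference (housekeeping) gene."""
    delta_ct = np.asarray(ct_target) - np.asarray(ct_reference)
    delta_ct_cal = np.mean(np.asarray(ct_target_cal) - np.asarray(ct_reference_cal))
    return 2.0 ** -(delta_ct - delta_ct_cal)

# Hypothetical triplicate Ct values: Hdac7 in Hdac7+/- versus WT cerebellum.
het = relative_expression([26.8, 26.9, 27.0], [18.1, 18.0, 18.2],
                          [25.9, 26.0, 26.1], [18.0, 18.1, 18.0])
print(het.round(2))   # values near 0.5 would be consistent with one lost allele
```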
To explore the expression pattern of Hdac7 in the mouse brain, we performed immunohistochemistry on coronal sections of WT and R6/2 brains from animals at 14 weeks of age. HDAC7 is present in both the nucleus and cytoplasm in the striatum (Figure 2), cortex and cerebellum (data not shown) of both WT and R6/2 brains. It is present in all neurons but, interestingly, appears to be absent from the nuclei of at least some non-neuronal cell populations. Genetic reduction of Hdac7 does not modify the R6/2 phenotype We have previously established a set of quantitative tests with which to monitor progressive behavioral phenotypes in R6/2 mice [40][41][42]. To generate mice for this analysis, male R6/2 mice were bred with female Hdac7 heterozygous knock-out mice (Hdac7+/−) to produce at least 10 female mice of each genotype (WT, n = 10; Hdac7+/−, n = 13; R6/2, n = 14; R6/2-Hdac7+/−, n = 12), which were born over a period of 5 days. The CAG repeat size was well matched between the R6/2 and R6/2-Hdac7+/− groups (P = 0.102) (Table 1). Weight gain, RotaRod performance, grip strength and exploratory activity were monitored from 4 to 15 weeks of age and, in each case, a specific test was performed on the same day and at the same time during the weeks in which measurements were taken. The body temperature of the mice was recorded at 14 and 15 weeks of age by rectal probe (Figure 3B). R6/2 mice were found to be hypothermic [F(1,45) = 69.781, P < 0.001], which worsened over the course of the week between measurements [F(1,45) = …]. RotaRod performance is a sensitive indicator of balance and motor coordination, which has been reliably shown to decline in R6/2 mice [40]. Using this test, we found that R6/2 and R6/2-Hdac7+/− mice performed similarly with age and that the performance of Hdac7+/− and WT mice was equivalent (Figure 3C). Consistent with previous results, the overall RotaRod performance of R6/2 mice was impaired compared to WT [F(1,45) = 40.634, P < 0.001] and deteriorated with age [F(2,900) = 9.119, P < 0.001]. Overall, the performance of Hdac7+/− mice did not differ from WT [F(1,45) = 0.047, P = 0.83] but did change with age [F(2,900) = 4.427, P = 0.009]; however, this is most likely the result of the exceptionally good, but atypical, performance of the WT mice at four weeks. There was no overall effect of Hdac7 knock-down on the RotaRod performance of the R6/2 mice [F(1,45) = 0.194, P = 0.661], and examination of the data (Figure 3C) indicates that the statistically significant interaction between the genotypes over the course of the experiment [F(2,900) = 4.516, P = 0.009] is, once again, due to the performance of the WT mice at 4 weeks of age and not a reflection of an alteration in the R6/2 phenotype. (Figure 1 legend: Blots were probed with an antibody that recognizes HDAC7 (120 kDa) and a non-specific band (70 kDa). (E) Quantification of HDAC7 protein expression levels in WT (white) and Hdac7+/− (grey) mice. Quantification was performed on blots containing four samples per genotype, using the non-specific band for reference. Blots were additionally probed with an antibody to α-tubulin to confirm equal protein loading (data not shown). Error bars represent the standard deviation from the mean (n = 4). *P < 0.05, **P < 0.01, ***P < 0.001. doi:10.1371/journal.pone.0005747.g001.) Exploratory activity was assessed fortnightly from 5 to 13 weeks of age as described previously [41] and analyzed by repeated-measures general linear model (GLM) ANOVA (a schematic Python analogue of such an analysis is sketched below).
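The F statistics quoted above come from repeated-measures GLM ANOVA run in SPSS (with the Greenhouse-Geisser correction described in the methods). The snippet below is only a schematic Python counterpart: it fits a linear mixed-effects model, a common alternative for mixed between-/within-subject designs, to simulated RotaRod-style data using statsmodels. The genotype labels, group sizes, latencies and random-effect structure are fabricated for illustration and do not reproduce the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: latency to fall for each mouse at each test age.
genotypes = ["WT", "Hdac7_het", "R6_2", "R6_2_Hdac7_het"]
records = []
for geno in genotypes:
    for mouse in range(10):                      # ~10 mice per genotype
        for age in (4, 8, 10, 12, 14):           # test ages in weeks
            base = 250.0 - (60.0 if "R6_2" in geno else 0.0) * (age / 14.0)
            records.append({
                "mouse": f"{geno}_{mouse}",
                "genotype": geno,
                "age": age,
                "latency": base + rng.normal(0.0, 20.0),
            })
df = pd.DataFrame(records)

# Mixed-effects model: fixed effects for genotype, age and their interaction,
# with a random intercept per mouse to absorb the repeated-measures correlation.
model = smf.mixedlm("latency ~ C(genotype) * age", df, groups=df["mouse"])
result = model.fit()
print(result.summary())
```

The fixed-effect interaction term C(genotype):age plays the role of the genotype-by-age interaction tested above, while the per-mouse random intercept accounts for the repeated measurements on each animal.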
Mice were assessed for a period of 60 min for total activity, mobility and rearing and the P-values obtained through the analyses are displayed in Table 2 for each parameter. Mice of all genotypes exhibit most activity during the first 15 minutes of the assessment period [41]. R6/2 mice show an overall hypoactivity relative to WT mice from 11 weeks ( Table 2, R6/2) although the pattern of activity over the course of the 60 min period was significantly different between R6/2 and WT mice (R6/2*time) by five weeks of age. Hdac7+/2 mice were indistinguishable from WT mice, both in overall activity (HDAC7) and in the 60 minute pattern of activity (HDAC7*time). There was no overall improvement in R6/2 hypoactivity through genetic reduction of Hdac7 (R6/ 2*HDAC7), nor was the R6/2 pattern of hypoactivity changed (R6/2*HDAC7*time). In summary, Hdac7 genetic reduction does not improve hypoactivity in the R6/2 mice. Hdac7 genetic reduction does not ameliorate the dysregulated expression of genes of interest in R6/2 mouse brains As HDAC inhibitors have been shown to ameliorate the dysregulation of gene expression in HD models systems [28,29,43], we used RT-qPCR to measure the level of expression of a set of genes of interest in the striatum and cerebellum of WT, R6/2, Hdac7+/2 and R6/2-Hdac7+/2 mice aged 15 weeks. These included striatal genes that are consistently down-regulated in mouse models of HD and in HD patient brains and cerebellar genes that have consistently altered expression patterns in both R6/2 and the HdhQ150 knock-in mouse model of HD [13,42,44]. We found that Hdac7 reduction did not ameliorate the transgenemediated transcriptional dysregulation in the striatum ( Figure 4A) or the cerebellum ( Figure 4B) of 14 week old mice. However, the expression of Igfbp5 was increased with HDAC7 reduction in the cerebellum of WT (P = 0.047) but not R6/2 mice (P = 0.690) ( Figure 4B). Concomitantly, we found no difference in R6/2 transgene expression in the striatum (P = 0.620) ( Figure 4C) and the cerebellum (P = 0.240) ( Figure 4D) of R6/2 and R6/2-Hdac7+/2 mice. Discussion We have previously shown that chronic administration of the HDAC inhibitor SAHA to R6/2 mice significantly improves some motor and neuropathological phenotypes [23]. However, the therapeutic index of SAHA in mice is very narrow, and therapeutic doses show considerable toxicity resulting in weight loss in both WT and R6/2 mice. SAHA has been reported to inhibit the eleven Zn 2+ dependent HDAC enzymes [37] and therefore dissection of the mechanism through which SAHA exerts its beneficial effects is complex. In order to identify the HDAC enzyme(s) that are therapeutic targets for HD and in an attempt to separate the beneficial effects of SAHA from its toxicity, we have embarked on a series of genetic crosses to mice that have been genetically engineered to knock-out specific HDAC enzymes. HDAC7 is of additional interest because SAHA has been shown to decrease the level of expression of HDAC7 in a number of cell lines [34]. In this report we showed that these results extend to an in vivo system and that, after chronic administration, SAHA downregulates Hdac7 mRNA levels in the brains of WT and R6/2 mice. As nullizygosity for Hdac7 is embryonic lethal [35], it is not possible to investigate the effects of knocking-out the Hdac7 gene on the R6/2 phenotype. However, reduced levels of Hdac7 might be more akin to the effects to the pharmacological inhibition of this enzyme. 
We therefore established whether an investigation of the effects of knocking-down Hdac7 levels using Hdac7+/2 knock-out mice would be feasible. This was necessary as it has been shown that the effects of the genetic knock-down of Hdac1 cannot be studied because Hdac1 mRNA and protein levels autoregulate to that of WT mice in Hdac1+/2 mice [38] and furthermore, nullizygosity for Hdac1 is embryonic lethal [45]. We thus confirmed that Hdac7 expression is reduced in both the striatum and cerebellum of Hdac7+/2 mice and that the presence of the R6/2 transgene does not alter Hdac7 expression levels in either Hdac7+/+ or Hdac7+/2 mice at the mRNA or protein levels. We went on to show that genetic reduction of Hdac7 levels did not impact on the body weight, body temperature, RotaRod performance, grip strength or exploratory activity of R6/2 mice. Similarly, decreased Hdac7 expression did not ameliorate the HDrelated dysregulated expression levels of a number of specific genes of interest. Very little is currently known about the function of HDAC7 in brain. Hdac7 has previously been shown to be expressed throughout the rat brain by in situ hybridization [46]. Crucially, we established that Hdac7 is present in neurons, but were surprised to find that it could not be detected by immunohistochemistry in at least a proportion of non-neuronal brain cells. HDAC7 together with HDACs 4, -5 and -9 comprise the class IIa HDACs, which share a high degree of homology at their Cterminal catalytic domain [47]. The class IIa HDACs shuttle between the nucleus and cytoplasm and precise regulation of their subcellular distribution plays a pivotal role in modulating their function, via post-translational modifications such as phosphorylation, and the formation of nucleocytoplasmic shuttling complexes [31,[47][48][49][50][51][52][53][54][55][56][57][58][59][60]. Of the class IIa HDACs, HDAC7 is the most divergent [47,61]. HDAC7 has been shown to associate with transcription co-repressors and factors such as CtBP, MEF2, HP1a, SMRT, N-CoR, mSin3A, and HIF1a [16,31,39,47,49,57]. These interactions are consistent with the role of HDAC7 in regulating gene expression either as a co-activator or a corepressor. In vitro studies with recombinant HDAC7 protein have suggested that histones may be substrates of HDAC7 deacetylase activity [62]. However, it has been shown that modulating HDAC7 levels in vitro by siRNA knockdown or overexpression is associated with growth arrest without detectable changes in histone acetylation or p21 gene expression [34]. This would be consistent with HDACs having many protein substrates, in addition to histones, involved in the regulation of gene expression, cell proliferation, and cell death and thus HDACs can be considered to be ''lysine deactelylases''. Although the functions of HDAC7 in brain are unknown and remain to be elucidated, our genetic studies lead us to conclude that inhibition of Hdac7 is not a major mediator of the beneficial effects that we obtained upon administration of SAHA to R6/2 mice and HDAC7 should not be prioritized as a therapeutic target for HD. Phenotype analysis Mice were weighed weekly to the nearest 0.1 g. Motor coordination was assessed using an Ugo Basile 7650 accelerating RotaRod (Linton Instrumentation, UK), modified as previously described [40]. At 4 weeks of age, mice were tested on four consecutive days, with three trials per day. At 8, 10, 12 and 14 weeks of age, mice were tested on three consecutive days with three trials per day. 
Forelimb grip strength was measured once a week at 4, 7, 9, 11 and 13 weeks using a San Diego Instruments Grip Strength Meter (San Diego, CA, USA) as described [40]. Exploratory, spontaneous motor activity was recorded and assessed every two weeks at 5, 7, 9, 11 and 13 weeks of age for 60 min during the day using AM1053 activity cages, as described previously [41]. Briefly, activity (total number of beam breaks in the lower level), mobility (at least two consecutive beam breaks in the lower level) and rearing (number of rearing beam breaks) were measured. The data were collected and analyzed as described previously. RNA extraction and real-time PCR expression analysis RNA extraction and reverse transcription of 4 μg of total cerebellar RNA and 1 μg of total striatal RNA were performed as previously described [44]. The RT reaction was diluted 10-fold in nuclease-free water (Sigma) and 5 μl was used in a 25 μl reaction containing Precision MasterMix (PrimerDesign), 400 nM primers and 300 nM probe, run on an Opticon 2 real-time PCR machine (MJ Research). The mRNA copy number was estimated in duplicate for each RNA sample by comparison to the geometric mean of two or three endogenous housekeeping genes as described [44]. Primer and probe sequences are available in Supplementary Table S1. Immunohistochemistry and confocal microscopy Whole brains were frozen in isopentane, stored at −80 °C, and 15 μm-thick sections were cut using a cryostat (Bright Instruments Ltd.). Sections were fixed for 10 min in methanol at −20 °C and washed twice in 0.1 M phosphate-buffered saline pH 7.4 (PBS) for 15 min before blocking in PBS containing 2% bovine serum albumin (BSA) and 0.1% Triton-X for 15 min. Sections were incubated in primary antibodies in PBS with BSA overnight at 4 °C, washed twice in PBS for 15 min, incubated in secondary fluorescent antibodies in PBS with BSA for 1 h at room temperature and washed twice in PBS for 15 min. Primary antibodies were HDAC7 (rabbit polyclonal, Sigma H2662; 1:50) and NeuN (mouse monoclonal, Chemicon MAB377; 1:200), and the secondary antibodies were Alexa 555 donkey anti-rabbit (1:1000) and Alexa 488 goat anti-mouse (1:1000), respectively (Molecular Probes). Nuclei were visualized using TO-PRO-3 (Molecular Probes) (1:1000). Slides were mounted in Mowiol and antibody localization was visualized using an LSM150 Meta confocal microscope (Zeiss). Statistical analysis Statistical analysis was performed by Student's t-test (Excel or SPSS), one-way ANOVA, two-way ANOVA and repeated-measures GLM ANOVA, with the Greenhouse-Geisser correction for non-sphericity, using SPSS.
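For the simpler comparisons listed in this paragraph (Student's t-test and one-way ANOVA), a minimal SciPy sketch is shown below; the group means, spreads and sample sizes are invented solely to illustrate the calls and do not correspond to any measurement from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical 15-week body weights (g) for the four genotypes
wt        = rng.normal(28.0, 1.5, 10)
hdac7_het = rng.normal(27.5, 1.5, 13)
r62       = rng.normal(21.0, 1.5, 14)
r62_het   = rng.normal(21.2, 1.5, 12)

# Two-group comparison (Student's t-test), as used for single time points
t_stat, p_val = stats.ttest_ind(wt, r62)
print(f"WT vs R6/2: t = {t_stat:.2f}, p = {p_val:.3g}")

# One-way ANOVA across all four genotypes
f_stat, p_anova = stats.f_oneway(wt, hdac7_het, r62, r62_het)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
```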
5,175.8
2009-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Effect of pairing on the symmetry energy and the incompressibility The role of superfluidity on the symmetry energy and on the incompressibility is studied in nuclear matter and finite nuclei employing Hartree-Fock-Bogoliubov modeling based on several types of pairing interactions (surface, mixed and isovector-density dependent). It is observed that, while pairing has only a marginal effect on the symmetry energy and on the incompressibility at saturation density, the effects are significantly larger at lower densities. Introduction The nuclear symmetry energy and the incompressibility are closely related to the isovector Giant Dipole Resonance (GDR) [1] and to the isoscalar Giant Monopole Resonance (GMR) [2,3], respectively. The symmetry energy, and in particular its density dependence, as well as the isospin dependence of the incompressibility modulus are largely debated issues in nuclear physics at present. In fact, these issues have relevant implications i) for nuclear structure, since the symmetry energy has an important effect on the size of the neutron root-mean-square (r.m.s.) radius in neutron-rich nuclei and the incompressibility is related to the GMR centroid, ii) for nuclear reactions, e.g., in intermediate energy heavy-ion collisions where the isospin distribution of the reaction products is dictated by the density dependence of the symmetry energy, and obviously iii) for the description of neutron stars and their formation in core-collapse supernovae. Review papers have been devoted to this topic [4,5]. Empirical information on the symmetry energy can be obtained from various sources, none of them being so far conclusive by itself. Measurements of the neutron skin, in lead, for instance, are still not conclusive enough: while most of them are plagued by unknown model dependence, the recent model-independent PREX measurement [6] could not yet reach the promised accuracy. The properties of Contribution to the Topical Issue "Nuclear Symmetry Energy" edited by Bao-An Li,Àngels Ramos, Giuseppe Verde, Isaac Vidaña. a e-mail<EMAIL_ADDRESS>the isovector GDR, of the low-lying electric dipole excitations, and of the charge-exchange spin-dipole strength have been suggested as constraints (see, e.g., [7]). In addition, different model analysis of heavy-ion collisions have been proposed as a test of the main trend of the symmetry energy at densities below saturation. However, in none of these studies, to our knowledge, the problem of the pairing effects on the symmetry energy has been addressed. The apparent decrease of incompressibility in superfluid nuclei [8,9] raises the question about a possible similar effect in infinite nuclear matter: until now, when the nuclear incompressibility is extracted from Energy Density Functional (EDF) calculations of the GMR, and compared with experiments, there has been no attempt to pin down the contribution of the pairing component of the functional. However, considering results for finite nuclei, the equations of state used for neutron stars and supernovae predictions should take into account pairing effects in the calculation of the incompressibility modulus. Therefore the question of the behavior of K ∞ with respect to the pairing gap is raised since it seems clear from nuclear data that the finite nucleus incompressibility K A decreases with increasing pairing gap [8]. A similar study for nuclear matter, as well as a more systematic study in finite nuclei, should be undertaken. This is the goal of the present work [10]. 
It should be also noted that we will not consider the neutron-proton T = 0 pairing channel since the nuclei considered are far from N = Z. In this paper, the effects of the pairing correlations on the symmetry energy and on the incompressibility are studied consistently in nuclear matter and in finite nuclei. The effects coming from the correlation energy associated with the pairing force are included. It should be noted that the surface versus the mixed nature of the pairing interaction is still under discussion. For instance, a recent systematic study based on the odd-even mass staggering seems to slightly favor a surface type of pairing interaction [11]. In the following, we will therefore explore various kinds of pairing interactions. Nuclear matter In this section, we study the effects of the pairing correlations on the symmetry energy and incompressibility in nuclear matter. Energy density The nuclear energy density ( = E/V ) is the sum of the Skyrme part, Skyrme , that includes the kinetic energy [12], plus the pairing energy density, Here In eq. (2), Δ τ is the pairing gap and N τ is the density of states, given by N τ = m * τ k F τ /(2π 2h2 ), with τ = n, p. The energy density is a function of the total density ρ and of the asymmetry parameter δ = (ρ n − ρ p )/ρ. In the T = 1 channel, several pairing interactions are defined by as a function of the value of η that can range from 0 (volume-type pairing) to 1 (surface-type pairing). In eq. (3) the parameter α is set to 1 and ρ 0 is taken as the saturation density of symmetric nuclear matter throughout all the study; moreover, we adopt the parameters η = 0.35 and 0.65 for the volume-surface mixed-type pairing interactions, and η = 1.0 for the surface-type interaction. The values of v 0 in all these cases are adjusted, for each η, in such a way to obtain equivalent results for the two neutron separation energy in the Sn isotopes by Hartree-Fock-Bogoliubov (HFB) calculations with the SLy5 parameter set [12]. The pairing cutoff energy is set at 60 MeV [13]. These values of v 0 are given in table I of ref. [10]. In the following, these pairing interactions will be denoted as IS, because they depend on the isoscalar density. The pairing gap in uniform matter is obtained from the BCS gap equation [16] solved under the condition of the particle number conservation. In a given volume V one assumes constant density given by where the quasiparticle energy is defined as e τ (k) being the single-particle energy, and μ τ is the chemical potential. In eq. (6), v kk is the pairing matrix element for the plane waves, namely kk|v|k k . Notice that in the case of the zero-range pairing interaction, the pairing gap Δ k is independent of k. In fig. 1 we display the pairing gap Δ τ , the pairing energy per particle e pair and the percentage of the pairing energy with respect to the total energy e in symmetric matter for the various pairing interactions together with the SLy5 Skyrme interaction [12] in the mean-field channel. There is a critical density ρ c ≈ 0.11 fm −3 at which the pairing interactions, which have been adjusted on nuclear energies (IS 0.35, IS 0.65, IS 1.0, YS), give almost the same result for the pairing gap, around 1.5 MeV. This has already been noticed in ref. 
[13] and may be related to the fact that, in fitting the two-neutron separation energy, one is sensitive to the space region of the nuclear surface, where the density is somewhat lower than the saturation density: therefore the pairing gap is constrained rather at ρ c than at ρ 0 . Above ρ c , the more surface-type the pairing interaction (that is, the larger η is taken), the smaller the pairing gap Δ τ . Below the critical density, the trend is reversed: the more surface-type the pairing interaction is, the larger the pairing gap. The contribution of the pairing energy is increased at low densities. Around the saturation density, the pairing energy per particle is much smaller than the binding energy (−16 MeV). In fig. 2 we display the pairing gaps Δ n and Δ p , the pairing energy per particle e pair and the total energy versus the asymmetry parameter δ = (N − Z)/(N + Z) for the total density fixed to the average density in nuclei (∼ 0.11 fm −3 ). It is observed a systematic decrease (respectively, increase) of the pairing gap Δ n (respectively, Δ p ) as the asymmetry parameter increases, for the isoscalar pairing interactions (IS 0.35, IS 0.65 and IS 1.0). This phenomenon is not related to the pairing interaction itself, since for a fixed total density, the pairing interaction is contant, see eq. (3). This phenomenon is therefore uniquely related to the single-particle spectrum in asymmetric matter, and in particular, to the asymmetry dependence of the effective mass. The two other interactions MSH and YS are function of the isospin asymmetry, and therefore the asymmetry dependence of the pairing gap depends both on the interaction itself and on the single-particle spectrum in asymmetric matter. The pairing interaction MSH, been adjusted in the microscopic BCS pairing gaps in symmetric and neutron matter, show the expectations from microscopic calculation for both δ = 0 and 1. Let us notice that there is a large spreading around these expectations depending on the pairing interactions. Symmetry energy and incompressibility The density-dependent symmetry energy S(ρ) is defined by and it can be expanded, around the saturation density, as where J is defined by J = S(ρ 0 ), L = 3 ρ0 ∂S ∂ρ | ρ0 , and K sym = 9 We can define the density-dependent incompressibility as [17,10] which coincides with the incompressibility K ∞ = 9ρ 2 0 at the saturation density. Figure 3 displays the pressure P , the incompressibility K(ρ) in eq. (11), and the symmetry energy S(ρ) in eq. (10) without pairing (top panels), and the contribution of pairing to these quantities (bottom panels), using the SLy5 interaction [12]. This contribution is calculated with the same equations, but considering only the pairing term of the energy density in fig. 1. The same pairing interactions have been considered here as in fig. 1. Close to the saturation density, the contribution from pairing is very small. This is also illustrated in table 1: the pairing interaction has small effects at the saturation density. In the case of the incompressibility K ∞ , pairing can still produce a few % effect (for instance, K ∞ is changed from 230.2 MeV to 223.9 MeV in the case of the MSH pairing interaction). Results The MSH, YS, IS 0.35 pairing interactions modify the incompressibility by 3 to 6 MeV, that is, by about 2%. It should be noted that at the saturation density, the contribution to the slope parameters of the symmetry energy, L, and K sym , of the interactions MSH and YS is larger than that of the other IS forces. 
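The expansion coefficients used here are the standard ones, J = S(ρ0), L = 3ρ0 ∂S/∂ρ|ρ0 and K_sym = 9ρ0² ∂²S/∂ρ²|ρ0, with K∞ obtained from the curvature of the energy per particle of symmetric matter at saturation. As a numerical illustration of these definitions, the sketch below extracts the coefficients by finite differences from a toy equation of state; the parametrization, and therefore the printed numbers, is invented for illustration and is not the SLy5 functional, with or without pairing, used in this article.

```python
import numpy as np

RHO_0 = 0.16  # fm^-3, assumed saturation density

def energy_per_particle(rho, delta):
    """Toy equation of state e(rho, delta) in MeV, used only to illustrate
    how J, L, K_sym and K_inf are extracted; not the SLy5 functional."""
    u = rho / RHO_0
    e_symmetric = -16.0 + (230.0 / 18.0) * (u - 1.0) ** 2   # parabola around saturation
    s_of_rho = 32.0 * u ** 0.7                               # schematic symmetry energy
    return e_symmetric + s_of_rho * delta ** 2

def first_derivative(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

# S(rho) = (1/2) d^2 e / d delta^2 at delta = 0
S = lambda rho: 0.5 * second_derivative(lambda d: energy_per_particle(rho, d), 0.0)

J = S(RHO_0)
L = 3.0 * RHO_0 * first_derivative(S, RHO_0)
K_sym = 9.0 * RHO_0 ** 2 * second_derivative(S, RHO_0)
K_inf = 9.0 * RHO_0 ** 2 * second_derivative(lambda r: energy_per_particle(r, 0.0), RHO_0)

print(f"J = {J:.1f} MeV, L = {L:.1f} MeV, K_sym = {K_sym:.1f} MeV, K_inf = {K_inf:.1f} MeV")
```

With the toy parabola chosen for symmetric matter the script prints K∞ = 230 MeV by construction; in the article the analogous derivatives are taken on the full Skyrme-plus-pairing energy density.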
The effects on L can be about 15% while K sym can be modified in an important way. This is related to the dependence of these pairing interaction on the isovector density. However at lower densities, the pairing effects become appreciably larger as seen in fig. 3. In the case of the pure surface pairing, there are important contributions to the pressure, incompressibility and symmetry energy: these quantities can be strongly affected by pairing, which can lead to variations up to about a factor 2. Other pairing interactions also provide significant corrections to the pressure and the incompressibility, typically, around 10%. In the case of the symmetry energy, for typical densities ρ ≈ 0.1 fm −3 , the IS+IV pairing interaction YS predict a positive contribution which is opposite to all the other interactions considered here. It should be noted that the pairing contribution to these quantities is generally larger at densities below saturation. To obtain a more general view of the pairing effect on the incompressibility, table 2 displays the K ∞ values obtained for SLy5, LNS, Sk255 and Sk272 Skyrme functionals, with various pairing interactions. In table 2, the pairing interaction IS 0.35 is the largest one among the IS interactions and reduces the incompressibility K ∞ by about 3 MeV. The MSH interaction induces a correction of 6.3 MeV on the incompressibility. It should be noted that the pure surface pairing interaction provides no modification of K ∞ . Depending on the Skyrme models, there Table 2. Nuclear matter incompressibility K∞ (MeV) for SLy5 [12], LNS [18], Sk255 [19] and Sk272 [19] Skyrme functionals. The dependence of K ∞ on the pairing interaction is displayed: mixed (IS η = 0.35), surface (IS η = 1.00). The effect of the MSH pairing is also displayed in the SLy5 case. shall also be an effect due to the different effective masses m * /m, but they are incorporated in the renormalization of the pairing interaction parameter v 0 . It is expected that the above pairing effects at low densities may also affect finite nuclei. In the case of incompressibility, we can define a finite nucleus value K A and expect that this value is affected by the pairing more than K ∞ , due to the presence of a lower density region, i.e. the nuclear surface. We analyze this point in the next section, and we argue that a similar reasoning holds for the symmetry energy. Local density approximation (LDA) This section relates the general expressions in uniform matter obtained in sect. 2 with the observables in finite nuclei in the local density approximation. The aim is to estimate the role of pairing in the incompressibility and symmetry energy of finite nuclei in a simple and transparent way. The validity of the LDA will be estimated by comparing the predicted nuclei incompressibility with the one obtained by a microscopic approach. The binding energy per nucleon in the LDA reads where B Nucl. (N, Z) includes the bulk, surface and pairing contributions. It is defined by where (r) = (ρ n (r), ρ p (r)) = Skyrme (r) + pair (r) as was defined in eq. (1). The neutron and proton densities ρ n (r), ρ p (r) can be obtained, in the present context, by means of a spherical HF calculation. The pairing contribution to the binding energy is defined by B Nucl. 
can be expanded around the saturation density, where the symmetry energy in nuclei, S A , is defined by (16) and the contribution of the pairing correlations to S A is defined by The incompressibility in nuclei, K A , is defined by while the pairing contribution to the incompressibility is defined by The Coulomb contribution, K Coul. , can be evaluated using, for instance, the Thomas-Fermi approximation (cf. eq. (A1) in ref. [20]). It will not be included in the present work but the value obtained in ref. [20] is −8 MeV < K Coul. < −4 MeV, depending on the interaction which is used. Introducing the mass formula (13) into eq. (18), one obtains with For small values of the density (ρ < 0.6ρ 0 , that is r > 5 fm in 120 Sn), the incompressibility is found to be negative: this is due to the spinodal instability in nuclear matter which is not present in finite systems [21]. For this reason, the integral (20) is limited to the region where K Nucl. (r) is positive. In this way, the spurious component due to the spinodal instability is removed. Introducing the quantity the symmetry energy in nuclei (16) reads We first perform a self-consistent HF calculation which provides the neutron and proton densities in 120 Sn. From these densities we deduce the radial distributions of meanfield part and pairing part of (r) given in eq. (1), K Nucl. (r) given in eq. (21), and S A (r) given in eq. (22): these radial functions are shown in fig. 4. As expected from the results discussed in the previous section, the pairing effects on (r), K Nucl. (r) and S A (r) come from the low-density surface region. From eqs. (13), (20) and (23), we obtain, in the SLy5 case, B A = −13.5 MeV, K Nucl. = 119.8 MeV, and S A = 25.7 MeV without the contribution due to the pairing correlations. The Coulomb contribution has not been included. The value for K Nucl. should be compared with that of 141 MeV obtained by the constrained HFB (CHFB) calculations presented in ref. [10]. It should be noted that the CHFB calculations take into account the contribution coming from the Coulomb interaction. This contribution is estimated to be about 20 MeV in 120 Sn, using the values of K Coul. from ref. [20]. The good agreement between the LDA and the CHFB results ensures that LDA provides a sound framework to relate the nuclear matter incompressibility and the finite nucleus one. The contributions of pairing correlations to the binding energy, the bulk modulus and the symmetry energy are shown in table 3 for the various pairing interactions considered. The contribution of the surface-type pairing (IS η = 1.0) reduces K A by about 5%, whereas, for the IS mixed-type (η = 0.35 or 0.65) and the IS+IV (MSH and YS) pairing interactions, the effect on K A is predicted to be smaller. In table 3, it is also observed that pairing effects affect the binding energy by few percents, up to 5% for the surface-type pairing interaction. For the symmetry energy, pairing effects are negligible, being below 1% except the IS+IV pairing (MSH). Conclusions The effect of superfluidity on the symmetry energy and the incompressibility has been studied in both nuclear matter and finite nuclei, using various pairing energy density functionals. A small effect is observed on the nuclear matter incompressibility and the volume symmetry energy at the saturation point, but the effect is non-negligible on the derivative terms, L and K sym , especially in the case of IS+IV pairing. 
At lower densities, however, the pairing effect on the incompressibility is significant and can have a substantial impact on neutron-star studies and on the interpretation of multifragmentation data. It has been shown that the LDA provides a relevant framework for a qualitative understanding and interpretation of the microscopic results. The effect of the pairing correlations is localized near the surface of nuclei, and its net effect is to make the nuclear EOS slightly softer. The pairing effect is most noticeable in the low-density region of nuclear matter, which explains why such effects are expected to appear at the surface of finite nuclei. In the case of the IS+IV pairing interaction, no strong effect is observed on K A . In general, the pairing effects on the finite-nucleus incompressibility K A are more important the more surface-like the interaction is (larger η value). This study shows that, with respect to current experimental uncertainties, the pairing effects should be considered when extracting the incompressibility value from GMR data, which can now reach an accuracy of a few hundred keV [22]. Experimentally, it would be useful to measure the GMR along isotopic chains, including both open-shell and doubly magic nuclei such as 132 Sn. Such measurements are starting to be undertaken [22][23][24] and will be extended to unstable nuclei [25].
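The LDA estimate of the finite-nucleus incompressibility discussed above weights the local, density-dependent incompressibility over the Hartree-Fock density profile and discards the region where K Nucl.(r) is negative, i.e., the spurious spinodal contribution. The exact integrands of eqs. (20)-(23) did not survive extraction, so the following Python sketch shows only one plausible reading of that procedure, using a schematic Woods-Saxon density profile for a 120 Sn-like nucleus and a toy K(ρ); the profile parameters, the form of K(ρ) and the printed value are assumptions for illustration and should not be compared with the 119.8 MeV or 141 MeV values quoted in the text.

```python
import numpy as np

RHO_0 = 0.16                      # fm^-3, assumed saturation density
R_HALF, DIFFUSENESS = 5.6, 0.5    # fm, schematic Woods-Saxon parameters (120Sn-like)

r = np.linspace(1e-3, 12.0, 2400)
rho = RHO_0 / (1.0 + np.exp((r - R_HALF) / DIFFUSENESS))   # total density profile

def k_of_rho(rho_val):
    """Toy density-dependent incompressibility: 230 MeV at saturation and
    negative at low density, mimicking the spinodal region."""
    u = rho_val / RHO_0
    return 230.0 * u * (3.0 * u - 2.0)

weight = 4.0 * np.pi * r ** 2 * rho      # number-weighted radial measure
k_local = k_of_rho(rho)
positive = k_local > 0.0                  # drop the spurious spinodal contribution

k_a_bulk = np.sum(weight[positive] * k_local[positive]) / np.sum(weight)
print(f"LDA-style bulk incompressibility estimate: {k_a_bulk:.1f} MeV")
```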
4,375
2014-02-01T00:00:00.000
[ "Physics" ]
Output-Feedback Adaptive SP-SD-Type Control with an Extended Continuous Adaptation Algorithm for the Global Regulation of Robot Manipulators with Bounded Inputs In this work, an output-feedback adaptive SP-SD-type control scheme for the global position stabilization of robot manipulators with bounded inputs is proposed. Compared with the output-feedback adaptive approaches previously developed in a bounded-input context, the proposed velocity-free feedback controller guarantees the adaptive regulation objective globally (i.e. for any initial condition), avoiding discontinuities throughout the scheme, preventing the inputs from reaching their natural saturation bounds and imposing no saturation-avoidance restrictions on the choice of the P and D control gains. Moreover, through its extended structure, the adaptation algorithm may be configured to evolve either in parallel (independently) or interconnected to the velocity estimation (motion dissipation) auxiliary dynamics, giving an additional degree of design flexibility. Furthermore, the proposed scheme is not restricted to the use of a specific saturation function to achieve the required boundedness, but may involve any one within a set of smooth and non-smooth (Lipschitz-continuous) bounded passive functions that include the hyperbolic tangent and the conventional saturation as particular cases. Experimental results on a 3-degree-of-freedom manipulator corroborate the efficiency of the proposed scheme. Introduction Since the publication of [1], the Proportional-Derivative with gravity compensation (PDgc) controller [2] has proved to be a useful technique for the regulation of robot manipulators. In its original form, such a control technique achieves global stabilization under ideal conditions, for instance unconstrained input, measurability of all the system (state) variables and exact knowledge of the system parameters. Unfortunately, in actual applications, such underlying assumptions are not generally satisfied, giving rise to unexpected or undesirable effects, such as input saturation and those related to such a nonlinear phenomenon [3], noisy responses and/or deteriorated performance [4], or steadystate errors [5]. However, such inconveniences have not necessarily rendered the PDgc technique useless. Inspired by this control method, researchers have developed alternative (nonlinear or dynamic) PDgc-based approaches that deal with the limitations of the actuator capabilities and/or of the available system data, while keeping the natural energy properties of the original PDgc controller, which are the definition of a unique arbitrarily-located closed-loop equilibrium configuration and motion dissipation. For instance, extensions of the PDgc controller that cope with the input saturation phenomenon have been developed under various analytical frameworks in [6, 7, 8, 9, 10 and 11]. Indeed, assuming the availability of the exact value of all the system parameters and accurate measurements of all the link positions and velocities, a bounded PDgc-based approach was proposed in [6] and [7]. In these works, the P and D terms (at every joint) are each explicitly bounded through specific saturation functions; a continuously differentiable one, or more precisely the hyperbolic tangent function, is used in [6] and the conventional nonsmooth one in [7]. In view of their structure, these types of algorithms have been denoted SP-SD controllers in [12]. 
Two alternative schemes that prove to be simpler and/or give rise to improved closed-loop performance were recently proposed in [8]. The first approach includes both the P and D actions (at every joint) within a single saturation function, while in the second one all the terms of the controller (P, D and gravity compensation) are covered by one such function, with the P terms internally embedded within an additional saturation. The exclusive use of a single saturation (at every joint) including all the terms of the controller was further achieved through desired gravity compensation in [13]. Moreover, velocityfree versions of the SP-SD controllers in [7] and [6] (still depending on the exact values of the system parameters) are obtained through the design methodologies developed in [9] and [10]. In [9] global regulation is proven to be achieved when each velocity measurement is replaced by the dirty derivative [14] of the respective position in the SP-SD controller of [7]. A similar replacement in a more general form of the SP-SD controller is proven to achieve global regulation through the design procedure proposed in [10] (where an alternative type of dirty derivative, which involves a saturation function in the auxiliary dynamics that gives rise to the estimated velocity, results from the application of the proposed methodology). Furthermore, an outputfeedback dynamic controller with a structure similar to that resulting from the methodology in [10], but which considers a single saturation function (at every joint) where both the position errors and velocity estimation states are involved, was proposed in [11] (where a dissipative linear term on the auxiliary state is added to the saturating velocity error dynamics involved for the dirty derivative calculation). Extensions of this approach to the elastic-joint case were further developed in [15]. Furthermore, SP-SD-type adaptive algorithms that give rise to bounded controllers, while alleviating the system parameter dependence of the gravity compensation term, have been developed in [16, 17, and 18]. In [16] global regulation is aimed for, through a discontinuous scheme that switches among two different control laws, under the consideration of state and output feedback. Both considered control laws keep an SP-SD structure similar to that of [7]; the first one avoids gravity compensation taking high-valued control gains (by means of which the closed-loop trajectories are lead close to the desired position) and the second one considers adaptive gravity compensation terms that are kept bounded by means of discontinuous auxiliary dynamics. Each velocity measurement is replaced by the dirty derivative of the corresponding position in the output-feedback version of the algorithm. Unfortunately, a precise criterion to determine the switching moment (from the first control law to the second one) is not furnished for either of the developed schemes. In [17] semi-global regulation is proven to be achieved through a state feedback scheme that keeps the same structure as the SP-SD controller of [6] but additionally considers adaptive gravity compensation. The adaptation algorithm is defined in terms of discontinuous auxiliary dynamics, by means of which the parameter estimators are prevented from taking values beyond some prespecified limit, which consequently keeps the adaptive gravity compensation terms bounded. 
This approach was further extended in [19] where the control objective is defined in task coordinates and the kinematic parameters, in addition to those involved in the system dynamics, are considered to be uncertain too. In [18] a controller that keeps the SP-SD structure of [6] is proposed, where each velocity measurement is replaced by the dirty derivative of the corresponding position and an adaptive gravity compensation term with initialcondition-dependent bounds is considered. Based on the proof of the main result, semi-global regulation is claimed to be achieved. Let us note that, by the way the SP and SD terms are defined in the adaptive schemes mentioned above, the bound of the control signal at every link turns out to be defined in terms of the sum of the P and D control gains (and of an additional term involving the bounds of the parameter estimators). This limits the choice of such gains if the natural actuator bounds (or arbitrary input bounds) are to be avoided. This, in turn, restricts the closed-loop region of attraction in the semi-global stabilization cases. On the other hand, as far as the authors are aware, the semi-global and/or discontinuous approaches developed in [18] and [16] are the only output-feedback bounded adaptive algorithms proposed in the literature. Moreover, a continuous adaptive scheme with continuous auxiliary dynamics, which achieves the global regulation objective, avoiding input saturation and disregarding velocity measurements in the feedback, is still missing in the literature and consequently remains an open problem. These arguments have motivated the present work, which aims to fill in the aforementioned gap. It is worth adding that recent works have focused on the global regulation problem in the bounded-input context through nonlinear PID-type controllers. This is the case for instance of [20], [21], [22] where state-feedback and output-feedback schemes were presented, and [23] where a controller with the same structure as the state-feedback algorithm presented in [22] was previously proposed. Such PID-type algorithms are not only independent of the exact knowledge of the system parameters, but also disregard the structure of the system dynamics (or of any of its components). However, in a bounded-input context, the design of an output-feedback adaptive scheme that solves the regulation problem globally, avoiding input saturation, and being free of discontinuities, remains an open analytical challenge. Moreover, as will be corroborated in subsequent sections of this work, regulation towards a suitable configuration permits the output-feedback adaptive scheme to provide an estimation (exact under ideal conditions) of the system parameters (involved in the gravity-force vector), which is not the case for other types of controllers. In this work, an output-feedback adaptive SP-SD-type control scheme for the global regulation of robot manipulators with saturating inputs is proposed. Through its extended structure, the adaptation algorithm may be configured to evolve either in parallel (independently) or interconnected to the velocity estimation (motion dissipation) auxiliary dynamics, giving an additional degree of design flexibility. With respect to the previous output-feedback adaptive approaches developed in a bounded-input context, the proposed velocity-free feedback controller guarantees the adaptive regulation objective globally (i.e. 
for any initial condition), avoiding discontinuities throughout the scheme, preventing the inputs from attaining their natural saturation bounds and imposing no saturation-avoidance restriction on the choice of the P and D control gains. Furthermore, contrarily to the adaptive schemes of the previously cited studies, the approach proposed in this work is not restricted to involving a specific saturation function to achieve the required boundedness, but may involve any one within a set of smooth and non-smooth (Lipschitz-continuous) bounded passive functions that include the hyperbolic tangent and the conventional saturation as particular cases. Experimental results on a 3-degree-of-freedom manipulator corroborate the proposed contribution. Preliminaries Let us consider the general n-degree-of-freedom (n-DOF) serial rigid robot manipulator dynamics with viscous friction [26,27]: are, respectively, the position (generalized coordinates), velocity and acceleration vectors. is a piecewise continuous function with bounded discontinuities but well defined at i  , are, respectively, the vectors of Coriolis and centrifugal, viscous friction, gravity and external input generalized forces, with n n F R   being a positive definite constant diagonal matrix whose entries 0 i f  , , , 1 i n   , are the viscous friction coefficients. Some well-known properties characterizing the terms of such a dynamical model are recalled here (see for instance [2,Chap. 4] and see further [2, Chap. 14] and [28] concerning Property 6 below). Property 6 The gravity vector can be rewritten as is a constant vector whose elements depend exclusively on the system parameters and is a continuous matrix function, whose elements depend exclusively on the configuration variables and do not involve any of the system parameters. Equivalently, the potential energy function of the robot can be rewritten as is a continuous row vector function whose elements depend exclusively on the configuration variables and do not involve any of the system parameters. Actually, Property 7 Consider the gravity vector ( , ) Let us suppose that the absolute value of each input i  ( th i element of the input vector  ) is constrained to be smaller than a given saturation bound 0 In other words, letting i u represent the control signal (controller output) relative to the th i degree of freedom, we have: Let us note from (1) and (2)  . Thus, the following assumption turns out to be crucial within the analytical setting considered in this work: The control schemes proposed in this work involve special functions fitting the following definition. Definition 1 Given a positive constant M , a non-decreasing Lipschitz-continuous function : Functions meeting Definition 1 satisfy the following: be a generalized saturation with bound M and k be a positive constant. Then The proposed output-feedback adaptive control scheme is defined as (3),  is a constant that may arbitrarily take any real value and  is a (sufficiently small) positive constant. A block diagram of the proposed output-feedback adaptive control scheme is shown in Fig. 1. Remark 2 Note that the simplest version of the proposed control scheme arises by taking 0   . However, the term extending the adaptation dynamics in (7a) has been included for the sake of generality, since an analogue term was considered in a previous approach [18]. Furthermore, the  -term in (7a) has a natural influence in the closed-loop responses which could be used for performance adjustment purposes. 
This aspect is not explored in this work. Experimental results In order to experimentally corroborate the efficiency of the proposed scheme, referred to as the SP-SDc-ga controller, real-time control implementations were carried out on a 3-DOF manipulator. The experimental setup, shown in Fig. 2, is a 3-revolute-joint anthropomorphic arm located at the Benemerita Universidad Autonoma de Puebla, Mexico. The actuators are direct-drive brushless motors (from Parker Compumotors) operated in torque mode, so they act as a torque source and accept an analogue voltage as a reference of torque signal. Position information is obtained from incremental encoders located on the motors. The setup includes a Pentium host computer and a system of electronic instrumentation, based on the motion control board MFIO3A, manufactured by Precision Microdynamics. The robot software is in open architecture, whose platform is based in C language to run the control algorithm in real time. The control routine registers data generated during the first 2000 samples at a default sample time of 2.5 s T  ms, but s T can be changed to higher values in accordance to the desired experimental duration. The experiments carried out in the context of this work, whose results are presented below, were run taking 0.12 s T  s. A more detailed technical description of this robot is given in [30]. For the considered experimental manipulator, Properties 5 and 6 are satisfied with For comparison purposes, additional experiments were run implementing the output-feedback adaptive algorithm proposed in [18], referred to as the L00 controller (choice made in terms of the analogue nature of the compared algorithms: output-feedback adaptive developed in a bounded input context; comparison of controllers of a different nature loses coherence), i.e., Observe that the regulation objective was achieved preventing input saturation and avoiding steady-state position errors. Furthermore, note that despite the presence of a small overshoot, through the SP-SDc-ga algorithm shorter stabilization times took place in both position error and parameter estimation responses. Let us further note that at 240s, where the experimental data registration was stopped, the parameter estimations were still evolving. This is a consequence of the slow evolution of the adaptation subsystem dynamics, due to the relatively small value of  in the proposed scheme and the analogue coefficients  and  in the L00 controller. Nevertheless, the slow evolution of the adaptation subsystem dynamics did not have any influence on the position responses, which had been stabilized during the initial seconds of the experiment. The subsequent parameter estimator evolution was expected to reduce the difference among the estimations obtained through each implemented controller. 7 One can verify from ( ) G q in (16) that, for the considered manipulator, the desired configurations that satisfy the condition stated by Corollary 1 are those such that In this work, an output-feedback adaptive control scheme for the global regulation of robot manipulators with bounded inputs was proposed. With respect to the previous output-feedback adaptive approaches developed in a bounded-input context, the proposed velocity-free feedback controller guarantees the adaptive regulation objective: globally, avoiding discontinuities throughout the scheme, preventing the inputs from reaching their natural saturation limits and imposing no saturation-avoidance restriction on the control gains. 
Moreover, the developed scheme is not restricted to the use of a specific saturation function to achieve the required boundedness, but may rather involve any function within a set of smooth and non-smooth (Lipschitz-continuous) bounded passive functions that includes the hyperbolic tangent and the conventional saturation as particular cases. The efficiency of the proposed scheme was corroborated through experimental tests on a 3-DOF manipulator. Good results were obtained, improving on those obtained with an algorithm previously developed in an analogous analytical context.
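As a purely schematic companion to the controller structure discussed in this article, the sketch below assembles in Python the generic ingredients named in the text: generalized saturation functions (a smooth hyperbolic-tangent instance and the non-smooth conventional saturation), a dirty-derivative velocity estimate, saturated SP and SD actions, and a gradient-type adaptive gravity-compensation term built on the regressor of Property 6. The gains, bounds, filter and adaptation law are illustrative assumptions; this is not the scheme of eq. (7) whose global-stability proof is given in the paper, and none of its guarantees transfer to this sketch.

```python
import numpy as np

def sat_tanh(x, M):
    """Smooth generalized saturation with bound M (hyperbolic-tangent type)."""
    return M * np.tanh(x / M)

def sat_hard(x, M):
    """Conventional (non-smooth, Lipschitz-continuous) saturation with bound M."""
    return np.clip(x, -M, M)

class BoundedSPSDOutputFeedback:
    """Schematic bounded SP-SD-type regulator with dirty-derivative velocity
    estimation and adaptive gravity compensation (illustrative structure only)."""

    def __init__(self, kp, kd, a, gamma, M_p, M_d, sat=sat_tanh):
        self.kp, self.kd = np.asarray(kp), np.asarray(kd)
        self.a = np.asarray(a)          # dirty-derivative filter poles
        self.gamma = gamma              # adaptation gain
        self.M_p, self.M_d = M_p, M_d   # saturation bounds of the P and D terms
        self.sat = sat
        self.xc = None                  # filter state
        self.theta_hat = None           # gravity-parameter estimate

    def reset(self, n_joints, n_params):
        self.xc = np.zeros(n_joints)
        self.theta_hat = np.zeros(n_params)

    def control(self, q, q_des, Y_g, dt):
        """q: measured positions, q_des: desired positions,
        Y_g: regressor such that g(q) is approximately Y_g @ theta (Property 6)."""
        q_err = q - q_des
        # First-order filtered ("dirty") derivative of the measured position
        v_hat = self.a * (q - self.xc)
        self.xc += dt * v_hat
        # Simple gradient adaptation driven by the position error (sketch only)
        self.theta_hat += -dt * self.gamma * (Y_g.T @ q_err)
        # Saturated P and D actions plus adaptive gravity compensation
        return (-self.sat(self.kp * q_err, self.M_p)
                - self.sat(self.kd * v_hat, self.M_d)
                + Y_g @ self.theta_hat)

# Hypothetical 2-joint example with a constant gravity regressor
ctrl = BoundedSPSDOutputFeedback(kp=[20.0, 15.0], kd=[5.0, 4.0], a=[30.0, 30.0],
                                 gamma=0.5, M_p=8.0, M_d=6.0)
ctrl.reset(n_joints=2, n_params=2)
u = ctrl.control(q=np.array([0.1, -0.2]), q_des=np.zeros(2),
                 Y_g=np.array([[9.81, 0.0], [0.0, 9.81]]), dt=0.001)
print(u)
```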
3,743.6
2013-01-01T00:00:00.000
[ "Mathematics" ]
Intelligent One-Class Classifiers for the Development of an Intrusion Detection System: The MQTT Case Study : The ever-increasing number of smart devices connected to the internet poses an unprecedented security challenge. This article presents the implementation of an Intrusion Detection System (IDS) based on the deployment of different one-class classifiers to prevent attacks over the Internet of Things (IoT) protocol Message Queuing Telemetry Transport (MQTT). The utilization of real data sets has allowed us to train the one-class algorithms, showing a remarkable performance in detecting attacks. Introduction The "Internet of Things" (IoT) refers to any technology implementation, including a set of smart devices interconnected to the internet, interacting with external systems through information and data exchange. Currently, there are over 5 billion connected IoT devices [1], according to the previous definition. One of the most relevant applications of IoT is in, what is usually referred to as, Industry 4.0 [2]. It allows for real-time management of automated controlled systems through remote monitoring via a client device, such as a Smartphone, tablet, or PC. Additionally, by collecting and analysing data and information with cloud processing techniques, it is possible to achieve more complex interactions amongst the different elements of the IoT system [3,4]. IoT devices are usually quite affordable, but they usually present a rather limited computation capacity, not being feasible to implement cybersecurity primitives at a device level. Therefore, IoT devices tend to be fast and efficient but with limited resilience against network attacks. IoT systems have traditionally been a target for attacks. They have been used in botnets, such as the Mirai attack in September 2016, in which 400,000 IoT devices were infected and eventually performed a massive DDoS attack. Another notable example took place in April 2020, when the Dark Nexus botnet, based on the Mirai code, compromised over 1350 IoT devices [5]. To improve security in IoT systems, one of the most popular approaches is to use an intrusion detection system (IDS). This method is based on monitoring network traffic and, therefore, it does not require high computing capacity at a device level. In addition, the IDS does not require any change in the configuration in the existing IoT systems. IDS can be signature-based or anomaly detection-based. Signature-based detection methods, using predefined rules, are effective in detecting attacks for previously known behaviours. On the other hand, anomaly-based identification is used in order to detect unknown attacks or attacks with patterns that are not clearly defined. In order to do so, the IDS is monitoring the entire system, comparing anomalous traffic with predefined behaviours assumed to be normal through previously trained artificial intelligence models. The most resource-consuming activities are performed offline, allowing for an overall smooth, efficient IDS activity. One quite common approach to increase the security of an IoT network is the utilization of patterns of previous attacks within an IDS [12][13][14][15]. In particular, the two main methods for an IDS introduction are: attack recognition via real-time status monitoring and juxtaposition with the previous normal value [16] or signature-based pattern recognition in the network data flow [17]. In the latter, the implementation of a detection model is mandatory. 
This action is usually performed by introducing machine learning algorithms, such as Support Vector Machine and Random Forest [18,19] or clustering techniques [20]. A supplementary approach involves deep learning methods, such as auto-encoders and Deep Belief Networks (DBNs) due to their dimension-reduction capabilities in optimal classification models [21][22][23]. For thread detection modelling, more sophisticated techniques, such as Long Short Term Memory (LSTM), have been proved to be a valid solution [24,25]. More recent research lines have implemented the NSL-KDD dataset (enhanced version ofKDD99) to incorporate Remote to Local (R2L) and User to Root (U2R) attacks on IoT systems. Specifically, it introduces two-level classification algorithms using a Bayesian network and the K-Means method [26]. Another current study introduces the NSL-KDD dataset in an IDS, using Kontiki. In this environment, a network of IoT devices is simulated using the MQTT protocol [27]. Finally, in the AWID dataset, attack detection through Machine Learning is proposed in wireless IoT environments (WIDS) [3,28,29]. Regarding open source IDS systems, Snort and Suricata [30] are the most widely implemented to address IoT system security. In the present paper, the Snort 3.0 version, launched in 2021, is selected to be compared to the proposed IDS. The purpose of the article is to improve security IoT environments, specifically in those presenting certain characteristics (in terms of computing capacity and protocols), making them a likely target for attacks (i.e., botnets). To that end, we propose the implementation of an IDS, introducing one-class classifiers using the MQTT protocol. This approach should constitute a viable solution since it analyses the network traffic without altering the system configuration or demanding additional computing capacity. The network analysis of the MQTT protocol can be used to prevent intrusion attacks through non-legitimate clients. This vulnerability is included in RFC5246 [31], "Communications could be intercepted, altered, re-routed or disclosed", which also prevents an attacker from performing code injections to alter the operation of the system. The final factor to develop a truly applicable model is the utilisation of reliable datasets [32], such as e KDD99 [33], NSL-KDD [34], and AWID [35] for TCP/IP, containing network data of this protocol. Therefore, the present article also details how to create a dataset based on MQTT. The main goal of this research is the development of an IDS with a new machine learning approach, complementing the traditional, signature-based method. The one-class machine Learning algorithm is a state-of-the-art technique previously presented in [36]. Regarding the composition of this article, Section 2 introduces the case study, while Section 3 refers to the one-class classifier outlook. Finally, Section 4 explains the experiments and results and Section 5 presents both the conclusions and some future lines of work. Case Study MQTT is a messaging protocol specifically designed for light machine-to-machine (M2M) communications, making it very suitable for connecting small devices to networks with limited bandwidth. The MQTT protocol is widely used in IoT [37] and industry [2] environments. The architecture of a MQTT system follows a star topology [38], with a central node or "broker" acting as a server. The broker is responsible for managing the network and transmitting the messages in real time. 
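A broker-centred publish/subscribe exchange of the kind just described can be sketched with a minimal Python client based on the paho-mqtt library (1.x callback style; newer releases additionally require a callback API version argument). The broker address, port and topic below are hypothetical placeholders; the topic-based mechanism itself is detailed in the next paragraph.

```python
import paho.mqtt.client as mqtt  # paho-mqtt 1.x callback style

BROKER_HOST = "192.168.1.10"     # hypothetical broker address
BROKER_PORT = 1883               # default (unencrypted) MQTT port

def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    client.subscribe("sensors/demo")          # hypothetical topic

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode(errors='replace')}")

client = mqtt.Client()           # paho-mqtt >= 2.0 requires a CallbackAPIVersion argument
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)

# Publish one message to the same topic, then process network traffic forever
client.publish("sensors/demo", payload="21.5")
client.loop_forever()
```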
The communication protocol is based on topics created by the client publishing the message and the receiving node(s) subscribing to the topic. Therefore, it allows for both one-to-one or one-to-many communications. As previously mentioned, the rather limited computational capability of the devices in a MQTT system makes it vulnerable, which could lead to attackers monitoring and even affecting the normal activity. One of the most common ways in which an attacker gains access to the system is by using the Shodan network scan on the default port 1883 [39]. Subsequently, and once inside the system, the invader can proceed to "sniff" the data packages and identify the plain text password [40]. Once obtained, it can gain access to the system, scan the broker messages (identified by the use of the '#' character), and ultimately manipulate the different topics. For this article, the following IoT system implementing the MQTT protocol is defined: -Actuators and sensors: Comprised of two integrated boards NodeMCU, including a low-power micro controller connected to a wireless network via a ESP8266 chip [41]. The NodeMCU chip is connected to a HC-SR04 ultrasonic sensor, which subscribes to the topic "distance/ultrasonic1", where it publishes the distance to any element in front of it, up to a range of 40 centimetres. The other NodeMCU is connected to an actuator consisting of a relay that turns a desk light on and off, subscribes to the topic "light/relay", and, depending on the value, changes the state to "0" off, "1" on. - The server: Developed in node.js due to its efficiency in controlling multiple and simultaneous connections, with the npm package manager, which has installed the "Aedes' 'library [42], with which a MQTT broker server has been programmed. The server also hosts the client web application. -Web application:Developed using angular.js, which connects to the broker as another client with the angular-MQTT library. The difference is that it implements web sockets instead of the MQTT protocol for the communication with the broker. The web application has an interface that shows the status of the different devices and allows for interaction with them in real time. -System clients: A PC and a smartphone interacting with the sensors through the web app connected via WiFi while generating network traffic. In the present article, the intrusion was conducted using the mosquito software from a client, apart from the IoT infrastructure [43]. By means of a distinctive symbol, the topics of the system were unveiled. Subsequently, the sensor was targeted to alter the associated temperature and the actions on the actuator. In order to produce additional random frames, a power shell was implemented. Finally, a router customized with OpenWRT [44] received the traffic from the regular traffic, the IoT system under attack, and the internet navigation in a PCAP file. For file management purposes, the traffic included in the PCAP files was separated, taking into account the fields in the MQTT protocol, together with the common fields for every frame (ports, IP locations, and the time code according to the AWID data set). Additionally, a tag was assigned to each frame in order to mark whether it was under attack or not. The output was a data set in csv format with a total of 80.893 frames. A total of 78.995 frames (97.65%) were found to be normal, while 1.898 (2.35%) presented irregularities. 
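As an illustration of the dataset-construction step just described (dissecting the PCAP capture into per-frame MQTT and TCP/IP fields and tagging each frame), the following sketch uses pyshark, a Python wrapper around the Wireshark/tshark dissectors. The file names, the MQTT field names and the simplistic rule of labelling frames by a single attacker IP address are assumptions made for illustration; the actual dataset was tagged following the AWID-style conventions described above, and the field names may need adjusting to the installed dissector version.

```python
import csv
import pyshark  # wraps tshark; the Wireshark MQTT dissector must be available

ATTACKER_IPS = {"192.168.1.66"}   # hypothetical address of the attacking client

def field(layer, name, default=""):
    """Read a dissected field if the dissector exposes it, else a default."""
    return getattr(layer, name, default)

def pcap_to_csv(pcap_path, csv_path):
    capture = pyshark.FileCapture(pcap_path, display_filter="mqtt")
    with open(csv_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["time", "src_ip", "dst_ip", "src_port", "dst_port",
                         "mqtt_msgtype", "mqtt_topic", "label"])
        for pkt in capture:
            src = str(pkt.ip.src)
            label = "attack" if src in ATTACKER_IPS else "normal"
            writer.writerow([pkt.sniff_time.isoformat(), src, str(pkt.ip.dst),
                             str(pkt.tcp.srcport), str(pkt.tcp.dstport),
                             field(pkt.mqtt, "msgtype"), field(pkt.mqtt, "topic"),
                             label])
    capture.close()

if __name__ == "__main__":
    pcap_to_csv("mqtt_traffic.pcap", "mqtt_dataset.csv")
```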
Figure 1 introduces the steps for obtaining the data set, with the elements of the WLAN environment, as previously described. In this case, the system was attacked from another computer with intrusion attacks against the server. All the traffic was collected, dissected, and tagged by the router to generate the data set in CSV format. Intrusion Detection Classifier In order to address the security problems associated with the MQTT protocol, a solution based on the one-class technique was introduced. Classifier Approach In order to implement a one-class classifier, it is necessary to have prior knowledge of which conditions under the MQTT environment are working correctly. The followed steps are: • The target set comprised only of legitimate samples is divided into 10 random groups: • for i = 1:1:10 -All groups except the ith are used to train the classifier. An example of this process is shown in Figure 2. - Once the training stage has finished, the group i, together with all the non-target samples, are used to validate the classifier. An example of this process is shown in Figure 3. • The mean value from the 10 iterations is used as a measure of the classifier performance. • Finally, the classifier's performance is best selected and trained with all the target samples. Methods In this section, different sets of anomaly detection techniques are introduced. Approximate Convex Hull An Approximate Convex Hull (ACH) refers to a one-class classification technique of the boundary subset, with a very positive track record of practical implementations [45]. The underlying core concept is to obtain a reliable approximation to the limits of a certain data set S ∈ R n ; hence the "boundary" denomination. This is achieved by calculating the convex limits. Considering that the typical convex hulls of S with N samples and d variables implies a computational cost of O(N (d/2)+1 ) [45], it is computationally advisable to opt for a reliable enough approximation. Therefore, p random projections of the hull are generated over 2D planes to subsequently calculate their respective convex boundaries [46], reducing the overall computational cost, as presented in the following Figure 4. With the approximation modelling completed with the p projections, a new data set was considered an anomaly when it surpassed the convex hull for any of the generated projections. Additionally, adding an expansion factor α, by which the convex limits were enlarged or contracted from the centroid of each projection, increased the adaptability to different typologies of data sets. An α over 1 means that the limits are expanded, while under 1 implies narrowed boundaries. Non-Convex Boundary over Projections The Non Convex Boundary over Projections (NCBoP) technique relies on concepts akin to those introduced in the previous section and Figure 4, but it produces remarkably improved results for non-convex data sets [36]. In the NCBoP, the limits are calculated via non-convex limits, therefore eliminating false positives occurring if an anomaly presents within the the convex hull. As for the previous method (ACH), it is possible to introduce an α factor to avoid over-or under-fitting [36]. Figure 5 shows the difference between the convex hull and non-convex hull calculation for a given dataset in R 2 . Convex hull Non-convex hull Figure 5. Difference between convex and non-convex hull. One caveat worth mentioning for NCBoP is that the computational cost is higher than in ACH. 
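To make the projection idea shared by ACH and NCBoP concrete, the following sketch implements the convex variant; it is an illustrative reconstruction under the stated assumptions, not the evaluated code. The non-convex variant would replace each projection's convex hull with a non-convex boundary, at a higher cost per projection.

```python
# Sketch of the Approximate Convex Hull (ACH) idea: p random 2-D projections of the
# training (target) data, a convex hull per projection, and an expansion factor alpha.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def fit_ach(X_train, p=20, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    model = []
    for _ in range(p):
        P = rng.normal(size=(X_train.shape[1], 2))       # random projection onto a 2-D plane
        Z = X_train @ P
        hull_pts = Z[ConvexHull(Z).vertices]
        centre = hull_pts.mean(axis=0)
        hull_pts = centre + alpha * (hull_pts - centre)   # expand/contract the boundary
        model.append((P, Delaunay(hull_pts)))             # Delaunay gives a point-in-hull test
    return model

def is_anomaly(model, x):
    # A sample is flagged as an anomaly if it falls outside the hull of ANY projection.
    return any(tri.find_simplex(x @ P) < 0 for P, tri in model)
```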
Therefore, the decision regarding which one to go with should be based on the complexity and/or shape of each dataset to be processed. K-Means The non supervised family of learning algorithm, named K-Means, is well known for clustering purposes [47,48]. This procedure defines a set of groups for the initial data set based on the number of groups selected by the user. The algorithm uses the total sum of distances from each cluster centroid to every point. The training set is utilized for calculating the centroid of each group. If the distance of a certain test data to its closest centroid is minimum, K-means can be considered a one-class technique. Therefore, the anomaly can be detected when the distance is higher than the distance of every cluster data to the centroid. An example where the training set is divided in two clusters is presented in the following Figure 6. In this case, a test point, represented by a green dot, is labelled as a target because the distance to the nearest centroid (black star of Cluster #2) is lower than that of many others in the training samples. Feature 2 Cluster #1 Cluster #2 Figure 6. Graphical representation of a sample labelling using K-means. Principal Component Analysis Principal Component Analysis (PCA) is a common technique oriented at reducing the dimension when this factor represents a potential threat [49,50]. In addition, PCA can provide a good solution for detecting anomalies and solving classification-related problems [51,52]. The PCA algorithm is based on the eigenvectors of the co-variance matrix for calculating the directions, where the set of data has higher variability. Upon definition of these directions, they are known as principal components. Subsequently, they are used for linear projections with fewer dimensions. If the criteria chosen is based on the distance between the data projected and the primitive data, a reconstruction error value is obtained as a one-class procedure. By doing so, if the reconstruction error of the test data is larger than the value obtained in the training process, an anomaly has been identified. The following Figure 7 introduces an example of how the distance from a test point to its projection is greater than all the distances of the training points to their respective projections. In this case, only the first component is used. The limit between normal and anomalous behaviour is commonly related to the training distance percentile. One-Class Support Vector Machine One of the most typical algorithms for anomaly detection is the Support Vector Machine (SVM) for one-class (OCSVM) tasks due to the high-quality outcome for various applications [53,54]. The OCSVM is a supervised-learning, procedure-mapping method supported by a kernel function. Subsequently, an hyper-plane able to maximize the distance between the origin and the mapped points is defined. Consequently, the instances near the hyper plane are considered the support vectors. When the training procedure is completed and a new data set enters the classifier, it returns the distance from the high dimensional plane to the input data. Any negative result is flagged as an anomaly. Experiments Setup The following subsection introduces the set of experiments implemented for this article. Techniques Configuration In order to evaluate the performance of each of the five techniques presented so far, the following benchmarks have been set: The amount of main components ranges from 1 to n − 1, with n being the number of variables in the training set. 
• K-Means: The outlier fraction of the algorithm is taken into consideration as well. • SVM: An outlier percentage is also considered, similarly to the K-means and PCA cases. Data Pre-Processing In order to improve the results, the dataset was normalized, both by scaling values to the range 0 to 1 and by applying the z-score method [55]. Additionally, the categorical variables were converted into numerical values. On top of that, a data set without pre-processing was tested for comparison. Performance Measurement In order to assess the output of each classifier, the Area Under the Receiver Operating Characteristic Curve (AUC) [56] was the selected measure. This measure represents the probability that a randomly chosen positive sample is ranked above a randomly chosen negative one. Furthermore, in contrast to other measures, such as sensitivity, precision, or recall, AUC is not sensitive to class distribution, which is a significant advantage, especially in one-class problems [57]. In addition, the required training time of each classifier was recorded as an indicator of the associated computational cost. Finally, the benchmark tests were confirmed with a k-fold cross-validation test, considering k = 10. Results The experiments presented so far have led to the results shown in Table 1 for intrusion detection and Table 2 for MiTM events, respectively. For each technique, the configuration presenting the strongest AUC is selected and reported. With the default set of rules, Snort can detect DoS attacks but not intrusions. Therefore, it is necessary to create a specific directive at the byte level to screen for a potential use of the special character "#" by an external client. Unfortunately, this approach is vulnerable if the attacker changes its network configuration. Conversely, the proposed IDS can detect intrusion attacks using the proposed model without any additional configuration. As for ease of deployment, the XGBoost-based alternative introduces GPU processes, making the overall implementation more complex and computationally costly compared to previous works [24]. Conclusions and Future Works The results have revealed that the best overall technique for MiTM detection is PCA, simultaneously presenting an AUC over 89% and the shortest training time. In particular, the highest performance level was obtained with eight components, z-score normalization, and an outlier fraction of 10% in the training set. Similar conclusions were reached for the IDS, although the AUC values obtained were slightly lower. As stated above, the training time is a significantly important parameter in machine learning tasks. Therefore, the K-means and PCA algorithms are suitable for these kinds of high-dimensional data sets. In other cases, NCBoP and SVM might be viable options, since their associated AUC performance is still acceptable. Under such an assumption, the proposed approach would be a very promising tool to monitor the potential appearance of two different types of attacks within an MQTT protocol environment. This could have a very positive, direct impact on the network performance, helping to increase the cybersecurity protection level. For future works, additional one-class techniques could be applied in order to further improve the performance of the anomaly detection system. In particular, dimension reduction techniques, such as autoencoders, could be especially promising.
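A minimal sketch of how such an autoencoder-based detector could fit the one-class workflow used here is given below. It is an illustration only: the architecture, training schedule, and percentile threshold are assumptions, not results of this work.

```python
# Sketch of an autoencoder-based one-class detector: train on normal traffic only and
# flag samples whose reconstruction error exceeds a percentile of the training errors.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features, n_hidden=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.dec = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.dec(self.enc(x))

def train_detector(X_normal, epochs=50, lr=1e-3):
    X = torch.as_tensor(X_normal, dtype=torch.float32)
    model = AE(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(X) - X) ** 2).mean()
        loss.backward()
        opt.step()
    errs = ((model(X) - X) ** 2).mean(dim=1).detach()
    threshold = torch.quantile(errs, 0.95)   # analogous to an outlier fraction of 5%
    return model, threshold

def is_anomaly(model, threshold, x):
    x = torch.as_tensor(x, dtype=torch.float32)
    return bool(((model(x) - x) ** 2).mean() > threshold)
```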
Moreover, the introduction of unsupervised techniques, as opposed to semi-supervised ones, could prove extremely useful; to that end, clustering techniques might play a significant role and are worth further study. Finally, applying the presented one-class techniques to datasets based on other IoT protocols, such as the Constrained Application Protocol (CoAP), could provide valuable information for detecting currently unnoticed attacks. Conflicts of Interest: The authors declare no conflict of interest.
4,635.4
2022-01-30T00:00:00.000
[ "Computer Science", "Engineering" ]
The effect of environmental conditions on diapause in the blister beetle, Mylabris phalerata (Coleoptera: Meloidae) In the field, the blister beetle Mylabris phalerata Pallas (Coleoptera: Meloidae) undergoes larval diapause in the ground, which lasts for nearly six months. The effect of the soil environment on this diapause was examined. Final instar larvae kept at temperatures of 26°C or above did not enter diapause and continued to develop regardless of the soil water content and photoperiod. At 25°C or below, the final instar larvae entered diapause regardless of soil water content and photoperiod. The early stages, particularly L2, appeared to be more important for diapause induction than the later stages. However, the other instars were also sensitive. Temperature, rather than photoperiod, was the main factor influencing pupal duration. INTRODUCTION Many animals have evolved to survive seasonally recurring adverse conditions by entering a diapausing stage. To this end, many insects respond to one or many environmental factors as cues for diapause. Photoperiod is the most common environmental factor inducing the onset of diapause in temperate-zone insects (Tauber et al., 1986; Danks, 1987; Saunders, 2002). In many insects temperature is another important factor controlling diapause, especially in insects living in warehouses and underground. Diapause in soil-inhabiting insects can be influenced by soil temperature, moisture and oxygen (Lee & Denlinger, 1990). Diapause in soil-inhabiting insects is an important topic, which is poorly studied. The present study investigates the effects of soil environmental conditions, including temperature, photoperiod and water content, on diapause in Mylabris phalerata (Coleoptera: Meloidae). M. phalerata is usually found on flowers of cowpea (Vigna unguiculata) and loofah (Luffa cylindrica). Cantharidin from Mylabris is used in medicine (Wang, 1989; Hundt et al., 1990; Wang et al., 2000; Xu et al., 2004). In addition, its larvae are predators of eggs of the grasshopper Chondracris rosea rosea De Geer (Orthoptera: Acridiidae). As the beetle is now scarce in the field, it is important to rear large numbers in the laboratory. Therefore, knowledge of the factors inducing diapause in the final instar larvae is important. Insect materials Mylabris phalerata adults were collected from cowpea flowers in the fields on farms of Huazhong Agricultural University at Wuhan (30.5°N, 114.3°E), Hubei Province, People's Republic of China, in June-July 2003. Adult beetles were brought to the laboratory and reared at 25 ± 1°C, 70 ± 5% r.h.
and 16L : 8D in a wire screen cage (100 cm × 100 cm at base and 300 cm deep).A plastic container (50 cm × 25 cm at base and 12 cm deep) was put at the bottom of the cage, which contained moist soil for oviposition and acted as a source of moisture.Adults were fed on cowpea flowers, cowpea pods and flowers of loofah.Daily checks were made and newly laid egg masses of M. phalerata were collected and placed in small plastic containers (4 cm wide at base and 10 cm deep).Upon hatching, larvae were placed individually in the same containers filled with fine inorganic soil and a grasshopper egg-pod. Temperature response experiment To investigate the effect of temperature on diapause occurrence in M. phalerata the larvae were reared at 18, 22, 25, 28, 31 or 34 ± 1°C in soil with a water content of 10% (w : w).50 individuals were reared at each temperature.Larval moulting and pupation were checked and recorded. The joint effects of temperature and soil water content The joint effects of temperature and soil water content on diapause occurrence in M. phalerata were determined at 25 and 30 ± 1°C and a water content of 8%, 10% or 12% (w : w).Larval moulting and pupation of 50 individuals were checked and recorded for each treatment. The joint effects of temperature and photoperiod The joint effects of temperature and photoperiod on diapause induction in M. phalerata were studied by exposing all the immature stages of M. phalerata to 22, 24, 25, 26 or 28 ± 1°C at photoperiods of 8L : 16D, 12L : 12D or 16L : 8D.The soil water content was 10% (w : w).The diapause intensity was measured as diapause duration.Larval moulting and pupation of 50 individuals were recorded. Sensitivity of larvae to photoperiod and temperature Two experiments investigated the sensitivity of larvae to photoperiod and temperature.The first experiment was on pre-5 th instar larvae.Eggs and larvae were kept at 25°C or 30°C and photoperiods of 8L : 16D, 12L : 12D or 16L : 8D, respectively.Larvae kept at 30°C were transferred to 25°C on the first day of the 1 st , 2nd, 3 rd or 4 th instar but kept at the same photoperiod.Similar larvae kept at 25°C were transferred to 30°C as above.In the second experiment, the 5 th instar larvae were transferred.Individuals reared at 25°C were transferred to 30°C when they reached 1, 30, 60, 90, 120 or 150 days age.The 5 th instar larvae kept at 30°C were transferred to 25°C when 1, 20 or 30 days old.50 individuals were used in each treatment and the soil water content was 10% (w : w).The control group was not transferred.Diapause occurrence and duration were monitored. The effect of photoperiod and temperature on pupal duration Diapausing larvae were kept at 22 or 25°C and a photoperiod of 8L : 16D, 12L : 12D or 16L : 8D and the soil water content was 10% (w: w).The non-diapausing larvae were kept at 28°C.The duration of the pupal stage was recorded in each case. Diapause identification The 5 th instar larvae wander before entering the soil.This wandering lasts from the time of the moulting of the 5 th instar larvae to when they enter the soil.The wandering of nondiapause individuals lasts for 2 days, whereas that of diapause individuals lasts for 4 days. 
Data analysis The difference in the duration of development of each stage in the different treatments was tested for significance by analysis of variance (ANOVA) using SAS (SAS Institute, 1999).The temperature in 2003 and 2004 was monitored by recording daily minimum and maximum temperature.The time required by fifth instar larvae reared under given diapause inducing conditions to reach the pupal stage was used as a measure of diapause intensity. Effect of temperature on the rate of development Egg development time decreased with increased in temperature from 18 to 34°C (Fig. 1).Duration of the first to the fourth instar was longer at 18°C than at the other temperatures tested.Larvae kept at temperatures 22°C from the first to the fourth instar took a similar time to complete development.More than 94% of the fifth instar larvae kept at low temperatures ( 25°C) entered diapause and took five months before they pupated.At 28, 31 and 34°C the L5 larvae did not enter diapause and completed development in 28.4-31.5 days. The effect of soil temperature and water content Diapause incidence and duration was not significantly influenced by the water content of the soil.More than 94% of the larvae kept at 25°C entered diapause.At 30°C diapause was averted regardless of the soil water content.The duration of diapause at 25°C was similar whether the water content of the soil was 8%, 10% or 12% (Fig. 2). The effect of temperature and photoperiod Most larvae entered diapause at 22, 24 and 25°C, irrespective of the photoperiod.However, 100% of the larvae developed without diapausing at temperatures > 25°C (26°C and 28°C), regardless of the photoperiod (Table 1).The critical temperature for diapause induction was between 25°C and 26°C. The development of the non-diapausing L5 was five times faster than that of larvae that entered diapause (Table 2).The duration of diapause did not differ significantly at 22, 24 and 25°C; though it was shortest at 25°C.Photoperiod did not significantly influence the duration of diapause, though at 12L : 12D, diapausing larvae required a slightly shorter time to complete development than at 8L : 16D and 16L : 8D. Sensitivity of larvae to photoperiod and temperature Rearing eggs and young larvae at different photoperiods did not affect diapause induction.Individuals exposed in the egg stage and L1-L4 to 25°C experienced 532 diapause in L5, but by such exposure to 30°C diapause was averted (Table 3).Individuals that were reared at 25°C until L1 or L2 and then transferred to 30°C did not enter diapause.Less intense diapause was induced if L3 and L4, or L4 were reared at 30°C.In contrast, diapause of normal length occurred in two treatments in which the larvae were transferred from 30 to 25°C in the first days of L1 or L2.And diapause was averted when they were transferred from 30 to 25°C on day 1 of L3 or L4 (i.e., when egg-L2 or egg-L3 were reared at 30°C).Individuals exposed in egg-L5 to 30°C developed without diapause.Individuals exposed in egg-L5 to 25°C entered diapause and age of L5 did not affect diapause induction.The early stages, particularly L2 appeared to be more sensitive to diapause induction than the later stages.However, other instars were also sensitive, as shown by the gradual increase in the duration of L5 when transferrs from 25 to 30°C occurred at later stage in development (Table 3).It seems that the later exposure to 30°C reverses the previous diapause induction by 25°; the degree depends on the duration of exposure. 
Diapause intensity Diapause duration of individuals reared from the egg stage at 25°C was 157.5-158.2days (Table 3).That of individuals exposed to 30°C before the third instar and 25°C throughout their subsequent development lasted 145.3-151.6 days.However, the diapause duration of individuals transferred from 25 to 30°C on the first day of L3 or L4 was shorter than that of individuals reared at 25°C throughout their development.The diapause duration was 67.9-68.5 days when transferred on the first day of L3 and 75.0-76.9days when transferred on the first day of L4.Individuals transferred on either the first or thirtieth day of L5 from 30 to 25°C took about three months to pupate, those transferred on day 60 took about four months and those on day 90 nearly five months.Development to the pupal stage of L5 kept at 30°C took one month (Table 3). 533 Note: Means in a column with the same letter are not significantly different (P > 0.05, n = 50).ND non-diapause larvae. The effect of photoperiod and temperature on the duration of pupal stage The duration of the pupal stage decreased with increased temperature (Table 4).The larvae that did not diapause at 28°C needed 17 days to complete the pupal stage, which was nearly ten days faster than for the larvae reared at 22°C and that were in diapause in L5. Adult beetles normally emerged 20 days after pupation at 25°C, which was a week faster than at 22°C.Duration of pupal development at each temperature was not significantly different at photoperiod 8L : 16D, 12L : 12D or 16L : 8D (Table 4). DISCUSSIONS The present study indicates that diapause in the final instar larvae of M. phalerata is induced by temperature rather than the water content of the soil or photoperiod.Diapause induction was averted at temperatures 26°C, but induced by temperatures 25°C.It can be concluded that high temperatures act as a diapause-averting factor in this insect and the critical temperature for diapause induction is between 25 and 26°C as at or below 25°C almost all individuals entered diapause.Temperature-controlled diapause is also reported by Shintani & Ishikawa (1997) in Psacothea hilaris and by Ishihara & Shimada (1995) in Kytorhinus sharpianus.Xue mentioned that diapause in Colaphellus bowringi is induced principally by low temperature and less so by photoperiod (Xue et al., 2002).The same response is seen in Endopiza viteana (Tobin et al., 2002).Earlier examples are cited in Beck (1991).The role of temperature in these insects is more important for inducing diapause than regulating development rate. The stages sensitive to induction of diapause of the final larval instars are reported, for instance, by Kurota & Shimada (2001) for Bruchidius dorsalis and Milonas & Savopoulou-Soultani (2000) for Colpoclypeus florus.B. dorsalis enters diapause in the final (late fourth) larval instar under short photoperiods and the stages sensitive to photoperiod are the late egg stage and early first instar larva.The pupa of the maternal generation is the most sensitive stage for the induction of larval diapause in C. florus.In this study, exposure of larvae of M. phalerata to low temperature ( 25°C) from L3 onward results in diapause induction in the final instar (Fig. 3). In the field the final instar larvae entered diapause at the end of October (Fig. 
3).Temperature recordings suggest that the maximum temperature dropped below 25°C in October, which induced the beetles to enter diapause at this time.The minimum temperature in winter is -5°C.As in other insects, diapause enables M. phalerata to survive the low temperature conditions prevailing in winter.Temperature remained 25°C up to the end of May (Fig. 3) and prevented beetles from pupating, which resulted in beetles emerging in early July when cowpea flowers are available in the field. Fig. 1 . Fig. 1.Duration of development of the immature stages of Mylabris phalerata measured at 18, 22, 25, 28, 31 and 34 ± 1°C.The soil water content was 10% (w : w).The duration of development time is the mean of 50 individuals in each stage. Fig. 2 . Fig. 2. The average duration of development of 5 th instar larvae (L5) of M. phalerata reared at different temperatures and soil water contents.Diapause occurred at 25°C.Means followed by the same letter are not significantly different (P > 0.05). The joint effect of photoperiod and temperature, when the soil water content was 10% (w: w), on the duration of the fifth larval instar of Mylabris phalerata.*: number in the bracket is the age in days of L5. Note: Means with the same letter are not significantly different (P > 0.05, n = 50The effect of photoperiod and temperature, when soil water content was 10% (w : w), on the duration of the pupal stage of Mylabris phalerata. Fig. 3 . Fig. 3. Life cycle of Mylabris phalerata and the daily maximum and minimum temperatures in 2003-2004.Arrows indicate the date of appearance of the life stages of M. phalerata.A: adult, E: egg, L1: the first instar larva, L5: the fifth instar larva (overwintering stage), P: pupa TABLE 3 . Duration of development of L5 of Mylabris phalerata reared under different photoperiods and transferred at different stages during their development from 30 to 25°C or from 25 to 30°C (n = 50).
3,483.2
2006-07-03T00:00:00.000
[ "Environmental Science", "Biology" ]
Black droplets Black droplets and black funnels are gravitational duals to states of a large N, strongly coupled CFT on a fixed black hole background. We numerically construct black droplets corresponding to a CFT on a Schwarzchild background with finite asymptotic temperature. We find two branches of such droplet solutions which meet at a turning point. Our results suggest that the equilibrium black droplet solution does not exist, which would imply that the Hartle-Hawking state in this system is dual to the black funnel constructed in [1]. We also compute the holographic stress energy tensor and match its asymptotic behaviour to perturbation theory. Introduction The discovery of Hawking radiation and its associated information paradox has led to a deeper understanding of quantum gravity, and formed a basis for the development of holography and the AdS/CFT correspondence [2][3][4]. Recently, there have been many attempts to use holography to further our understanding of Hawking radiation. In particular, while Hawking radiation is mostly understood for free fields on black hole backgrounds, the authors of [5][6][7] apply AdS/CFT to the study of Hawking radiation when these fields are strongly interacting. The AdS/CFT correspondence conjectures the equivalence between a large-N gauge theory at strong coupling to a classical theory of gravity in one higher dimension. The correspondence gives us the freedom to choose a fixed, non-dynamical background spacetime for the gauge theory, which translates to a conformal boundary condition on the gravity side. For a gauge theory background B in D − 1 dimensions, this amounts to solving the D-dimensional Einstein's equations with a negative cosmological constant with a boundary that is conformal to B. For the moment, let us consider the case where B is an asymptotically flat black hole of size R and temperature T BH . Let's also suppose that far from the black hole, the field theory has a temperature T ∞ . The authors of [5] conjectured two families of solutions that describe the gravity dual. They argue that in the bulk gravity dual, the thermal state far from the boundary black hole is described in the gravity side by a planar black hole, while JHEP08(2014)072 the horizon of the boundary black hole must extend into a horizon in the bulk. These two horizons are either connected, yielding a black funnel or disconnected, yielding a black droplet. These are illustrated in figure 1. In the field theory, the difference between these families is manifest in the way the black hole couples to the thermal bath at infinity. The connected funnel horizon implies that the field theory black hole readily exchanges heat with infinity. On the other hand, the disconnected droplet horizons suggest that the coupling between the boundary black hole and the heat bath at infinity is suppressed by O(1/N 2 ). Indeed, unless T BH = T ∞ , the funnel solutions would exhibit a "flowing" geometry. 1 The droplet solutions, however, are necessarily static for a static boundary black hole. A phase transition between these two families would resemble a "jamming" transition in which a system moves between a more fluid-like phase and a phase with more rigid behaviour. Based on gravitational intuition for the stability of the bulk solution, it was conjectured in [5] that funnel phases should be preferred for large RT ∞ , while droplets should be preferred for small RT ∞ . In order to test these conjectures, one would need to construct corresponding droplet and funnel solutions. 
Droplet solutions are simpler to construct when T ∞ = 0. In this case, the planar horizon in the droplets becomes the AdS Poincaré horizon. Such droplet solutions were constructed in [8] for a Schwarzschild boundary, and in [9,10] for a boundary that is equal-angular momentum Myers-Perry in 5 dimensions. There is also an analytic droplet based on the C-metric with a three-dimensional boundary black hole [11]. Static funnel solutions (that is, with T BH = T ∞ = 0) were constructed in [1], for a Schwarzschild boundary and for a class of 3-dimensional boundary black holes. Unfortunately, none of these solutions can be directly compared with each other. The T ∞ = 0 droplets will compete with a funnel that flows to zero temperature, and the static funnels compete with a droplet solution with equal temperature horizons. Neither of these solutions have been constructed. In this paper, we shed light on the droplet and funnel transition by numerically constructing new black droplet solutions with T ∞ = 0. As in [1,8], our boundary metric is Schwarzschild. We find that there can be two black droplet solutions for a given T ∞ /T BH . These merge in a turning point around T ∞ /T BH ∼ 0.93, which suggests that Schwarzschild black droplets in equilibrium do not exist. JHEP08(2014)072 We use a novel numerical method to construct these geometries. It joins three existing numerical tools: transfinite interpolation on a Chebyshev grid, patching, and the DeTurck method. This method is not only useful for the construction of the solutions detailed here, but can be used in a broader sense with modest computational resources -see for instance [12] where this method was used to construct black rings in higher dimensions. In particular, the fact that we use transfinite interpolation on a Chebyshev grid means we do not require overlapping grids for the patching procedure, 2 which in turn not only simplifies the coding of the problem but also decreases the need for larger computational resources. In the following section, we detail our numerical construction of these solutions. In section 3, we investigate these solutions by computing embedding diagrams and the holographic stress tensor and matching our results to perturbation theory. We make a few concluding remarks in section 4. Choosing a reference metric We opt to use the DeTurck method which was first introduced in [13] and studied in great detail in [8]. This method alleviates issues of gauge fixing and guarantees the ellipticity of our equations of motion. The method first requires a choice of reference metricḡ that is compatible with the boundary conditions. One then solves the Einstein-DeTurck equation where ξ µ = g αβ Γ µ αβ +Γ µ αβ , andΓ µ αβ is the Levi-Civita connection forḡ. For the kinds of solutions we are seeking, a maximal principle guarantees that any solution to (2.1) has DeTurck vector ξ = 0, and is therefore also a solution to Einstein's equations [8]. To find a black droplet suspended over a planar black hole, the chosen reference metric must have a planar horizon, a droplet horizon, a symmetry axis, and a conformal boundary metric. Furthermore, the reference metric must approach the planar black hole metric in the right limit. Thus, the integration domain is schematically a pentagon. Most numerical methods for PDEs use grids that lie on rectangular domains, but these methods can be extended to a pentagonal domain by patching two grids together. 
Because of the difference in geometry between the two horizons, we will patch together two grids in different coordinate systems, each adapted to one of the horizons. To motivate our choice of reference metric, let us first begin with AdS D in Poincaré coordinates Notice that fixing the time and angular coordinates gives us a two-dimensional space that is confomally flat. This two-dimensional space in the line element (2.2) is written in Cartesian coordinates that can be adapted to a planar horizon. We can also move to JHEP08(2014)072 polar coordinates which are more suitable for a droplet horizon. Therefore, we now search for a reference metric with a conformally flat subspace that also contains a droplet horizon and a planar horizon. To do this, let us first write the planar black hole in conformal coordinates. We begin with the usual line element for the planar black hole solution in D bulk dimensions: Now let which gives us a line element of the form for some functionsg,f , and constantλ. This line element has our desired conformal subspace. For a boundary metric that is conformal to Schwarzschild, we find it numerically desirable to redefine the coordinates to which yields The planar horizon is located at the hyperslice y = 1/λ. The constant λ (orλ) sets the temperature of the black hole and can be related to Z 0 in (2.3). The functions f and g (orf andg) are smooth, positive definite, and depend on the temperature. They can be determined by integrating (2.4) and inverting the resulting Hypergeometric function. 3 To determine the integration constant, we choose g(0) = f (0) = 1. Now let us write down a line element (not necessarily a solution of Einstein's equations) that has a single droplet horizon in conformal coordinates. We search for something of the form where we have chosenf ρ to be a function of √ z 2 + r 2 in anticipation of moving to polar coordinates. The functionf ρ is determined by a choice of conformal boundary metric ds 2 ∂ . At the boundary z = 0, we must have JHEP08(2014)072 for some conformal factor ω. For a boundary metric that is conformal to Schwarzschild, We find that it is convenient to set t = 4τ . This then uniquely specifies the functioñ f ρ , which together with (2.9) gives us our droplet line element in conformal coordinates. Switching to the polar coordinates gives us By construction, the droplet horizon is at ρ = 1 and its temperature (with respect to the time coordinate τ ) matches the temperature of the boundary Schwarzschild black hole. Additionally, the line element (2.14) can be used as a reference metric to reproduce the results of the solution in [8]. Now we can attempt to combine the planar and droplet line elements to create our desired reference metric. Guided by the similarities between (2.5) and (2.9), the reference metric we have chosen is where we treat g and f y as functions of the coordinate y, and f ρ as a function of the coordinate ρ. The x, y coordinates are related to the ρ, ξ coordinates through (2.6) and (2.13): The reference metric (2.16) has a regular planar horizon at y = 1/λ, a regular droplet horizon at ρ = 1, and an axis at x = 0 (or ξ = 0). Near x = 1, we recover the planar black JHEP08(2014)072 hole metric as written in (2.7). Since g(0) = f (0) = 1, near y = 0 or ξ = 1 we have (in the ρ, ξ coordinate system) We can see that this is equivalent to Schwarzschild (2.11) by performing the coordinate transformation We have thus found a reference metric that is compatible with our desired boundary conditions. 
By construction, this reference metric can be written in two orthogonal coordinate systems, with all boundaries in our domain being a constant hyperslice in at least one of these two coordinate systems. Furthermore, in the λ → 0 limit, our reference metric becomes the droplet metric (2.14), which is an appropriate reference metric for a droplet without a planar black hole. We have two parameters given by λ and R 0 , which determine the temperatures T ∞ and T BH , respectively. This system, however, only has one dimensionless parameter given by the ratio T ∞ /T BH , so we have one remaining gauge degree of freedom which we can choose for numerical convenience. Ansatz and boundary conditions With a reference metric in hand, we can now write down a metric ansatz: where T c , A c , B c , F c , and S c are functions of the Cartesian coordinates x and y, and T p , A p , B p , F p , and S p are functions of the polar coordinates ρ and ξ. Since we must demand that the metric is equivalent between these two coordinate systems, the functions are related to each other via where we used the coordinate transformations (2.17). JHEP08(2014)072 Now let us discuss boundary conditions. At the boundary y = 0 or ξ = 1, we must recover a metric conformal to Schwarzchild. This was already done in the reference metric, so we choose Similarly, we must recover the planar black hole at x = 1 and impose The remaining boundary conditions are determined by regularity. At the planar horizon y = 1/λ, we need At the axis, x = 0 or ξ = 0, we require Finally, at the droplet horizon ρ = 1, . (2.27) Numerics To solve the equations of motion numerically, we employ a standard Newton-Raphson relaxation algorithm using pseudospectral collocation. To choose a suitable grid, we first divide the entire integration domain into two patches, one in each coordinate system. We then place a spectral grid on each patch using transfinite interpolation on a Chebyshev grid. An example of such a grid is shown in figure figure 2. In addition to imposing the boundary conditions, we require the smoothness of the metric across patches. This amounts to requiring (2.22) and the equivalent expression for normal derivatives across the patch boundary. We obtained our first solution by using the reference metric as a Newton-Raphson seed. Since it has been proven that the DeTurck vector ξ = 0 for any solution of (2.1) satisfying boundary conditions such as those appearing here [8], we can use this quantity to monitor our numerical error and test the convergence of our code. As seen in figure 3, the maximum value of the norm of the Deturck vector converges exponentially with increasing grid size, as predicted by pseudospectral methods. All of our results presented below have |ξ| 2 < 10 −10 . We have also verified that our results do not change when we vary the location of our patch boundary or when we change λ and R 0 while keeping T ∞ /T BH fixed. Figure 4. The proper length between the droplet and planar horizons along the axis of symmetry as a function of the temperature ratio. For a given temperature ratio, there can be two droplet solutions. The turning point occurs around T ∞ /T BH ∼ 0.93, which suggests that the equilibrium solution does not exist. Embedding and distance between the horizons To get a sense for the relationship between these two horizons, in figure 4 we plot the proper distance between the horizons along the axis of symmetry as a function of temperature. 
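As an illustration of how this quantity can be extracted from the relaxed numerical solution, the sketch below integrates the square root of the axis metric component over a Chebyshev interpolant of the collocation data; the function name, interval, and data layout are placeholders rather than the actual code used here.

```python
# Sketch: proper distance between the two horizons along the symmetry axis, computed
# from metric data g_yy(0, y) sampled on Chebyshev-Lobatto points of [y_min, y_max].
import numpy as np
from numpy.polynomial import chebyshev as C

def proper_distance(gyy_on_axis, y_min, y_max):
    n = len(gyy_on_axis)
    # Chebyshev-Lobatto nodes mapped from [-1, 1] to [y_min, y_max]
    x = np.cos(np.pi * np.arange(n) / (n - 1))
    y = 0.5 * (y_max - y_min) * x + 0.5 * (y_max + y_min)
    # Interpolate sqrt(g_yy) by a Chebyshev series and integrate it exactly
    series = C.Chebyshev.fit(y, np.sqrt(gyy_on_axis), deg=n - 1, domain=[y_min, y_max])
    antideriv = series.integ()
    return antideriv(y_max) - antideriv(y_min)
```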
For small T ∞ /T BH , there are solutions with a large distance between the black droplet and the planar black hole. These are solutions which are close to the T ∞ = 0 solution found in [8]. As we follow these solutions with increasing T ∞ /T BH , we find that the proper distance decreases until T ∞ /T BH ∼ 0.93. At this value there is a turning point where the proper distance continues to decrease only if we decrease T ∞ /T BH . These results suggest that T ∞ /T BH ∼ 0.93 is a critical temperature above which only (possibly flowing) funnel solutions exist. In particular, the equilibrium state would be the funnel solution found in [1]. To help us understand the geometry of the solutions, we embed the two horizons in Euclidean hyperbolic space: Demanding that the pullback of hyperbolic space to a curve γ(x) = (z(x), r(x)) is equal to the pullback of our solution to the horizon gives a system of ODEs in z(x) and r(x). We solve these ODEs numerically to obtain our embedding diagram. The embeddings of the droplet horizon and planar horizon are shown in figure 5. The size of the droplets at the boundary is normalised to 1, and the location of the planar black hole far from the droplet is also normalised to 1. Starting at small T ∞ /T BH , the droplet horizon looks very similar to that of [8], and the planar horizon is approximately flat. As we increase T ∞ /T BH , we see that even past the turning point, the droplet horizon continues to lower itself deeper into the bulk and the centre of the planar horizon continues to rise towards the boundary. Based on the shape of these solutions from the embedding diagram, we call our two branches of droplet solutions long dropets and short droplets. Similar behaviour has been observed for black droplets in global AdS [14]. Eventually, our numerics break down and we are unable to continue the long droplets any further. We can only conjecture a number of possibilities. One scenario is that the long droplets continue to exist down to T ∞ = 0, these solutions may join with the AdS black string. In this case, one might reinterpret the naked singularity of the string as a degenerate droplet/funnel merger point. Another possibility is that the two horizons merge at some finite temperature ratio towards a funnel. This situation might be similar to the approximate solutions found in [15,16]. At the merger, they would reach a conical transition. Since the two horizons are not at the same temperature, this would mean a transition between a static solution to a stationary one with some amount of flow. But going a small amount across a conical merger should not change the geometry far from the cone significantly, so the amount of heat flux at infinity should be small. If this picture is correct, this would mean that there are two types of flowing funnel solutions, one with a narrow neck and small flow, and one with a wider neck with larger flow. Though, like the caged black holes [17], it is also possible that there is no stationary solution on the funnel side of the merger, and the solution necessarily becomes dynamical and possibly evolves into a wide flowing funnel. Stress tensor Now we compute the boundary stress tensor. The procedure we use is similar to those of [18]. We expand the equations of motion off of the boundary in a Fefferman-Graham expansion, choosing a conformal frame that gives Schwarzschild on the boundary. We can JHEP08(2014)072 Figure 7. Components for the stress tensor with T ∞ /T BH = 0.89 (same scheme as figure 6). 
The larger red curve is the short droplet while the smaller blue curve is the long droplet. then read off the stress tensor from one of the higher order terms in the expansion. There is no conformal anomaly in our case because we have chosen a boundary metric that is Ricci flat. Representative stress tensors of our solutions are plotted in figures 6, and 7. Far from the boundary black hole, the stress tensor fits the form where k 0 is the boundary stress tensor for a bulk planar black hole. This R −1 behaviour was also found for the funnel solutions in [1]. In the insets of figures 6, and 7, we subtract k 0 from the stress tensor, take an absolute value, and plot the result using a Log-Log scale. Note that there are clearly two power-law JHEP08(2014)072 regimes. Far from the black hole, we see a R −1 power law, similar to that of a funnel. Closer to the black hole, we see a R −5 power law, similar to that of the droplets found in [8]. This dual power-law can be explained from the bulk perspective. The presence of the droplet warps the planar horizon, making it funnel-like far away. This is most easily seen in our embedding diagrams in figure 5. This funnel-like behaviour gives the stress tensor a R −1 power law. Closer to the droplet, the physics near the boundary is dominated by the hotter droplet horizon rather than the planar horizon, giving a R −5 droplet behaviour. As the distance between the horizons decreases, this R −5 behaviour becomes more obscured. In figure 7 we can see that both long and short droplets have the same large R behaviour, suggesting that this is universal. Indeed, we shall match this behaviour with perturbation theory in the next section. Matching with perturbation theory Far away from the axis of symmetry of the droplet, i.e. close to x = 1 in eq. (2.21a), perturbation theory should be valid. This region can solely be studied using standard perturbation theory techniques around the planar black hole line element (2.3). For concreteness, we will take D = 5, even though our procedure admits a straightforward extension to arbitrary D. We first note that the planar black hole can be written as where dE 2 3 is the line element of three dimensional Euclidean space. Following [19], we can decompose our perturbations according to how they transform under diffeomorphisms of E 3 . These can be decomposed as tensors, vectors or scalar derived perturbations. Here, we are primarily interested in scalar perturbations. Its basic building block are the scalar harmonics on E 3 , which satisfy the following simple equation Furthermore, we are interested in perturbations that do not break the 2−sphere inside E 3 , so we only have radial dependence in S. These can be computed and we find A general perturbation can be decomposed as The remaining Einstein equations reduce to two first order equations in H L and f tt , which we reduce to a single second order equation in f tt : where we performed the coordinate transformation Z 2 = w and defined Z 2 0 = w 0 . Before proceeding to determine the solution, let us first discuss the boundary conditions. Recall that at the boundary we need to recover the Schwarzschild line element (2.11) expanded at large values of R. This is equivalent to demanding: This boundary condition picks α = 0, and without loss of generality we take C 2 = 2 . For this choice, eq. (3.6) admits a simple analytic solution: where A and B are constants to be chosen in what follows. 
Regularity at the black hole horizon and the boundary condition (3.6) demand A = R 0 /Z 4 0 and B = −R 0 /Z 6 0 . The full metric perturbation can be reconstructed from eq. (3.7) and is given by: 8) where we parametrize the 2−sphere in the standard way dΩ 2 2 = dθ 2 +sin 2 θdφ 2 . This metric perturbation does not seem to have a boundary metric perturbation that approaches the large R behavior of the Schwarzschild line element (2.11). However, this is an illusion of the gauge we choose to work in. If we perform a gauge transformation with gauge parameter JHEP08(2014)072 ξ = − 2 R 0 /(2 Z 2 ) dR, we bring the metric perturbation (3.8) to which manifestly exhibits the boundary metric we desire. It is now a simple exercise to determine the perturbed stress energy tensor in terms of the boundary black hole temperature T BH and planar temperature T ∞ : This should be the leading asymptotic behavior of the holographic stress energy tensor of the droplet solution as we approach R → +∞. This is partially confirmed by [1] where the stress energy tensor is found to be consistent with (3.10) if T ∞ = T BH = T Schwarzschild . A linear fit of our log-log plots agrees with (3.10) to less than 0.1%. The next correction should appear at O(R −2 ) and can be computed using a similar approach, albeit with a more tedious calculation. Based on our solution at smaller R, we expect the first undetermined coefficient in the R = +∞ expansion to appear at O(R −5 ). In particular, the difference between droplet and funnel holographic stress energy tensors should only appear at O(R −5 ). Discussion To summarise our findings, we have numerically constructed Schwarzschild black droplet solutions suspended over a planar black hole. These solutions are dual to the "jammed" phase of a large N strongly coupled CFT. We find two branches of droplets: long and thin, and that these solutions only exist below a critical temperature T ∞ /T BH ∼ 0.93. We have computed their stress tensor and find generically two power-law regions corresponding to a droplet-like falloff of R −5 and a funnel-like falloff of R −1 . It would be interesting to study the stability of these droplet solutions. The short droplet with T ∞ = 0 were argued to be stable in [8]. If they are, then it seems likely that short droplets for small temperature ratios are also stable. The long droplets, on the other hand, may be unstable to forming a flowing funnel, or perhaps a short droplet. If all of our short droplets remain stable, then the critical temperature might be interpreted as a "melting" or "freezing" point. Consider a short droplet at small T ∞ /T BH . Keeping the boundary black hole fixed, suppose we slowly increase the temperature T ∞ . If JHEP08(2014)072 we do this slowly enough, the dynamical solution should remain close to the static solution. Eventually, these static droplets no longer exist, so the system must become fully dynamical, perhaps evolving into a flowing funnel. The rigid behaviour of the droplet transitions into the more fluid behaviour of a funnel. Unfortunately, we cannot directly compare the long and short droplets to each other. These solutions are not at equilibrium, so their free energy is not well-defined. One can in principle still compare their entropies and energies. These quantities are formally infinite, but can be regulated by subtracting the large R behaviour obtained via perturbation theory. 
Unfortunately, these quantities are finite only after subtracting down to an O(R^{-4}) behaviour, which is beyond our numerical control. To complete our understanding of solutions with a Schwarzschild boundary, the flowing funnels need to be constructed. These solutions would require non-Killing horizons, such as those in [20][21][22]. Additionally, in our solutions, the droplet horizon has the same temperature as the boundary black hole. It is possible to detune these temperatures so that they are not equal [21]. In our study, we have focused on boundary black holes that correspond to four-dimensional Schwarzschild. These boundary black holes do not need to satisfy any field equations, so we are free to choose any metric. It would be interesting to see what changes as we vary the boundary black hole. For instance, equilibrium droplets or droplets with T∞/TBH > 1 may exist, particularly for boundary black holes that are small relative to their temperature.
6,167
2014-08-01T00:00:00.000
[ "Physics" ]
Jiangnan at SemEval-2018 Task 11: Deep Neural Network with Attention Method for Machine Comprehension Task This paper describes our submission for the International Workshop on Semantic Evaluation (SemEval-2018) shared task 11– Machine Comprehension using Commonsense Knowledge (Ostermann et al., 2018b). We use a deep neural network model to choose the correct answer from the candidate answers pair when the document and question are given. The interactions between document, question and answers are modeled by attention mechanism and a variety of manual features are used to improve model performance. We also use CoVe (McCann et al., 2017) as an external source of knowledge which is not mentioned in the document. As a result, our system achieves 80.91% accuracy on the test data, which is on the third place of the leaderboard. Introduction In recent years, machine reading comprehension (MRC) which attempts to enable machines to answer questions when given a set of documents, has attracted great attentions. Several MRC datasets have been released such as the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) (Nguyen et al., 2016). These datasets provide large scale of manually created data, greatly inspired the research in this field. And a series of neural network model, such as BiDAF (Seo et al., 2016), R-Net (Wang et al., 2017), have achieved promising results on these evaluation tasks. However, machine reading comprehension is still a difficult task because without knowledge, machines cannot really understand the question and make a correct answer. As an effort to discover how machine reading comprehension systems would be benefited from commonsense knowledge, (Ostermann et al., 2018b) developed the Machine Comprehension using Commonsense Knowledge task. In this task, commonsense knowledge is given as the form of script knowledge. Script knowledge is defined as the knowledge about everyday activities which is mentioned in narrative documents. For each document, a series questions are asked and each question is associated with a pair of candidate answers. Machines have to choose which is the correct answer. To let machines make correct decisions, explicit information which can be found in the document and external commonsense knowledge are both required. Table 1 shows an example of the dataset in this task. In this paper, we make a description about our submission system for the task. The system is based on a deep neural network model. The input of the model is a (document, question, answer) triple and the output is the probability that the answer is the correct one for the given document and question. We also combine the neural network model with a variety of manual features, including word exact match features and token features such as part-of-speech (POS) ,named entity recognition (NER) and term frequency (TF). These manual features are helpful in solving the problem that the correct answer can be easily found in the given document. Furthermore, for more complicated problem that the answer is not explicitly mentioned in the document, we try to model the interactions between document, question and answer by computing the attention score of question to document and question to answer respectively, which is described in (Lee et al., 2016). These features add soft alignments between similar but non-identical words (Chen et al., 2017). 
We evaluate our system on the shared task and obtain 80.91% accuracy on the test set, which is on the third place of the leaderboard. The rest of this paper is organized as follows. (Ostermann et al., 2018b). The first line shows the document and the following lines show question and answer pair respectively. The answer of question1 can be easily found in the text while answering question2 requires external knowledge which is not mentioned in the text. Section 2 describes the submission system. Section 3 presents and discusses the experiment results. Section 4 makes a conclusion about our work. Model In this task, a document (D), a question (Q), and a pair of answers (A 0 , A 1 ) are given and a machine comprehension system should choose the correct answer from the answers pair. We attempt to solve this problem by leveraging a deep neural network model which can generate the probability p θ (A i |D, Q), i = 0 or 1 that the input answer is correct for the given document and question. The system predicts the probability for each answer in (A 0 , A 1 ) respectively and decides which is the correct answer by comparing their probability scores. We represent the set of all trainable parameters of the neural network model as θ. The model basically consists 3 parts: an encode layer, an interaction layer and a final inference layer, which is depicted in figure 1. Below we will discuss the model in more detail. Encode layer We first represent all tokens of document {d 1 , ..., d m }, question {q 1 , ..., q n } and answer {a 1 , ..., a l } as sequences of word embeddings where m, n and l are sequence lengths of document, question and answer respectively. In this task, we use the 300-dimensional 840B Glove word embeddings (Pennington et al., 2014). We then pass each sequence through a multi-layer bidirectional long short term memory network (BiLSTM) to get the word level semantic representations of each sequence: The index j represents the jth BiLSTM layer. We concat all the output units of each BiLSTM layer and get the final word level representations: h d , h q and h a . The BiLSTM layers used to encode document, question and answer sequence share same parameters in order to reduce the number of trainable parameters and make the model uneasily overfitting. Interaction layer This layer models the interactions between document, question and answer. We first align each word representation vectors in the question sequence to document and answer by leveraging attention mechanism and get question-aware representation Att d , Att a for document and answer respectively: The attention score s d i,j captures the similarity between the word representation vector d i and q j in document sequence and question sequence respectively. And s a i,j captures the similarity between answer vector a i and question vector q j . We get s d i,j and s a i,j by computing the dot products between the nonlinear mappings of two word representation vectors: α(·) is a single dense layer with ReLU nonlinearity. We concat Att d i and Att a i behind each h d i and h a i and get new word representation vectors r d and r a for document and answer. Following (Chen et al., 2017), we combine the model with a variety of manual features, including word exact match features and token features. For exact match features, we use three binary features indicating whether a token in d and a can be exactly matched by one token in q, either in its original, lowercase or lemma form. 
For token features, we use part-of-speech (POS), named entity recognition (NER), and term frequency (TF). For the document and answer, we combine the manual features into vectors f^d_i and f^a_i and concatenate them to r^d_i and r^a_i, obtaining new word-level representation vectors. Inference layer In the inference layer, we first convert the document and answer sequences r^d and r^a into fixed-length, sequence-level representation vectors R^d and R^a with a weighted pooling method, where the weight vectors w^d and w^a are learnable parameters of the model. Since we have not yet used any external source of knowledge, we additionally employ a pre-trained language model as external knowledge in order to capture implicit information that is not mentioned in the document. Here we apply CoVe (McCann et al., 2017) to the document and answer sequences: the GloVe embedding of each token is passed through a pre-trained BiLSTM layer, which outputs a sequence of CoVe vectors for the document and for the answer. We then convert these sequences into fixed-length vectors C^d and C^a using the weighted pooling method described above. We fuse the pooled CoVe vectors with the sequence-level representation vectors through a semantic fusion unit (SFU) (Hu et al., 2017) to obtain the final sequence-level representation vectors R^d and R^a. Finally, we compute the probability that the answer is correct as a bilinear match score between the document and answer vectors, where W is a trainable matrix and σ(·) is the sigmoid function. We use this model to predict the probability for each answer in (A_0, A_1) and select the answer with the higher probability score as the correct one. Datasets The statistics of the official training, development, and test data are shown in Table 2 (number of examples — Training: 9,731; Dev: 1,411; Test: 2,797). We remove words occurring fewer than 2 times and obtain a vocabulary of about 12,000 words. We keep most pre-trained word embeddings fixed during training and fine-tune only the 100 most frequent words. For manual features, we obtain POS and NER tags with the Stanford CoreNLP toolkit. Experimental Settings We implement our model in PyTorch. The model is trained on the given training set, and we select the model that performs best on the development set across training epochs. We train with a mini-batch size of 32 and use a two-layer BiLSTM with 128 hidden units. A dropout rate of 0.4 is applied to word embeddings and to all hidden units in the BiLSTM layers. We use the logistic loss as the loss function, optimized with the Adamax optimizer (Kingma and Ba, 2014) with learning rate η = 0.002. Results The performance of our model is reported in Table 3. The single model achieves an accuracy of 85.05% on the development data and 79.03% on the test data. The ensemble model, which we finally submitted to the shared task, achieves 87.30% on the development data and 80.91% on the test data. The results show a gap between development and test data for both the single and ensemble models: the model overfits the development data and does not generalize as well to the test data, which indicates that its robustness needs to be improved. We also conduct an ablation analysis of the different features on the development data. Table 4 shows the results, from which we can see that all of the features we used contribute to model performance.
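Under the same caveats as before, the sketch below illustrates the weighted pooling and the bilinear scoring described in the inference layer; the class names and tensor shapes are assumptions, and the SFU fusion step is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedPooling(nn.Module):
    """Collapse a variable-length sequence into a fixed-length vector using a
    learned scoring vector followed by a softmax over positions."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.w = nn.Linear(hidden_size, 1, bias=False)   # the learnable weight vector

    def forward(self, x, pad_mask):
        # x: (batch, seq_len, hidden), pad_mask: (batch, seq_len) bool, True at padding
        scores = self.w(x).squeeze(-1).masked_fill(pad_mask, float("-inf"))
        alpha = F.softmax(scores, dim=-1)                         # (batch, seq_len)
        return torch.bmm(alpha.unsqueeze(1), x).squeeze(1)        # (batch, hidden)

class BilinearScorer(nn.Module):
    """p(answer is correct) = sigmoid(R_d^T W R_a)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.W = nn.Bilinear(hidden_size, hidden_size, 1)

    def forward(self, R_d, R_a):
        return torch.sigmoid(self.W(R_d, R_a)).squeeze(-1)

# Toy usage: pool document/answer states separately, then score the pair.
batch, len_d, len_a, hidden = 2, 7, 4, 16
r_d, r_a = torch.randn(batch, len_d, hidden), torch.randn(batch, len_a, hidden)
pad_d = torch.zeros(batch, len_d, dtype=torch.bool)
pad_a = torch.zeros(batch, len_a, dtype=torch.bool)
pool_d, pool_a = WeightedPooling(hidden), WeightedPooling(hidden)
prob = BilinearScorer(hidden)(pool_d(r_d, pad_d), pool_a(r_a, pad_a))   # (batch,)
```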
Without manual features, the model accuracy is 83.70%, which is 1.3% lower than the full model, and without CoVe, the accuracy drops by 1.8%. When neither manual features nor CoVe is used, the accuracy drops by 6.6%. These results show that the model needs both explicit information found in the document and an external source of knowledge to make correct decisions. Conclusion In this paper, we described our system submitted to the SemEval-2018 shared task 11. The system is based on a deep neural network model that chooses the correct answer from a pair of candidates given the document and question. We combine the model with a variety of manual features, which are helpful when the correct answer can be found directly in the given document. For questions whose answers are not explicitly mentioned in the document, we model the interactions between document, question, and answers with an attention mechanism, and we additionally use CoVe as an external source of knowledge. Our experiments show that all of these features contribute to model performance. The system achieves 80.91% accuracy on the test data, placing third on the leaderboard.
2,786
2018-01-01T00:00:00.000
[ "Computer Science" ]
Lignin and Cellulose Extraction from Vietnam's Rice Straw Using Ultrasound-Assisted Alkaline Treatment Method A process for extracting cellulose and lignin from Vietnam's rice straw without paraffin pretreatment is proposed to improve economic efficiency and reduce environmental pollution. Treating the rice straw with ultrasonic irradiation for 30 min increased the yield of lignin separation from 72.8% to 84.7%. In addition, the extraction time was reduced from 2.5 h to 1.5 h when combined with ultrasonic irradiation at the same extraction yield. Results from FT-IR, SEM, EDX, TG-DTA, and GC-MS analyses indicated that the lignin obtained by the ultrasound-assisted alkaline treatment method had high purity and a higher molecular weight than lignin extracted from rice straw without ultrasonic irradiation. The lignin and cellulose extracted from the rice straw showed high thermal stability, with 5% degradation at temperatures above 230 °C. The ultrasound-assisted alkaline extraction method is recommended for lignin and cellulose extraction from Vietnam's rice straw. Introduction As the world's most abundant renewable resource, lignocellulosic biomass has been acknowledged for its potential to produce chemicals and biomaterials. Lignocellulose is a low-cost biomass that is abundantly available; its main constituents are cellulose, hemicellulose, and lignin. Cellulose is mainly used to produce paperboard and paper, and smaller quantities are converted into a wide variety of derivative products such as cellophane and rayon [1,2]. Conversion of cellulose from energy crops into biofuels such as cellulosic ethanol is under investigation as an alternative fuel source [3][4][5]. Another growing application of cellulose is as reinforcement in polymeric composite materials [6,7]. The main methods historically explored for extracting lignin and cellulose from different sources are hydrothermal, acidic, alkaline, wet oxidation, ammonia fiber explosion, organosolv, and, most recently, ionic liquid pretreatment methods (reviewed elsewhere) [13][14][15]. These extraction methods are expensive and energy intensive and use chemicals that require special disposal, handling, or production methods. In addition, because of technological limitations, the raw materials for cellulose and lignin extraction have largely been limited to straws and timber, and extraction methods have only been applied at laboratory scale and seldom work in industrial production. Overcoming this technological barrier is the key to large-scale cellulose and lignin extraction; new technologies must therefore offer high efficiency with minimal environmental and economic impact [16].
Nowadays, ultrasound-assisted extraction is considered a simpler and more effective alternative to conventional methods for extracting lignin and cellulose from natural products [17,18]. Ultrasonication induces localized high temperature and pressure and produces highly reactive free radicals, such as OH−, H+, and H2O2, thus enhancing chemical reactions. The sonomechanical effect of ultrasound improves the penetration of solvent and heat into cellular materials and thereby improves mass transfer, although it also requires significant additional energy input. Ultrasound pretreatment of lignocellulose has been examined in conjunction with other methods [19,20], but up to now no research has combined ultrasound, alkalinity, and temperature to separate cellulose and lignin. Vietnam is an agricultural country, producing about 45 million tons of grain annually and, as a result, about 54 million tons of rice straw. However, most of the rice straw is burned in open fields, causing serious environmental pollution; the conversion of rice straw into valuable materials is therefore essential. This paper investigates the extraction of lignin and cellulose from Vietnam's rice straw using a combination of ultrasound irradiation and chemical treatment at high temperature and alkaline concentration to reduce the extraction time. The properties of the obtained lignin and cellulose were also evaluated. Experimental Section 2.1. Materials and Chemicals. Rice straw was provided by farmers in Tien Kien Commune, Lam Thao District, Phu Tho Province, Vietnam. The chemical agents (NaOH, HCl, and ethanol) were purchased from Merck Chemicals (Shanghai) Co., Ltd. All of the chemicals were reagent grade or higher in purity and were used as received without further purification. The rice straw was extracted with ultrasound irradiation by the Sonic system SOMERSET (England, 20 kHz) equipped with a horn with an ultimate power of 500 W, with sonication times of 0, 10, 20, 30, and 40 min in 2 M NaOH aqueous solution at 90 °C. The mixture was then continuously stirred at 90 °C for a total period of 1.5 h. After that, the mixture was washed with 0.1 M NaOH to remove the remaining lignin on the cellulose surface. After filtration on a nylon cloth, the cellulose-rich residue was further washed with distilled water and dried at 50 °C for 24 hours. The hemicellulose was isolated from the hydrolysates by precipitation of the acidified hydrolysate (pH adjusted to 5.5 with HCl solution) with three volumes of 95% ethanol for 6 h. The hemicellulose-rich pellets were filtered, washed with 70% ethanol, and air-dried. After evaporation of the ethanol, the alkali-soluble lignin was obtained by precipitation at pH 1.5 adjusted with HCl. The lignin-rich solid was then washed with acidified solution at pH 2.0 and freeze-dried. Yields of the cellulose and lignin fractions are given on a dry-weight basis relative to the rice straw.
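As a small illustration of the dry-weight yield bookkeeping mentioned at the end of the extraction procedure above, the snippet below computes a fraction yield from weighed masses, shown both relative to the starting straw and relative to the total lignin content. The masses used here are purely illustrative placeholders, not measurements from this study.

```python
def fraction_yield(dry_fraction_mass_g: float, dry_basis_mass_g: float) -> float:
    """Yield of an extracted fraction, in percent, on a dry-weight basis."""
    return 100.0 * dry_fraction_mass_g / dry_basis_mass_g

# Hypothetical example: 100 g of dry straw containing 19.02% lignin,
# of which 84.7% is assumed to be recovered after 30 min of sonication.
straw = 100.0
lignin_in_straw = straw * 0.1902            # total lignin present, g
recovered = lignin_in_straw * 0.847         # lignin actually isolated, g
print(f"relative to total lignin: {fraction_yield(recovered, lignin_in_straw):.1f}%")
print(f"relative to dry straw:    {fraction_yield(recovered, straw):.1f}%")
```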
Physicochemical and Thermal Analysis of Cellulose and Lignin.The surface morphology and element contents of the straw were analyzed on a scanning electron microscope (SEM/EDX, JEOL JMS 6490, JEOL, Japan).The thermal stability of lignin and cellulose was determined on a thermal analyzer (TG-DTA: EXSTRAR6100, Seiko Instruments, Japan).Fiber size of cellulose was determined by scanning electron microscope (SEM: JEOL, Japan); surface tension of lignin was determined on the CAM 200 (KSV Instructions, Finland).The content of neutral sugar contained in lignin was determined by gas spectrometry (GC-MS: Clarus 500, Perkin-Elmer, USA).The lignin content was determined by gel permeation chromatography (GPC: HLC8120, TOSOH, Japan) with polystyrene standards. Rice Straw Composition. Many recent studies have focused on the general composition in a straw thread. According to Saha and Cotta, the elemental composition of wheat straw was as follows: C, 44%; O, 49%; H, 5%; N, 0.92%; and other components with small content [21].In this study, obtained results from the analysis showed that the rice straw that contains the content of the inner and outer layers was different (Figure 1 and Table 1). It can be seen from Figure 1 and Table 1 that the contents of O and Cl in two surfaces are not so different and the contents of elements C, Mg, and Ca of inner surface are larger than those of outer surface but, however, the contents of other elements of outer surface are higher than those of inner surface, especially Si. The contents of lignin and cellulose in the rice straw of this study are 19.02 and 39.2%, respectively. Yield, Purity, and Molecular Properties. There are many research papers that show the separation process of cellulose and lignin from wheat straw using a solvent mixture of toluene/ethanol to remove paraffin (pretreated) before extraction.We have compared the pretreated and nonpretreated methods before the alkaline treatment, but the yield extraction and lignin or cellulose properties were not different.Therefore, we propose the process of cellulose and lignin separation from rice straw without paraffin pretreatment to improve economic efficiency, reduce environmental pollution, and develop potential industrial applications. Straws are poorly digested by ruminants because of their high cell-wall content.Alkaline treatment disrupts the cell wall by dissolving hemicellulose, lignin, and silica, by hydrolysing uronic and acetic acid esters, and by swelling cellulose [22].Furthermore, the alkaline solution breaks ether bonds between lignin and hemicellulose and ester bonds between lignin and hydroxycinnamic acids such as pcoumaric acid and ferulic acid [23].More importantly, alkaline treatment is a promising approach that does not affect the environment.Through this process, lignocellulose can be broken down into lignin, hemicellulose, and cellulose, which are materials for valuable products.Sun and Tomkinson have published a process of lignin separation from wheat straw with 0.5 M KOH at 35 ∘ C for 2.5 h, but the yield was only 43.9% [24].Xiaoa and coworkers have announced the separation of lignin from straw with 1 M NaOH solution at 30 ∘ C. 
The yield was only 68.3% for long period of 18 hours [25].In this study, the alkaline solution with high concentration 2 M NaOH, high temperature 90 ∘ C, and the ultrasound irradiation were used to increase lignin separation efficiency and reduce separation time.Separation yields of lignin were summarized in Table 2.As expected, treatment of the rice straw with 2 M NaOH without ultrasound irradiation at 90 ∘ C for 1.5 h and with ultrasonic irradiation for 10, 20, 30, and 40 min increased yields of separation of lignin (72.8%, 72.9%, 78.6%, 84.7%, and 84.9%, resp.). It can be seen from Table 2 that there are not much differences in extraction yields of lignin and cellulose between ultrasound-assisted alkaline extractions for 0 min and 10 min and 30 min and 40 min.This implied that ultrasonic irradiation time is a main parameter affecting the lignin and cellulose yields under the conditions used.Obviously, between the irradiation times 10 and 30 min extraction yields of lignin were increased from 72.9% to 84.7%, respectively.Approximately all of the total lignin and cellulose in rice straw were separated during the alkaline extraction at sonication time of 30 min.Thus, the application of sonication for 30 min resulted in raising the lignin yield by 12.3% in comparison to the alkaline extraction procedure without ultrasound assistance. The higher efficiency of the ultrasound-assisted alkaline extractions can be explained by the mechanical action of ultrasound on the cell walls resulting in an increased accessibility and extractability of the lignin and cellulose component.The alkaline extractions with ultrasonic irradiation under the conditions used have a greater effect on the cleavage of the ether bonds between lignin and hemicelluloses from the cell walls of rice straw than the alkaline treatment without ultrasonic assistance.The major ether linkages, that is, -O-4 bonds between lignin interunity linkages and -O-4 ether linkages between lignin and hemicelluloses, can be homolytically ruptured or cleaved to some extent by the ultrasonic irradiation [26].Mass spectrometry of lignin obtained by ultrasonic irradiation for 30 min showed that the concentrations of xylose, glucose, arabinose, and galactose contained in lignin were 0.39, 0.15, 0.12, and 0.04%, respectively.The results indicated that lignin had a high purity and xylose was the major sugar component in lignin, while galactose content was very small.Some works have published the average molecular weights ( ) and the average number weight ( ) of lignin which was extracted from wheat straw, which ranged from 1000 to 23600 Da and from 700 to 5268 Da, respectively, depending on the method and separation conditions [27].The values of and the polydispersity ( / ) of lignin in this study are given in Table 3. 
As shown in Table 3, the lignin obtained by the ultrasoundassisted alkaline extraction method for 10-30 min had a slightly higher (from 2620 to 3720 Da) than that of the lignin obtained by alkaline method without ultrasonic irradiation ( = 2560 Da); the observed phenomenon indicated an increase in solubilization of large molecular size lignins under the ultrasonic conditions used.The reason for this increase in is probably the condensation reaction between the lignin structures under ultrasound irradiation conditions given.In contrast, as the irradiation time was further increased from 30 to 40 min, decreased from 3720 to 2990 Da.The decrease in is assigned to the cleavage of the -O-4 linkages between the lignin precursors under a relatively longer sonication period. Fiber size is one of the factors that affect the properties of the cellulose material.The smaller the size of cellulose fibers is, the better their mechanical properties will be.The size of cellulose fiber extracted from rice straw by the ultrasoundassisted alkaline extraction method is shown in Figure 2. The rice straw is a dense block (Figures 2(a) and 2(b)), but the bonds between lignin, hemicellulose, cellulose, and other components have been separated after alkaline treatment at 90 ∘ C for 1.5 hours and ultrasonic irradiation for 30 min.The result is that cellulose fiber has an average diameter of about 5 m with a relatively considerable roughness (Figure 2(c)).This result again confirmed the removal of lignin, hemicellulose, and other impurities from the cellulose fiber surface.The FTIR spectra of the cellulose extracted by the ultrasound-assisted extraction method are shown in Figure 3. Results showed specific peaks of cellulose around 3300, 2900, 1400, and 900 cm −1 .The signal at 902 cm −1 showed the rocking vibration of the -C-H band in cellulose, which is typical of -glycosidic linkage between glucose units.The peak at 1059 cm −1 is assigned to -C-O-group of secondary alcohols and ethers functions existing in the cellulose chain backbone.The band at 1162 cm −1 ascribed to the -C-O-C-stretch of the -1,4-glycosidic linkage is prominent for cellulose samples.The peak at 1434 cm −1 indicated the asymmetric bending of the -CH 2 group.This showed the intermolecular hydrogen attraction at the C 6 group [28].The peaks at 2901 and 1372 cm −1 represented stretching and deformation vibrations of C-H group in glucose unit.The broad absorption peak in the range of 3000 to 3500 cm −1 was assigned to the stretching of the H-bonded -OH groups.As can be seen in Figure 3 where the FTIR spectra before and after ultrasonic treatment were not much different, the ultrasonic vibration does not affect the structure of cellulose. Thermal Analysis. 
The thermal stability of lignin and cellulose which were extracted from rice straw was determined by TG-DTA on a simultaneous thermal analyzer (STA 625).The apparatus was continually flushed with nitrogen.The sample was heated from 30 to 800 ∘ C at a rate of 10 ∘ C⋅min −1 .Figure 4 illustrates the thermograms of lignin and cellulose (Figure 4) obtained by 2 M NaOH, at 90 ∘ C, for 1.5 h, with sonication time of 30 min.Results showed that lignin and cellulose decreased about 4.3% and 4.8% by weight at 130 ∘ C, respectively.This is the amount of water adsorbed by lignin and cellulose, in other words, the moisture content of lignin and cellulose.The 5% degradation of lignin and cellulose was shown at temperatures of 250 ∘ C and 230 ∘ C, respectively.The relatively high thermal stability of lignin in this study can be explained by the larger molecular weight.The current results were in good agreement with the thermal stability of lignin from wheat straw, in which the thermal stability increased with the molecular weight [29].The decomposition temperature of lignin was higher than that of cellulose, which may be due to the aromatic structure of lignin.However, lignin had the decomposition temperature of 50% by weight about 326 ∘ C, which was lower than cellulose (nearly 410 ∘ C).Cyclic oxidation reactions occur at high temperature; the decomposition products from cellulose have higher thermal stability than that from lignin.The decomposition temperature of over 80% by weight of lignin and cellulose is above 500 ∘ C. In addition, ultrasound could increase thermal stability of lignin and cellulose.The decomposition temperature of cellulose extracted by alkaline method without ultrasound is lower International Journal of Polymer Science than that extracted by ultrasound-assisted alkaline treatment method (Figure 4).It can be seen that the decomposition temperature of 50% by weight was only 320 ∘ C for cellulose extracted by alkaline method without ultrasound. Conclusion The obtained results indicated that ultrasound irradiation in alkaline medium with high concentration at high temperature promoted the extraction of lignin and cellulose from rice straw, and their yields increased with sonication time from 10 to 30 min under the conditions used.Lignin obtained by ultrasound-assisted alkaline extraction method showed a higher molecular weight than that of lignin extracted without ultrasonic irradiation and lignin had a high purity.The ultrasound-assisted alkaline extraction method did not cause significant changes in lignin and cellulose composition and their structure, but ultrasound could increase thermal stability of lignin and cellulose.This is of major importance from the industrial point of view and makes the ultrasoundassisted alkaline extraction process very advantageous. International Scheme 1: Ultrasound-assisted extraction method of lignin and cellulose. Lignin and Cellulose.The alkaline and ultrasound-assisted alkaline extraction of lignin and cellulose are shown in Scheme 1.The rice straw was first extracted with the ultrasound irradiation Figure 1 : Figure 1: EDX spectra of (a) inner and (b) outer surface of Vietnam's rice straw. Figure 2 : Figure 2: SEM of rice straw: outer layer (a), inner layer (b), cellulose fiber extracted by the ultrasound-assisted alkaline extraction method (c), and cellulose fiber extracted without ultrasound (d). 
Figure 3: FTIR spectra of cellulose extracted from rice straw without ultrasound (a) and using the ultrasound-assisted alkaline extraction method (b). Figure 4: TGA curves of cellulose and lignin obtained with and without the ultrasound-assisted alkaline extraction method. Table 2: Yield of lignin obtained by alkaline and ultrasonic-assisted alkaline extraction of rice straw with 2 M NaOH at 90 °C for 1.5 h. Table 3: Molecular weight and polydispersity of lignin obtained at different ultrasonic times.
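The degradation temperatures quoted in the thermal analysis above (5% and 50% mass loss) are read off the TGA curves; the sketch below shows the simple interpolation involved. The TGA points used here are invented to roughly mimic the reported behaviour and are not the measured data of Figure 4.

```python
import numpy as np

def degradation_temperature(temps_c: np.ndarray, mass_pct: np.ndarray, loss_pct: float) -> float:
    """Temperature at which `loss_pct` percent of the initial mass has been lost,
    by linear interpolation of a TGA curve (temperature vs. residual mass %)."""
    target = 100.0 - loss_pct
    # np.interp needs increasing x, so reverse both arrays (mass decreases with T).
    return float(np.interp(target, mass_pct[::-1], temps_c[::-1]))

# Hypothetical TGA points (degrees C, residual mass %), not measured data.
T = np.array([30, 130, 230, 250, 326, 410, 500, 800], dtype=float)
m = np.array([100, 95.7, 95.0, 94.0, 70.0, 40.0, 20.0, 15.0], dtype=float)
print(f"T at 5% degradation  ~ {degradation_temperature(T, m, 5.0):.0f} C")
print(f"T at 50% degradation ~ {degradation_temperature(T, m, 50.0):.0f} C")
```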
3,907.4
2017-10-25T00:00:00.000
[ "Agricultural And Food Sciences", "Materials Science" ]
Experimental Analysis of Engine Performance and Exhaust Pollutant on a Single-Cylinder Diesel Engine Operated Using Moringa Oleifera Biodiesel : In this investigation, biodiesel was produced from Moringa oleifera oil through a transesterification process at operating conditions including a reaction temperature of 60 ◦ C, catalyst concentration of 1% wt., reaction time of 2 h, stirring speed of 1000 rpm and methanol to oil ratio of 8.50:1. Biodiesel blends, B10 and B20, were tested in a compression ignition engine, and the performance and emission characteristics were analyzed and compared with high-speed diesel. The engine was operated at full load conditions with engine speeds varying from 1000 rpm to 2400 rpm. All the performance and exhaust pollutants results were collected and analyzed. It was found that MOB10 produced lower BP (7.44%), BSFC (7.51%), and CO 2 (7.7%). The MOB10 also reduced smoke opacity (24%) and HC (10.27%). Compared to diesel, MOB10 also increased CO (2.5%) and NO x (9%) emissions. Introduction The growth of the human population and a higher quality of living have increased global energy consumption. One of the most significant consumers of energy is the transportation field [1,2]. Transportation is heavily dependent on gasoline and diesel engines. Nevertheless, compared to gasoline, diesel engines are more cost-effective and energy-efficient [3,4]. Diesel has also become preferable because of its higher fuel efficiency, energy density, and lower carbon dioxide (CO 2 ) emissions [5,6]. Thus, diesel engines provide higher mileage [7]. However, factors such as the increasing price of world crude oil, the decline in fossil fuel, and the increase in greenhouse gas emissions have forced researchers and scientists to find renewable and sustainable energy resources [8][9][10]. Furthermore, the health issues resulting from the exhaust of fossil fuel engines are causing alarm across the world [11,12]. Therefore, scientists and researchers are now searching for more renewable, sustainable and cleaner alternatives to replace fossil fuels [13,14]. Scientists and researchers are looking for ways to develop alternative fuels to deal with escalating energy demands [15][16][17]. In this regard, biodiesel or fatty acid methyl ester (FAME) is a potential substitute for petroleum-derived diesel in vehicles [18,19]. Biodiesel is usually produced by transesterification of edible oil or animal fats [20,21]. However, nowadays, biodiesels are also produced from the transesterification of non-edible oils, waste cooking oil, macroalgae, animal fats, and microalgae [22]. Thus, the sources used to produce biodiesel are sustainable and renewable [23]. Moreover, the biodiesel feedstock can be replenished by cultivating crops and rearing livestock. In contrast, the sources of fossil fuel are non-renewable [24]. Biodiesel blends have been used without making any significant modifications to diesel engines. Biodiesels are potential alternatives for diesel because of their chemical and physical properties [25,26]. Biodiesel utilization in unmodified diesel engines slightly increases brake-specific fuel consumption and NO x emissions. However, biodiesel consumption significantly decreases CO, unburned hydrocarbon (HC), and particulate emissions due to more oxygen and the lack of aromatic content in biodiesels [27,28]. Various research studies have been performed on engines to examine the performance and emission characteristics of Calophyllum inophyllum and palm biodiesel blends [29]. 
A number of experimental studies have also been performed on the production of MOB and its physicochemical properties. However, there are no comparative studies to date regarding engine performance and the emission characteristics of MOB and its blends of 10% and 20% with diesel in an SCD. This provided the motivation and purpose of this study, which may also potentially assist in the future generation of alternative fuel. Therefore, the engine performance and emission characteristics resulting from regular fossil diesel and all Moringa oleifera methyl ester blended fuels were investigated. Liaquat, Masjuki [30] experimentally examined exhaust gas emissions from a compression ignition engine fueled with palm biodiesel. A Bosch gas analyzer was used to analyze the engine exhaust emission parameters for 250 h at a 2000 rpm engine speed. A significant reduction in CO and CO 2 emissions was recorded as the biodiesel concentration increased in blends. This was due to an excess amount of oxygen, which results in complete combustion occurring in the combustion chamber. Ozsezen and Canakci [31] examined the performance and emission characteristics of the CI engine palm oil methyl ester (biodiesel) blended with pure diesel. As a result, BSFC and brake power (BP) increased by 7.5 and 2.5%, respectively. A significant reduction of 86.89% in CO, 14.29% in HC, and 67.65% in smoke opacity were observed for palm biodiesel. However, the palm oil methyl ester enhanced NO x emissions by 22.1%. Sharon, Karuppasamy [32] used different palm biodiesel concentrations in diesel using a KIRLOSKAR TV-1 diesel engine. During the test, the engine's load was changed from 20% to 100% at a constant engine speed of 800 rpm. The BSFC for palm biodiesel and pure diesel was found to be 0.315 and 0.2755 kg/kWh, respectively, at full load conditions. Biodiesel blends, B25, B50 and B75, showed a slightly higher BSFC of 2.6%, 8.9% and 9.3%, respectively, than pure diesel. Ong, Masjuki [33] used Calophyllum inophyllum biodiesel in their study to examine engine performance and the emission characteristics of a CI engine. According to their experimental results, the B10 blend showed a slight improvement in BTE as compared to diesel. However, EGT and BSFC were lower for this blend. Shehata and Razek [34] reported on the performance and emission characteristics of neat SOME at different engine speeds and loads. Resultantly, BSFC increased while BP, BTE, and torque were decreased as compared to diesel. For emissions, NO x was reduced, but CO and CO 2 were increased. Roy, Wang [35] experimented using COME to monitor the performance and emission characteristics of a four-stroke two cylinders CI engine. The results suggested that BSFC of 10% COME blended fuel showed no significant increment, but further increasing biodiesel concentration in diesel fuel caused a slight increase in the BSFC, up to 2.3% compared to pure diesel. For emissions, CO emission was reduced for all percentage ratios of blended fuels, while similar trends were observed for NO x emission from the B10 blend and pure diesel. However, an increasing percentage of COME in blended fuel increased the NO x emission. Agarwal and Dhar [36] explored the performance, combustion and emission characteristics of Karanja oil methyl ester blended fuel (10%, 20%, and 50%). With regard to the engine performance, BSFC and EGT increased while BTE decreased as compared to diesel fuel. 
A significant reduction in HC and smoke opacity was observed with a slightly escalation in NO x emissions as compared to high-speed diesel. Both B10 and B20 blends delivered almost the same performance and emission characteristics. Moringa oleifera Lamarck is a member of the Moringaceae family, a tropical plant that is easy to disseminate and grows to a height of around 5 m-10 m. It is widely grown in tropical countries and is mainly distributed in India, Bangladesh, Pakistan, Africa, South America, Arabic countries, the Philippines, Thailand, and Malaysia. The seeds of Moringa oleifera contain 40% of oil by weight, and the oil produced is a golden yellow color [37]. Several researchers have reported that Moringa oleifera oil contains a high oleic acid volume, that is, approximately 70% of the total fatty acid summary [38]. Compared to other feedstocks, Moringa oleifera oil is from a non-edible source, which gives it good potential for conversion into biodiesel without affecting food industries [39]. Rajaraman et al. [40] have discussed blended Moringa oleifera methyl ester (B20 and B100) and analyzed the engine performance and emission characteristics using a direct injection CI engine at full load conditions. The performance results show that the brake thermal efficiency of Moringa oleifera blended fuel decreased compared to standard diesel fuel due to its high viscosity and density, as well as the lower calorific value of the blended fuel. The emission results show that Moringa oleifera blended fuel produces lower CO, HC NOx, and PM than regular diesel fuel. The current energy emergency has negatively affected the worldwide economy. The economies of numerous non-industrial nations have become uncompetitive because of the lack of usable energy. The present study is an effort to reduce the consumption of conventional fossil fuels. Moringa oil is derived from the seeds of Moringa oleifera, a small tree local to the mountains that can be used to prepare biodiesel via the transesterification process. Biodiesel Preparation The Moringa oleifera biodiesels were produced through an alkaline-catalyzed transesterification process. Firstly, the Moringa oleifera crude oil was mixed with 25% vol. of methanol and 1% wt. of KOH. A temperature of 60 • C and a stirring speed of 1000 rpm were maintained for 2 h. These conditions, were used to ensure that a homogenous mixture of Moringa oleifera oil, methanol, and potassium hydroxide was obtained, and so that the transesterification process would produce a desirable yield rate. Once the transesterification process was finished, the biodiesel was separated via a separating funnel. After 12 h, the product was transformed into two layers. Two immiscible layers of liquid formed in the separating funnel, the top layer was the methyl ester (biodiesel), and the bottom layer consisted of impurities and glycerin. The bottom layer was drained from the separating funnel, and following this, 50% vol. of distilled water at a temperature of 60 • C was used to spray and wash each methyl ester. Next, the methyl ester was rinsed with hot DW until Moringa oleifera methyl ester was cleaned of all impurities. Then, by using a rotary evaporator, methyl ester was dried and then purified via filter paper. After the purification process, MOME was mixed with diesel at various ratios to produce the biodiesel blends. The blends prepared in this study were as follows: MOB10, MOB20, and diesel. 
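For the blend preparation just described, the snippet below shows the volume bookkeeping for a BXX blend together with a simple volume-weighted mixing estimate of a blend property; this mixing rule is only a first-order guess for illustration, and the numerical inputs are assumed values rather than the measured properties reported later in Table 3.

```python
def blend_volumes(total_volume_l: float, biodiesel_pct: float) -> tuple:
    """Volumes of biodiesel and diesel needed for a BXX blend (percent by volume)."""
    v_bio = total_volume_l * biodiesel_pct / 100.0
    return v_bio, total_volume_l - v_bio

def blend_property(biodiesel_pct: float, prop_biodiesel: float, prop_diesel: float) -> float:
    """Volume-weighted mixing estimate of a blend property (e.g. calorific value);
    real blends are measured, this is only an approximation."""
    x = biodiesel_pct / 100.0
    return x * prop_biodiesel + (1.0 - x) * prop_diesel

# Hypothetical numbers: 20 L of MOB10 and an assumed calorific-value estimate.
v_bio, v_diesel = blend_volumes(20.0, 10)
cv_b10 = blend_property(10, prop_biodiesel=40.0, prop_diesel=45.5)   # MJ/kg, assumed
print(f"MOB10: {v_bio:.1f} L biodiesel + {v_diesel:.1f} L diesel, CV ~ {cv_b10:.2f} MJ/kg")
```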
A total of three samples were prepared for the study, comprising two samples of biodiesel blends and one sample of pure diesel. Composition of Biodiesel The FAC of the MOME was analyzed using a gas chromatography (GC) system, Agilent 7890 series, USA. Specifications and operating mode of GC system are summarized in Table 1. The FAC of the MOME is presented in Table 2. The amount of esters, methyl linoleate, monoglycerides, diglycerides, triglycerides, and free and total glycerin was measured according to the EN14103 standard. Physiochemical Characteristics of Biodiesel It was imperative to measure Moringa oleifera biodiesel (MOME) characteristics and their blends (MOB10 and MOB20) to assess the quality and suitability of these fuels for diesel engines. Each biodiesel has different physicochemical properties depending on feedstock type and biodiesel production process, post-production treatment, and fatty acid composition of the biodiesel. Hence, different biodiesel and biodiesel blends shows different effects on the CIDE's performance and exhaust emissions. In this study, the physicochemical properties (i.e., density, viscosity index, flash point, acid number, oxidation stability, pour point, cloud point, and CFPP and kinematic viscosity) of MOME and the blends were measured using ASTM standards. Results of the measured properties are summarized in Table 3. A Stabinger viscometer (Model: SVM 3000, Anton Paar, UK) was utilized to measure density (at 15 and 40 • C) and kinematic viscosity (at 40 and 100 • C). A bomb calorimeter (Model: C2000 Basic, IKA, UK) was utilized for calorific value measurement. The cetane index of MOME and the blends was calculated based on the recovered temperature values at 10%, 50%, and 90% (T 10 , T 50 , and T 90 ) and the fuel density at 15 • C (D) according to ASTM D4737 standard test methods, which is given by the equation in [41]. Engine Setup A naturally aspirated, single-cylinder, four-stroke, direct injection diesel engine with an eddy current dynamometer was used in this study. Technical specifications for the tested engine are listed in Table 4. The experimental layout of the test engine is displayed in Figure 1. Engine tests were carried out in full load conditions in triplicates, and the engine speed varied from 1000 to 2400 RPM with an interval of 200 rpm. The exhaust emission parameters (smoke opacity, NO x , HC, and CO) were analyzed using an AVL exhaust gas analyzer (Model: DiCom 4000, AVL Ditest, Austria). In Table 5, the technical specifications of the used gas analyzer (AVL exhaust gas analyzer) are listed. First, the neat diesel fuel was utilized to bring the engine to a stable operating condition. Once this condition was reached, the biodiesel blended fuel was used for investigation. The engine was run for a few minutes, and then the residual diesel was drained. Data acquisition was performed after the drainage of residual diesel. This practice was repeated for each biodiesel blend. After one test was completed for the biodiesel blend, the engine was operated via diesel. This practice helped to drain the residual biodiesel blend used in the previous test from the fuel line. BTE and BSFC Brake thermal efficiency (BTE) is defined as the brake power of an internal combustion engine as a function of the heat input obtained from fuel burning. BTE is calculated using the formula given below: where, BP is brake power, m is mass flow rate and C v is the calorific value of the tested fuel. 
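The BTE equation itself is not reproduced in the extracted text above; the sketch below uses the standard definitions of brake thermal efficiency and brake-specific fuel consumption consistent with the quantities named there (BP, fuel mass flow rate, calorific value). The operating point used is a placeholder, not data from these engine tests.

```python
def brake_thermal_efficiency(bp_kw: float, fuel_flow_kg_per_h: float, cv_mj_per_kg: float) -> float:
    """BTE = BP / (m_dot * Cv): brake power divided by the rate of heat input from the fuel."""
    heat_input_kw = (fuel_flow_kg_per_h / 3600.0) * cv_mj_per_kg * 1000.0   # kJ/s = kW
    return bp_kw / heat_input_kw

def bsfc_kg_per_kwh(bp_kw: float, fuel_flow_kg_per_h: float) -> float:
    """Brake-specific fuel consumption: fuel mass consumed per unit of brake work."""
    return fuel_flow_kg_per_h / bp_kw

# Placeholder operating point: 5.4 kW brake power at 1.6 kg/h fuel flow, CV = 42 MJ/kg.
bp, mdot, cv = 5.4, 1.6, 42.0
print(f"BTE  ~ {100 * brake_thermal_efficiency(bp, mdot, cv):.1f} %")
print(f"BSFC ~ {bsfc_kg_per_kwh(bp, mdot):.3f} kg/kWh")
```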
Measurement of the fuel efficiency of any engine that burns the fuel and generates rotational or shaft power is BSFC. Smoke Opacity, HC, CO and NO x Smoke opacity is defined as the amount of light concealed by the particulate matter or soot particles omitted from the combustion of diesel. Smoke opacity reflects the presence of soot in the exhaust gases. Smoke meters, also known as opacity meters, measure the amount of light blocked in the smoke emitted by vehicles. The smoke in engine exhaust depends mainly on the combustion process, formation of the air-fuel mixture, amount of fuel injected before the ignition process, and oxygen content of fuel [42]. In general, incomplete fuel combustion leads to higher smoke opacity. Smoke opacity is influenced by the engine speed, engine load, fuel viscosity, cetane number, air turbulence, and spray pattern in the cylinder [43,44]. HC is produced in the diesel engine when there is an overrich mixture or over-lean mixtures. Physicochemical properties of the fuel, fuel injection, and engine operating conditions also play a vital role in forming HC emissions. Incomplete combustion leads to CO formation. The lower oxygen content of diesel results in higher CO emissions. On the other hand, vegetable oil-based biodiesels have a higher oxygen content in their chains, which leads to complete combustion, and hence, lower CO emissions. NO x emissions are influenced by the fuel's spray characteristics and oxygen content, and adiabatic flame temperature. Spray fuel characteristics refer to the size and momentum of fuel droplets, degree of mixing between fuel droplets with air, penetration rate and evaporation, and radiant heat transfer rate [45,46]. Brake Power (BP) The performance of CI diesel engines relies on the characteristics of the fuel utilized for the testing engine and fuel injection system. The fuel characteristics include kinematic viscosity, density, oxygen content, and calorific value [48,49]. Figure 2 shows the brake power (BP) of Moringa oleifera biodiesel blends and diesel at different engine speeds. According to the results, BP increases progressively with engine speed until 2200 rpm and then decreases. Consequently, diesel fuel has the highest BP (5.43 kW) at 2200 rpm. In contrast, the MOB20 blend has the lowest BP (4.68 kW). The average BP is higher for diesel than MOB10 and MOB20 by 12.18% and 17.32%, respectively. The average BP is lower for MOB20 than MOB10 and diesel by 6.85% and 7.17%, respectively. This may be attributed to the larger HHV of biodiesel blends [50]. The MOB10 blend has the highest HHV in comparison with other biodiesel blends examined in this study. Besides, the fuel's physicochemical properties affect the spray formation during fuel injection, which in turn, affects combustion [51]. Lower viscosity and density of the MOB10 blend may result in loss of engine power due to more significant fuel pump leakage than other fuel blends [52]. Generally, fuels with higher viscosities can reduce fuel pump leakages [53]. Figure 3 shows the BSFC of Moringa oleifera biodiesel blends and diesel at various engine speeds. Diesel fuel shows lower BSFC as compared to biodiesel blends. The MOB20 blend has the highest average BSFC, with a value of 0.6115 kg/kWh. The MOB10 and MOB20 blends have a higher average BSFC than diesel by 7.03% and 12.75%, respectively. In general, biodiesels have a larger HHV due to the fuel-borne oxygen. 
Hence, a higher amount of fuel mass needs to be injected from the fuel injection pump into the engine due to biodiesel's higher density than diesel. More biodiesel needs to be injected into the combustion chamber for the same power output as diesel according to volumetric efficiency. The higher kinematic viscosity of Moringa oleifera biodiesel blends is the leading cause of poor air-fuel mixing resulting from slower fuel atomization. Higher density and lower calorific values than diesel are factors that lead to the higher BSFC for biodiesel blends, especially those containing higher concentrations of biodiesels [54]. Brake Thermal Efficiency (BTE) At full load conditions, the BTE increases, but it declines with an increasing compression ratio; it acts similar to the indicated thermal efficiency. Figure 4 illustrates the engine brake thermal efficiencies for MOB10, MOB20, and diesel fuels. According to our observations, the average brake thermal efficiency for MOB10 was 2% higher than pure diesel. However, the average brake thermal efficiency for MOB20 was 3.45% lower as compared to pure diesel. The curves were plotted by averaging three readings. Various researchers have found similar results whereby the brake thermal efficiency of the biodiesel blends was comparable with pure diesel's thermal efficiency [55,56]. In addition, they have found that preheating biodiesel fuel before injection increases the brake thermal efficiency. Figure 5 shows HC emissions for MOB and its blends with diesel at different engine speeds. Average HC emission is higher for diesel than that for MOB10 and MOB20 by 6.71% and 8.79%, respectively. Furthermore, the fuel blends containing 20% of biodiesel have higher HC emissions at low speeds compared to those containing 10% of biodiesel. Moreover, it can be observed that each tested fuel had higher HC emissions when the engine was running at lower speeds. Conversely, the amount of HC emissions decreased when the engine's speed was higher. The lean air-fuel mixture is the primary reason for more HC emissions at lower engine speeds as well as poor fuel distribution. The lower temperature and presence of excess air are responsible for lean air-fuel mixtures [42]. Over-rich and over-lean air-fuel mixtures are typical during heterogeneous combustion in diesel engines, which leads to HC emissions. The oxygen content of biodiesels generally leads to lower HC emissions than diesel at high engine speeds due to improved fuel combustion [57]. Figure 6 shows the CO 2 emissions of MOB blends and diesel at various engine speeds. CO 2 emissions from the engine's exhaust reached a maximum value with MOB20 and were reduced when the biodiesel concentration in the fuel was decreased. The average CO 2 emission values for MOB10, MOB20, and diesel were 5.693%, 6.124%, and 6%; the curves were plotted by averaging three readings. MOB20 showed higher CO 2 emissions than diesel and MOB10 due to more oxygen in MOB20 relative to neat diesel and MOB10. The higher amount of oxygen in the biodiesel increased the oxidation and combustion process. Due to the higher amount of oxygen, the excess amount of CO is converted to CO 2 [52]. Carbon Monoxide Emissions (CO) According to previous reports, oxygenated fuels reduce up to 30% of CO emissions compared to diesel-however, the magnitude of the reduction depends on the engine type and age, and ambient conditions [58,59]. Figure 7 displays CO emissions of Moringa oleifera biodiesel blends and diesel at various engine speeds. 
It can be observed that the MOB20 blend produces the highest amount of CO emissions at an engine speed of 1400 rpm. On the other hand, the MOB20 blend produces the lowest CO emissions at 2400 rpm and the lowest average CO emission in this study. The average CO emission of diesel is 0.82% higher than that for MOB20. However, the average CO emission is 1.99% lower than that for MOB10. In general, for the same blend ratio, the CO emissions decreased as the engine's speed changed from a lower to a higher value for all fuels. This is due to higher oxygen content and the higher cetane number of biodiesel fuel than diesel fuel. Higher cylinder pressure and temperature promote complete combustion at high engine speed, especially for biodiesel fuel that contains higher oxygen content. This enables the conversion of CO to CO 2 , reducing the amount of CO emission [60][61][62]. Nitrogen Oxide Emissions (NO x ) Many studies have shown that biodiesel fuels produce higher engine NO x emissions compared to diesel [63][64][65][66][67][68][69][70]. Figure 8 displays NO x emissions of MOB blends and diesel at various engine speeds. Several factors influence the production of NO x, and one of them is the oxygen content. In general, vegetable oil-based biodiesels have higher oxygen content (with a difference of 12% relative to diesel) as well as low nitrogen content. This results in higher NO x emissions when there is an increase in the combustion chamber temperature, which improves the combustion process [66]. The MOB20 blend has the highest NO x emissions (416 ppm) at an engine speed of 2400 rpm. Moringa oleifera biodiesel has more oxygen content as compared to neat diesel fuel. Besides, NO x emissions increase with an increase in the concentration of biodiesel in fuel blends. Average NO x emissions are lower for diesel compared to MOB10 and MOB20 by 4.71% and 8.12%, respectively. Abedin et al. [71] found that fuel blends containing 10% and 20% of palm biodiesel reduce NO x emissions by approximately 3.3%. Rahman et al. [70] discovered that a fuel blend containing 10% biodiesel produces higher NO x emissions by 9% relative to diesel. In general, biodiesels have a higher adiabatic flame temperature because of their high unsaturated fatty acid content, leading to more NO x emissions [69]. The higher viscosity and density of biodiesels are also responsible for higher NO x emissions [33]. Figure 9 displays the smoke opacity for Moringa oleifera biodiesel and its blends with diesel tested at different engine speeds. For diesel, the average smoke opacity is higher than MOB10 and MOB20 by 33.49% and 22.73%, respectively. The MOB10 blend has the lowest average smoke opacity (32.2%) compared to MOB20 and neat diesel. At higher engine speeds, the smoke opacity of MOB blends increased significantly. Several studies have shown that the smoke opacity was lower due to more oxygen contents in biodiesel. A lower ratio of carbon-hydrogen and non-availability of aromatic compounds in the biodiesel reduced the smoke emissions [72]. According to Gumus and Kasifoglu [73], more oxygen in biodiesel blends can reduce smoke exposure in exhaust gasses. Zhang et al. [74] found that combustion of biodiesel blends occurs earlier than diesel. Smoke emissions are reduced due to advanced injection timing, which results from the combustion process's quick start. In contrast, diesel has higher sulfur content than biodiesel blends, which is the main reason for high smoke opacity [75]. 
Conclusions The performance and exhaust emission characteristics of Moringa oleifera biodiesel blends were analyzed in this study. The results of the experimental investigation show that the MOB10 blend is the best blend ratio based on the following criteria: • At optimum speed, the BTE for MOB10 and MOB20 was 2.54% higher and 3.45% lower, respectively, than that of pure diesel. • MOB10 and MOB20 blends had a higher average BSFC than diesel by 7.03% and 12.75%, respectively, due to the higher density and lower calorific values of biodiesel blends. • MOB10 produced slightly lower BP when compared to diesel, by 0.26 kW. The MOB20 blend was the worst performer, producing less usable power than diesel by 0.36 kW. • The average HC emission for MOB10 and MOB20 were lower than diesel, with a difference of 8 ppm. • The average NO x emission for blended fuels was significantly higher than the neat diesel, and the MOB20 blend produces more NO x emissions due to increased oxygen content in fuel blends. • MOB10 produced lower smoke opacity than those of neat diesel and MOB20 due to good combustion. • Therefore, MOB10 is suitable to use in conventional compression-ignition diesel engines. Future Recommendation: The NO x emissions slightly increased in the combustion of biodiesel blends compared to conventional diesel. The researchers could pursue this work using different fuel additives such as nanoparticles or alcohols to reduce NO x emissions. Conflicts of Interest: The authors declare no conflict of interest.
5,733
2021-07-30T00:00:00.000
[ "Environmental Science", "Engineering" ]
A Fast Compact Finite Difference Method for Fractional Cattaneo Equation Based on Caputo–Fabrizio Derivative The Cattaneo equations with the Caputo–Fabrizio fractional derivative are investigated. A compact finite difference scheme of Crank–Nicolson type is presented and analyzed, and is proved to have temporal accuracy of second order and spatial accuracy of fourth order. Since this derivative is defined by an integral over the entire past time, conventional direct solvers generally require O(MN²) computational work and O(MN) memory, with M and N the numbers of space steps and time steps, respectively. We develop a fast evaluation procedure for the Caputo–Fabrizio fractional derivative that reduces the computational cost to O(MN) operations while requiring only O(M) memory. Finally, several numerical experiments are carried out to verify the theoretical results and show the applicability of the fast compact difference procedure. Introduction Fractional diffusion equations have become a powerful tool for describing anomalous diffusion, and many studies have appeared in recent decades [1][2][3][4][5][6]. However, because the fractional derivative is nonlocal and has a weakly singular kernel, fractional diffusion equations can rarely be solved analytically, so seeking numerical solutions has become indispensable in research on fractional equations. Unlike traditional integer-order derivatives, a fractional derivative depends on the full history of the solution over the relevant region; this is the so-called nonlocality. Precisely because of this, solving fractional equations is extremely time-consuming. We therefore aim to develop effective numerical schemes that not only have good stability and high accuracy but also require less storage and lower computational cost. For stability and convergence analyses of numerical schemes for fractional equations, readers can refer to [7,8] for spatial fractional equations, [9][10][11][12][13][14][15][16][17][18] for temporal fractional diffusion equations, and [19][20][21][22] for space–time fractional equations. Regarding complexity, i.e., the storage requirement and computational cost of an algorithm, researchers have reduced storage and computing time by exploiting the particular structure of the coefficient matrices arising from the discretized system or by skillfully reusing intermediate data. We call such algorithms fast methods; they include fast finite difference methods [23][24][25][26][27][28], fast finite element methods [29], and fast collocation methods [30,31]. A fast method for Caputo fractional derivatives was proposed in [32,33]. Lu et al. [34] presented a fast approximate-inversion method for triangular Toeplitz matrices with tridiagonal blocks, which was successfully applied to fractional diffusion equations. By comparison, there is less work on fast methods for temporal fractional derivatives than for spatial fractional operators. In this paper we consider the time-fractional Cattaneo equation, where 1 < α < 2; Ω = (a, b) in the one-dimensional case and Ω = (a, b) × (c, d) in the two-dimensional case; f(x, t) is the source term; ϕ(x) and ψ(x) are the prescribed initial data; and ∂^α u/∂t^α is a new Caputo fractional derivative without singular kernel, defined in the next section. Our purpose is to establish a fast, high-order finite difference scheme for this equation.
We will extract the recursive relation between the (k + 1) time step and the k time step of the finite difference solution. e computational work is significantly reduced from O(MN 2 ) to O(MN), and the memory requirement from O(MN) to O(M), where M and N are the total numbers of points for space steps and time steps, respectively. For improving the accuracy, a compact finite difference scheme is established. eoretical analysis shows that the fast compact difference scheme has spatial accuracy of fourth order and temporal accuracy of second order. Several numerical experiments are implemented, which verify the effectiveness, applicability, and convergence rate of the proposed scheme. is paper is organized as follows: some definitions and notations are prepared in Section 2. e compact finite difference scheme is described and then the stability and convergence rates are rigorously analyzed for the scheme in Section 3. e compact finite difference scheme is extended to the case of two space dimensions in Section 4. Fast evaluation and efficient storage are established skillfully in Section 5. Some numerical experiments are carried out in Section 6. In the end, we summarize the major contribution of this paper in Section 7. Some Notations and Definitions We provide some definitions which will be used in the following analysis. First, let us recall the usual Caputo fractional derivative of order α with respect to time variable t, which is given by By replacing the kernel function (t − s) − α with the exponential function exp(− α(t − s/1 − α)) and 1/(Γ(1 − α)) with M(α)/1 − α, Caputo and Fabrizio [35] proposed the following definition of fractional time derivative. Remark 2. An open discussion is ongoing about the mathematical construction of the CF operator. Ortigueira and Tenreiro Machado [37] indicated that the CF fractional derivative is neither a fractional operator nor a derivative operator, the authors of [38,39] showed that this operator cannot describe dynamic memory, and Giusti [40] indicated that this operator can be expressed as an infinite linear combination of Riemann-Liouville integrals with integer powers. As responses to these criticisms, Atangana and Gómez-Aguilar [41] pointed out the need to account for a fractional calculus approach without an imposed index law and with nonsingular kernels. Furthermore, Hristov [42] indicated that the CF operator is not applicable for explaining the physical examples in [37,40]; instead, he suggested that the CF operator can be used for the analysis of materials that do not follow a power-law behavior. e authors of [43] believe that models with CF operators produce a better representation of physical behaviors than do integer-order models, providing a way to model the intermediate (between elliptic and parabolic or between parabolic and hyperbolic) behaviors. To obtain the accuracy of the fourth order in spatial directions, the following lemma is necessary. Compact Finite Difference Scheme for One-Dimensional Fractional Cattaneo Equation In order to construct the finite difference schemes, the interval [a, b] is divided into subintervals with where h � (b − a)/M and Δt � T/N are the spatial grid size and temporal step size, respectively. . e values of the function u at the grid points are denoted as u k j � u(x j , t k ), and the approximate solution at the point ( . We also introduce the following notations for any mesh function v ∈ V h : and define the average operator It is easy to see that where I is the identical operator. 
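The two derivative definitions recalled earlier in this section arrive garbled in this extraction, so the following is a hedged LaTeX reconstruction based on the kernel substitution the text describes; the extension to 1 < α < 2 is stated as the usual convention (applying the order-(α−1) operator to ∂u/∂t) and should be checked against the original paper.

```latex
% Classical Caputo derivative of order \alpha \in (0,1):
\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
  = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\,
    \frac{\partial u(x,s)}{\partial s}\,\mathrm{d}s .

% Caputo--Fabrizio derivative: replace the singular kernel (t-s)^{-\alpha}
% by \exp(-\alpha(t-s)/(1-\alpha)) and 1/\Gamma(1-\alpha) by M(\alpha)/(1-\alpha):
\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
  = \frac{M(\alpha)}{1-\alpha} \int_{0}^{t}
    \exp\!\left(-\frac{\alpha (t-s)}{1-\alpha}\right)
    \frac{\partial u(x,s)}{\partial s}\,\mathrm{d}s ,
  \qquad 0<\alpha<1,

% with M(\alpha) a normalization function satisfying M(0)=M(1)=1.
% For the Cattaneo problem with 1<\alpha<2, the operator is commonly
% applied as \partial^{\alpha}_t u = {}^{CF}\!D^{\alpha-1}(\partial u/\partial t).
```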
We also denote the discrete inner products and norms are defined as By summation by parts, it is easy to see that For the average operator A, define Additionally, let V Δt � v|v � (v 0 , v 1 , · · · , v N ) be the space of grid function defined on Ω Δt . For any function v ∈ V Δt , a difference operator is introduced as follows: 3.1. Compact Finite Difference Scheme. We will consider the time-fractional Cattaneo equation equipped with the Caputo-Fabrizio derivative. Vivas-Cruz et al. [43] gave the theoretical analysis of a model of fluid flow in a reservoir with the Caputo-Fabrizio operator. ey proved that this model cannot be used to describe nonlocal processes since it can be represented as an equivalent differential equation with a finite number of integer-order derivatives. e finite difference methods usually lead to stencils through the whole history passed by the solution which consume too much computational work. In this paper, we will establish a high-order finite difference scheme and propose a procedure to reduce the computational cost. In [43], the authors proposed a recurrence formula of discretized CF operator and obtained an algorithm which can be considered a stencil with a one-step expression without the need of integrals over the whole history. It seems that the procedure in our paper and the algorithm in [43] are different in approach but equally satisfactory in result. For obtaining effective approximation with high order, we introduce the numerical discretization for the fractional By the initial and boundary value conditions, we have A compact finite difference scheme can be established by omitting the truncation term R k+ (1/2) i and replacing the exact solution u k i in equation (21) with numerical solution u k i : Stability Analysis. e following Lemma about M n is useful for the analysis of stability. Lemma 2 (see [45]). For the definition of M n , M n > 0 and M n+1 < M n , ∀n ≤ k, are held. Multiplying hδ t u k+1 i on both sides of equation (24) and summing up with respect to i from 1 to M − 1, the following equation is obtained: Observing equation (13), we have By the triangle inequality and Lemma 2, we obtain 4 Mathematical Problems in Engineering Let Summing up with respect to k from 0 to N − 1 leads to 2 1 , and then Theorem 1. For scheme (24), we have the following stable conclusion: Optimal Error Estimates. Combining equations (21) and (23) with (24), we get an error equation as follows: i on both sides of equation (33) and summing up with respect to i from 1 to M − 1, we get By the triangle inequality and Lemma 2, we obtain Combining equation (34) with (35), we have By the definition of Q in stability analysis, the inequality (36) can be rearranged as Mathematical Problems in Engineering Summing up with respect to k from 0 to N − 1, we get Observing that the initial error e 0 � 0 implies Q(e 0 ) � 0. en, we have Theorem 2. Suppose that the exact solution of the fractional Cattaneo equation is smooth sufficiently, then there exists a positive constant C, independent of h, k, and Δt such that where Compact Finite Difference Scheme in Two Dimensions In this section, the following fractional Cattaneo equation in two dimensions will be considered: ϕ(x, y) and ψ(x, y) are the given functions, and z α u/zt α is defined by the new Caputo fractional derivative without singular kernel. In order to construct the finite difference schemes, the and Δt � T/N are the spatial grid and temporal step sizes, respectively. 
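Both the one- and two-dimensional schemes above are built on a compact averaging operator whose displayed formula is lost in this extraction. The block below is a hedged reconstruction of the standard fourth-order compact operator consistent with the identity operator relation mentioned in the text; it is the usual choice in such schemes, not necessarily the paper's exact notation.

```latex
% Second-order central difference and compact averaging operator on the spatial grid:
\delta_x^{2} v_i = \frac{v_{i-1} - 2 v_i + v_{i+1}}{h^{2}}, \qquad
\mathcal{A} v_i = \frac{v_{i-1} + 10\, v_i + v_{i+1}}{12}
              = \Bigl(I + \frac{h^{2}}{12}\,\delta_x^{2}\Bigr) v_i ,

% so that \mathcal{A}\, u_{xx}(x_i) = \delta_x^{2} u(x_i) + O(h^{4}) for smooth u,
% which is what yields fourth-order spatial accuracy; in two dimensions the
% operators \mathcal{A}_x and \mathcal{A}_y are applied in each direction.
```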
u_{i,j}^k = u(x_i, y_j, t_k) denotes the values of the function u at the grid points, and U_{i,j}^k denotes the values of the numerical solution at the point (x_i, y_j, t_k). For any mesh function v ∈ V_h, we use the analogous difference notations and define the average operators A_x and A_y in each direction. It is clear that A_x = I + (h_x^2/12)δ_x^2 and A_y = I + (h_y^2/12)δ_y^2. We also denote A = A_x A_y. For any grid function u, v ∈ V_h^0, the discrete inner product and norms are defined as follows: For the average operator A_x A_y, a corresponding norm is defined. Compact Finite Difference Scheme. At the node (x_i, y_j, t_{k+(1/2)}), the differential equation is rewritten accordingly. For the approximation of the time-fractional derivative, we have the following approximation [45]: where the truncation error R_{i,j}^{k+(1/2)} = O(Δt^2). Furthermore, we also have the corresponding approximations of ∂u/∂t. Substituting (48) and (50)-(52) into (47) leads to the compact relation at the node, and there exists a constant C, depending on the function u and its derivatives, such that the truncation terms are bounded by C(Δt^2 + h_x^4 + h_y^4). By the initial and boundary conditions, we have the discrete initial and boundary values. Omitting the truncation error R_{i,j}^{k+(1/2)} and replacing the true solution u_{i,j}^k with the numerical solution U_{i,j}^k, a compact finite difference scheme can be obtained as follows: Stability Analysis Definition 4 (see [46]). For any grid function u ∈ V_h^0, define the norm as in [46]. The lemmas below are useful in the subsequent analysis of stability. Multiplying both sides of equation (56) by h_x h_y δ_t U_{i,j}^{k+1} and summing up w.r.t. i, j from 1 to (M_x − 1) and from 1 to (M_y − 1), respectively, the following equation is obtained: Observing Lemma 4, we have the corresponding identity. By the triangle inequality and Lemma 2, we obtain the required bounds. Combining equation (60) with (61)-(63), we get the energy inequality. Let the corresponding energy quantity be defined as in the one-dimensional case. Summing up with respect to k from 0 to N − 1 and noting the initial conditions, we have Theorem 3. For the compact finite difference scheme (56), the following stability inequality holds: Similar to the stability, the convergence can also be analyzed. Theorem 4. Suppose that the exact solution of the fractional Cattaneo equation is sufficiently smooth; then there exists a positive constant C independent of h, k, and Δt such that the error is bounded by C(Δt^2 + h^4), where e_{i,j}^k = u(x_i, y_j, t_k) − U_{i,j}^k and h = max{h_x, h_y}. Efficient Storage and Fast Evaluation of the Caputo-Fabrizio Fractional Derivative Since the time-fractional derivative operator is nonlocal, the traditional direct method for numerically evaluating the fractional derivative involves the whole solution history, which is expensive in both storage and computation. Example 1. The test problem has initial conditions u(x, 0) = sin(πx) and ∂u/∂t|_{t=0} = sin(πx). Table 5 reports, for fixed h = 0.001, the discrete l∞ errors and convergence rates of u with different α for Example 1. In Tables 1 and 2, we take Δt = h^2 and h = √Δt to examine the discrete l∞-norm (l2-norm) errors and corresponding spatial and temporal convergence rates, respectively. We list the errors and convergence rates (order) of the proposed compact finite difference (CD) scheme, which is almost O(Δt^2 + h^4) for different α. Additionally, Table 3 shows the CPU time consumed by the direct compact difference (DCD) scheme and the fast compact difference (FCD) scheme, respectively. It is obvious that the FCD scheme has a significantly reduced CPU time over the DCD scheme. For instance, when α = 1.5, we choose h = 0.1 and Δt = 1/50,000 and observe that the FCD scheme consumes only 94 seconds, while the DCD scheme consumes 3692 seconds. We can find that the performance of the FCD scheme will be more conspicuous as the time step size Δt decreases. In Figure 1, we set h = 0.1 and α = 1.75 and change the total number of time steps N to plot the CPU time (in seconds) of the FCD scheme and DCD scheme.
We can observe that the CPU time increases almost linearly with respect to N for the FCD scheme, while the DCD scheme scales like O(N^2). Tables 4 and 5 show the discrete l∞ errors and convergence rates of the compact finite difference scheme for Example 1. The space rates are almost O(h^4) for fixed Δt = 2^{−13}, and the time convergence rates are always O(Δt^2) for fixed h = 0.001. We can conclude that the numerical convergence rates of our scheme approach almost O(Δt^2 + h^4). Example 2. This example is described by a second test problem, with parameters c and x_0, whose exact solution is known in closed form. We apply the fast compact difference scheme to discretize the equation. In Figure 2, we set c = 0.001, α = 1.5, and M = N = 100 and plot the exact and numerical solutions at time T = 1 for Example 2 with different x_0. For x_0 = 0.5, α = 1.5, and M = N = 100, we also plot the exact and numerical solutions with different c in Figure 3. In Figure 4, for h = 0.1 and α = 1.5, we vary the total number of time steps N to plot the CPU time (in seconds) of the FCD scheme and DCD scheme. The numerical experiments verified our theoretical results. In Table 6, by equating Δt = h^2 and fixing x_0 = 0.5, we compute the discrete l∞ error and convergence rates with different fractional derivative orders α and different c. It shows that the compact finite difference scheme has space accuracy of fourth order and temporal accuracy of second order. We set h = √Δt and fix c = 0.01, and the discrete l2 error and convergence rates with different α and x_0 are displayed in Table 7. Example 3. If the exact solution is given by u(x, y, t) = e^t sin(πx)sin(πy), we have a different forcing term f(x, y, t) for each α accordingly. In Figure 5, h = 0.1 and α = 1.75 are fixed, and the total number of time steps N is varied to plot the CPU time (in seconds) of the FCD procedure and the DCD procedure; the plot shows an approximately linear computational complexity for the FCD procedure. We set Δt = h^2 in Table 8 and h = √Δt in Table 9; the discrete l∞ error, discrete l2 error, and convergence rates with different derivative orders α are presented. The fourth-order space accuracy and second-order temporal accuracy can be observed clearly. Conclusion In this paper, we develop and analyze a fast compact finite difference procedure for the Cattaneo equation equipped with a time-fractional derivative without singular kernel. The time-fractional derivative is of Caputo-Fabrizio type with order α (1 < α < 2). Compact difference discretization is applied to obtain a high-order approximation for the integer-order spatial derivatives in the partial differential equation, and the Caputo-Fabrizio fractional derivative is discretized by means of a Crank-Nicolson approximation. It has been proved that the proposed compact finite difference scheme has spatial accuracy of fourth order and temporal accuracy of second order. Since fractional derivatives are history dependent and nonlocal, a huge amount of memory and computational cost is required; this poses extreme difficulty, especially for long-time simulation. Inspired by the treatment of the Caputo fractional derivative in [32], we develop an effective fast evaluation procedure for the new Caputo-Fabrizio fractional derivative within the compact finite difference scheme. Several numerical experiments have been carried out to show the convergence orders and applicability of the scheme.
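To make the fast-evaluation idea concrete, the following minimal Python sketch (our own illustration under simplifying assumptions, not the authors' algorithm or its second-order quadrature) shows why the exponential kernel of the Caputo-Fabrizio operator admits a one-step recurrence: the history integral at t_{k+1} equals a decayed copy of the history at t_k plus a local contribution, so each step costs O(1) work and O(1) memory.

```python
import numpy as np

def cf_history_direct(fp, t, k, alpha):
    """Direct O(k) quadrature of the exponential-kernel history integral at t[k]."""
    lam = alpha / (1.0 - alpha)
    s = t[: k + 1]
    return np.trapz(fp(s) * np.exp(-lam * (t[k] - s)), s)

def cf_history_fast(fp, t, alpha):
    """One-step recurrence: history(t[k+1]) = decay * history(t[k]) + local trapezoidal piece."""
    lam = alpha / (1.0 - alpha)
    dt = t[1] - t[0]
    decay = np.exp(-lam * dt)
    hist = np.zeros_like(t)
    for k in range(len(t) - 1):
        local = 0.5 * dt * (fp(t[k]) * decay + fp(t[k + 1]))   # trapezoid on [t_k, t_{k+1}]
        hist[k + 1] = decay * hist[k] + local
    return hist

alpha = 0.5                               # illustrative order; the paper treats 1 < alpha < 2
t = np.linspace(0.0, 1.0, 201)
fp = lambda s: np.cos(s)                  # stands in for the differentiated solution term
fast = cf_history_fast(fp, t, alpha)
direct = cf_history_direct(fp, t, len(t) - 1, alpha)
print(abs(fast[-1] - direct))             # agrees to rounding error, but with O(N) total work
```

Because the decay factor is identical on every step, only the most recent history value needs to be stored; this is the mechanism behind the reduction from O(MN^2) to O(MN) work and from O(MN) to O(M) memory.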
Inspired by the work in [43], modelling and numerically solving porous media flow equipped with fractional derivatives is an interesting and challenging topic, and it will be our main research direction in the future. Data Availability All data generated or analyzed during this study are included in this article. Conflicts of Interest The authors declare that they have no conflicts of interest.
4,257.4
2020-03-19T00:00:00.000
[ "Mathematics", "Engineering", "Physics" ]
Relationship Between B-Vitamin Biomarkers and Dietary Intake with Apolipoprotein E ε4 in Alzheimer’s Disease Abstract The potential for B-vitamins to reduce plasma homocysteine (Hcy) and reduce the risk of Alzheimer’s disease (AD) has been described previously. However, the role of Apolipoprotein E ε4 (APOE4) in this relationship has not been adequately addressed. This case-control study explored APOE4 genotype in an Australian sample of 63 healthy individuals (female = 38; age = 76.9 ± 4.7 y) and 63 individuals with AD (female = 35, age = 77.1 ± 5.3 y). Findings revealed 55 of 126 participants expressed the APOE4 genotype, with 37 of 126 having both AD and the APOE4 genotype. Analysis revealed an increased likelihood of AD when Hcy levels were >11.0 µmol/L (p = 0.012), cysteine levels were <255 µmol/L (p = 0.033), and serum folate was <22.0 nmol/L (p = 0.003; in males only). In females, dietary intake of total folate <336 µg/day (p = 0.001), natural folate <270 µg/day (p = 0.011), and vitamin B2 <1.12 mg/day (p = 0.028) was associated with an increased AD risk. These results support Hcy, Cys, and SF as useful biomarkers for AD, irrespective of APOE4 genotype, and as such they should be considered as part of screening and managing risk of AD. Introduction Dementia affects an estimated 47.7 million adults worldwide, with the majority of new cases (7.7 million per year) occurring in economically less developed countries. 1,2 Alzheimer's disease (AD) is the most common form of dementia, characterized by a gradually increasing level of cognitive impairment associated with a parallel reduction in quality of life. 3 The societal and financial burden of AD is substantial and presents unique challenges for the public health and aged care sectors. 4 The etiology of AD is multifactorial and includes neuronal apoptosis resulting from the aggregation of amyloid-β (Aβ), the formation of intraneuronal neurofibrillary tangles by abnormally hyperphosphorylated tau proteins, 5 and a reduction in cerebral glucose metabolism. 6 The impact of non-modifiable (including genetics, age, and gender 7) and modifiable (nutrition, physical activity, and education 8) risk factors on AD is becoming well recognized. However, the combined effect of modifiable and non-modifiable risk factors on AD pathology is still poorly understood. As quantifiable cognitive decline associated with AD appears approximately 12 years before clinical diagnosis, 9 there is an urgent public health need to identify those at high risk and intervene to slow progression and prevent the onset of AD. Nutrition can both positively and negatively influence cognition in the elderly, as evidenced by the association of B-vitamin deficiency with AD and other dementias. 8,10,11 The B-vitamins include folic acid (both synthetic and natural forms), vitamin B2 (riboflavin), vitamin B6, and vitamin B12 (including its synthetic form, cyanocobalamin) as essential precursors for coenzymes involved in the one-carbon metabolism pathway of homocysteine (Hcy) and thiol biosynthesis. 12 Thiols are plasma sulfhydryl-containing amino acids (Hcy, cysteine [Cys], cysteinyl-glycine [CysGly], and glutathione [GSH]) that play a vital role in cardiovascular health and cognition. 13,14 Elevated Hcy levels were identified as a strong predictor of incident AD, 8 while adequate dietary intake of folate and vitamin B12 (B12) plays a major role in the methylation and transsulfuration pathways and contributes to the maintenance of reduced Hcy levels.
15 Hence, Hcy, folate, and B12 have been identified as important blood-based biomarkers of nutritional status and AD risk. 8,10 Elevated levels of plasma Hcy along with low levels of folic acid and B12 are prevalent in individuals with AD. 16 Elevated plasma Hcy is involved in AD through the promotion of oxidative stress, leading to neuronal damage and impairment of blood-brain barrier (BBB) permeability. 17 While folic acid and B12 possess antioxidant properties with the capacity to counteract such damage, 16 the oxidative stress associated with AD pathology may also be due to Aβ-induced oxidative stress, which increases plasma Hcy levels by depleting 5-methyltetrahydrofolate (5-MTHF). 16 SF is also proposed as a useful biomarker of Aβ accumulation. 18 Nevertheless, Hcy is a known independent risk factor for the development of AD, as is low serum folate (SF). 19 The Framingham Study 20 identified plasma Hcy levels greater than 14 µmol/L to be associated with a doubled risk of developing AD. However, despite consistent reductions in Hcy from various formulations of B-vitamin supplementation, controversy remains surrounding their ability to prevent or reduce symptoms of cognitive decline or incidence of dementia, due to the heterogeneity of study design and the likelihood that B-vitamins may benefit those with low blood levels prior to the intervention. 21,22 Noticeable cognitive decline associated with AD presents several years after the onset of the related pathologies, and it is plausible that B-vitamin interventions in older adults are implemented too late to offer protection against further decline or even symptomatic relief, due to the damaging effects of decades of elevated plasma Hcy and nutritional deficiency. 21,23 Apolipoprotein E (APOE) plays an integral role in the brain through its support of synaptic plasticity, cholesterol metabolism, and the management of neuroinflammation. 24 The Apolipoprotein E ε4 (APOE4) allele is the most common known genetic risk factor for AD, providing 3-fold increased odds in individuals with one copy and an approximately 15-fold increase in those with two copies. 25 The allele has been reported to lower the age of onset of AD, 24 is an established risk factor for coronary heart disease, 24 and is a genetic indicator of reduced life expectancy. 26 A lower concentration of APOE is associated with impaired clearance of Aβ from the brain, with even more pronounced effects in APOE4 carriers with AD. 27 Lower plasma APOE concentration is also associated with smaller hippocampal size in individuals with AD, particularly in APOE4 carriers. 28 The effect of diet and nutrient intake on APOE levels is not currently well defined, with dietary interventions such as the Mediterranean diet (MD) reporting mixed results in APOE4 carriers. 29,30 In APOE4 carriers, adherence to the MD (traditionally rich in B-vitamins) is associated with better cognitive performance compared to a contemporary Western diet. 30 However, in a study of executive function, findings suggested that an MD-based intervention may not be as successful in APOE4 carriers compared with non-carriers. 29 Higher overall dietary energy intake increases the risk of AD in APOE4 carriers, 31 while cognitive performance and hippocampal APOE can be moderated by dietary fat intake. 32 As APOE4 carriers may have a genetic disposition to increased fatty acid mobilization and utilization, significant questions remain surrounding the optimal dietary recommendations for APOE4 carriers.
33 The relationship between one-carbon metabolism, B-vitamin status, and APOE4 genotype in AD has only recently started to receive significant attention. 34 As APOE4 forms a weak complex with Aβ that may result in Aβ accumulation, 35 investigation of a potential increased AD risk associated with APOE4 and poor B-vitamin status is warranted. To date, several studies have failed to find an increased risk of cognitive dysfunction in APOE4 carriers relative to plasma thiol status. [36][37][38] However, the association between low serum B12 status and impaired cognition in APOE4 carriers has been established. [38][39][40] The present study aimed to investigate the role of biomarkers of B-vitamin status, including Hcy, and dietary B-vitamin intake in sporadic AD relative to APOE4 genotype in a case-control study of healthy older Australian adults and individuals clinically diagnosed with AD. The suitability of biomarkers of B-vitamin status and dietary B-vitamin intake as diagnostic covariates was also assessed. Study design and subject details The study was designed as a pair-matching case-control study with the cases and controls recruited from two distinct cohort studies. Between 2007 and 2008, 126 older adults (73 females and 53 males) aged between 65 and 83 years who resided in the New South Wales (NSW) Central Coast region of Australia were recruited as part of a comparison pilot study for retirement living. The AD cohort consisted of 63 individuals (35 females and 28 males; mean age 77.1 ± 5.3 years), recruited over a similar time period for a folate-related study and clinically diagnosed with AD using the criteria set by the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA). The healthy control cohort consisted of 63 community-dwelling individuals (38 females and 25 males; mean age 76.9 ± 4.7 years) from a retirement village in a high socio-economic area. Each control subject was matched to a dementia case based on age (between 65 and 83 years, ±1 year). To confirm that the two groups were matched by age and differed on MMSE score, Student's t-test was conducted. The samples were selected randomly from each of the cohorts, and individuals were excluded if they fell outside the age range or if a suitable matching control subject was not identified. Matching by gender was attempted, but was not possible because of the differences in sex distribution between the two cohorts and the size of the cohorts. The AD group, described above, received a diagnosis from practicing neurologists associated with the study during visits to NSW Central Coast clinical practices. All participants had completed at least seven years of formal education. Ethics approval Ethics approvals from the Northern Sydney Central Coast Health Committee (approval numbers 04/19 & 06/224 for the control and AD groups, respectively) and the University of Newcastle Human Research Ethics Committee (approval numbers H-782-0304 & H-2008-0418 for the control and AD groups, respectively) apply. The University of Newcastle and the University of Canberra Human Research Ethics Committees had formal reciprocal arrangements for approval to use study data and samples. Cognitive testing During the clinical assessment, all participants completed a face-to-face MMSE, 41 self-completed a Hospital Anxiety and Depression Scale (HADS) score, 42 and the neurologist obtained a brief medical history.
Written informed consent for participation in this study, for participants who scored less than 24 on the MMSE, was obtained from a registered proxy who was assessed as having the necessary cognitive capacity. 41 Dietary intake of B-vitamins All participants completed an estimation of daily nutrient intake via an interviewer-administered, validated food frequency questionnaire (FFQ), 23,42,43 adapted from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) version. 44 During the interview, participants disclosed current supplement use, if any. If a participant was unable to provide adequate responses, the participant's carer was asked to provide the food intake information. The FFQ data were analyzed using Foodworks TM v3.02 Professional software (Xyris Software, QLD, Australia). This software incorporates most food items consumed by Australians at the time of data collection. It is important to note that all data and samples in this study were collected before mandatory folic acid fortification was introduced in Australia in late 2009. Blood collection and processing Blood was collected (approximately 20 mL) from each participant by phlebotomists following an overnight fast of at least 10 hours' duration. Plasma samples were collected in lithium heparin tubes and serum samples were collected in tubes containing a clot activator. Samples were processed and stored at −80 °C until analysis. 43 Biochemical assays Total plasma thiol levels (Hcy, Cys, Cys-Gly, and GSH) were determined by high-performance liquid chromatography (HPLC) with a fluorescence detector after 4-fluoro-7-sulfobenzofurazan ammonium salt (SBD-F) derivatization at the Molecular Nutrition Laboratories at the University of Newcastle (Ourimbah, NSW, Australia). 42,45 Red blood cell folate (RBCF), SF, and serum B12 were measured using a standardized automated Access Immunoassay System as part of routine analysis at either the Institute of Clinical Pathology and Medical Research (ICPMR) at Westmead Hospital (Sydney, NSW, Australia) or the Gosford Hospital Pathology Laboratory (Gosford, NSW, Australia). APOE4 analysis was performed following the manufacturer's instructions at the University of Canberra (Canberra, ACT, Australia) using commercially available enzyme-linked immunosorbent assay (ELISA) kits to determine plasma APOE (SKU: ab20874, Abcam, Cambridge, United Kingdom) and serum APOE4 (SKU: K4699-100, Biovision, Milpitas, CA, United States) concentrations. Serum samples that fell within the detection range were determined to be positive for the APOE4 genotype. Each kit was verified using a sample of a known APOE genotype collected before analysis by a qualified nurse. The fresh sample was immediately processed and analyzed using standardized procedures and stored appropriately at −80 °C for assessment of inter- and intra-assay variability. Inter- and intra-assay coefficients of variation for both kits were less than 10%. Statistical analysis Power calculations were based on an a priori statistical power analysis. A sample size of 59 AD patients and 59 age- and sex-matched healthy subjects (+10% for missing values) was established as adequate in order to evaluate two-sided odds ratios equal to 1.20 and differences of at least 20% in primary (MMSE) and secondary (Hcy and SF levels) outcomes, achieving statistical power greater than 0.80 at the 0.05 probability level (p-value).
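For readers who want to reproduce this kind of a priori calculation, the following minimal Python sketch (our own illustration using statsmodels; the standardized effect size of 0.52 is an assumed value, roughly what 59 participants per group can detect at 80% power, and is not a figure taken from the paper) shows how a per-group sample size of 59 relates to 80% power for a two-group comparison.

```python
from statsmodels.stats.power import tt_ind_solve_power

# Power achieved with 59 participants per group for an assumed standardized
# effect size of 0.52 (two-sided independent-samples comparison, alpha = 0.05).
power = tt_ind_solve_power(effect_size=0.52, nobs1=59, alpha=0.05,
                           ratio=1.0, alternative='two-sided')
print(round(power, 2))       # approximately 0.80

# Conversely, the per-group sample size needed to reach 80% power.
n_per_group = tt_ind_solve_power(effect_size=0.52, power=0.80, alpha=0.05,
                                 ratio=1.0, alternative='two-sided')
print(round(n_per_group))    # approximately 59
```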
All variables were examined prior to analysis to determine suitability for parametric or non-parametric methods using histograms and both the Kolmogorov-Smirnov and Shapiro-Wilk tests of normality. Descriptive statistics for normally distributed continuous variables are reported as mean ± standard deviation, while non-normally distributed variables are reported as median values (1st, 3rd quartiles). Student's t-test for independent samples was used to evaluate differences between groups for normally distributed variables, and the Mann-Whitney test was used for non-parametric variables. The chi-square test of independence was performed to examine the association between AD status and MMSE categories. A receiver operating characteristic (ROC) curve was used to test the discriminatory power of variables and the area under the curve (AUC) of biomarkers and dietary intake relative to AD and APOE4 genotype in the study sample. Predictor values based on Swets 46 distinguish predictive accuracy as: 'low' (0.500 < AUC ≤ 0.700); 'moderate to high' (0.700 < AUC ≤ 0.900); and 'very high to perfect' (0.900 < AUC ≤ 1.00). The Youden index was calculated to determine the optimal cut-off points. Due to the variability of the number of decimal points reported in the biochemical assay results, we presented numerical data as suggested by Cole 47 throughout this manuscript. The level of significance was defined as alpha < 0.05 and no adjustments were made for multiple comparisons. All statistical analysis was performed using IBM SPSS version 23.0 (Armonk, NY: IBM Corp). Results The total sample included 126 individuals and consisted of 38 females in the healthy control group and 35 in the AD group. A flowchart of the two groups is presented in Figure 1. The median age was 78.0 years (74.0, 81.0) in the control group and 79.0 years (73.0, 82.0) in the AD group. Analysis of group differences between (n = 63) individuals clinically diagnosed with AD and (n = 63) community-dwelling controls free of cognitive impairment indicated that the clinical status of AD was significantly associated with HADS depression score. Dietary intake of B-vitamins Overall, there was no significant difference in dietary B-vitamin intake between the control and AD groups (p > 0.05) except for dietary folate intake in females. Total folate intake was significantly lower (p < 0.001) in females diagnosed with AD (321 ± 87.6 µg) compared with female controls (439 ± 172 µg). Natural folate intake was also higher (p = 0.007) in the control group (308 ± 103 µg) compared with the AD group (245 ± 78.1 µg). Also, levels of estimated dietary intake of B-vitamins (Table 2) did not vary between groups or sub-groups for vitamins B1 (thiamine), B2, B3, and synthetic folate (all, p > 0.05). Plasma thiols and other blood biomarkers The plasma thiol and blood biomarker analysis revealed differences between groups in SF, Hcy, and CysGly (Table 3). SF concentration was significantly lower (p = 0.008) in all AD participants (18. Biomarkers did not vary between groups or gender for serum B12, RBCF, Cys, GSH, and plasma APOE expression (all, p > 0.05). Plasma thiols and blood biomarkers by APOE genotype In total, 55 participants (43.7%) expressed the APOE4 genotype and 37 participants (29.4%) were clinically diagnosed with AD and possessed the APOE4 genotype. After examining differences between groups for APOE genotype and gender, there were no differences for serum B12, RBCF, Cys, CysGly, and GSH (all, p > 0.05) (Table 4).
The SF was significantly lower in males possessing APOE4 when compared to males without (APOE4−). Predictor value of plasma thiols, blood biomarkers and B-vitamin status by receiver operating characteristic curve To evaluate the predictor value of the biomarkers using our case-control data set, we utilized ROC curves to determine the diagnostic potential of the covariates relative to AD status. Using these models, we estimated the threshold for diagnostic ability and predictor value and presented the statistically significant values (p < 0.05) in Table 5. The cut-off predictor values of significant covariates are presented in Table 5. The analysis revealed that plasma Hcy levels greater than 11.0 µmol/L were associated with an AD diagnosis (AUC [95% CI]: 0.629, p = 0.012). This association was not significant in the APOE4+ with AD sub-group. SF below 22.0 nmol/L in males was associated with a moderate to high ability to predict an AD diagnosis (AUC [95% CI]: 0.735, p = 0.003) (Figure 2A). A similar ability of SF was found in the APOE4+ with AD sub-group (AUC [95% CI]: 0.819, p = 0.002) (Figure 2B). Different effects of dietary intake and risk were seen between genders: in females, average daily dietary intake of total folate, natural folate, and vitamin B2 was associated with the likelihood of an AD diagnosis. Total folate intake of under 336 µg/day increased the chance of an AD diagnosis (AUC [95% CI]: 0.736, p = 0.001) (Figure 2C). Natural folate intake of less than 270 µg/day also increased the likelihood of an AD diagnosis (AUC [95% CI]: 0.687, p = 0.011). In addition, vitamin B2 intake of under 1.12 mg/day was associated with an increased likelihood of AD diagnosis (AUC [95% CI]: 0.661, p = 0.028). These covariates were not significantly associated with an AD diagnosis in the APOE4+ with AD sub-group (all, p > 0.05). Discussion This study represents an exploration of the potential associations between biomarkers of B-vitamin status, dietary intake of B-vitamins, and APOE4 genotype in an Australian AD cohort. The data suggested that biomarkers of B-vitamin status, particularly Hcy, Cys, and SF, are significantly associated with AD and are potential diagnostic tools in an elderly cohort. Furthermore, both increased Hcy levels and APOE4 expression were strongly associated with AD. This is consistent with the findings of Miwa et al., 49 which show that both Hcy and APOE4 contribute to the development of AD. Novel findings surrounding sex differences, folate levels, and APOE4 were also identified. Reduced dietary folate intake in elderly females with AD may function as a possible predictor for AD, regardless of APOE genotype. In males with AD, SF was lower compared with healthy males and could be used as a predictor variable for AD. The link between increased plasma Hcy levels and AD is well established; however, to our knowledge, only a few studies 39,50,51 have investigated the relationship of Hcy with APOE4 in humans. Our study identified that 43.7% of the total participants possessed the APOE4 genotype and 29.4% of all participants had both the APOE4 genotype and AD. While these figures may be considered high relative to worldwide prevalence, 25 an Australian study 52 previously reported a frequency of 53% of individuals (n = 80) with both the APOE4 genotype and AD in a clinic-based sample. Increased plasma Hcy levels were found across groups and were associated with AD status but not the APOE4 genotype.
Specifically, plasma Hcy levels over 11.0 µmol/L were identified as a significant predictor variable for AD, but not in individuals with both AD and the APOE4 genotype. This Hcy threshold aligns well with previous findings that low dietary B-vitamin intake and high Hcy levels may be used as predictors of cognitive decline. 11,15 APOE4 is the most widely accepted and potentially most potent genetic risk factor for AD, while hyperhomocysteinemia may be a significant biomarker for those without the APOE4 genotype. 17 Minagawa et al. 17 proposed that the thiol component present in plasma Hcy interacts with Cys residues of the more abundant Apolipoprotein E ε3 (APOE3), yet it does not appear to interact with APOE4. The interaction between Hcy and APOE3 interferes with dimerization and impairs high-density lipoprotein (HDL) production. As HDL function is typically enhanced in carriers of APOE3 compared to those with APOE4, 53 this mechanism may explain an increased risk of elevated Hcy in APOE3 carriers. In addition, plasma Hcy may decrease APOE expression, 54 potentially contributing to the reduced clearance of Aβ independent of APOE genotype, which could add to oxidative stress and increase the risk of AD. These potential mechanisms provide plausible explanations for the finding in this analysis, proposing that Hcy represents an important tool for assessing AD regardless of APOE genotype. These data support independent associations of plasma Hcy levels and APOE4 with AD, alongside sex differences explored as predictors of an AD diagnosis. The association between inadequate total dietary folate intake and AD risk is well established. 11,19,55,56 In this current study, the predictor abilities of total folate (over 336 µg/day), natural folate (over 270 µg/day), and vitamin B2 (over 1.12 mg/day) were associated with a reduced chance of an AD diagnosis in females. However, these associations were not significant in individuals with both AD and the APOE4 genotype. Further findings emphasize the importance of the inclusion of dietary natural folate and vitamin B2 as a protective strategy for AD in females, as it may offer protection in the prodromal phase of disease development. The only biomarker in our analysis to reveal moderate-to-high predictor value was SF in males; however, our findings support the importance of the transsulfuration pathway in AD etiology. Our sample confirms elevated Hcy as an important risk factor for AD, and lower Cys was also found to be a potential predictor variable. Cys is a component of glutathione, an important endogenous antioxidant in the brain, and while no differences were observed with GSH in our sample, low Cys has been linked to mortality and frailty in older adults. 57 The conversion of elevated Hcy to Cys is dependent on dietary intake and hepatic conversion of vitamin B6 into pyridoxal-5′-phosphate, and deficiencies in B6 can lead to increased Hcy. However, we do not have vitamin B6 data to further assess this relationship. In this study, male participants diagnosed with AD were estimated to be consuming 468 µg/day of total folate compared with 435 µg/day in healthy controls, yet SF was lower in males with AD. In men, SF greater than 22.0 nmol/L was a diagnostic predictor of a reduced likelihood of AD. No differences in RBCF were observed (p > 0.05) for either gender. The difference in folate consumption between males and females with AD may at least be partially explained by higher energy intake in the males.
Although reduced SF in males is not a unique finding, 11 more research is required to explore the potential sex differences between total folate intake and SF levels, as well as the risk of lower Cys. Mandatory fortification of key foods with synthetic folate, together with other factors such as education, may be contributing to the decreasing AD incidence in more developed, but not developing, countries. 58 However, countries such as the United Kingdom and New Zealand, and many developing countries, are still considering the merits of mandatory folic acid fortification. In Australia, folic acid fortification commenced in 2009 and was shown to reduce plasma Hcy levels and the incidence of hyperhomocysteinemia while increasing SF and RBCF. 23 Beckett et al. 23 also found increased SF and RBCF levels; however, the authors attributed this to the effect of synthetic folate fortification and not natural folate consumption. Long-term supplementation of older adults with B-vitamins has been shown to reduce Hcy levels, albeit there is mixed evidence surrounding the delay of cognitive decline. 21 However, one RCT has shown a benefit of B-vitamin supplementation in individuals with mild cognitive impairment with baseline Hcy above 11.3 µmol/L, 59 in accordance with our threshold value for the likelihood of an AD diagnosis with Hcy above 11.0 µmol/L. Benefits of folic acid fortification may be due to increased consumption during the prodromal phase of AD development, as both short- and long-term trials of folic acid supplementation in younger individuals have reported improved cognitive outcomes. 60,61 Therefore, it is plausible that there could be a precise therapeutic dose of folic acid necessary to prevent cognitive decline in older adults. However, larger-scale RCTs in at-risk individuals are required before general supplementation recommendations are considered, as too much folate may be detrimental. 56 Combined data from three cohorts have reported poorer cognitive function in the elderly with low B12 status and high RBCF, further complicating the potential of a recommended intake of B-vitamins in at-risk individuals. 62 The current study included a case-control design and ROC curve analysis of relevant biomarkers of B-vitamin status and dietary intake of B-vitamins. Implementation of ROC curve analysis is considered ideal for case-control studies, particularly when comparing disease susceptibility genes in AD, such as APOE. 63 This study revealed a relatively high number of individuals expressing the APOE4 genotype, making it suitable for a robust comparative analysis. Biomarkers in this study were blood-based, and the majority of them can be tested at pathology labs in Australia. The use of CSF-based biomarkers can be relatively reliable in predicting AD diagnosis but necessitates a lumbar puncture that is considered too invasive for routine disease screening. However, blood work is routinely practised, cost-effective, and well tolerated in the community as a basis for health management and screening for disease. Hence, the ability to establish AD risk using blood biomarkers is a promising approach for early detection and screening for AD. APOE4 is a strong genetic risk factor, yet not all individuals who develop AD carry the APOE4 allele, particularly those with only one copy. Therefore, studies that investigate multiple biomarkers may allow increased precision in the prediction of AD risk and could benefit individuals and communities alike.
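As a concrete illustration of the ROC and Youden-index procedure used for the predictor analysis above, the following minimal Python sketch (our own illustration; the homocysteine-like values are synthetic and are not the study data) computes an AUC and the Youden-optimal cut-off for a binary case-control outcome.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
hcy_controls = rng.normal(10.0, 2.0, 63)   # hypothetical control distribution (umol/L)
hcy_cases    = rng.normal(12.0, 2.5, 63)   # hypothetical AD distribution (umol/L)
y     = np.r_[np.zeros(63), np.ones(63)]   # 0 = control, 1 = AD
score = np.r_[hcy_controls, hcy_cases]

auc = roc_auc_score(y, score)
fpr, tpr, thresholds = roc_curve(y, score)
youden = tpr - fpr                         # Youden index J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]     # threshold that maximizes J
print(f"AUC = {auc:.3f}, Youden-optimal cut-off = {cutoff:.1f} umol/L")
```

By the Swets categories quoted above, an AUC in the 0.6-0.7 range would be classed as 'low' predictive accuracy, consistent with treating single biomarkers as screening aids rather than stand-alone diagnostics.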
Interestingly, a recent small study (n = 17) using a ROC curve analysis found SF and red blood cell hemoglobin to be useful biomarkers of Aβ accumulation in the brain, with more sensitivity and specificity than APOE genotype or folate alone. 18 Future research targeting a combination of biomarkers, namely APOE, Hcy, and SF, in larger samples is required to support a clinical diagnosis of AD using a simple blood test. Future studies should also include an analysis of omega-3 fatty acid status due to a possible beneficial synergy with B-vitamins in mild cognitive impairment. 64 One of the limitations of the study is the use of dietary data estimated from an FFQ, as this method is prone to underestimation of energy intake and omission of unhealthy eating habits. 65 The FoodWorks TM Professional v3.02 software contains historical databases from before the mandatory folate fortification of wheat flour in Australia and provides incomplete information on the B6 and B12 content of many foods. For this reason, these values were not analyzed as dietary intake. However, the values of serum B12 in our sample did not differ between groups and were above the recommended clinical thresholds set in Australia. In females, folate intake below the recommended dietary intake is associated with an increased risk of mild cognitive impairment and probable dementia, but no such association was identified with B12 intake in a large prospective longitudinal cohort. 55 However, the analysis of our sample allowed for the estimation of dietary folate intake before the mandatory fortification of folate was introduced in Australia. Our study provides valuable insight into folate status for countries considering mandatory folic acid fortification. Our study was also unable to determine how many copies of the APOE4 allele each participant possessed. Therefore, valuable analysis discriminating between individuals possessing one or two copies of the APOE4 allele was not possible. We were also unable to match cases and controls by sex; however, the distribution of sexes is similar. Furthermore, only limited sociodemographic data were available for this retrospective analysis. Finally, the retrospective, cross-sectional nature of this study means it can only identify associations and, as previously stated, a larger prospective study would be required to confirm its findings. In conclusion, our findings do not support an association between biomarkers of B-vitamin status and dietary intake of B-vitamins, relative to APOE4 genotype, in a case-control study of healthy individuals and individuals clinically diagnosed with AD. We have uncovered associations that may aid in the estimation of an AD diagnosis through the use of plasma Hcy levels, SF levels in males, and dietary intake of folate and vitamin B2 in females. Future studies using larger sample sizes may aid in further defining these relationships and their potential role in the screening of AD. Take away points In this case-control study of elderly Australians, the presence of the Apolipoprotein E ε4 genotype was not associated with B-vitamin biomarkers of Alzheimer's disease. Elevated blood homocysteine, low cysteine, and low serum folate were associated with the likelihood of an Alzheimer's diagnosis. Lower dietary folate intake in females was associated with the likelihood of an Alzheimer's diagnosis.
The effect of fortification of the Australian diet with folate (which began after these data were collected) on dietary folate intake, serum folate levels, and subsequent Alzheimer's risk warrants further study.
6,871.2
2019-03-29T00:00:00.000
[ "Medicine", "Biology" ]
Individual Microparticle Manipulation Using Combined Electroosmosis and Dielectrophoresis through a Si3N4 Film with a Single Micropore Porous dielectric membranes that perform insulator-based dielectrophoresis or electroosmotic pumping are commonly used in microchip technologies. However, there are few fundamental studies on the electrokinetic flow patterns of single microparticles around a single micropore in a thin dielectric film. Such a study would provide fundamental insights into the electrokinetic phenomena around a micropore, with practical applications regarding the manipulation of single cells and microparticles by focused electric fields. We have fabricated a device around a silicon nitride film with a single micropore (2–4 µm in diameter) which has the ability to locally focus electric fields on the micropore. Single microscale polystyrene beads were used to study the electrokinetic flow patterns. A mathematical model was developed to support the experimental study and evaluate the electric field distribution, fluid motion, and bead trajectories. Good agreement was found between the mathematical model and the experimental data. We show that the combination of electroosmotic flow and dielectrophoretic force induced by direct current through a single micropore can be used to trap, agglomerate, and repel microparticles around a single micropore without an external pump. The scale of our system is practically relevant for the manipulation of single mammalian cells, and we anticipate that our single-micropore approach will be directly employable in applications ranging from fundamental single cell analyses to high-precision single cell electroporation or cell fusion. Introduction The ability to precisely manipulate single microparticles and cells is important in many micro- and nano-scale fluidic devices [1][2][3]. Electrokinetic transport based on electroosmosis (EO), dielectrophoresis (DEP), and electrophoresis (EP) is a widely used manipulation technique in microfluidics due to its implicit simplicity, low cost, and ease of fabrication [4][5][6]. EO is induced by ionic cloud migration in response to electric fields that are applied tangentially to an electrode surface [7]. Electroosmotic micropumps (EOP) can create constant, pulse-free flows in low Reynolds number flow (in which a traditional external pump system may work inefficiently) without the requirement of moving parts [4]. The flow rates and pumping pressure of EOPs have a quick and precise response to electric input, making them suitable for use with microanalysis systems [6]. DEP occurs when a polarizable particle is suspended in a spatially nonuniform electric field [8]. If the particle moves in the direction of an increasing electric field, the behavior is referred to as positive DEP (pDEP), while if it moves away from the high electric field regions, it is known as negative DEP (nDEP). Dielectrophoresis can be used to manipulate, transport, separate, and sort different types of particles based on the frequency-dependent relative polarizabilities of the particle and medium [9][10][11]. To date, the vast majority of DEP-based systems can be classified as electrode-based dielectrophoresis (eDEP), insulator-based dielectrophoresis (iDEP) [12], and light-induced DEP [13][14][15].
In iDEP chips, where the gradient of the electric field is formed by geometrical constrictions within insulating substrates instead of metallic microelectrodes, the electrodes are positioned remotely and do not contact the particles or cells directly. The most common designs for iDEP feature two-dimensional (2D) microchannels connected to inlet and outlet liquid reservoirs and exposed to nonuniform electric fields. Three-dimensional (3D) variants have garnered increasing attention due to their lower voltage requirements, reduced Joule heating, and superior extensibility [12]. One critical configuration of 3D iDEP systems makes use of porous membranes as the insulating structure. Several relevant studies have reported the trapping and agglomeration of a wide array of particles, ranging from biomolecules to cells, using dielectrophoresis and porous membranes. For example, Kovarik and Jacobson employed a track-etched nanomembrane with conical pores (130 nm in diameter at the tip, 1 µm in diameter at the base, and 10 µm long) for the trapping of polystyrene particles and Caulobacter crescentus cells [16]. Cho et al. report dielectrophoretic trapping of E. coli cells in a membrane-based system (a SU-8 photoresist with a thickness of 200 µm) [17]. However, despite significant interest in porous membrane-based iDEP techniques, few prior studies have addressed the fundamental local electrokinetic behavior around a single micropore in a thin dielectric film. In this study, we fabricate a silicon nitride film with a single micropore (2-4 µm in diameter), assembled in an axisymmetric 3D chamber which locally focuses the electric field on the micropore. We have used this device to study the flow pattern of single microscale polystyrene beads in relation to an electric-field-focusing micropore. Our experiments demonstrate that the combination of electroosmotic flow and DEP forces induced by direct current has significant potential as a means to trap, agglomerate, repel, and rotate the beads without an external pump. A finite element method (FEM) based mathematical model was developed in support of the experimental study, to predict the particle movements around a single micropore as a result of electroosmotic flow and DEP forces. We find that the mathematical model can capture the experimental results with high fidelity. The scale of our system is practically relevant for the manipulation of single mammalian cells, and we anticipate that our single-micropore approach will be directly employable in applications ranging from fundamental single-cell analyses to high-precision single cell electroporation or cell fusion. Micro-Pore Chip Fabrication The experiments were performed on a dielectric film with a single micropore. Low-stress silicon nitride (Si3N4) was chosen as the film material because it is a well-established mask material for typical silicon etchants, and is optically translucent under microscopy [18,19]. Figure 1 shows the step-by-step fabrication process of the dielectric film with the micropore. This dielectric film is the core component of the chip used in this study. The process begins with a <100> n-type, single-side polished, single-crystal silicon (SCS) wafer. Low pressure chemical vapor deposition (LPCVD) was used to deposit a 1.0 µm, low stress silicon nitride layer on each side of the wafer (Figure 1A). Then, the photoresist was spun on the polished side of the wafer, and the micropore was patterned with a mask aligner (Karl Suss MA6 Mask Aligner) (Figure 1B).
The patterned micropore was etched through the silicon nitride film by a plasma etcher (Lam6 Oxide Rainbow Etcher) (Figure 1C). Typical diameters of the micropore range from 2.2 to 4 µm (Figure 1J). After stripping off the photoresist left on the polished side of the wafer, a new layer of photoresist was deposited and patterned on the unpolished side of the wafer to open a window (1.31 mm × 1.31 mm) for the potassium hydroxide (KOH) etch step (Figure 1D). Then, the unpolished side window was opened with plasma etching (Figure 1E). Once the micropore and the unpolished side window were patterned on the silicon nitride layer, the wafers were dipped into a 24% v/w KOH solution at 80 °C in order to completely etch the exposed silicon (Figure 1F). A well was formed originating from the unpolished side and terminating at the LSN film. Finally, a 0.1 µm silicon dioxide (SiO2) layer, which serves as the electrically insulating layer between the silicon wafer and the liquid, was thermally grown over the exposed silicon on the well's side wall after the KOH etching (Figure 1G) [18]. Figure 1H,I show pictures of the polished side and unpolished side of the diced chip. Figure 1J shows a scanning electron microscope (SEM) picture of a typical micropore. Experimental Set-Up Figure 2A shows a schematic illustration of the experimental setup, and Figure 2B shows the corresponding pictures. The chip (polished film side face down) is sandwiched between two PDMS hollow discs, and sealed with indium tin oxide (ITO) coated glasses (Nanocs, NY, USA) at the ends of the discs, forming two chambers with a diameter of 3 mm and a height of 2 mm.
The bottom chamber is filled with deionized water and the top chamber is filled with an aqueous suspension of diluted polystyrene (PS) beads (0.1% w/w, 10 µm in diameter, Sigma-Aldrich Co., St. Louis, MO, USA). The conductivities of the top and bottom liquid, measured by a conductivity meter (Elite PCTS tester, Thermo Fisher Scientific, Waltham, MA, USA), were found to be similar (2.3 × 10−4 S/m). The whole device is placed on an insulated slide and observed on an inverted microscope. The ITO coated glasses were connected to a waveform generator (Model WW1072, Tabor Electronics) as the power supply. A mounted microscope camera, connected to a computer, recorded the movement of the particles during the experiment. We employed DC current; however, no appreciable electrolytic gas generation was observed during the experiment, probably because of the low ionic content of the deionized water and the short duration of the experiments. The measured current is about 139 nA when 10 V was applied. Theoretical Estimation for the Movement of the Beads To better understand the movement of the PS beads during experimentation, we developed a three-dimensional cross-section FEM model using COMSOL Multiphysics. The electric field distribution, fluid flow, and particle trajectories in the liquid were calculated. As Figure 3A shows, the Si3N4 film (consisting of boundaries 5, 6, 7) separates the top chamber (well) from the bottom chamber (under the film), which were both filled with DI water. The two chambers were connected via the micropore, which in the model was assigned a diameter of 4 µm. The parameters used in this study are listed in Table S1 of the Supplementary Materials. Figure 3B shows the FEM model mesh.
Figure 3C shows the mesh of the film from the bottom view and Figure 3D shows the magnification of the mesh around the micropore. The position of the pore in the model is (0 mm, 0 mm, −0.528 mm). Electrical Field Distribution The electric field distribution in the geometry of Figure 3A was solved for two boundary conditions. Electrical potentials of either 10 V or −10 V were applied on the surface of the top electrode (boundary 2). Ground was set on the boundaries marked 10. The remaining boundaries were insulated. The conductivity of the DI water used in the experiment was measured to be 2.3 × 10−4 S/m, and this value was also used in the mathematical model. The governing equation is the conservation of current, ∇·J = 0, where ∇·( ) is the divergence operator and J represents the local current density vector. The current density only has the conductive component at steady state and is given by J = σE, where E represents the local electric field and σ is the conductivity. The electric field is linked to the potential field, U, by the relationship E = −∇U. The field equation is solved for the geometry and electrode locations in Figure 3A. The Fluid Flow Model The fluid flow was also calculated. The fluid motion is governed by the incompressible Navier-Stokes equation, ρ(∂u/∂t + (u·∇)u) = −∇p + η∇²u together with ∇·u = 0. Here, η refers to the dynamic viscosity (kg/(m·s)), u is the velocity (m/s), ρ equals the fluid density (kg/m3), and p denotes the pressure (Pa). Most solid surfaces in contact with an electrolyte acquire a surface charge. In response to the spontaneously formed surface charge, ions accumulate at the liquid-solid interface; this arrangement, with the screening ions on the surface facing the solution, is known as an electrical double layer. When an electric field is applied, the electric field generating the electroosmotic flow displaces the charged liquid in the electrical double layer [20]. This imposes a force on the charged solution close to the wall surface, and the fluid starts to flow in the direction of the electric field. The velocity gradients perpendicular to the wall give rise to viscous transport in this direction. In the absence of other forces, the velocity profile eventually becomes almost uniform in the cross-section perpendicular to the wall [21]. Our model replaces the thin electric double layer with the Helmholtz-Smoluchowski relation between the electroosmotic velocity and the tangential component of the applied electric field [22], u_eo = −(ε_w ζ_0/η)E_t, where E_t = −∇_t U is the tangential component of the field. In this equation, ε_w = ε_0 ε_r denotes the fluid's electric permittivity (F/m), ζ_0 represents the zeta potential at the channel wall (V), and U equals the potential (V). This equation applies to all boundaries except for the entrance and the outlet.
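As a rough sense of scale for this boundary condition, the following minimal Python sketch (our own back-of-the-envelope estimate; the wall zeta potential of −50 mV is an assumed, typical value for an oxide surface, not a measured quantity from the paper) evaluates the Helmholtz-Smoluchowski slip speed at the reported maximum field of 37.3 kV/cm at the pore edge.

```python
# Helmholtz-Smoluchowski slip velocity u_slip = -(eps_w * zeta_0 / eta) * E_t
eps0   = 8.854e-12        # vacuum permittivity, F/m
eps_r  = 80.0             # relative permittivity of water
eta    = 1.0e-3           # dynamic viscosity of water, Pa*s
zeta_0 = -50e-3           # assumed wall zeta potential, V (typical oxide value)
E_t    = 37.3e3 * 100.0   # 37.3 kV/cm converted to V/m

u_slip = -(eps0 * eps_r * zeta_0 / eta) * E_t
print(f"{u_slip * 1e3:.0f} mm/s")   # on the order of 10^2 mm/s right at the pore edge
```

Because the field decays rapidly away from the pore, this value is localized to the pore edge; the bulk recirculating flow in the chambers is far slower.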
Particle Trajectories For the particles tracing the fluid flow, multiple particles (10 µm in diameter) were released from a grid position above the pore (blue dotted line in Figure 3A, X ranging from −0.5 mm to 0.5 mm, Y ranging from −0.5 mm to 0.5 mm, Z equal to −0.1 mm) with an initial velocity of zero. We set the boundary condition on the silicon nitride substrate to be sticky, so that if a particle falls onto the film, that particle can no longer move. The particles were subject to Stokes drag, dielectrophoretic, gravitational, buoyancy, and electrophoretic forces. The Stokes drag force was governed by the equation F_D = (m_p/τ_p)(u − v), where m_p is the mass of the particle, u is the velocity of the fluid, v is the velocity of the particle, and τ_p is the particle velocity response time, given by τ_p = ρ_p d_p²/(18η), in which ρ_p is the density of the particle and d_p is the diameter of the particle. The gravitational and buoyancy forces were calculated from F_g = m_p g (ρ_p − ρ)/ρ_p, where g is the gravity vector. The DEP force acting on the PS beads was calculated from F_DEP = 2π r_p³ ε_f Re[K(ω)] ∇|E|², where r_p is the radius of the PS beads, ε_f is the permittivity of the fluid, and E is the applied electric field. The Clausius-Mossotti factor of the PS beads (perfectly spherical particles), K(ω), is a ratio of complex permittivities, K(ω) = (ε_p* − ε_f*)/(ε_p* + 2ε_f*) with ε* = ε − jσ/ω, where ω is the frequency, ε is the dielectric constant, and σ is the electrical conductivity of the medium. When no frequency component is involved, DC-DEP can be estimated from the residual of this factor as the frequency goes to zero, K = (σ_p − σ_f)/(σ_p + 2σ_f), where σ_p and σ_f are the real conductivities of the particle and fluid, respectively [24]. In our experiment, σ_p is much smaller than σ_f, which will induce negative dielectrophoresis. The electrophoretic force was calculated from the particle charge implied by its zeta potential, which was set to −40 mV [25]. Figure 4A shows the electric field distribution in DI water when +10 V was applied to the top electrode surface (boundary 2) and the bottom surface was grounded. A higher magnification of the electric fields is shown in the right panel of Figure 4D. It shows a typical electric field distribution in the fluid near the pore, which has a maximum electric field of 37.3 kV/cm at the edge of the pore. The electric fields decrease radially from the edge of the pore (the point of singularity) outward, similar in effect to those featured in our previous studies [26,27]. In other words, the micropore locally focuses the electric field. Figure 4B,C show the corresponding fluid flow for +10 V and −10 V, with magnified views near the pore in Figure 4E,F. As Figure 4B reveals, the fluid above the dielectric film flows from the sides towards the pore's edge but is then pushed vertically upwards as it approaches the center of the pore. The fluid motion in the bottom chamber is in the opposite direction to that in the top chamber. When a voltage of −10 V was applied on the top surface (Figure 4C), the fluid above the center of the pore is pulled downward to the pore and then flows from the pore's edge to the sides. A detailed look at the fluid motion at the edge of the pore (Figure 4E,F) indicates that the maximum velocity is found at the edge of the pore, corresponding to the location of the maximum electric field in Figure 4B,C. According to our calculation, the Joule heating induced temperature rise is only 0.2 °C when +10 V was applied, so flow induced by Joule heating was determined to be negligible compared to the electroosmotic flow.
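To give a feel for the relative magnitudes of these force terms, the following minimal Python sketch (our own order-of-magnitude illustration; the bead conductivity, the relative slip speed used for the drag estimate, and the field-gradient length scale are assumptions, not values from the paper) evaluates the DC Clausius-Mossotti factor and compares an nDEP force near the pore edge with the Stokes drag on a slowly moving bead.

```python
import numpy as np

r_p     = 5e-6                 # bead radius (10 um diameter), m
eta     = 1.0e-3               # water viscosity, Pa*s
eps_f   = 80.0 * 8.854e-12     # permittivity of water, F/m
sigma_f = 2.3e-4               # measured fluid conductivity, S/m
sigma_p = 1.0e-6               # assumed polystyrene bead conductivity, S/m

# DC limit of the Clausius-Mossotti factor: negative value -> nDEP (bead pushed toward low field)
K_dc = (sigma_p - sigma_f) / (sigma_p + 2.0 * sigma_f)
print(f"K_dc = {K_dc:.2f}")    # close to -0.5

# nDEP force magnitude F = 2*pi*r_p^3*eps_f*K*grad(|E|^2), with an assumed gradient
# scale |grad(E^2)| ~ E^2 / L near the pore edge (E = 37.3 kV/cm, L = 10 um)
E = 37.3e3 * 100.0
grad_E2 = E**2 / 10e-6
F_dep = 2.0 * np.pi * r_p**3 * eps_f * K_dc * grad_E2
print(f"F_DEP ~ {F_dep:.1e} N")

# Stokes drag on the same bead moving at 100 um/s relative to the fluid
F_drag = 6.0 * np.pi * eta * r_p * 100e-6
print(f"F_drag ~ {F_drag:.1e} N")
```

Far from the pore the field gradient collapses and the drag term dominates, while within a few particle radii of the pore edge the nDEP term grows rapidly; this balance is what lets the same pore either trap or repel a bead depending on the applied polarity, as described below.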
Figure 5 shows results simulating the movement of the 10 µm PS beads released from the grid above the pore. The particles were driven by drag forces, DEP forces, gravity forces, and buoyancy forces. Figure 5A-C show snapshots at different time points when the top surface electrode potential is +10 V: the particles directly above the pore are repelled from the pore by the drag force due to the upward fluid flow from the pore center. However, the particles around the pore (not directly above it) were attracted to the pore (Figure 5C). The flow was reversed when a potential of −10 V was applied to the top surface electrode. As Figure 5D-F show, particles directly above the pore were pulled towards the pore by the downward fluid flow to the pore center. However, as particles approach the pore, they are repelled (Figure 5F). The Forces Applied on the Particles In our mathematical model, the particles in the solution were affected by drag, DEP, gravity, buoyancy, and electrophoretic forces. The surface charge of the particles is dependent on various parameters such as the conductivity and pH of the solution [28]. The electrophoretic force could be ignored when compared to the drag force and DEP force (Figure 6B,E). To better understand the evolution of the particle flow pattern and the forces experienced by a single microparticle in the single-pore configuration, we show two typical results of local force and flow pattern in Figure 6. Figure 6 shows the particle's trajectory and the forces it experiences when it is released from (0.1 mm, 0 mm, −0.4 mm) with the electrode on the top having a potential of +10 V (Figure 6A-C) and −10 V (Figure 6D-F). The color of the trajectory shows the velocity of the particle.
In Figure 6A, the particle is first pushed away from the x axis and then dragged back when the x position is 0.12 mm. Next, it accelerates towards the pore and then decelerates rapidly near the pore until it stops. Figure 6B shows the magnitude of the z-component of the drag force, the DEP force, and the force of gravity acting on the particle during the whole process, as a function of Z position. Figure 6C shows the magnitude of the z-component of the drag force and the DEP force as a function of X position. Figure 6B,C show that when the particle is far away from the pore, the nDEP force is negligible compared to the drag force and the force of gravity, which means the particle's trajectory is mainly determined by the electroosmotic flow, gravity, and buoyancy forces. However, when the particle approaches the pore, the nDEP force increases steeply as the distance to the pore decreases. The nDEP force is opposite in direction to the drag force and finally brings the particle to a stop. When we released the particle from the same position but reversed the current (Figure 6D-F), the particle was first attracted to the pore in the direction of the flow, and then pushed away when it reached the position (0.075 mm, 0 mm, −0.482 mm). Close to this point, both the drag force and the nDEP force pushed the particle away. The complex pattern of forces and flows was evident from these calculations. However, it is also obvious that the voltages imposed as boundary conditions and the initial locations of particles can be used to precisely place microparticles relative to the single micropore.
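The trapping behaviour described above can be reproduced qualitatively with a much simpler model than the full FEM calculation. The sketch below is a minimal one-dimensional illustration of our own, not the authors' COMSOL particle-tracing model: because the velocity response time of a 10 µm bead is only microseconds, the motion is treated as overdamped, and the flow profile, nDEP magnitude, and decay lengths are assumptions chosen only to show how the bead stalls a finite distance from the pore.

```python
import math

# Minimal 1-D sketch along the pore axis (z = 0 at the pore), not the authors'
# COMSOL particle-tracing model. The bead velocity is taken as the local fluid
# velocity plus the drift produced by the other forces (overdamped limit):
#   v = u + tau_p * (F_ndep + F_grav) / m_p
# The flow profile, nDEP magnitude, and decay lengths are illustrative assumptions.
rho_f, rho_p, eta = 1000.0, 1050.0, 1.0e-3     # water / polystyrene, SI units
d_p = 10e-6
m_p = rho_p * math.pi * d_p**3 / 6.0
tau_p = rho_p * d_p**2 / (18.0 * eta)          # particle velocity response time, s

def fluid_w(z):      # assumed downward electroosmotic flow (toward the pore), m/s
    return -50e-6 * math.exp(-z / 100e-6)

def f_ndep(z):       # assumed repulsive nDEP force (away from the pore), N
    return 2e-11 * math.exp(-z / 20e-6)

f_grav = -m_p * 9.81 * (rho_p - rho_f) / rho_p  # net gravity + buoyancy, N

z, dt, t = 400e-6, 1e-3, 0.0                    # bead starts 0.4 mm above the pore
while t < 200.0 and z > 1e-6:
    v = fluid_w(z) + tau_p * (f_ndep(z) + f_grav) / m_p
    if abs(v) < 1e-9:                           # nDEP balances drag + gravity
        print(f"bead stalls near z = {z*1e6:.1f} um at t = {t:.1f} s")
        break
    z += v * dt
    t += dt
else:
    print(f"final position z = {z*1e6:.1f} um at t = {t:.1f} s")
```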
Experimental Results The experimental study allowed us to continuously track the motion of the particles (see Supplementary Materials, Video S1). We analyzed the recorded trajectories of the PS beads during the experiment and compared them to the mathematical model. Figure 7 shows the PS bead trajectories under the microscope when the beads were initially on the film and +10 V was applied to the top electrode. Images were extracted at t = 2, 4, 8, 10, 13, and 14 s from Video S1. The beads were attracted to the pore in succession, initially increasing in velocity before slowing down to a halt at the equilibrium position. There they formed a ring around the pore, in agreement with the result of the FEM mathematical model (Figures 5 and 6). The PS beads could not get closer to the pore, and as such did not seal it, because of the strong nDEP force. This phenomenon suggests that this approach may have significant potential for the continuous aggregation of microparticles, because the particles were simultaneously robustly captured and prevented from blocking the pore, whereas with a conventional hydrodynamic method (pump), agglomeration stops after the pore is sealed. When the potential of the top electrode was switched to −10 V, the particles already in contact with the film were repelled away from the pore (Figure 8A). This result verified the predictions in Figure 5. Upon switching the potential back to +10 V, the particles were once again attracted to the pore. Thus, by manipulating the magnitude and direction of the applied electric field, the beads could be moved to precise radial positions. It was interesting to observe that when the particles remained floating in the fluid (Figure 8B), they were observed to rotate around the pore.
As the beads rotated in the x-z plane, perpendicular to the film, they appeared to be oscillating around the pore in the x-y plane (A-A', B-B'-B''). In these experiments, a portion of the bead trajectories became out of focus as they moved beyond the depth of field. The beads continued rotating until they came into contact with the film and stopped at the equilibrium position, as Figure 5 depicts. Discussion A simple experimental technique supported by a mathematical model was used to develop a fundamental understanding of the flow patterns of microscale PS beads in a fluid flow affected by a focused DC electric field around a single micropore on a dielectric Si3N4 film. Both the experimental and mathematical analyses gave fundamental insights into the flow pattern of single noncharged microparticles around a single electric-field-focusing micropore. A wealth of interesting behavior was observed. It was particularly interesting to observe that particles aggregate near a point of force equilibrium on the dielectric film around the pore. From a practical point of view, our results suggest that in a single-micropore configuration the combined effects of DC electroosmotic flow and dielectrophoresis can be utilized to predictably translate, capture, or aggregate small microparticles. This technique eliminates the need for external hydrodynamic resources (such as a pump), and requires no moving parts. Furthermore, the requisite electrodes can be mounted at relatively large distances from the particles in question, a noted strength of iDEP-based techniques. The technique is also uniquely immune to the blockages that can hinder traditional hydrodynamic techniques, as the competing forces acting upon the particle hold it in stable mechanical equilibrium a finite distance from the micropore itself, preventing particles from blocking the pore and disrupting electrical continuity between the top and bottom of the chip. This phenomenon suggests that this technique may be equally suitable for both the manipulation and trapping of a single particle and the continuous uninterrupted aggregation of particles, which in turn may be sortable based on their dielectric properties [27]. Perhaps most importantly, the complex competing electrokinetic effects at play in this system are reliably captured by our mathematical modeling, opening the door to robust optimization of particle manipulation parameters based on desired applications.
The experiments presented herein demonstrate that a wide suite of manipulations may be achieved by the strategic combination of dielectrophoresis and DC electroosmotic flow, including attraction or repulsion of particles by control of the direct current direction when the particles are on the film, rotation of floating particles, and stable trapping or aggregation of particles at a predictable equilibrium distance from the micropore. While this study presents only proof-of-concept experimental validation of these phenomena, the modeling principles developed herein can be employed in order to further refine specific experimental manipulations as desired. The disadvantages of the present chip design include the fact that particles that stick to the film cannot be displaced by the electroosmotic flow once they are in contact with the film (Figure 8A), and the somewhat limited manipulation distance, a consequence of the decreasing strength of both the electroosmotic flow and the DEP force with increasing distance from the pore. Therefore, a microfluidic channel will be needed in future iterations of the chip to limit the distribution of particles. In the future, alternating current (AC) at different frequencies will be applied to enhance the controllability of the DEP force, potentially enabling selective attraction or repulsion of different particles. Different fluids and particles must also be tested with the chip in order to better understand how the motion of the particles may vary under differing conditions, and how the electrical properties of the particle itself may alter its responses to electrokinetic stimuli. Based on its unique design and high degree of mathematical predictability, we anticipate that this chip design may have significant potential for biological applications, including single-cell electroporation (in which a single cell can be captured on one side of the film, electroporated, and then have DNA or a protein introduced from the other side of the film), cell fusion, or any manner of fundamental single-cell analyses. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/mi12121578/s1, Table S1: The parameters used in the FEM model, Video S1: The motion of the particles.
Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas In recent years, new approaches aimed to increase the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km. Introduction The assessment of the positional accuracy of cartographic products has always been of great importance. However, nowadays it is a matter of renewed interest because of the need for greater spatial interoperability, supporting new ways of mapping based on Volunteered Geographic Information (VGI) and Spatial Data Infrastructures (SDI). The products derived from these new ways of mapping require the integration of spatial data (inputs) from sources with heterogeneous levels of quality. These levels of quality must be well understood, not only in order to develop the integration process successfully, but also to determine whether the final quality of the output data fits the users' requirements. This is why it is necessary to implement more efficient accuracy assessment procedures, which give us a fast and easy evaluation of the quality of these new cartographic products. From our point of view, this can only be achieved by increasing the levels of automation of such procedures.
Traditionally, positional accuracy has been evaluated by means of the positional discrepancies between the apparent location of a spatial entity recorded in a Geospatial Data Base (GDB) and its true (real world) location. However, the task of identifying the true location of a spatial entity by means of topographic field surveying is often not technically or economically feasible (although this will largely depend on the size and accessibility of the geographical area to be evaluated, see as an example [1]). This all decreases the final efficiency of the accuracy assessment processes. In order to overcome such inconveniences, positional accuracy could also be defined by measuring the differences between the location of a spatial entity stored in a GDB (tested or assessed data source) and its location determined by another GDB (reference data source) of higher accuracy. Thus, if the accuracy of the second GDB is high enough, then the unmeasured difference between its information and the real world location can be ignored. This way of defining positional accuracy, proposed by Goodchild and Hunter [2], has inspired the development of new approaches aimed at increasing the automation level of the accuracy assessment procedures. Such approaches are based on spatial data matching mechanisms, which thus acquire a determining role in automatically identifying homologous spatial entities between the two data sources (tested and reference). Spatial data matching is a relevant research field in Geographic Information Sciences, with many direct and indirect applications. Among them, we highlight data conflation and data quality evaluation. The term conflation is used to describe procedures related to combining geographical information of several scales and precisions, transferring attributes from one dataset to another or adding missing features [3], with the main goal of obtaining an enriched product that is "better" than the previous two [4-7]. Overall, conflation procedures are commonly used in computer science and remote sensing fields [8-11], and mainly in the cartographic updating of urban areas [12-14]. On the other hand, and following Xavier et al. [15], spatial data matching can also be used in data quality assessment approaches, such as inconsistency detection [16,17], positional accuracy [18-21], completeness [22], and thematic accuracy [23]. In the case of positional accuracy, these same authors propose the development of a web service for quality control, whose key process is a simple matching process [24-26]. Overall, it could be said that all the matching mechanisms used in quality assessment approaches share a common characteristic: all of them use objective measures for evaluating the degree of similarity between two GDBs. These measures can be classified according to the nature of the measured quantity: geometry, topology, attributes, context, and semantics [15]. In the specific case of the matching mechanisms applied to positional accuracy assessment, similarity measures employed to match elements are related to the geometric properties of spatial features [21].
On the basis of the above, in our previous studies [19,21] we proposed a matching-based methodology for automating the positional accuracy assessment of a GDB, by using another GDB as reference source and polygonal features as spatial entities on which to determine the degree of similarity (or dissimilarity) between both data sets. Polygons are delimited by closed lines, so their positional behavior can be analyzed through their boundaries using buffer-based methods. Specifically, we used buildings from urban areas, because they are a huge set of polygonal features in any GDB, and therefore the positional assessment derived from them will be deemed statistically significant. In addition, they have a wide spatial distribution in cartography and more temporal permanence than other polygonal features. The proposed matching mechanism determined a set of homologous polygons between both GDBs, using a weighted combination of geometric measures. Thus, the assigned weights among measures were calculated from a supervised training process using a genetic algorithm (GA) [27] (this process was externally evaluated by fifteen reviewers selected from a pool of internationally recognized experts who certified its robustness and quality; see [21]). Specifically, the geometric measures employed quantified the absolute location of the polygons by means of their overlapping areas and geometric properties, such as the length of the perimeter and the area of a polygon. In addition, some shape measures were employed for assessing the geometric form of polygons, such as the number of concave and convex angles, moment of inertia, and the area of the region below the turning function. This mechanism also produced a Match Accuracy Value (MAV) (see [19,21]). The MAV was obtained as a linear combination of the geometric measures computed for two homologous polygons, and the weights resulting from the supervised training phase. Such an indicator was relevant for our work, since the setting of a threshold value for it (by means of a confusion matrix and with a certain confidence level) allowed us to select only 1:1 corresponding polygon pairs among all the possible correspondences, thus avoiding the acceptance of both erroneously matched polygons (false positives) that appear in the cases of 1:n or n:m correspondences (multiple matching cases often associated with generalization processes) and unpaired polygons (null matching cases derived from completeness or updating problems). Figure 1 illustrates all the possible correspondences between polygons after applying our matching mechanism. Having obtained the homologous polygons (1:1 corresponding polygon pairs), we used two positional accuracy assessment methods, based on buffer generation on the polygons' perimeter lines: the simple buffer overlay method (SBOM) [2] and the double buffer overlay method (DBOM, originally the buffer statistics overlay method developed by Tveite and Langaas [28]). These methods allowed us to compute the displacement between two polygonal features and represent two basic and different cases because of the different relationships in the assessment: the first is a line-buffer-based option, and the second is a buffer-buffer-based option. Finally, the results obtained by applying the methods described above demonstrated the viability of the proposed approach, because they confirmed the results obtained by means of the traditional positional accuracy assessment procedures, using GPS data acquisition applied to the same geographical area [19].
However, despite all this, there are still important aspects with relation to the Automatic Positional Accuracy Assessment (APAA) of urban GDBs that have not yet been determined or addressed. Such is the case with sample size and sample distribution. Specifically, in the case of the two buffer-based methods mentioned above (SBOM and DBOM), there are no recommendations about adequate sample size. In addition, the previous studies dealing with positional accuracy assessment by means of such methods are very scarce, and employ only linear elements, such as roads or coast lines, as control elements [2,28-33]. Thus, after identifying and extracting these elements from both GDBs (tested data source and reference data source), they are manually edited and matched, so the level of automation is null. Table 1 summarizes some of these previous studies (sorted by publication date). As shown in the table, most of these authors provide the sample size employed in their work. However, none of them explain how that parameter was determined. Therefore, as in these previous studies, we consider it important to establish specific criteria as well as guidance in order to define sample size when APAA methods are employed for assessing the positional accuracy of GDBs, because this parameter might influence the uncertainty of the estimated values.
The objective of this paper is to analyse the influence of sample size in terms of uncertainty when estimating the planimetric positional accuracy of urban data (buildings) belonging to territorial GDBs, by means of APAA methods based on buffered polygons (SBOM and DBOM). The rest of the paper is divided into the following main sections: the next section presents the buffer-based methods applied and their adaptation to the line-closed case. Then the two urban GDBs used are presented, together with the positional accuracy estimation obtained from them. The following section explains the simulation process applied in order to estimate the positional accuracy for different sample sizes. Results are presented and discussed in the last section. Finally, general conclusions are presented. The Buffer-Based Methods Used and Their Adaptation to the Line-Closed Case Euclidean distance, or the Euclidean metric, is the typical measure of positional accuracy when point-to-point relations between two spatial data sets are used. In this sense, APAA methods are not an exception, as demonstrated by Ruiz-Lendínez et al.
[20]. Thus, after automatically identifying homologous points between previously matched polygons (using a metric for comparing their turning functions), these authors apply a standard based on the Euclidean distance between points to assess the positional accuracy of the tested data source. However, Euclidean distance encounters significant difficulties when applied to linear elements to determine their relative positional accuracy, because only when these elements are parallel does this measure achieve a complete meaning. Therefore, the distance between non-point features is a difficult concept; although there are several proposals developed for positional accuracy assessment using linear elements based on distance measurements (the Hausdorff distance method [29,34] and the mean distance method [35-37]), when using polygons their positional behavior and geometrical similarity can be more accurately and efficiently analyzed through their boundaries, using buffer-based methods (because of the lack of the above-mentioned difficulties). Specifically, and as already mentioned, the two buffer-based methods selected and their base references are the SBOM [2] and the DBOM [28,38]. According to [2,28,38], both methods give a quantitative assessment of the geometric accuracy of a line relative to another line of higher accuracy. In addition, they are iterative, and both the size of the first buffer w_o and the value by which it is increased, ∆w (step size), must be set on the basis of the spatial accuracy of the reference GDB (whose value is, in principle, well known), and the approximate spatial accuracy of the tested GDB (this value can be estimated on the basis of information provided by the producer [28,38]). The assignment of adequate values to the aforementioned parameters (w_o and ∆w) will allow us to achieve a fast stabilization of the distribution function, which acts as a signature of the tested GDB (Figures 2c and 3c). Finally, it must be noted that the value of ∆w may change depending on the level of detail required. Therefore, at the beginning of the buffer operation (when more detail is usually required), ∆w usually takes smaller values than at the end of it (when coarser steps are used). The Single Buffer Overlay Method (SBOM) Based on buffer generation on the line of the source of greater accuracy (Q), this method determines the percentage of the controlled line X that is within this buffer (Figure 2a). By increasing the width w of the buffer, we obtain a probability distribution of inclusion of the controlled line inside the buffer of the source of greater accuracy. The same can be done with all the linear entities in a control sample or a complete GDB, obtaining an aggregated distribution curve that shows a distribution function of the uncertainty of each database for several levels of confidence.
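As an illustration of the SBOM procedure just described, and of its adaptation to closed polygon boundaries discussed below, the following sketch uses the shapely library to compute, for one pair of homologous polygons, the fraction of the tested polygon's perimeter that lies inside a buffer of width w around the reference polygon's boundary. The coordinates are invented toy data and the function name is our own; this is not the authors' Matching Viewer implementation.

```python
# Illustrative sketch of the single buffer overlay idea using shapely (toy data;
# not the authors' Matching Viewer implementation). For a pair of homologous
# polygons, it computes the percentage of the tested polygon's perimeter that lies
# within a buffer of width w around the reference boundary, for a sweep of widths.
from shapely.geometry import Polygon

def perimeter_inclusion(tested: Polygon, reference: Polygon, w: float) -> float:
    """Fraction (0-1) of the tested boundary length inside the reference buffer."""
    buf = reference.boundary.buffer(w)
    return tested.boundary.intersection(buf).length / tested.boundary.length

# Toy building footprints: the tested polygon is the reference shifted by a few metres.
reference = Polygon([(0, 0), (20, 0), (20, 15), (0, 15)])
tested = Polygon([(3, 2), (23, 2), (23, 17), (3, 17)])

for w in (1.0, 2.0, 3.0, 5.0, 10.0):
    pct = 100.0 * perimeter_inclusion(tested, reference, w)
    print(f"buffer width {w:4.1f} m -> {pct:5.1f}% of the tested perimeter inside")
```

Sweeping the width w and aggregating this percentage over all polygon pairs in a sample is what produces the distribution curves discussed in the text.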
Originally proposed by Goodchild and Hunter [2], this method has been applied in a wide range of studies, such as the evaluation of road networks [39], accuracy assessment of conflation of raster maps and orthoimagery [40], and positional accuracy control of GDBs [32]. This method overestimates the error, because the measures of the error distances are perpendicular to the buffered line; however, the displacement can actually be more complex. In our case, this method had to be adapted to a line-closed case (polygons) (Figure 2b). This adaptation was essentially based on buffer generation on the perimeter of the polygon belonging to the source of greater accuracy, Q. After this, the percentage of the perimeter of the controlled polygon X within this buffer was computed. Finally, as in the case of linear entities, we were able to obtain an aggregate distribution function of the uncertainty for several levels of confidence when all polygons from a sample were used. The Double Buffer Overlay Method (DBOM) Originally proposed by Tveite and Langaas [28], this consists of the generation of buffers (with a width of w) around the two lines (X from the tested source and Q from the source of greater accuracy), denoted as XB and QB, respectively, and analyses the situations that arise when the buffers intersect in space (Figure 3a). Thus, four different types of areas result from the buffer and overlay operations: areas that are inside both the buffer of X (XB) and the buffer of Q (QB) (XB ∩ QB, also denoted as the common region), areas that are inside XB and outside QB ($XB \cap \overline{QB}$), areas that are outside XB and inside QB ($\overline{XB} \cap QB$), and finally areas that are outside both XB and QB ($\overline{XB} \cap \overline{QB}$, also denoted as the outer region). Although the area (XB ∩ QB) compared to the total area of XB or QB could be used as a good measure of accuracy, we have used another indicator proposed by the above authors for evaluating the deviation of line X from line Q: the average displacement (AD) for a buffer width w (Equation (1)). This measure estimates displacement or similarity by using the proportion of the X buffer that lies inside QB, because for similar lines the area (XB ∩ QB) predominates, but as the lines become more different, the areas of the other two regions will increase as a function of the size of the displacements.
$AD(w) = \pi\, w\, \frac{A(XB \cap \overline{QB})}{A(XB)}$ (1). In their study, Tveite and Langaas applied this method using the 1:250,000 National Map Series of Norway as the reference dataset to assess the Digital Chart of the World at a 1:1,000,000 scale produced by the Defense Mapping Agency (DMA), United States. As in the case of the SBOM, this method had to be adapted to the line-closed case (polygons) (Figure 3b).
This adaptation consisted of the generation of buffers around the perimeter lines of the two polygons (X and Q) and the subsequent calculation of the average displacement. The Two Urban Geospatial Databases As has already been stated, this study takes as its starting point our previous work [19], in which one official GDB was assessed by means of another one of higher accuracy. This section presents the constraints to which these two GDBs were subjected, and a general description of each of them. With regard to the first aspect, there were three basic conditions that needed to be fulfilled by the two GDBs in order to apply our APAA procedure. These conditions or constraints, to which any other GDB under evaluation should be subject, are included in what have been termed the acceptance criteria: 1. Coexistence criterion (CC): all the elements (in our case, buildings represented by means of polygons) used to apply our APAA procedure must exist in both GDBs. This basic principle, which seems obvious, is not always fulfilled in the real world, since two GDBs generated at different scales are normally not at the same generalisation level [28]. 2. Independence criterion (IC): the two GDBs must be independently produced, and in turn, neither of them can be derived from another cartographic product of a larger scale through any process, such as generalisation, which means that their quality has not been degraded. 3. Interoperability criterion (IOC): it is necessary to ensure the interoperability between both GDBs according to the following aspects: a. Geometric interoperability: in this case, it is defined in purely cartographic terms, so it will hereinafter be referred to as cartographic interoperability. Two GDBs occupying the same geographic region must be comparable, both in terms of reference system and cartographic projection. b. Semantic interoperability: there must be no semantic heterogeneity between both GDBs, that is, differences in intended meaning of terms in specific contexts [15]. In this aspect, interoperability must occur at two different levels: schema and feature. c.
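As a companion to the SBOM sketch given earlier, the following sketch illustrates the double-buffer bookkeeping for one pair of homologous polygons using shapely: both perimeters are buffered with width w, the overlay areas described in the text are computed, and the average displacement is evaluated with the expression given above for Equation (1). The toy coordinates, the specific algebraic form used for AD, and the code layout are our own illustration, not the authors' Matching Viewer implementation.

```python
# Illustrative sketch of the double buffer overlay bookkeeping using shapely
# (toy data; not the authors' Matching Viewer implementation). The AD expression
# follows Equation (1) as written above and should be read as illustrative.
from shapely.geometry import Polygon
import math

reference = Polygon([(0, 0), (20, 0), (20, 15), (0, 15)])   # Q (higher accuracy)
tested = Polygon([(3, 2), (23, 2), (23, 17), (3, 17)])       # X (assessed)

w = 5.0                                   # buffer width, metres
XB = tested.boundary.buffer(w)
QB = reference.boundary.buffer(w)

common = XB.intersection(QB).area         # area inside both buffers
x_only = XB.difference(QB).area           # area inside XB but outside QB
q_only = QB.difference(XB).area           # area outside XB but inside QB
print(f"common = {common:8.1f} m^2, XB outside QB = {x_only:8.1f} m^2, QB outside XB = {q_only:8.1f} m^2")

avg_displacement = math.pi * w * x_only / XB.area
print(f"average displacement for w = {w} m: {avg_displacement:.2f} m")
```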
Topological interoperability: the topological relationships must be preserved. In this sense, it can be stated that topological interoperability is a consequence, and hence the basis, of the two previously described interoperability processes (geometric and semantic) [7]. With regard to the description of both cartographic products, the GDBs used were two official cartographic databases in Andalusia (Southern Spain). Specifically, as the tested source we used the BCN25 ("Base Cartográfica Numérica E25k") and as the reference source we used the MTA10 ("Mapa Topográfico de Andalucía E10k"). The MTA10 is produced by the Institute of Statistics and Cartography of Andalusia (Spain) and referenced to the ED50 datum. The MTA10 is a topographic vector database with complete coverage of the regional territory, and is considered to be the official map of Andalusia. It is composed of 2750 sheets obtained by manual photogrammetric restitution. This product includes a vector layer of buildings (city blocks), which contains a sufficient quantity of geometrical information to be able to compute both the shape and geometric measures employed for assessing the geometric form of polygons. This dataset was used as the reference source because of its higher, a priori, positional accuracy. Thus, its declared positional accuracy is RMSE = 3 m [41]. The BCN25 is produced by the National Geographic Institute of Spain [42] and is referenced to the European Terrestrial Reference System 1989 (ETRS89) datum. The dataset of the BCN25 is composed of 4123 sheets of 5′ of latitude by 10′ of longitude, which cover the whole national territory of Spain. In addition, and just as in the previous case, the map is presented as a set of vector covers distributed by layers, including a vector layer of buildings (city blocks) that contains the same type of geometrical information as the MTA10, thus allowing us to determine the degree of similarity between both data sets at the polygon level. Finally, the BCN25's planimetric accuracy has been estimated at around 7.5 m, although this varies depending on the type of entity considered [42]. Therefore, the BCN25 was used as the tested source. The urban areas selected were included in three sheets of the MTN50k (National Topographic Map of Spain, at scale 1:50,000) (Figure 4a). Figure 4b shows two examples of polygonal features corresponding to buildings belonging to the MTA10 and the BCN25. Characterization of the BCN25 Positional Accuracy by Means of the Single Buffer Overlay Method and the Double Buffer Overlay Method This subsection presents the degree of fulfilment of the constraints to which our GDBs were subjected (acceptance criteria), the characteristics of the pairs of polygons used, and the positional accuracy characterization of the BCN25 by means of a distribution function of the results obtained by means of the SBOM and the DBOM.
Firstly, with regard to the degree of fulfilment of the acceptance criteria in our approach, and thanks to the constraint imposed by the matching accuracy indicator (MAV) (which allowed us to work with only 1:1 corresponding polygon pairs among all the possible correspondences), the CC specifications were met. In addition, both GDBs were independently produced, which means that the tested source (BCN25) is not derived from the reference source (MTA10), in compliance with IC requirements. In order to meet the last set of conditions (IOC), it was necessary to transform the tested GDB (BCN25) from the ETRS89 to the ED50 reference system, in order to ensure cartographic interoperability between both GDBs. This transformation was carried out following the methodology of minimum curvature surface (MCS) developed by the National Geographic Institute of Spain [43]. The results were accurate to approximately 15 cm. If one considers that the global relative accuracy of the ED50 network is 10-20 cm, the results of the transformation using MCS were below the quality threshold of the network. Therefore, after the MCS transformation the positional differences between the two GDBs due to the datum shift had no significance compared with the planimetric accuracy of the BCN25 (tested source), and the BCN25 and the MTA10 were finally interoperable from a cartographic point of view [19]. On the other hand, since we worked with the same type of spatial feature and representation model in both data sets, both semantic interoperability at the highest level (schema interoperability) and semantic interoperability at the object level (feature interoperability) were guaranteed.
Secondly, among all pairs of polygons obtained we used only those pairs matched with an MAV equal to or higher than 0.8 (we must note that this indicator ranges between 0 and 1; for further details, see [19]). As mentioned in Section 1, the choice of this threshold value was made in order to avoid the acceptance of both erroneously matched polygons (false positives or errors of commission), which appear in the cases of 1:n or n:m correspondences, and unpaired polygons (1:0 correspondences), and was computed by assigning a confusion matrix for each GDB matching procedure. Following the results of this process, the fixed threshold guaranteed the absence of these types of errors at the 95% confidence level. The principal characteristics of both datasets and the selected pairs of polygons (MTA10 buildings and BCN25 buildings) are summarized in Table 2.
Finally, and with regard to the characterization of the BCN25's positional accuracy, Figure 5a presents the aggregated distribution functions obtained by applying the SBOM to the GDB, using buffer widths w from 1 to 20 m. Specifically, 1 m was the size of the first buffer w_o, and the values by which it was increased (∆w, step size) were 0.1 m (for values of w between 1 and 3 m), 0.2 m (for values of w between 3 and 5 m), 0.5 m (for values of w between 5 and 10 m), and 1 m (for values of w between 10 and 20 m). The aggregated curves obtained with this method show a distribution function of the uncertainty of the BCN25 for several levels of confidence. These distributions were computed by means of a specific software tool called Matching Viewer v2016 [44]. Figure 5a shows values of around 10 m for a 95% level of confidence. On the other hand, Figure 5b shows the evolution of the distance estimated by means of Equation (1) (DBOM). This distance (6.5 m) can be considered stabilized from a buffer width of 7 m.
Method This section covers the two main methodological aspects of our research: how samples of different size were obtained from the initial population, and the statistical basis of the comparison between the estimations and population values. We must note that the initial population included the subset of pairs of polygons matched with an MAV equal to or higher than 0.8. In addition, the parameter used for determining the different sample sizes was the length of the polygons' perimeter, measured on the polygons from the reference GDB, which was the MTA10. The reason for this last choice (in comparison with other sampling strategies based on the number of individuals) is that buffer methods are based on perimeter lines, and these, in turn, are characterized by their length. Therefore, the length of the polygons' perimeter is the most representative variable of the SBOM and DBOM methods. Simulation Process In order to extract samples from the initial population, a simulation process was used. The main purpose of this process is to help us understand the relationship between estimated and actual values depending on sample size, in order to obtain empirical knowledge about the sample size to use when assessing positional accuracy by means of the automatic procedure described in Section 1. Simulation can be defined as the construction of a mathematical model capable of reproducing the characteristics of a phenomenon, system, or process, in order to obtain information or solve problems [45]. Specifically, for this process we applied the Monte Carlo method [46], which requires a large number of random executions. Therefore, our approach reproduces the APAA procedure applied to synthetic samples of polygons generated by means of simple random sampling. Thus, the variability of the estimated planimetric accuracy of the tested GDB is obtained when applying the SBOM and the DBOM to different sample sizes. In addition, the great advantage of using polygons (buildings) as control elements is that, compared to other spatial features (lines which represent roads or coastlines), their spatial distribution may be easily controlled. In this way, the spatial distribution of control elements is always adequate, and does not affect the validity of the results. In our case, and with regard to the sampling procedure, samples (pairs of buildings represented by pairs of homologous polygons) were randomly collected from among both the urban areas and scattered rural areas that comprise our initial population. On the other hand, the number of pairs of polygons that comprise each of the samples depends on the total length L, with L being the result of adding up the individual perimeters of each polygon belonging to the MTA10. We must bear in mind that the lengths of the perimeters of two homologous polygons (each of them extracted from a different source, the MTA10 and the BCN25, respectively) are similar but not equal.
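The perimeter-length-driven sampling just described, and detailed step by step in the procedure that follows, can be sketched as below. This is our own illustration rather than the Matching Viewer code: polygon pairs are drawn at random without replacement until the accumulated reference (MTA10) perimeter length reaches L, and the pair that would cause the excess is discarded. The toy population and perimeter values are invented.

```python
import random

# Minimal sketch of perimeter-length-driven simple random sampling (our own
# illustration, not the Matching Viewer implementation). Pairs are accumulated
# until the MTA10 perimeter total reaches L; the pair causing the excess is dropped.
def draw_sample(population, L_target):
    """population: list of (pair_id, mta10_perimeter_m); returns sampled ids and length."""
    shuffled = random.sample(population, len(population))
    sample, total = [], 0.0
    for pair_id, perimeter in shuffled:
        if total + perimeter > L_target:
            break                      # the pair causing the excess is not considered
        sample.append(pair_id)
        total += perimeter
    return sample, total

# Toy population: 2000 building pairs with perimeters around the 138 m mean of Table 2.
population = [(i, random.uniform(60.0, 250.0)) for i in range(2000)]
sample, total = draw_sample(population, L_target=5000.0)   # L = 5 km
print(f"sample of {len(sample)} pairs, total perimeter {total/1000:.2f} km")
```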
The simulation procedure was supported by the software tool called Matching Viewer v2016 [44], and consisted of the simulation of samples of different size L. These samples of different sizes (L = 5, 10, 15 ... 100 km, where the step is ∆L = 5 km and L_o = 5 km) were randomly extracted from our initial population (pairs of polygons matched with an MAV equal to or higher than 0.8) in order to then apply them to both the SBOM and the DBOM. Specifically, for each sample size L, m samples were extracted from the initial population. Because the process is iterated m times, mean and deviation values for each parameter of interest can be computed. This process belongs to the statistical resampling technique known as bootstrap [47]. The parameters L_o and ∆L were adjusted taking into account that 5 km represents approximately 1% (4.6 km) of the total length of perimeters. The detailed process and its parameters are as follows: 1. An initial sample size L = L_o = 5 km is considered. 2. A simple random sampling is applied to the initial population, in order to extract a sample of size L. Here the individual perimeters of the polygons belonging to the MTA10 are added up. When the sum of the perimeter lengths exceeds the length L, the last pair of polygons included in the sample that causes this excess of length is not considered. In this sense, we must note that their exclusion had a minimal impact both on the sample size and the final results, since the mean value of the polygons' perimeter belonging to the MTA10 is 138 m (Table 2), representing between 2.75% and 0.14% of L when L ranges from 5 to 100 km. 3. For both the SBOM and the DBOM, the observed distribution function (ODF) is obtained from each sample m of length L. The ODF (L,m) of the sample is compared to the population distribution function (PDF). Then, by means of the Kolmogorov-Smirnov test, the f-value and p-value are derived for each comparison case (see below). 4. Steps 2 and 3 are repeated for the m samples extracted for the same sample size L (m = 1000). 5. For each sample size L, the mean values of the m iterations are derived for the f- and p-values. 6. Increase L by the step ∆L = 5 km, and repeat steps 2-5 until L = L_max = 100 km. Comparisons After completing the simulation process, the comparisons between the estimated values and population values were carried out. Specifically, as stated in step number 3, the similarity between the two distribution functions (in our case, between PDF and ODF) was addressed by means of statistical tests of significance, like the Kolmogorov-Smirnov test [48,49]. Following these last authors, the results obtained by applying this test can be expressed by means of two statistical indicators: an f-value and a p-value. The first represents the maximum distance between two distribution functions and ranges in the interval [0, 1]. Thus, f-values that are close to one represent large discrepancies between distribution functions, while f-values close to 0 imply small discrepancies between distribution functions. On the other hand, p-values are closely linked to f-values, because they are a probabilistic measure of them. Thus, a p-value close to one means a great level of confidence in the corresponding f-value. For instance, when applying the Kolmogorov-Smirnov test to two distribution functions with a great similarity between them and with a high probability of meeting this criterion, the f-value and p-value obtained for the pair may be 0.1 and 0.95.
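The f-value described above corresponds to the Kolmogorov-Smirnov statistic, which is readily available in scientific computing libraries. The sketch below illustrates the ODF-versus-PDF comparison with scipy's two-sample test; the synthetic displacement values merely stand in for the real SBOM/DBOM outputs, which are not available here, and the sample sizes are only indicative.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative comparison of an observed distribution function (ODF) against the
# population distribution function (PDF) with the two-sample Kolmogorov-Smirnov
# test. The synthetic displacements stand in for the real SBOM/DBOM outputs; the
# KS statistic plays the role of the f-value discussed in the text.
rng = np.random.default_rng(0)
population = rng.normal(loc=6.5, scale=2.0, size=20000)   # stand-in "population" displacements (m)
sample_small = rng.choice(population, size=40)            # small sample (roughly L = 5 km)
sample_large = rng.choice(population, size=700)           # large sample (roughly L = 100 km)

for name, sample in (("small", sample_small), ("large", sample_large)):
    f_value, p_value = ks_2samp(sample, population)
    print(f"{name:5s} sample: f-value = {f_value:.3f}, p-value = {p_value:.3f}")
```

Repeating this comparison over the m bootstrap samples per size L, and averaging the resulting f- and p-values, reproduces the kind of curves discussed in the Results.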
Finally, we must note that we have followed the procedure described by Gibbons and Chakraborti [50] in order to develop the statistical calculations, and the p-value was approximated numerically using the method outlined by Press et al. [51]. The flowchart of the proposed method is shown in Figure 6.
Results
The results, by which we analyze the similarity between the distribution functions (the PDF and the ODFs), are presented by means of two types of graphical representation, where the horizontal axis represents the sample size (km) and the vertical axis a probability value. The first type presents the results for the frequency distance f-value (Figure 7a,b) and the second presents the results for the associated p-value (Figure 7c,d). In addition, three different curves are represented in each graph: one corresponds to the mean value (represented by a continuous line), and the other two correspond to the 5% and 95% percentiles (represented by dashed lines). Finally, these graphical representations are used both for the SBOM (Figure 7a,c) and for the DBOM (Figure 7b,d).
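A compact way to obtain the three curves plotted in Figure 7 from the bootstrap output is sketched below; the array shape and variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np

sizes_km = np.arange(5, 105, 5)          # L = 5, 10, ..., 100 km

def summary_curves(f_matrix):
    """Mean and 5%/95% percentile curves of the f-value as a function of L.
    f_matrix is a hypothetical (n_sizes, m) array: one row per sample size,
    one column per bootstrap iteration, as produced by the simulation."""
    mean_curve = f_matrix.mean(axis=1)
    p05_curve, p95_curve = np.percentile(f_matrix, [5, 95], axis=1)
    return mean_curve, p05_curve, p95_curve
```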
Regarding the behaviour of the curves obtained, the first and most straightforward feature observed is that the mean f-value curves and the mean p-value curves differ between methods. In the case of the mean f-value curves, this difference is due to the fact that the signature given by the ODF behaves differently for the SBOM and for the DBOM. Obviously, the difference between the mean p-value curves is due to the fact that the f-values are different. In any case, these differences show that, for a given sample size L, the SBOM gives better estimations than the DBOM. With regard to shape and position, the 5% and 95% percentile curves are not equidistant from the mean value curves (both for f-values and for p-values). In the case of f-values, this means that values greater than the mean have more dispersion, while the opposite happens in the case of p-values. In addition, the mean f-values, the mean p-values and their associated percentile curves show a behaviour that is consistent with what is expected of an estimation process as sample size increases: f-values decrease when the sample size L increases, while p-values increase when the sample size L increases. On the other hand, we must note that there are significant variations in the f-values of the 5% and 95% percentiles when the sample size L is increased from 5 km to 100 km. Thus, for the SBOM the maximum deviation (L = 5 km) is 0.3382 and the minimum deviation (L = 100 km) is 0.0753, while in the case of the DBOM the maximum deviation (L = 5 km) is 0.7124 and the minimum deviation (L = 100 km) is 0.1656. Therefore, the reductions achieved are between 4.3 (DBOM) and 4.5 (SBOM) times the initial variability (maximum value), which corresponds to small samples (L = L0 = 5 km).
Finally, and focusing our attention on the issue addressed in this paper, the curves shown in Figure 7 can be employed to give some guidance on the influence of sample size on APAA methods when they are used for evaluating the quality of urban GDBs. Obviously, these curves have been obtained from two specific urban GDBs (the MTA10 and the BCN25). However, they show a pattern of behaviour that, in our opinion, could also be derived from other cases and scales. It would be sufficient to apply a simulation process similar to that presented here. The curves obtained can be used in two different ways:
• In order to define a sample size that will assure a certain value of mean discrepancy f between the sample (the ODF) and the population (the PDF);
• In order to define a sample size that will assure, with a probability of 95%, that the maximum discrepancy between the sample (the ODF) and the population (the PDF) is f.
Figure 8 (extracted from Figure 7) shows a practical example of the two cases described. In the first case (case 1), we wish to determine a sample size that assures a mean discrepancy of 10% between the sample and the population. In order to compute this value on the graph, we have to obtain the point where the line corresponding to level 0.1 (the f-value) crosses the continuous line (the mean). After obtaining this point, we read off the abscissa value (sample size) that belongs to it. In this case, L = 40 km (Figure 8a). In addition, the p-value is 90% (Figure 8b). In the second case (case 2), we wish to determine a sample size that assures, with a probability of 95%, that the maximum discrepancy between the sample (the ODF) and the population (the PDF) is 10%. Following a procedure similar to the above, we have to obtain the point where the line corresponding to level 0.1 (the f-value) crosses the dashed line (the 95% percentile). After obtaining this point, we again read off the abscissa value (sample size) that belongs to it. In this case, L = 90 km (Figure 8a). Obviously, for the case of the DBOM the procedure to follow is the same as described above; in that case, we must employ Figure 7b. Finally, taking into account the mean value of the polygons' perimeter shown in Table 2, we are able to roughly estimate the number of polygons that comprise the sample that meets the requirements outlined in the above example.
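The graphical reading illustrated in Figure 8 can also be performed numerically. The sketch below, which reuses the hypothetical curves from the previous snippet, returns the smallest simulated L whose mean (case 1) or 95% percentile (case 2) f-value does not exceed the target discrepancy; it assumes, as the results indicate, that these curves decrease with L.

```python
import numpy as np

def required_sample_size(sizes_km, curve, target_f=0.10):
    """Smallest simulated L (km) whose curve value falls at or below the
    target discrepancy; use the mean curve for case 1 and the 95% percentile
    curve for case 2 of the worked example."""
    below = np.flatnonzero(np.asarray(curve) <= target_f)
    return sizes_km[below[0]] if below.size else None

# Hypothetical usage with the curves from the previous sketch:
# L_case1 = required_sample_size(sizes_km, mean_curve, 0.10)
# L_case2 = required_sample_size(sizes_km, p95_curve, 0.10)
```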
Conclusions
In our previous studies [19-21], we proposed an APAA methodology for GDBs using polygonal features and two buffer-based positional accuracy assessment methods based on buffer generation on their perimeter lines: the simple buffer overlay method (SBOM) and the double buffer overlay method (DBOM). However, important aspects, such as the sample size, had not been adequately addressed until now.
This study addresses the influence of sample size on the variability of the results derived from our APAA methodology. To that end, we employed the same two official urban GDBs used in our previous studies (the MTA10 and the BCN25), in which more than 450 km of perimeter length (measured on 3356 pairs of polygons) was evaluated.
Our method has been based on a simulation process (supported by the software tool Matching Viewer v2016), which consisted of the simulation of samples (randomly extracted from our initial population of matched polygons) of different size L (from 5 km to 100 km). For each sample size the simulation was iterated 1000 times. Taking into account that the results obtained by means of the SBOM and the DBOM are expressed as distribution functions, the similarities between the various ODFs (obtained from each sample m of length L) and the PDF have been analyzed by means of the Kolmogorov-Smirnov test. The evolution of the two statistical indicators provided by this test (the f-value and the p-value) has allowed us to:
• Gain a certain understanding of the sample size required under several different conditions concerning the mean distance value or maximum distance value between the ODFs and the PDF. This has been confirmed by a practical example.
• Compute the variability of the estimation between the limits (sample sizes) of our simulation process. Specifically, this variability was reduced by a factor of approximately 4.5 for both methods.
Obviously, and as mentioned above, these results have been obtained from two specific urban GDBs. However, they show a pattern of behaviour that, in our opinion, could also be derived from other cases and scales, using greater or smaller samples with greater or smaller length steps (ΔL) and running the simulation more or fewer times (m). It would be sufficient to apply a simulation process similar to that presented here.
With regard to future research, we plan to explore several directions. We plan to employ the number of polygons as the parameter used to determine the different sample sizes instead of the length of perimeter, to employ a new set of GDBs with different polygon densities, and to include new assessment methods. In addition, we will deal with the 1:n case and the n:m case, which is a multiple 1:n case. On the other hand, we are currently working on a funded research project whose aim is to demonstrate the viability of our APAA approach by comparing it with traditional control methods when applied to large geographical areas. As mentioned in Section 1, this viability has already been partially demonstrated for a small geographical area.
Figure 3. (a) The double buffer overlay method (DBOM); (b) its adaptation to the line-closed case (polygons); and (c) its average displacement estimation function.
Figure 4. (a) Selected sheets of the MTN50K of the region of Andalusia (Spain) and its administrative boundaries; and (b) examples of polygonal features belonging to the MTA10 and the BCN25.
Figure 6. Flowchart of the proposed method.
Figure 7. Statistical f-value and p-value parameters for the Kolmogorov-Smirnov test for the SBOM (a,c) and the DBOM (b,d) (1000 iterations).
Figure 8. (a) Example of the use of the f-value graphic for determining a mean discrepancy between the sample and the population (case 1) or a maximum discrepancy (case 2); and (b) the p-value obtained for case 1.
Table 1. Review of some previous studies dealing with positional accuracy assessment by means of buffer-based methods.
Table 2. Principal characteristics of the geospatial databases (GDBs) with regard to the buildings.
13,048
2018-05-28T00:00:00.000
[ "Mathematics" ]
A clean road to international trade: Environmental regulations and the cleanliness of export enterprises
This study explores the impact of environmental regulations on the clean product exports of firms and the mechanism of this impact from the dual perspectives of the extensive and intensive margins. It uses matched micro data and aims to explore the clean road to international trade. According to the benchmark regression, environmental regulations improve the export intensity of clean products. However, their impact on the export probability of clean products is not significant. That is, environmental regulations promote the intensive margin of enterprise clean product exports but do not promote the extensive margin of enterprise clean product exports. The mechanism analysis shows that the cost effect is an important way for environmental regulations to promote the cleanliness of export products. However, the channel of technological innovation is not verified. In addition, compared with command-and-control and voluntary public policy tools, market-inspired environmental regulations have a stronger promoting effect on the cleanliness of export enterprises. This study provides microevidence and a policy basis for improving the environmental policy system and for the sustainable development of international trade.
Introduction
China's economic progress has been extraordinary in recent years. The rapid increase in export commerce has played a significant part in China's economic development. However, with the expansion of exports and economic aggregate growth, environmental pollution is becoming increasingly serious. To achieve sustainable economic development and reduce the negative impact of enterprises' production behavior on the natural environment, the Chinese government has continuously strengthened its control of enterprises' pollution emissions. China's pollution-intensive products, such as chemical, pulp, and iron and steel products, account for a large share of the export market.
Although the relationship between environmental regulations and international trade has been extensively researched, past studies have mostly examined the trade effect of environmental regulations at the macro level, with minimal discussion of the dual margins and environmental policy tools. The objective of this research is to investigate the degree and mechanisms of the impact of environmental regulation intensity and different policy tools on the cleanliness of export firms from the dual-margin perspective, in order to explore the clean road to international trade. To achieve this objective, this work mainly includes the following content. First, the impacts of environmental regulations on whether firms export clean products and how many clean products firms export are analyzed. Second, this work explores the mechanism of the impact of environmental regulation on the cleanliness of export firms based on the channels of compliance costs and technological innovation. Finally, the heterogeneous effects of environmental regulations on the cleanliness of export firms under different types of policy tools are investigated. This work can help realize a win-win situation of environmental improvement and export trade transformation and promote sustainable economic development.
Compared to earlier research, this research makes the following contributions. In terms of the research perspective, this study investigates the impact of environmental policy on whether microenterprises export clean products and how many products they export, and it discusses the clean transformation of export enterprises from the dual perspectives of the extensive and intensive margins. In terms of research data, the empirical analysis of this study is based on matched data from the China Industrial Enterprise, China Enterprise Pollution Emission and China Customs Databases, providing a large sample dataset containing detailed information on enterprise production, pollution emissions and export products. The application of these data improves the representativeness and credibility of the research conclusions. In terms of the research framework, this study introduces the differential characteristics of environmental regulations and further explores how different types of environmental regulation tools affect the cleanliness of export enterprises.
The structure of this work is as follows. The second section is a review of the relevant research and proposes the theoretical hypotheses. The research design, which includes the model formulation, indicator development, and data sources, is presented in the third section. The fourth section conducts empirical analysis, including benchmark, robustness, and heterogeneity analyses on environmental regulations and the cleanliness of export firms. The fifth section is a continuation of the topic, including internal mechanisms, policy tools analysis and further discussion. The conclusion and policy implications are presented in the final section.
Research review
In recent years, a rising number of academics have focused on the link between environmental regulations and international trade. According to a review of available studies, the influence of environmental regulation on export trade is primarily separated into promotion and inhibition theories. The studies are summarized in Table 1. According to promotion theory, environmental regulations can support an improvement in manufacturing technology and increase international competitiveness, hence promoting export trade [1-8]. According to inhibition theory, environmental regulations raise firm expenses and stifle the expansion of export trade [9-16].
Theoretical hypotheses
Based on the existing theory of the pollution haven and Porter hypotheses, environmental regulations, as an important means of solving problems related to resource allocation and pollution, may affect enterprises' exports through compliance cost and technological innovation effects [17-19]. From the perspective of the compliance cost effect, when environmental regulations are strengthened, export enterprises must pay more emission charges to environmental protection departments, which increases the production and operating costs of enterprises [20,21]. In addition, enterprises may increase their investment in environmental protection to meet stricter local environmental protection standards, thereby promoting enterprise cleanliness [22,23]. From the perspective of the technological innovation effect, when faced with strict environmental regulations, export enterprises may have a stronger motivation than under laxer regulation to carry out technological innovation and improve their production efficiency and production processes, thus realizing the clean transformation of export enterprises [24-26]. Therefore, Hypothesis 1 of this research is proposed as follows:
Hypothesis 1. Environmental regulations can promote the cleanliness of export enterprises through the effects of compliance costs and technological innovation.
According to research, different types of environmental regulations may differ in terms of their environmental protection purposes and government implementation modes [27]. Command-and-control environmental regulations, through which the government formulates mandatory rules to limit pollution discharges, achieve the goal of reducing discharges by setting environmental standards or imposing administrative penalties on enterprises [28]. Market-inspired environmental regulations mainly guide enterprises to actively reduce pollution emissions through market forces [29]. Since the government does not directly intervene in the emission reduction behavior of enterprises, such behavior is more flexible. Voluntary public environmental regulations are a tool for enterprises to reduce emissions under the pressure of public supervision [30]. Command-and-control environmental regulations are both mandatory and timely, but they lack flexibility because they must be carried out by a certain time. Market-inspired environmental regulations give enterprises a greater right of independent choice and have a certain degree of both compulsion and flexibility. The level of enforcement of voluntary public environmental regulations is determined by people's environmental awareness and the system for environmental reporting. Therefore, Hypothesis 2 of this study is proposed as follows:
Hypothesis 2. Command-and-control, market-inspired, and voluntary public environmental regulations have heterogeneous effects on the cleanliness of export enterprises.
Model specification
A microeconometric model is primarily concerned with the behavior of firms or people. The sample data are typically panel or cross-sectional data, with a large number of observations and substantial variability among the samples [31,32]. In this research, the data are characterized by micro firms, panel data, and large-sample observations. As a result, a microeconometric model is utilized to examine the influence of environmental regulations on export firms' cleanliness. The benchmark model employs a two-way fixed effects model to eliminate the interference of unobservable factors, introducing individual fixed effects to control for the interference of factors that do not change over time and time fixed effects to control for the interference of factors that do not change with individual characteristics. The employment of a microeconometric model aids in lowering estimation bias and in increasing the trustworthiness of the research outcomes. The benchmark regression formula is shown in formula (1):
$$Cexport_{it} = \alpha_0 + \alpha_1 ERI_{it} + \gamma X_{it} + v_i + v_t + \varepsilon_{it} \quad (1)$$
where i represents the enterprise and t represents the year. Cexport_it represents the cleanliness of export enterprises, which is mainly related to the extensive and intensive margins of clean product exports. ERI_it represents the intensity of environmental regulations at the enterprise level. X_it is the vector of control variables. v_i and v_t represent individual and year fixed effects, respectively. ε_it represents the random disturbance term.
Indicator construction
Cleanliness of export enterprises Cexport_it. The extensive margin of clean product exports is measured by whether the clean products of enterprise i are exported in year t. The intensive margin of clean product exports is measured by the export intensity of the clean products of enterprise i in year t. Matching with the HS codes of clean products is carried out based on the HS codes of the enterprise export product information in the China Customs Database.
Environmental regulation intensity ERI_it. When measuring environmental regulations at the enterprise level, the method of matching enterprise variables with industry or regional environmental regulations may not accurately reflect the actual environmental regulation intensity faced by enterprises. Regarding the actual situation of Chinese enterprise pollution emissions, using the chemical oxygen demand (COD) removal rate as a measure of environmental regulation helps avoid the results being driven by the pollutants discharged by a few industries, mainly large state-owned enterprises [33,34]. Regarding data availability, the China Pollution Emission Database provides more detailed information on the COD production and emissions of enterprises than other pollution emission data, which maximizes the analysis sample size and the representativeness of the estimated results. Therefore, the COD removal rate is used in this research to measure enterprise environmental regulation intensity.
Control variables X_it. The control variables include capital intensity, the business lifespan, enterprise scale and enterprise productivity. The ratio of fixed capital to the number of workers is used to calculate capital intensity (CI). The difference between the current year and the year of establishment of the enterprise plus 1 is used to calculate the business lifespan (Age). The fixed capital of the enterprise is used to calculate enterprise scale (Scale). The ratio of total output to the number of workers is used to calculate enterprise productivity (LP).
Table 1. Summary of past studies on environmental regulations and international trade.
Study | Environmental policy variable | International trade variable | Outcome
Chen et al. [1] | Removal rate of SO2 emissions | Export competitiveness | +
Cheng et al. [2] | Two control zones policy | Export decision | +
Lin and Linn [3] | European passenger vehicle greenhouse gas emission standards | Product attributes | +
Qiang et al. [4] | Environmental regulation intensity | Export volume | +
Sun et al. [5] | Cleaner production standards | Domestic value-added rate | +
Sun et al. [6] | Voluntary environmental regulations | Solar energy industry trade flows | +
Xu et al. [7] | Environmental regulation intensity | Global value chain | +
Yang et al. [8] | Occurrence frequency of environment-related words in municipal government work reports | Export technology structure | +
Cherniwchan and Najjar [9] | Air quality standards | Export decision | -
Kawabata and Takarada [10] | Environmental tax | Intermediate trade | -
Lee and Ho [11] | Environmental regulation stringency (0-6) | Export diversification | -
Shi and Xu [12] | Eleventh Five-Year Plan | Export volume | -
Wu et al. [13] | Environmental regulation intensity | Export-oriented production | -
Xie and Zhang [14] | Environmental regulation intensity | FDI | -
Zhang et al. [15] | City Air Pollution Prevention and Control Program | Export volume | -
Zhang et al. [16] | Wastewater discharge standard | Export values | -
Data sources
This study uses 2000-2010 microlevel data from the China Industrial Enterprise, China Enterprise Pollution Emission, and China Customs Databases. The National Bureau of Statistics' China Industrial Enterprises Database provides yearly statistics. These statistics mostly consist of quarterly and yearly reports provided to the local statistical office by the sample firms. All state- and non-state-owned industrial firms above a certain size are included in the sample. Enterprise information comprises an enterprise's fundamental condition and financial data. Samples with statistical flaws in the industrial firm data are deleted and categorized in accordance with fundamental accounting principles. The China Enterprise Pollution Emission Database combines yearly data from China's Ministry of Ecology and Environment with a quarterly survey of significant polluting firms. Major polluters are those whose pollution emissions account for more than 85 % of the total emissions in each county. The statistical elements primarily comprise the names of firms, their locations, and significant pollution emissions. The Chinese Customs Database publishes monthly data collected by Chinese Customs that cover all transactions entering and exiting the customs area. Basic information (e.g., firm name, ownership, and address), commodity information (e.g., variety, price, and quantity), the destination market, the transit mode, and the trading mode are examples of statistical items. The China Customs Database offers monthly data on import and export transaction records, and for this reason, enterprise export data are aggregated into yearly statistics based on the destination country and product HS4 code.
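For illustration only, the two-way fixed-effects specification in formula (1) could be estimated along the following lines in Python with statsmodels; the file and column names are hypothetical, and clustering standard errors by firm is one reasonable choice rather than necessarily the authors' own.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel; all file and column names are assumptions.
df = pd.read_csv("matched_firm_panel.csv")

# Two-way fixed-effects version of formula (1): firm dummies absorb v_i and
# year dummies absorb v_t; standard errors are clustered at the firm level.
model = smf.ols(
    "cexport_intensity ~ eri + ci + age + scale + lp + C(firm_id) + C(year)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(result.params[["eri", "ci", "age", "scale", "lp"]])
```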
The matching process for the data is as follows. First, the data from the China Industrial Enterprise Database are processed following Brandt and Van Biesebroeck [35], and the panel data of industrial enterprises are formed. Second, the data from the China Enterprise Pollution Emission Database are processed by a similar method to form pollution panel data. Third, based on the enterprise name and unique identification code, industrial enterprise-pollution panel data are created by combining the China Industrial Enterprise and China Enterprise Pollution Emission Databases. Finally, following Upward and Wang [36], the China Customs Database is merged with the industrial enterprise-pollution panel data.
In China, such merging methodologies are commonly employed in research on microenterprises. However, due to the limits of the various databases, certain sample selection biases arise. The combined sample may be biased toward large-scale, polluting industries. The China Industrial Enterprise Database contains only samples of firms larger than a specified size, resulting in some loss of information on small and medium-sized enterprises in the combined data. The China Enterprise Pollution Emission Database contains only samples of significant polluting firms, resulting in some loss of information on clean enterprises in the combined data. As a result, the combined dataset includes only a subset of the three large datasets. Nonetheless, the combined data remain the largest microsample dataset available for studying Chinese firms' exports and cleanliness transition. These large-sample micro data are broadly representative and can provide strong data support for the empirical analysis of this research.
Benchmark analysis on environmental regulations and the cleanliness of export firms
The panel two-way fixed effects model is used to examine how environmental regulations affect the cleanliness of export firms. Table 2 shows the results of the impact of environmental regulations on the extensive margin of clean product exports. As shown in Table 2, the coefficients of ERI are not significant, which indicates that environmental regulations do not increase the export probability of enterprises' clean products. One possible explanation for this result is that there are high fixed costs in the export of different enterprise products and that the adjustment of production equipment brought about by product conversion affects enterprises' export product portfolios. For enterprises, when the cost increase caused by environmental regulations is less than that caused by product conversion, environmental regulations do not encourage enterprises that export pollution-intensive products to export cleaner products.
The influence of the control variables on the extensive margin of clean product exports is as expected. The capital intensity variable is negative and significant, indicating that as the capital intensity of enterprises increases, the export probability of clean products decreases. Enterprises with high capital intensity tend to be heavy industrial enterprises and are thus more likely to export pollution-intensive products. The enterprise scale variable is positive and significant, indicating that with the expansion of enterprise scale, the export probability of clean products increases, and larger enterprises are more likely than smaller enterprises to diversify the types of products that they export. The enterprise productivity variable is positive and significant, indicating that with the increase in enterprise productivity, the export probability of clean products increases. Enterprises with high productivity tend to have technological advantages and are more inclined to increase the number of export product categories.
The impact of environmental regulations on the intensive margin of clean product exports is further investigated; the results are shown in Table 3. The coefficients of ERI are all significantly positive, indicating that environmental regulations improve the export intensity of enterprises' clean products and promote the cleanliness of export enterprises from the intensive margin. One possible explanation for this result is that environmental regulations may reduce the opportunity cost of exporting clean products to a certain extent, so that enterprises are more inclined to expand their production scale of clean products and ultimately increase the proportion of exports of such products.
Robustness analysis of environmental regulations and the cleanliness of export firms
To test the robustness of the explanatory variable, this research measures the intensity of environmental regulations using the investment in industrial pollution control per unit of industrial output value. Columns (1) and (2) in Table 4 analyze the impact of environmental regulations on the cleanliness of enterprises' export products from the dual perspectives of the extensive and intensive margins. Columns (3) and (4) further introduce the industry and region variables, which control for the interference of region and industry factors, for robustness analysis. In addition, provincial capital cities, as political and administrative centers within regions, often have differentiated environmental regulatory policies compared to other regions. To overcome the potential biased impact of this feature, this research further excludes samples from municipalities directly under the central government, provincial capitals, and subprovincial cities. The results in Table 4 show that environmental regulations significantly increase the export intensity of enterprises' clean products but do not increase the export probability of clean products, confirming the conclusions of the benchmark regression.
The Outline of the 11th Five-Year Plan for National Economic and Social Development of the People's Republic of China officially included environmental target constraints in its official assessment targets for the first time. Will this constraint policy interfere with enterprise exports? To exclude the interference of policy shocks, the samples are divided into two stages, before and after policy implementation, for subsample regression, as shown in Table 5. Before policy implementation, environmental regulations do not significantly increase the export probability or proportion of clean products. When environmental targets are formally included in official assessments, environmental regulations still have no significant impact on whether enterprises export clean products, but they significantly increase the export intensity of clean products. This result may be explained by the fact that the inclusion of environmental targets in official assessments improves the enforcement of environmental regulations and reduces the opportunity cost of clean product exports, leading enterprises to be more inclined to export clean products.
Heterogeneity analysis of environmental regulations and the cleanliness of export firms
In the face of environmental regulations, enterprises with different capital characteristics may have significant differences in terms of clean product exports. Table 6 further investigates how capital characteristics affect the association between environmental regulations and clean product exports. The results show that environmental regulations can increase the export intensive margin of clean products in capital-intensive enterprises but cannot increase the export extensive margin of clean products in these enterprises. For labor-intensive enterprises, neither the extensive margin nor the intensive margin of clean product exports is significantly impacted by environmental regulations. The p-value of the coefficient difference test is significant at the 10 % level, which proves that the coefficient difference between groups is significant. One possible explanation for this result is as follows. Compared with capital-intensive enterprises, labor-intensive enterprises export relatively clean products, and environmental regulations do not significantly affect the emission reduction costs of labor-intensive enterprises. Therefore, among these enterprises, environmental regulations have a stronger role in promoting the cleanliness of capital-intensive enterprise exports.
In addition, a distinctive feature of Chinese enterprises is the coexistence of various types of ownership, and the operating background and business environment of the different ownership types are clearly different. The results in Table 7 regarding the effects of environmental regulations on the cleanliness of export firms with various ownership structures reveal that environmental regulations lead non-state-owned enterprises, but not state-owned enterprises, to increase their export intensive margin of clean products. For state-owned and non-state-owned enterprises alike, environmental regulations do not significantly increase the export extensive margin of clean products. The p-value of the coefficient difference test is significant at the 10 % level, which proves that the coefficient difference between groups is significant. One possible explanation for this result is that the business performance and employment guarantees of state-owned enterprises play a significant part in the promotion and assessment of local officials, which may lead to the incomplete enforcement of environmental regulations among state-owned enterprises. (The authors would like to thank a reviewer for the suggestion to test the intergroup coefficient differences.)
The internal mechanisms of the impact of environmental regulations on the cleanliness of export firms
This study confirms that environmental regulations might, to a certain extent, encourage the cleanliness of export operations. To verify the hypothesis, this section uses an interaction effect model to discuss the mechanisms through which environmental regulations promote export enterprises' clean transformation through the effects of compliance costs and technological innovation. The compliance cost effect is measured by enterprise production costs. The technological innovation effect is measured by enterprise patent applications.
The mechanism of the compliance cost effect is described in Columns (1) and (2) in Table 8. Column (2) shows that the interaction coefficient between environmental regulations and compliance costs is significantly positive, indicating that environmental regulations increase the intensive margin of clean product exports by increasing enterprises' production costs. The interaction coefficient between environmental regulations and compliance costs in Column (1) is not significant, indicating that environmental regulations do not have a significant impact on the extensive margin of clean product exports for enterprises.
Columns (3) and (4) in Table 8 detail the mechanism underlying the impact of technological innovation. They show that the interaction coefficients between environmental regulations and technological innovation are not significant. These results do not support the notion that the technological innovation effect of enterprises in the sample interval is the transmission channel through which environmental regulations promote the cleanliness of export enterprises. The conclusions are consistent with those of existing research [37,38]. Enterprises need a large amount of capital investment in the early stage of technological innovation and face the risk of innovation failure. Thus, environmental regulations do not necessarily lead to the technological innovation of enterprises.
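The interaction-effect test of the compliance cost channel can be sketched in the same hypothetical setting as the earlier regression snippet; the ERI x cost interaction term is the coefficient of interest, and all variable names are assumptions rather than the study's actual data construction.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_firm_panel.csv")   # hypothetical panel as before

# Compliance-cost channel: the ERI x cost interaction is added to the two-way
# fixed-effects specification; a positive interaction coefficient is read as
# the cost effect raising the intensive margin of clean product exports.
spec = ("cexport_intensity ~ eri * log_cost + ci + age + scale + lp"
        " + C(firm_id) + C(year)")
res = smf.ols(spec, data=df).fit(cov_type="cluster",
                                 cov_kwds={"groups": df["firm_id"]})
print(res.params[["eri", "log_cost", "eri:log_cost"]])
```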
The heterogeneous impacts of environmental regulation tools on the cleanliness of export firms
To verify Hypothesis 2, this section analyzes the heterogeneous impact of different environmental regulatory policy tools on the cleanliness of export enterprises. First, the impact of command-and-control environmental regulations on the cleanliness of export enterprises is investigated. At present, the "three simultaneities" system represents well the application of command-and-control environmental regulations in China. Other systems have been in place for relatively short periods, and their scope is relatively restricted. Therefore, the ratio of "three simultaneities" environmental investment to industrial value added is used to measure command-and-control environmental regulations. The findings of the regressions, which are displayed in Columns (1) and (2) in Table 9, demonstrate that command-and-control environmental regulations are ineffective in significantly promoting the clean transformation of export firms. This result may be explained by the fact that command-and-control environmental policies primarily influence the pollution emissions of firms by establishing obligatory emission limits and technical requirements. This makes these policies unable to effectively incentivize the clean transformation of export businesses.
Next, the impacts of market-inspired environmental regulations on the cleanliness of export firms are examined. One typical market-inspired environmental tool is the sewage charge system in China. Therefore, this study measures market-inspired environmental regulations by the ratio of sewage fees to pollution emissions. The regression results, which are presented in Columns (3) and (4) in Table 9, show that market-inspired environmental regulations increase the export intensity of clean products but do not improve the export probability of clean products. One possible explanation for this result is that market-inspired environmental regulations require enterprises to pay fees based on environmental standards to internalize the external costs brought about by their pollution behaviors. This provides strong flexibility and incentives.
Finally, the impact of voluntary public environmental regulations on the cleanliness of enterprise exports is investigated. Referring to Du et al. [27], voluntary public environmental regulations are calculated as the ratio of environmental complaints in each province to the regional population. Columns (5) and (6) in Table 9 present the regression results, demonstrating that voluntary public environmental regulations cannot improve the cleanliness of enterprise exports. One possible explanation for this result is that such regulations do not have a mandatory binding effect, and their implementation intensity depends on the public's environmental awareness and the corresponding environmental protection reporting mechanism. At present, China's environmental education level is still not high, and the public's awareness of ecological environmental protection is insufficient.
Further discussion
With the emergence of pollution problems and the deepening of economic globalization, the interaction between environmental policies and international trade has attracted increasing attention [1,39]. Theoretical research on the influence of environmental regulations on firm exports includes mainly the pollution haven hypothesis [3,40,41] and the Porter hypothesis [5,42,43]. Research on pollution havens hypothesizes that environmental regulations may lead to an increase in environmental costs [9,44], which in turn can lead to a reallocation of the resources of enterprises. Research on the Porter hypothesis holds that the pressure of environmental regulations may stimulate the technological innovation of enterprises [8,45,46], which in turn offsets the cost increase caused by environmental regulations. This research investigates the impact of environmental policy on whether microenterprises export clean products and how much they export, and it discusses the clean transformation of export enterprises from the dual perspectives of the extensive and intensive margins. The results show that environmental regulations promote the intensive margin of enterprise clean product exports, and that production costs and investment in emission reduction equipment are important channels. The results support the pollution haven hypothesis, i.e., that environmental regulations lead to a reallocation of the resources of export enterprises [1,12,47]. This research has important theoretical and practical significance for promoting the coordinated development of China's environmental improvement and economic growth.
Additionally, this research introduces the differential characteristics of environmental regulations and further explores how different types of environmental regulation tools affect the cleanliness of export enterprises. The results show that, compared with command-and-control and voluntary public policy tools, market-inspired environmental regulations have a stronger promoting effect on the cleanliness of export enterprises. The results are consistent with those of existing research [48-50], demonstrating that market-inspired environmental regulations are more effective because of their strong flexibility and incentives [49,51]. This research provides a scientific basis and microempirical evidence for the selection of environmental policy tools.
Conclusions and policy implications
With the increased extent and duration of haze in China, environmental preservation has become a top priority for citizens and policymakers alike. Environmental regulation policies are direct and effective methods of pollution management and environmental protection. Only a few studies have examined whether China's environmental rules impede economic development, particularly the expansion of export trade. Research on this topic is beneficial for improving the environmental policy framework and supporting the clean transformation of China's international trade. This research has significant theoretical and practical implications for advancing China's integrated development of environmental improvement and economic growth.
This study presents an examination of the influence of environmental regulations on the cleanliness of enterprise exports from the dual perspectives of the extensive and intensive margins based on the China Industrial Enterprise, China Enterprise Pollution Emission, and China Customs Databases, and it reaches the following conclusions. First, environmental regulations can increase the intensive margin of clean product exports but cannot increase the extensive margin. Second, compared with other enterprises, environmental regulations play a stronger role in promoting the clean export transformation of capital-intensive and non-state-owned enterprises. Third, the compliance cost effect is an important mechanism through which environmental regulations can promote the cleanliness of enterprise exports, while the technological innovation effect is not verified as a mechanism. Finally, market-inspired environmental policies have the strongest promoting effect on the clean transformation of enterprise exports in the sample interval, while the effects of command-and-control and voluntary public environmental measures are found to be relatively weak.
On the basis of the study findings, the following policy implications are noted. First, the government should gradually improve and develop the environmental regulation system to drive the clean transformation and upgrading of China's foreign trade enterprises. The government should take advantage of the way in which environmental regulations can advance the export extensive margin of clean products, guide some enterprises that export pollution-intensive products to export clean products through environmental regulations, and realize the transformation of the export growth of China's enterprises' clean products under environmental regulation constraints from the intensive margin to both the intensive and extensive margins. Second, the government must develop distinct environmental regulation rules based on various firm characteristics. At present, the economic development levels and industry characteristics of different regions in China are quite different. Policymakers should combine the actual situations of different regions, industries and enterprises with different ownership structures, implement special governance and key inspections, strengthen the punishment of enterprises' illegal emissions, and achieve a synergy between the pollution reduction and trade development of different export enterprises. Third, government departments should provide policy support to enterprises to guide them in achieving a clean transformation through technological innovation under environmental regulation constraints. Regarding the impact of environmental policy, some enterprises may directly introduce emission reduction equipment rather than engaging in technological innovation to achieve environmental standards. Therefore, while strengthening environmental regulations, government departments can appropriately provide enterprises with preferential support policies such as R&D subsidies and innovation tax credits to allow them to better take advantage of the role of technological innovation in the green development of international trade. Finally, the government should improve the market mechanism and supervision mechanism and fully exploit the guiding role of market-inspired environmental policies and the supervisory role of voluntary public environmental regulations. The government should establish a pollution charge system with the environmental protection tax system as the core,
further improve the national emission rights trading market, and accelerate the interaction and coordination between the environmental protection tax and the emission rights trading system. Additionally, the enterprise environmental information disclosure mechanism and people's awareness of environmental pollution supervision should be improved, and the role of voluntary public environmental regulations should be gradually expanded.
This research also has some limitations that warrant further attention in future work. The internal mechanisms of the effect of environmental regulations on the cleanliness of export firms are complex and diverse. This research uses an interaction effect model to analyze the mechanisms from the perspectives of the effects of compliance costs and technological innovation. There may be mechanism recognition bias, and other potential mechanisms may be ignored to some extent. Therefore, how to adopt more accurate mechanism identification methods and test more potential mechanisms will become the direction of further research. Furthermore, additional work is necessary due to the sample data limitations in this research. Because of a lack of crucial indicators and the poor quality of the follow-up period, this research included only data from 2000 to 2010. As a result, the conclusions of this research are constrained by the sample period. Future work can update the sample period in China through on-the-ground investigations and questionnaire surveys, broadening this research and allowing broader conclusions to be drawn.
Table 2. Impact of environmental regulations on the extensive margin of clean product exports.
Table 3. Impact of environmental regulations on the intensive margin of clean product exports.
Table 4. Robustness test I: Replacement of the explanatory variable and sample adjustments.
Table 5. Robustness test II: Exclusion of policy interference.
Table 6. Heterogeneity analysis I: Introducing capital characteristics.
Table 7. Heterogeneity analysis II: Introducing ownership characteristics.
Table 8. Internal mechanism analysis.
Table 9. Impact of heterogeneous environmental regulations on the cleanliness of enterprise exports.
Note: *, **, and *** denote significance at the 10 %, 5 %, and 1 % levels, respectively. Standard errors are shown in parentheses.
7,301
2023-10-19T00:00:00.000
[ "Environmental Science", "Economics" ]
ON THE RELEVANCE OF QUERY EXPANSION USING PARALLEL CORPORA AND WORD EMBEDDINGS TO BOOST TEXT DOCUMENT RETRIEVAL PRECISION
In this paper we implement a document retrieval system using the Lucene tool and we conduct some experiments in order to compare the efficiency of two different weighting schemes: the well-known TF-IDF and BM25. Then, we expand queries using a comparable corpus (Wikipedia) and word embeddings. The obtained results show that the latter method (word embeddings) is a good way to achieve higher precision rates and retrieve more accurate documents.
INTRODUCTION
Document Retrieval (DR) is the process by which a collection of data is represented, stored, and searched for the purpose of knowledge discovery as a response to a user request (query) [1]. Note that with the advent of technology, it became possible to store huge amounts of data. The challenge has therefore always been to design useful document retrieval systems that can be used on an everyday basis by a wide variety of users. Thus, DR, as a subfield of computer science, has become an important research area. It is generally concerned with the design of indexing methods and searching techniques. Implementing a DR system involves a two-stage process. First, data is represented in a summarized format; this is known as the indexing process. Once all the data is indexed, users can query the system in order to retrieve relevant information. The first stage takes place off-line; the end user is not directly involved in it. The second stage includes filtering, searching, matching and ranking operations.
Query expansion (QE) [2] has been a research field since the early 1960s. [3] used QE as a technique for literature indexing and searching. [4] incorporated user feedback to expand the query in order to improve the result of the retrieval process. [5,6] proposed a collection-based term co-occurrence query expansion technique, while [7,8] proposed a cluster-based one. Most of those techniques were tested on small corpora with short queries, and satisfactory results were obtained. Search engines were introduced in the 1990s, and the previously proposed techniques were then tested on larger corpora; a loss in precision was observed [9,10]. Therefore, QE is still a hot research topic, especially in the context of big data.
To measure the accuracy of a DR system, there are generally two basic measures [11]: 1) precision, the percentage of retrieved documents that are relevant, and 2) recall, the percentage of documents that are relevant to the query and were in fact retrieved. There is also a standard tool known as the TRECEVAL tool. It is commonly used by the TREC community for evaluating an ad hoc retrieval run, given the results file and a standard set of judged results.
In this paper we implement a document retrieval system using the Lucene toolkit [12]. Then we investigate the relevance of query expansion using parallel corpora and word embeddings to boost document retrieval precision. The next section describes the proposed system and gives details about the expansion process. The third one describes and analyzes the obtained results. The last section concludes this paper and describes the future work.
METHODOLOGY
In this section, we present the structure of our Lucene system. First, we describe its core functions. In addition, we describe some pre-processing operations as well as the evaluation process.
System Overview
Lucene is a powerful and scalable open-source Java-based search library.
METHODOLOGY In this section, we present the structure of our Lucene-based system. First, we describe its core functions; in addition, we describe some pre-processing operations as well as the evaluation process. System Overview Lucene is a powerful and scalable open-source Java-based search library. It can be easily integrated in any kind of application to add advanced search capabilities to it. It is generally used to index and search any kind of data, whether structured or not, and it provides the core operations for indexing and document searching. Generally, a search engine performs all or a few of the following operations. Implementing it requires performing the following actions: • Acquire raw content: this is the first step. It consists in collecting the target contents that will later be queried in order to retrieve accurate documents. • Building and analyzing the document: it consists simply in converting raw data to a given format that can be easily understood and interpreted. • Indexing the document: the goal here is to index documents, so that the retrieval process is based on certain keys instead of the entire content of the document. The above operations are performed in an off-line mode. Once all the documents are indexed, users can conduct queries and retrieve documents using the system described above. In this case, a query object is instantiated using a bag of words present in the searched text. Then, the index database is checked to get the relevant details, and the returned references are shown to the user. Note that different weighting schemes can be used in order to index documents. The most used ones are TF-IDF (the reference of the vector space model) and BM25 (the reference of the probabilistic model). Typically, the TF-IDF [13,14] weight is composed of two terms: the first one measures how frequently a term occurs in a document. It computes the normalized term frequency (TF), which is the ratio of the number of times a word appears in a document to the total number of words in that document. The second term, known as the inverse document frequency (IDF), measures how important a term is; it is computed as the logarithm of the ratio of the total number of documents to the number of documents in which the specific term appears. BM25 [15] ranks a set of documents based on the query terms appearing in each document, regardless of the inter-relationship between the query terms within a document (e.g., their relative proximity). It is generally defined as follows: given a query Q containing keywords q1, ..., qn, the BM25 score of a document D is score(D, Q) = sum_{i=1..n} IDF(qi) * f(qi, D) * (k1 + 1) / (f(qi, D) + k1 * (1 - b + b * |D| / avgdl)), where f(qi, D) is the frequency of qi in D, |D| is the length of D in words, avgdl is the average document length in the collection, k1 and b are free parameters, and IDF(qi) = log((N - n(qi) + 0.5) / (n(qi) + 0.5)), where N is the total number of documents in the collection, and n(qi) is the number of documents containing qi.
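To make the two weighting schemes tangible, the following is a minimal, self-contained Python sketch of BM25 scoring over a toy collection (our own illustration, not the Lucene implementation; the parameter values k1 = 1.2 and b = 0.75 are common defaults, and the example documents are hypothetical):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, docs, k1=1.2, b=0.75):
    """Score one tokenized document against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc_terms)
    score = 0.0
    for q in query_terms:
        n_q = sum(1 for d in docs if q in d)               # document frequency of q
        if n_q == 0:
            continue
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)  # +1 keeps IDF non-negative
        f = tf[q]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

# Toy collection (hypothetical)
docs = [
    "the economy grew this quarter".split(),
    "new vaccine trial results released".split(),
    "stock markets react to economy news".split(),
]
query = "economy news".split()
ranked = sorted(docs, key=lambda d: bm25_score(query, d, docs), reverse=True)
print(ranked[0])  # the document containing both 'economy' and 'news' ranks first
```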
Query Expansion Using a Comparable Corpus and Word Embeddings In order to improve system accuracy, we propose two query expansion techniques. The first one uses Wikipedia as a comparable corpus; the second one uses word embeddings. The main purpose is to make the query more informative while preserving its integrity. Query expansion using a comparable corpus First, we use Wikipedia as a comparable corpus to expand short queries. For this purpose, we tested two slightly different approaches. • Query expansion by summary: we extract keywords from the query using the RAKE algorithm [16], a domain-independent method for automatically extracting keywords. We rank the keywords by their order of importance and take the most important one. Then, we use it to query Wikipedia, summarize the first returned page into a single sentence, and concatenate this summary to the original query. • Query expansion by content: we extract keywords from the query using the RAKE algorithm and rank them by their order of importance. Then, we take the most important one, use it to query Wikipedia, and concatenate the titles of the top returned pages to the original query. Query expansion using word embeddings Word embeddings are also used to expand the queries. We assume that the concept expressed by a given word can be strengthened by adding to the query the bag of words that usually co-occur with it. For this purpose, we use the Gensim implementation of word2vec with four different pre-trained models: glove-twitter-25, glove-twitter-200, fasttext-wiki-news-subwords-300 and glove-wiki-gigaword-300 [17]. The Data Set For the experiments, we used a subset of the TREC dataset. It is a news corpus consisting of a collection of 248,500 journal articles covering many domains such as economics, politics, science and technology, etc. First, we pre-process our corpus by removing stop words and applying stemming or lemmatization. Stemming is the process of reducing a word to its root by removing common endings; the most widely used stemming algorithms are Porter, Lancaster and Snowball, and the latter has been used in this project. In lemmatization, context and part of speech are used to determine the inflected form of the word, and different rules are applied for each part of speech to get the root word (lemma). The results obtained using the different preprocessing strategies are reported in the next section. The most frequent and important basic measures of document retrieval effectiveness are precision and recall [18]. Precision is the probability that a retrieved item is relevant, and recall is the probability that a relevant item is retrieved. In this work, we use the TRECEVAL program [19] to evaluate the proposed system; it implements the above-mentioned NIST evaluation procedures. Results and Discussion The obtained results are reported in Table 1, Table 2, Table 3 and Table 4. Table 1 shows the system accuracy when using non-processed vs. pre-processed data; it confirms that pre-processing helps to achieve better precision rates. Table 2 shows the results obtained when using the different weighting schemes: better results are obtained with the BM25 weighting scheme. Notice that we performed the same pre-processing before conducting the experiments with the different weighting schemes. Table 3 displays the results obtained when applying query expansion using a comparable corpus. We conducted two experiments, T (queries expanded with page titles) and S (queries expanded with a page summary), and compared their results to those obtained with our original set of short queries. Notice that we performed the same pre-processing and used the same weighting scheme. The results show that expanding queries with the titles of the top returned Wikipedia pages gives approximately the same precision rates as the original procedure, whereas expanding queries with the summary of the top Wikipedia page degrades the system accuracy. Finally, Table 4 shows the results obtained when expanding queries using word embeddings. We used the Gensim implementation of word2vec and tested four different models: glove-twitter-25 (WE1), glove-twitter-200 (WE2), fasttext-wiki-news-subwords-300 (WE3) and glove-wiki-gigaword-300 (WE4). The results show that the system accuracy can be enhanced when taking into consideration the top 5 returned results. To achieve this goal, the appropriate model should be used: WE3, which is trained on a news corpus, or WE4, which is trained on a very large corpus.
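As a concrete illustration of the word-embedding expansion evaluated above, here is a minimal sketch (our own code, not the authors'; it assumes the gensim package with its downloader module and network access to fetch the pre-trained vectors):

```python
import gensim.downloader as api

# Load one of the pre-trained models mentioned in the paper
# (glove-twitter-25 is the smallest and quickest to download).
vectors = api.load("glove-twitter-25")

def expand_query(query, topn=5):
    """Append to the query the words that most often co-occur with each query term."""
    expansion = []
    for term in query.lower().split():
        if term in vectors:
            expansion.extend(word for word, _ in vectors.most_similar(term, topn=topn))
    return query + " " + " ".join(expansion)

print(expand_query("economy news"))
```

The expanded string is then submitted to the retrieval system in place of the original short query.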
CONCLUSIONS AND FUTURE WORK In this work, a DR system based on the Lucene toolkit is presented. Different weighting schemes are tested, and the results show that the probabilistic model (BM25) outperforms the vector space one (TF-IDF). Also, the experiments show that query expansion using word embeddings improves the overall system precision, whereas using a comparable corpus does not necessarily lead to the same result. This work can be improved by: • Testing an interactive query expansion technique: the experimental results show that query expansion using a comparable corpus does not lead to higher precision rates. In fact, the precision rate depends on the efficiency of the RAKE keyword extraction algorithm. The main idea is to let users validate the automatically extracted keywords that are later used during the query expansion process. • Testing a hybrid query expansion technique: word embeddings can be applied to the result of the interactive query expansion phase. This may boost the system performance, since interactive query expansion will guarantee the use of significant query words, while word embeddings will help retrieve relevant documents that do not necessarily contain the words used in the query. Currently, we are adding a new functionality to our system: we are implementing a multi-document text summarization technique that generates a comprehensive summary of the retrieved set of documents.
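As a rough sketch of the kind of multi-document summarization functionality mentioned above (our own illustration based on simple word-frequency scoring, not the authors' forthcoming implementation; the example documents are hypothetical):

```python
import re
from collections import Counter

def summarize(documents, num_sentences=3):
    """Pick the highest-scoring sentences across a set of retrieved documents,
    scoring each sentence by the frequency of its words in the whole set."""
    text = " ".join(documents)
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", text)
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())) / (len(s.split()) or 1),
        reverse=True,
    )
    return " ".join(scored[:num_sentences])

docs = ["Hypothetical retrieved document one. It discusses the economy.",
        "Hypothetical document two also discusses the economy and markets."]
print(summarize(docs, num_sentences=2))
```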
2,651.6
2020-02-29T00:00:00.000
[ "Computer Science" ]
Translation Inhibition of the Developmental Cycle Protein HctA by the Small RNA IhtA Is Conserved across Chlamydia The developmental cycle of the obligate intracellular pathogen Chlamydia trachomatis serovar L2 is controlled in part by the small non-coding RNA (sRNA), IhtA. All Chlamydia alternate in a regulated fashion between the infectious elementary body (EB) and the replicative reticulate body (RB) which asynchronously re-differentiates back to the terminal EB form at the end of the cycle. The histone like protein HctA is central to RB:EB differentiation late in the cycle as it binds to and occludes the genome, thereby repressing transcription and translation. The sRNA IhtA is a critical component of this regulatory loop as it represses translation of hctA until late in infection at which point IhtA transcription decreases, allowing HctA expression to occur and RB to EB differentiation to proceed. It has been reported that IhtA is expressed during infection by the human pathogens C. trachomatis serovars L2, D and L2b and C. pneumoniae. We show in this work that IhtA is also expressed by the animal pathogens C. caviae and C. muridarum. Expression of HctA in E. coli is lethal and co-expression of IhtA relieves this phenotype. To determine if regulation of HctA by IhtA is a conserved mechanism across pathogenic chlamydial species, we cloned hctA and ihtA from C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae and assayed for rescue of growth repression in E. coli co-expression studies. In each case, co-expression of ihtA with the cognate hctA resulted in relief of growth repression. In addition, expression of each chlamydial species IhtA rescued the lethal phenotype of C. trachomatis serovar L2 HctA expression. As biolayer interferometry studies indicate that IhtA interacts directly with hctA message for all species tested, we predict that conserved sequences of IhtA are necessary for function and/or binding. Introduction Chlamydiaceae are gram negative, obligate intracellular bacterial pathogens, with different species causing a wide range of diseases in both humans and animals. Chlamydia trachomatis biovars are major pathogens in humans and infect the urogenital tract and the eye in a serovar dependent manner. The urogenital serovars of C. trachomatis are the leading cause of bacterial sexually transmitted disease (STD) worldwide, the complications of which can lead to serious sequelae including pelvic inflammatory disease, ectopic pregnancies and infertility [1,2]. The ocular serovars of C. trachomatis cause trachoma, a chronic follicular conjunctivitis that results in extensive scarring and blindness and are considered the leading cause of infectious preventable blindness in developing countries [3]. C. pneumoniae is the causative agent of human respiratory illnesses and is responsible for approximately 10% of community acquired pneumonia and 5% of bronchitis and sinusitis cases [4]. Chlamydia which cause pathology in animals include C. abortus (abortion and fetal loss in ruminants), C. felis (conjunctivitis and respiratory problems endemic in cats), C. caviae (conjunctivitis in guinea pigs) and C. psittaci (affects conjunctiva, respiratory system and gastrointestinal tract of birds) and can lead to zoonotic infections in humans. C. muridarum infects members of the family Muridae and is often used as a genital infectious model of C. trachomatis genital disease [5]. 
The chlamydial developmental cycle occurs entirely within a membrane bound parasitophorous vesicle termed an inclusion. Once internalized, Chlamydia undergo dramatic physiological and morphological changes alternating between two distinct forms, the elementary body (EB) and the reticulate body (RB). The metabolically inert EB is the infectious unit, able to survive extracellularly and disseminate to invade susceptible host cells. Upon infection of a host cell the EB differentiates to the noninfectious, metabolically active RB which divides repeatedly by binary fission. Late in the infection, a subset of RBs differentiate into the terminal but infectious EB form while the remaining RBs continue to replicate, resulting in asynchrony of the chlamydial developmental cycle [6]. The terminally differentiated EBs infect neighboring cells upon EB release due to cell lysis or inclusion extrusion [7]. It is not yet clear as to how differentiation between the EB and RB cell forms is regulated. Two proteins central to differentiation are HctA and HctB, both lysine rich, highly basic proteins with primary sequence homology to the eukaryote histone H1 [8][9][10][11]. Both proteins are expressed late in development, concomitant with RB to EB conversion and repress transcription and translation by binding to and occluding the genome [8,9,[11][12][13][14]. Upon infection, the characteristic core of condensed chromatin of the EB is dispersed as differentiation into the pleomorphic RB occurs. Although nucleoid dispersion and gene transcription occurs within the first few hours of infection, HctA and HctB levels remain fairly constant indicating that these two proteins are no longer functioning to condense the genome in early chlamydial developmental forms [14][15][16]. A metabolite produced by the nonmevalonate methylerythritol phosphate (MEP) pathway of isoprenoid synthesis, thought to be 2-C-methyl-D-erythritol 2,4cyclodiphosphate (MEC), was found to disrupt the chromosomal interactions of both HctA and HctB. It is hypothesized that MEC is a general modulator of EB germination [14,16]. Expression of HctA is very tightly regulated and is repressed by the small non-coding RNA (sRNA), IhtA until RB to EB redifferentiation [17]. Bacterial sRNAs regulate the translation or stability of a target messenger RNA during specific developmental stages or stress conditions (reviewed in [18][19][20][21]. IhtA is transcribed early in infection and represses the translation of hctA mRNA without affecting the stability of the mRNA. Late in infection, IhtA transcription decreases allowing HctA synthesis to occur and RB to EB differentiation to proceed. In this study we demonstrate that the regulation of HctA by the sRNA IhtA is conserved in the important chlamydial pathogens of both humans and animals. Results The ihtA Gene Loci is Present and Expressed in Diverse Chlamydial Species The gene encoding ihtA is located on the positive strand in the IGS between the type III secretion system outer membrane ring protein, sctC and tRNA-Thr of C. trachomatis serovar L2 434. The promoter for ihtA is actually embedded in the 39 end of the sctC open reading frame (ORF) [17]. The gene aspC is encoded just downstream of tRNA-Thr on the negative strand. This same genomic organization holds true for all sequenced Chlamydiaceae including the C. trachomatis serovars D (genital specific) and A (ocular specific), as well as the human respiratory pathogen C. pneumoniae, the Muridae species C. muridarum and the guinea pig specific C. 
caviae (Fig. 1A). It has recently been shown that the IhtA transcript is expressed by C. pneumoniae, C. trachomatis serovar D and C. trachomatis serovar L2b/UCH-1/proctititis during the course of infection [22][23][24][25]. In order to determine if sRNA control of HctA expression is conserved across pathogenic Chlamydia we first sought to confirm the expression of IhtA in C. muridarum and C. caviae. We performed Northern analysis of sRNA samples isolated from host cells infected with C. muridarum and C. caviae using RNA isolated from C. trachomatis serovar D and serovar L2 infection as controls. Expression of IhtA could be detected in all cases (Fig. 1B). We had previously identified the transcription start site (TSS) to be a residue located 8 nt downstream of the beginning of the IGS by primer extension analysis [17]. Albrecht et al, however identified the TSS to be the A residues 6 nt downstream of the previously identified TSS using a deep sequencing approach [24]. This TSS was confirmed in Serovar D by AbdelRahman et al by 59 RACE [22], by our lab in serovar L2 (data not shown) and in C. pneumoniae [25]. Using this consensus TSS and the 39 end identified by AbdelRahman et al in serovar D and our lab in serovar L2 (unpublished data), we predicted the sequence of the IhtA transcript in C. muridarum (105 nt), C. caviae (103 nt) and C. pneumoniae (105 nt). When aligned, the different species ihtA displayed a high level of identity to the ihtA of C. trachomatis serovar L2, with serovar D and C. muridarum at the highest (100% and 96% respectively) and C. caviae and C. pneumoniae at the lowest (70% and 69% respectively) (Fig. 2). Using the RNAfold web server of the Vienna RNA Websuit [26], we predicted the secondary structure of all five species IhtAs to determine if secondary structure was similar. RNAfold predicts both the minimum free energy (MFE) [27] and centroid [28] secondary structures of RNA molecules. The more similar the MFE and centroid structures, the more reliable the prediction [26]. The predicted structures of C. trachomatis serovars L2 and D, C. muridarum and C. caviae IhtA were quite similar, with each structure containing three stem:loops (Fig. 3). As the MFE and centroid structures of each species were identical, only the MFE structure is shown in Fig. 3. The open loops, which in general are the structures free for initial interaction with the sRNA target, were highly conserved among these four species. Interestingly, the loop of stem:loop 1 contains a 6 nt region which is complimentary to to the first 6 nt of the hctA ORF (denoted with an asterisks in Fig. 3). In contrast, the MFE and centroid structural predictions of C. pneumoniae IhtA displayed little similarity indicating a lack of reliability in the predictions ( IhtA Binds Directly to the hctA RNA Message In general sRNA regulatory molecules modulate gene expression via direct base pairing with their target mRNA and more rarely, by altering the activity of a protein which in turn impacts gene regulation [29]. As it is unlikely that E. coli produces a protein specific for IhtA regulation we hypothesize that IhtA represses hctA translation by interacting directly with the hctA message. To determine if the IhtAs of all five species could interact with their cognate hctA mRNA, we measured IhtA to hctA binding in real time using biolayer interferometry (BLI). Briefly, hctA run off transcripts of each species were annealed to a biotinylated DNA oligo and bound to a streptavidin coated optical sensor tip. 
The sensor was then dipped into a solution containing species specific IhtA run off transcripts and RNA:RNA binding was determined in real time by measuring the change in reflected light through the sensor tip. Antisense serovar L2 IhtA was used as a scrambled nonbinding control in each case. The data was normalized to percent maximum change in reflected light over time and compared to scrambled transcript (Fig. 4). These measurements indicate that IhtA of each species is capable of interaction with its cognate hctA target mRNA in vitro. IhtA Functions to Repress HctA Expression in Diverse Chlamydial Species Translation repression of serovar L2 hctA by the sRNA IhtA can be monitored by assaying for relief of both nucleoid condensation and the repressive growth phenotype induced by HctA overexpression in E. coli [9,17]. Therefor, to determine if sRNA regulation of hctA translation is conserved across Chlamydiaceae we first PCR amplified ihtA from C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae genomic DNA and cloned the resulting fragment into pLac using the primers indicated in Table S1. We have shown previously that IhtA is constitutively expressed in E. coli when the promoter region is included, therefor all ihtA clones included 59UTR [16,17]. Northern analysis of sRNA isolated from overnight cultures indicate that the IhtA transcript of each species tested was constitutively expressed (Fig. 5A). The coding sequence of IhtA's target, hctA was PCR amplified from C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae genomic DNA and subcloned into the pTet vector under the control of the tet promoter. Ectopic expression of each species of HctA resulted in a dramatic condensation of the E. coli nucleoid as monitored by DRAQ5 staining (Fig. 5B). Co-expression of the species hctA with the cognate IhtA relieved this phenotype indicating that each species IhtA could suppress the translation of its cognate hctA. In addition, over-expression of each species HctA resulted in repression of growth (Fig. 5C, light grey bars). In each case this growth repression was relieved to a significant level by co-expression with the cognate IhtA with p values ,0.001 in the case of C. trachomatis serovars L2 and D, C. muridarum and C. caviae and a p value = 0.003 for C. pneumoniae (Fig. 5C, dark grey bars). Although the levels of rescue of C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae do not approach that of C. trachomatis serovar L2, these data taken together suggest a conservation of IhtA function. Conserved Regions of ihtA are Important to Function As indicated in Figures 2, ihtA sequence is quite similar across Chlamydia. We therefor sought to determine if IhtA from C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae could functionally substitute for that of serovar L2 IhtA. To this end, E. coli were co-transformed with C. trachomatis serovar L2 hctA and species specific ihtA and monitored for growth (Fig. 6A). Expression of IhtA isolated from C. trachomatis serovars D and C. muridarum rescued the serovar L2 HctA over-expression growth defect to levels similar to serovar L2 IhtA controls. Co-expression of C. caviae and C. pneumoniae IhtA with C. trachomatis serovar L2 hctA resulted in an intermediate rescue (average of 60% rescue over three separate experiments). 
Although the IhtA sRNAs from the more distantly related Chlamydia did not rescue growth repression to the same levels as that of L2 IhtA, the L2 HctA growth phenotype was significantly relieved in all cases. The converse experiment in which serovar L2 IhtA was coexpressed with C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae hctA also resulted in relief of HctA induced growth repression (Fig. 6B). Although the growth phenotype was significantly rescued in each case (p values #0.001), the rescue was more variable as was the case for IhtA co-expression with the cognate hctA. Nevertheless, that IhtA is relatively interchangeable indicates that the overall function of IhtA in hctA translation repression is conserved. Additionally, these data suggest that the sequences and/or structural features conserved between species may be important for functional activity. Discussion A defining characteristic of the Chlamydiaceae family is the biphasic developmental cycle. All bacteria in this family share this basic life cycle consisting of specialized cell types for cell binding and entry (EB) and intracellular replication (RB). Differentiation between the two cell types is in part controlled by the expression of the histone-like proteins HctA and HctB. We previously reported the identification of a small non-coding RNA, termed IhtA which acts as a regulatory molecule controlling the expression of the HctA protein at the RB to EB transition point in C. trachomatis serovar L2 [17]. Here we show that ihtA is conserved across all vertebrate pathogenic Chlamydia. IhtA is contained in the intergenic region of the chromosome between sctC and the thr-tRNA in each of these organisms. Although regulation of cell type differentiation is poorly understood, it is appreciated that the expression of HctA is a critical component of the cascade leading to EB differentiation. As the correct timing of HctA expression is critical to the infectious cycle, it could be predicted that exquisite control of hctA translation by IhtA would be a conserved mechanism. Indeed, micro-array analysis and RNA sequencing of RNA isolated from a selection of human chlamydial pathogens demonstrate that IhtA is expressed upon infection of a host cell [22][23][24][25] and Fig. 1). In addition, it has been shown that the expression pattern of IhtA in C. trachomatis serovar D and C. pneumoniae is similar to that of serovar L2, both over a time course of infection and during the RB to EB differentiation process [22,23]. Ectopic expression of hctA cloned from C. trachomatis serovar D, C. muridarum, C. caviae, and C. pneumoniae in E. coli resulted in the characteristic condensed nucleoid and growth repression observed in E. coli expressing serovar L2 HctA. Co-expression of IhtA cloned from these different species relieved both phenotypes, presumably via repression of HctA translation. It is curious that although relief of the growth phenotype of species HctA overexpression by the co-expression of the cognate IhtA was significant (Fig. 4), rescue was not to the same extent as that of the serovar L2 IhtA:hctA partnership. This variable extent of rescue of the species HctA was again evident with serovar L2 interspecies rescue. As E. coli is used as a surrogate system it is difficult to interpret these nuances but several possibilities exist. The HctA protein has a bimodal sequence conservation. The majority of the conserved amino acid sequence in HctA is located in the Nterminal domain of the protein. 
The first 10 amino acids of HctA of all five species tested are 100% conserved. Outside of this region (the remaining +/2116 aa depending on species) identity between C. trachomatis serovar L2 and C. trachomatis serovar D, C. muridarum, C. caviae and C. pneumoniae falls to 98%, 98%, 84% and 82% respectively. Interestingly, the more divergent C-terminal region contains the DNA binding domain of HctA [30]. This suggests there may be significant differences in the way these proteins interact with E. coli DNA, potentially contributing to the different levels of rescue observed. Although not directly tested it is possible that HctA did not express at consistent levels across species and/or experiments. This seems less likely as all constructs are expressed from the same promoter although we did not account for differential codon usage when expressed in E. coli. Variability in expression across species is certainly true for IhtA (which is expressed from its native promoter in our system) as evidenced by Northern analysis. We show that each species IhtA functionally substituted for serovar L2 IhtA and effectively repressed serovar L2 HctA expression in E. coli. The more distantly related C. caviae and C. pneumoniae did not rescue to wild type levels. As the molar ratio of IhtA:hctA required for full repression of translation, either in the E. coli surrogate system or in vivo where there may be competing targets, is unknown. Therefor as C. caviae and C. pneumoniae IhtA appear not to express as well as C. trachomatis serovars L2 and D and C. muridarum in E. coli, it is difficult to predict if partial rescue of serovar L2 HctA overexpression is due to functionality or dosage. Nevertheless, as IhtA is expressed and regulated in vivo in all chlamydial species tested [14,17,[22][23][24][25] and IhtA relieved growth repression when coexpressed with hctA in E.coli (this manuscript), a conservation of function is suggested. IhtA is a trans-encoded sRNA, present at a genetic location distinct from its target [17]. Trans-encoded sRNAs bind their target mRNAs via short interrupted base pairings which may contain internal bulges and include non-canonical base pairing, thus interacting sequences are often difficult to predict [31][32][33] (reviewed in [29]). The evidence to date suggests that IhtA functions by binding directly to hctA and not through a protein intermediate, however this proposal has not been directly tested. Using biolayer interferometry we show in real time that IhtA from all species tested are capable of interacting directly to its target mRNA, hctA. As interaction between two RNA molecules is mediated in most cases by Watson-Crick base pairing it is likely that the interaction between IhtA and the hctA mRNA occurs through base pairing of conserved residues in both molecules. That IhtA from each species is able to repress serovar L2 hctA translation supports this prediction. The first 31 nucleotides of the hctA ORF of all five species tested are 100% conserved. Over the remaining +/ 2347 nt identity between C. trachomatis serovar L2 and serovar D, C. muridarum, C. caviae and C. pneumoniae falls to 99%, 85%, 74% and 77% respectively. IhtA of each of the five species is highly conserved on a sequence level and the predicted structures of IhtA of four of the five species tested are similar. As noted in the results section, the structure of C. pneumoniae was difficult to predict and the structures predicted are of low confidence. However, in each case, including C. 
pneumoniae, a 6 nt stretch of IhtA which is complimentary to the first 6 nt of the hctA ORF beginning with and including the ATG start site, resides in what is predicted to be an unpaired open region of IhtA. As these structures are not experimentally determined it is perhaps premature to extrapolate to biological function, however it is appealing to predict that this region is important for direct RNA:RNA interactions leading to inhibition of hctA message translation by occluding the start site. As IhtA and HctA expression is similarly regulated across species during infection, species IhtA directly binds to the cognate hctA in vitro and IhtA co-expressed with hctA in various combinations rescues growth repression to a significant degree, we suggest that the IhtA/hctA interaction is an important conserved regulatory circuit and part of the RB to EB differentiation program shared by chlamydial pathogens. RNA Isolation and Northerns sRNA was isolated from both bacterial cultures and infected HeLa monolayers using the mirVana miRNA Isolation kit as described by the manufacturer (Ambion, Inc.). E. coli expressing IhtA were pelleted and washed twice in ice cold PBS prior to sRNA isolation. sRNA was purified from HeLa 229 cultures infected with C. trachomatis serovar L2 LGV 434, C. muridarum or C. caviae at 24 h PI and C. trachomatis serovar D at 48 h PI. Northern analysis was performed on sRNAs separated on a 10% TBE-urea acrylamide gel and transferred to BrightStar-Plus Nylon membrane (Ambion, Inc.). Membranes were hybridized overnight with the appropriate biotinylated probe at 42uC in ULTRAhyb (Ambion, Inc.). Nonisotopic IhtA probes were generated by PCR amplification of genomic DNA isolated from purified C. trachomatis serovars L2 and D, C. pneumoniae, C. muridarum and C. caviae with species specific primers (Table S1) and biotinylated using a BrightStar Psoralen-Biotin Nonisotopic Labeling Kit Ambion, Inc.). Probed membranes were washed and IhtA detected with the BrightStar BioDetect Nonisotopic detection kit (Ambion, Inc.). Clones The plasmids pTet, pLac, serovar L2 hctApTet and ihtApLac have been described elsewhere [16,17]. To clone the different species hctA, PCR fragments from C. trachomatis serovar D, C. pneumoniae, C. muridarum and C. caviae genomic DNA were generated using the primers indicated in Table S1. hctA fragments from all species except C. pneumoniae were cloned into the Kpn1/ Pst1 sites of pTet. C. pneumoniae hctA was cloned into the Kpn1/Not1 sites. To generate ihtA clones from each species, ihtA and 59UTR was PCR amplified using the primers indicated in Table S1 and ligated into the Kpn1/Pst1 sites of pLac. E. coli Rescue Conditions E. coli rescue assays were performed as previously described with a few modifications [16,17]. DH5aPRO E. coli (Clontech) cultures co-expressing the appropriate hctA and ihtA constructs were grown in triplicate overnight at 37uC in Luria-Bertani (LB) containing 100 mg/ml carbenicillin (cb), 34 mg/ml chloramphenicol (cm) and 50 mg/ml spectinomycin (spec). Cultures were then diluted 1:2000, split into two tubes and one half induced to express HctA with 100 ng/ml anhydrotetracycline (aTc) and incubated with shaking at 30uC for 16 h. There is no need to induce IhtA as expression is constitutive. Growth was determined spectrophotometrically at OD 550 and the ability of a particular construct to rescue the lethal phenotype of HctA was expressed as a percentage of the ratio between the induced and uninduced samples. Staining E. 
coli from rescue experiments were pelleted and washed twice in 1 ml PBS. Pellets were resuspended in 4% paraformaldehyde and incubated at RT for 20 min. Samples were washed twice in PBS prior to incubation with 1:500 dilution of DRAQ5 (Biostatus) for 30 min. Samples were again pelleted, washed in PBS, resuspended in Mowiol 4-88 (Calbiochem) and mounted on glass slides. Images were acquired using a spinning disk confocal system connected to a Leica DMIRB microscope, equipped with a Photometrics cascade-cooled EMCCD camera, under the control of the Open Source software package mManager (http://www. micro-manager.org/). Images were processed using the image analysis software ImageJ (http://rsb.info.nih.gov/ij/). Biolayer Interferometry Sense IhtA and hctA transcripts were synthesized from the T7 promoter of PCR amplified fragments generated from serovars L2 and D, C. pneumoniae, C. muridarum and C. caviae genomic DNA using the primers described in Table S1. Antisense IhtA (scrambled control) was synthesized from from the T7 promoter of a PCR amplified fragment generated from serovar L2. The hctA transcripts were designed to include 59 UTR starting at the transcription start site (TSS) [37] and an addition 21 nt ''A'' tail used to bind the transcript to the streptavidin biosensor tips. Run off transcripts were prepared using the MEGAshortscript T7 kit as described by the manufacturer (Ambion Inc.). Biolayer interferometry studies of RNA:RNA interactions were performed using the Octet QKe (ForteBio, Menlo Park, CA). To anneal the ligand (hctA message) to the streptavidin biosensor tips, 150 nM hctA transcript, 150 nM 59 biotinylated oligo T (complimentary to the 39 ''A'' tail), 1xRNA Binding Buffer (RBB, 10 mM Tris-HCl pH 8, 125 nM NaCl, 125 mM KCl, 25 mM MgCl2) were combined, heated for 1 min at 90uC and allowed to cool slowly. During this time, SA biosensor tips were equilibrated in RBB buffer for 15 min. RNA annealed to biotinylated oligo was loaded onto the SA tips for 15 min or until saturation. RNA loaded tips were then soaked in RBB buffer for 5 min prior to incubation with 1500 nM IhtA which had been heated at 90uC for 1 min and allowed to cool to RT. The change in internally reflected light attributable to RNA:RNA interactions was collected in real time for 20 minutes using the software provided with the Octet QKe. Supporting Information Table S1 Primers for cloning and in vitro transcription. (PDF)
5,940.2
2012-10-11T00:00:00.000
[ "Biology" ]
Doubly Nonnegative and Semidefinite Relaxations for the Densest k-Subgraph Problem The densest k-subgraph (DkS) maximization problem is to find a set of k vertices with maximum total weight of the edges in the subgraph induced by this set. This problem is in general NP-hard. In this paper, two relaxation methods for solving the DkS problem are presented. One is a doubly nonnegative relaxation, and the other is a semidefinite relaxation with a tighter bound than the standard semidefinite relaxation. The two relaxation problems are equivalent under suitable conditions. Moreover, the corresponding approximation ratio results are given for these relaxation problems. Finally, some numerical examples are tested to compare these relaxation problems, and the numerical results show that the doubly nonnegative relaxation is more promising than the semidefinite relaxation for solving some DkS problems. Introduction In this paper, the densest k-subgraph (DkS) problem [1,2] is considered. For a given graph G and a parameter k, the DkS problem consists in finding a set of k vertices whose induced subgraph has maximal average degree. This problem was first introduced by Corneil and Perl as a natural generalization of the maximum clique problem [3]. It is NP-hard on restricted graph classes such as chordal graphs [3], bipartite graphs [3] and planar graphs [4]. The DkS problem is a classical problem of combinatorial optimization and arises in several applications, such as facility location [5], community detection in social networks, and identifying protein families and molecular complexes in protein-protein interaction networks [6], etc. Since the DkS problem is in general NP-hard, there are a few approximation methods [7][8][9] for solving it. It is well known that semidefinite relaxation is a powerful and computationally efficient approximation technique for solving a host of very difficult optimization problems, for instance, the max-cut problem [10] and the Boolean quadratic programming problem [11]. It has also been at the center of some of the very exciting developments in the area of signal processing [12,13]. Optimization problems over the doubly nonnegative cone arise, for example, as a strengthening of the Lovász ϑ-number for approximating the largest clique in a graph [14]. The recent work by Burer [15] stimulated interest in optimization problems over the completely positive cone; a tractable approximation to such a problem can be defined as an optimization problem over the doubly nonnegative cone. By using the technique of doubly nonnegative relaxation, Bai and Guo proposed an effective and promising method for solving multiple objective quadratic programming problems in [16]. For more details and developments of this technique, one may refer to [17][18][19] and the references therein. It is worth pointing out that the cone of doubly nonnegative matrices is a subset of the positive semidefinite matrix cone. Thus, the doubly nonnegative relaxation is more promising than the basic semidefinite relaxation. Moreover, such relaxation problems can be efficiently solved by some popular software packages. In this paper, motivated by the ideas of doubly nonnegative relaxation and semidefinite relaxation, two relaxation methods for solving the DkS problem are presented. One is a doubly nonnegative relaxation, and the other is a semidefinite relaxation with a tighter bound. Furthermore, we prove that the two relaxation problems are equivalent under suitable conditions.
Some approximation accuracy results for these relaxation problems are also given. Finally, we report some numerical examples to compare the two relaxation problems. The numerical results show that the doubly nonnegative relaxation is more promising than the semidefinite relaxation for solving some DkS problems. The paper is organized as follows: we present the doubly nonnegative relaxation and a new semidefinite relaxation with a tighter bound for the DkS problem in Sections 2.1 and 2.2, respectively. In Section 3, we prove that the two new relaxations proposed in Section 2 are equivalent. In Section 4, some approximation accuracy results for the proposed relaxation problems are given. Some comparative numerical results are reported in Section 5 to show the efficiency of the proposed new relaxations. Finally, some concluding remarks are given in Section 6. Two Relaxations for the Densest k-Subgraph Problem First of all, the definition of the densest k-subgraph (DkS) problem is given as follows. Definition 1 (Densest k-subgraph). Let G(V, E) be a given graph, where V is the vertex set and E is the edge set. The DkS problem on G(V, E) is the problem of finding a vertex subset of V of size k with the maximum induced average degree. Given a symmetric n × n matrix A = (a_ij), the weighted graph with vertex set {1, 2, . . . , n} is associated with A in the following way: the edge [i, j] with weight a_ij is introduced in the graph. Then, A is interpreted as the weighted adjacency matrix of the graph with the vertex set V = {1, 2, . . . , n}. Based on Definition 1, the DkS problem consists of determining a subset V_1 ⊆ V of k vertices such that the total weight of the edges in the subgraph spanned by V_1 is maximized. To select subgraphs, assign a decision variable y_i ∈ {0, 1} to each node (y_i = 1 if the node is taken, and y_i = 0 if it is not). The weight of the subgraph given by y is y^T A y. Thus, the DkS problem can be phrased as the 0-1 quadratic problem (DkS): max y^T A y subject to e^T y = k, y ∈ {0, 1}^n, where e denotes the all-ones vector. It is known that the (DkS) problem is NP-hard [20], even when A is assumed to be positive semidefinite, since the feasible set of the (DkS) problem is nonconvex. To solve this problem efficiently, we present two new relaxations for the (DkS) problem in the following subsections, based on the idea of approximation methods. Doubly Nonnegative Relaxation Note that the quadratic term y^T A y in the (DkS) problem can also be expressed as A • yy^T. By introducing a new variable Y = yy^T and using lifting techniques, we can reformulate the (DkS) problem into the following completely positive programming problem (CPP_DkS): maximize A • Y over pairs (y, Y) satisfying the linear constraints of the (DkS) problem (in particular e^T y = k and diag(Y) = y) together with the conic constraint that the matrix [1, y^T; y, Y] belongs to C_{1+n}, where C_{1+n} is defined as the cone of matrices X that can be written as X = Σ_{h∈H} z_h z_h^T for some finite vectors {z_h}_{h∈H} ⊂ R_+^{1+n} \ {0}. The following theorem shows the relationship between the (DkS) problem and the (CPP_DkS) problem. Its proof is similar to that of Theorem 2.6 in [15] and is omitted here. Theorem 1. (i) The (DkS) problem and the (CPP_DkS) problem have the same optimal objective values, i.e., Opt(DkS) = Opt(CPP_DkS); (ii) if (y*, Y*) is an optimal solution of the (CPP_DkS) problem, then y* is in the convex hull of optimal solutions of the (DkS) problem. On one hand, according to Definition 2, it is obvious that the (CPP_DkS) problem is equivalent to the (DkS) problem. On the other hand, in view of the definition of a convex cone, C_{1+n} is a closed convex cone, called the completely positive cone. Thus, the (CPP_DkS) problem is convex.
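To ground the 0-1 formulation above, here is a small self-contained sketch (our own toy example, not from the paper) that enumerates all k-subsets of a random weighted graph and evaluates the objective y^T A y; it is only practical for very small n, which is exactly why the relaxations discussed next are of interest:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, k = 8, 3
A = rng.integers(0, 5, size=(n, n))
A = np.triu(A, 1)          # random nonnegative edge weights
A = A + A.T                # symmetric weighted adjacency matrix, zero diagonal

best_val, best_set = -np.inf, None
for subset in combinations(range(n), k):
    y = np.zeros(n)
    y[list(subset)] = 1.0
    val = y @ A @ y        # objective of the (DkS) 0-1 quadratic problem
    if val > best_val:
        best_val, best_set = val, subset

print(best_set, best_val)  # densest 3-subgraph; the value is twice its total edge weight
```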
However, since checking whether or not a given matrix belongs to C 1+n is NP-hard, which has been shown by Dickinson and Gijen in [21], the (CPP DkS ) problem is still NP-hard. Thus, C 1+n has to be replaced or approximated by some computable cones. For example, R + n and S + n are both computable cones; furthermore, N + n is also a computable cone. It is worth mentioning that Diananda's decomposition theorem [22] can be reformulated as follows, and its proof can be found in it. Theorem 2. C n ⊆ S + n ∩ N + n holds for all n. If n ≤ 4, then C n = S + n ∩ N + n . The matrices cone S + n ∩ N + n is sometimes called "doubly nonnegative matrices cone". Of course, in dimension n ≥ 5, there are matrices which are doubly nonnegative but not completely positive, the counterexample can be seen in [23]. By using Theorem 2, the (CPP DkS ) problem can be relaxed to the problem which is called the doubly nonnegative relaxation for the (DkS) problem. Some explanations are given below for this relaxation problem. Remark 1. Obviously, the (DNNP DkS ) problem has a linear objective function and the linear constraints as well as a convex conic constraint, so it is a linear conic programming problem. Meanwhile, it is notable that S + 1+n ∩ N + 1+n ⊆ S + 1+n and the types of variables in both the sets are the same, which further implies that the (DNNP DkS ) problem could be solved by some popular package softwares for solving semidefinite programs. New Semidefinite Relaxation It is well-known that semidefinite relaxation is a powerful approximation technique for solving a host of combinatorial optimization problems. In this subsection, we present a new semidefinite relaxation with tighter bound for the (DkS) problem. The idea of the standard lifting is to introduce the symmetric matrix of rank one Y = yy T . With the help of Y, we could express the integer constraints y i ∈ {0, 1} as Y ii = y i , and the quadratic objective function y T Ay as A • Y . Thus, we can get the following equivalent formulation of the (DkS) problem Notice then that the hard constraint in the above problem is the constraint rank(Y) = 1, which is moreover difficult to handle. Thus, we can relax the above problem to the following standard semidefinite relaxation problem by dropping the rank-one constraint For the (I − SDR DkS ) problem, some remarks are given below. Remark 2. (i) Obviously, the (I − SDR DkS ) problem is also a linear conic programming problem, it has the same objective function and the equality constraints with the (DNNP DkS ) problem. The only difference between the (I − SDR DkS ) problem and the (DNNP DkS ) problem is that the (DNNP DkS ) problem has n(n+1) 2 + n nonnegative constraints more than the (I − SDR DkS ) problem. Thus, the bound of the (DNNP DkS ) problem is not larger than the one of the (I − SDR DkS ) problem. In Section 5, we implement some numerical experiments to show the comparison between the (I − SDR DkS ) problem and the (DNNP DkS ) problem from the computational point of view. Note that the (DkS) problem is inhomogeneous, but we can homogenize it as follows. First, let z = 2y − e in the (DkS) problem, it follows that z ∈ {−1, 1} n . Thus, the change of variable y → z gives the following equivalent formulation of the (DkS) problem: Then, with the introduction of the extra variable t, the (DkS) problem can be expressed as a homogeneous problem where 0 is a zero matrix with appropriate dimension. Remark 3. 
The (DkS) problem is equivalent to the (DkS) problem in the following sense: if t * z * is an optimal solution to the (DkS) problem, then z * (resp. −z * ) is an optimal solution to the (DkS) problem with t * = 1 (resp. t * = −1). By using the standard semidefinite relaxation technique, and letting let S = t z t z T , the (DkS) problem can be relaxed to the following problem: Moreover, again by using the standard semidefinite relaxation technique directly to the (DkS) problem, we have from Z = zz T , The (SDR DkS ) problem and the (SDR DkS ) problem are both standard semidefinite relaxation problems for the (DkS) problem. The upshot of the formulations of these two relaxation problems is that they can be solved very conveniently and efficiently, to some arbitrary accuracy, by some readily available software packages, such as CVX. Note that there is only one difference between these two relaxation problems, i.e., the (SDR DkS ) problem has one equality constraint more than the (SDR DkS ) problem. In Section 5, some comparative numerical results are reported to show the effectiveness of these two relaxations problems for solving some random (DkS) problems, respectively. It is worth noting that the constraint z ∈ {−1, +1} n in the (DkS) problem further implies always holds. Thus, adding Formula (1) to the (SDR DkS ) problem, we come up with the following new semidefinite relaxation problem Obviously, the relationship Opt(II − SDR DkS ) ≤ Opt(SDR DkS ) holds since the feasible set of the (II − SDR DkS ) problem is the subset of the feasible set of the (SDR DkS ) problem and the two problems have the same objective function. Up to now, three new semidefinite relaxation problems for the (DkS) problem are established, i.e., the (SDR DkS ) problem, the (SDR DkS ) problem and the (II − SDR DkS ) problem, in which the upper bound of the (II − SDR DkS ) problem is more promising than the one of the (SDR DkS ) problem. In the following sections, we will further investigate the relationship between these three problems with the (DNNP DkS ) problem. The Equivalence between the Relaxation Problems The previous section establishes the doubly nonnegative relaxation (i.e., the (DNNP DkS ) problem) and the semidefinite relaxation with tighter bound (i.e., the (II − SDR DkS ) problem) for the (DkS) problem. Note that the (DNNP DkS ) problem has n inequality constraints more than the (II − SDR DkS ) problem. In this section, we will prove the equivalence between the two relaxations. First of all, the definition of the equivalence of two optimization problems is given as follows. In order to establish the equivalence for the (DNNP DkS ) problem and the (II − SDR DkS ) problem, a crucial theorem is given below and the details of its proof can be seen in [24] (Appendix A.5.5). To the end, by using Definition 2 and Theorem 3, we have the following main equivalence theorem. Proof. First of all, we prove that Opt(DNNP DkS ) ≥ Opt((II − SDR DkS )). Suppose that (z * , Z * ) is an optimal solution of the (II − SDR DkS ) problem, and let Directly from e T z * = 2k − n and Equation (2), we have By Equation (2) and e T Z * e = (2k − n) 2 , it holds that Since diag(Z * ) = e, Equation (2) further implies that Combining with Equation (5), it is true that from Formula (6) By Theorem 3 (ii) and Equation (2), it follows that i.e., 1 y T y Y ∈ S + 1+n . 
Again from Equation (9) and e T Y * e = k 2 , it is true that From Equation (9) and Theorem 3 (ii), it holds that By Equations (11)-(14), we can conclude that (z, Z) defined by Equation (9) is a feasible solution of the (II − SDR DkS ) problem. Furthermore, we have i.e., Opt(DNNP DkS ) ≤ Opt(II − SDR DkS ). Summarizing the analysis above, we obtain Opt(DNNP DkS ) = Opt(II − SDR DkS ). From Equations (2) and (9), we observe that (y, Y) defined by Equation (2) is an optimal solution for the (DNNP DkS ) problem and (z, Z) defined by Equation (9) is also an optimal solution for the (II − SDR DkS ) problem, respectively. According to Definition 2, we conclude that the (DNNP DkS ) problem and the (II − SDR DkS ) problem are equivalent. The above Theorem 4 shows that Opt(DNNP DkS ) = Opt(II − SDR DkS ). Note that the (DNNP DkS ) problem has n inequality constraints more than the (II − SDR DkS ) problem, thus the computational cost of solving (DNNP DkS ) problem may be greater than that of the (II − SDR DkS ) problem. The Approximation Accuracy The above section shows that the (DNNP DkS ) problem is equivalent to the (II − SDR DkS ) problem which has the tighter upper bound compared to the (SDR DkS ) problem (see Theorem 4). In this section, we further investigate the approximation accuracy of the (DNNP DkS ) problem for solving the (DkS) problem, comparing with the standard semidefinite relaxation problems which was proposed in the above sections, under some conditions. To simplify the expression, we denote then the (SDR DkS ) problem is simplified to the following problem: and the (II − SDR DkS ) problem can be simplified as follows: Combining Theorem 3 in [25] with the corresponding known approximation accuracy of semidefinite relaxation for some quadratic programming problems [26], we immediately have that the following theorem holds. In the following analysis, we assume that k = n 2 . We first observe that Obviously, diag(S) = e implies that ∑ i S ii = n + 1, i.e., I • S = n + 1, but we could not obtain diag(S) = e from I • S = n + 1. These results further imply that Similar to the Theorem 4.2 in [27], we have that the following approximation accuracy theorem holds. Up to now, we not only establish the equivalence between the (DNNP DkS ) problem and the (II − SDR DkS ) problem, but also some approximation accuracy results about the (DNNP DkS ) problem and some standard semidefinite relaxation problems are given. In the following Section 5, we will implement some numerical experiments to give a flavour of the actual behaviour of the (DNNP DkS ) problem and some semidefinite relaxation problems. Numerical Experiments In this section, some random (DkS) examples are tested to show the efficiency of the proposed relaxation problems. These relaxation problems are all solved by CVX [28], which is implemented by using MATLAB R2010a on the Windows XP platform, and on a PC with 2.53 GHz CPU. The corresponding comparative numerical results are reported in the following parts. To give a flavour of the behaviour of the above relaxation problems, we consider results for the following test examples. The data of the test examples are given in Table 1. The first column of Table 1 denotes the name of the test examples, n and k stand for the number of vertices of the given graph and the finding subgraph, respectively. The last column denotes the procedures for generating the coefficient matrices A in the (DkS) problem. The more detailed explanations for the procedures are given as follows: • P25. 
50 random examples are generated from the 'seed = 1,2,...,50'. The corresponding coefficient matrices A of order n = 25 with integer weights are drawn from {0, 1, . . . , 10}. • P30. This example is generated by the MATLAB function randn from the 'seed = 2012'. The elements of A satisfy the standard normal distribution. • P40. This example is generated by MATLAB function rand from the 'seed = 2017'. The elements of A satisfy the standard uniform distribution on the interval (0, 1). • • P60. This example is generated by MATLAB function rand from the 'seed = 2020'. The elements of A are drawn from {0, 1}. First of all, the performances of the (DNNP DkS ) problem and the (II − SDR DkS ) problem as well as the (I − SDR DkS ) problem, for solving P25 and P50, are compared. We use the performance profiles described in Dolan and Moré's paper [29]. Our profiles are based on optimal values (i.e., average degree) and the number of iterations of these relaxation problems. The Cumulative Probability denotes the cumulative distribution function for the performance ratio within a factor τ ∈ R, i.e., is the probability that the solver will win over the rest of the solvers. The corresponding comparative results of performance are shown in Figures 1 and 2. The comparative results for P25 are shown in Figure 1. It is obvious that the (DNNP DkS ) problem and the (II − SDR DkS ) problem have the same performance, which is a bit better than that of the (I − SDR DkS ) problem from the viewpoint of optimal values. In view of the number of iterations, the performance of the (I − SDR DkS ) problem is the best, and the performance of the (II − SDR DkS ) problem is better than that of the (DNNP DkS ) problem. The performance of the three relaxation problems for solving P50 is shown in Figure 2. The results show that the performance of the (DNNP DkS ) problem is the same as that of the (II − SDR DkS ) problem; they are both much better than that of the (I − SDR DkS ) problem in view of optimal values-although the performance of the (I − SDR DkS ) problem is better than the one of the (DNNP DkS ) problem and the (II − SDR DkS ) problem from the viewpoint of the number of iterations. All of results show in Figures 1 and 2 further imply that the (DNNP DkS ) problem and the (II − SDR DkS ) problem can generate more promising bounds for solving P25 and P50, compared with the (I − SDR DkS ) problem, while the number of iterations is a bit more. Moreover, the (DNNP DkS ) problem and the (II − SDR DkS ) problem have the same performance based on optimal values, although the performance of the (II − SDR DkS ) problem is better than that of the (DNNP DkS ) problem from the viewpoint of the number of iterations, for solving P25 and P50. In order to further show the computational efficiency of the (DNNP DkS ) problem, which is compared with the (II − SDR DkS ) problem and some other types of semidefinite relaxation problems proposed in [30], for solving some (DkS) problems. The test examples A50 and A100 are chosen from [30]. (R-20), (R-24) and (R-MET) denote the three semidefinite relaxation problems proposed in [30], respectively. The corresponding numerical results are shown in Table 2, where "−" means that the corresponding information about the number of iterations is not given in [30]. The results show that the computational efficiency of the (DNNP DkS ) problem is better than the one of the (II − SDR DkS ) problem from the viewpoints of optimal values and number of iterations, respectively. 
Note that the performance of the (DNNP DkS ) problem and the (II − SDR DkS ) problem are both much better than that of (R − 20) and (R − 24). Moreover, the performance of the (DNNP DkS ) problem is more competitive with (R − MET) for solving these two problems. Table 3. The results signify that the efficiency of the (DNNP DkS ) problem is always better than that of the (II − SDR DkS ) problem from the viewpoint of optimal values and the number of iterations as well as CPU time, respectively, for solving these examples. The performance of the (I − SDR DkS ) problem and the (SDR DkS ) problem are almost the same for solving these examples. Moreover, note that the optimal value of the (DNNP DkS ) problem for solving P80 is larger than that of the (II − SDR DkS ) problem. Thus, we can conclude that it may be more promising to use the (DNNP DkS ) problem than to use the (II − SDR DkS ) problem for solving some specific (DkS) problems in practice. Conclusions In this paper, the DkS problem is studied, whose goal is to find a k-vertex subgraph such that the total weight of edges in this subgraph is maximized. This problem is NP-hard on bipartite graphs, chordal graphs, and planar graphs. By using the advantages of the structure of the DkS problem, the doubly nonnegative relaxation and the new semidefinite relaxation with tighter relaxation for solving the DkS problem are established, respectively. Moreover, we prove that the two relaxation problems are equivalent under the suitable conditions, and give some approximation accuracy results for these relaxation problems. Finally, the comparative numerical results show that the efficiency of the doubly nonnegative relaxation is better than the one of semidefinite relaxation for solving some DkS problems. Acknowledgments: The authors thank the reviewers for their very helpful suggestions, which led to substantial improvements of the paper. Conflicts of Interest: The authors declare no conflict of interest.
Net Proton Uptake Is Preceded by Multiple Proton Transfer Steps upon Electron Injection into Cytochrome c Oxidase* Background: The coupling mechanism of proton and electron transfer in the redox-linked proton pump cytochrome c oxidase (COX) is still not understood. Results: Both H+ uptake and release steps during single-electron injection into oxidized COX precede net H+ uptake. Conclusion: The first H+ uptake coincides with electron input into CuA at the opposite membrane side. Significance: This suggests efficient H+ uptake mechanisms, such as proton-collecting antennae. Cytochrome c oxidase (COX), the last enzyme of the respiratory chain of aerobic organisms, catalyzes the reduction of molecular oxygen to water. It is a redox-linked proton pump, whose mechanism of proton pumping has been controversially discussed, and the coupling of proton and electron transfer is still not understood. Here, we investigated the kinetics of proton transfer reactions following the injection of a single electron into the fully oxidized enzyme and its transfer to the hemes using time-resolved absorption spectroscopy and pH indicator dyes. By comparison of proton uptake and release kinetics observed for solubilized COX and COX-containing liposomes, we conclude that the 1-μs electron injection into CuA, close to the positive membrane side (P-side) of the enzyme, already results in proton uptake from both the P-side and the N (negative)-side (1.5 H+/COX and 1 H+/COX, respectively). The subsequent 10-μs transfer of the electron to heme a is accompanied by the release of 1 proton from the P-side to the aqueous bulk phase, leaving ∼0.5 H+/COX at this side to electrostatically compensate the charge of the electron. Within ∼200 μs, all but 0.4 H+ at the N-side are released to the bulk phase, and the remaining proton is transferred toward the hemes to a so-called "pump site." Thus, this proton may already be taken up by the enzyme as early as during the first electron transfer to CuA. These results support the idea of a proton-collecting antenna, switched on by electron injection. Cytochrome c oxidase (COX), 2 the terminal enzyme of the respiratory chains of mitochondria and many aerobic prokaryotes, catalyzes electron transfer from cytochrome c to molecular oxygen, reducing the latter to water. Cytochrome c, which binds to COX on the positively charged side (P-side) of the membrane (the extramitochondrial or periplasmic side of bacteria), injects electrons into the bimetallic CuA center, which in turn donates electrons (one at a time) to the low-spin heme a (Fig. 1). From there, electrons are passed on to the high-spin heme a3-CuB binuclear center, the binding site for oxygen. The protons required for water formation originate from the opposite, negatively charged side (N-side; the matrix side in the case of mitochondria or the cytoplasmic side in the case of bacteria) of the membrane. This redox reaction is coupled to translocation of additional protons across the membrane ("proton pumping") to further increase the electrochemical proton gradient, which is the driving force for ATP synthesis by the ATPase. To reduce 1 molecule of oxygen, 4 electrons are taken up from cytochrome c. Extensive studies have been performed to elucidate the mechanisms by which the enzyme translocates protons and couples this process with the chemical reaction (1)(2)(3).
It is generally accepted that there is an overall involvement of 8 protons during the catalytic cycle: 4 "substrate" protons to complete the reaction (water formation) and 4 to be translocated ("pumped") across the membrane. Based on the crystal structure (4,5) and mutagenesis studies (6-8), two proton pathways have been suggested for the bacterial COXs from Paracoccus denitrificans and Rhodobacter sphaeroides, leading from the N-side toward the heme-copper site: the K-pathway and the D-pathway. The K-pathway includes the conserved amino acid Lys354 and may be involved in the delivery of the first 1 or 2 protons during the reduction of the oxidized enzyme. The D-pathway, including Asp124, is likely to be involved in the uptake of both "chemical" and pumped protons in the F → O state transition. It appears to be the only pathway required when the fully reduced COX reacts with molecular oxygen (9-12). The assignment of proton uptake and proton pumping to the individual steps of the catalytic cycle of COX is a matter of controversy (10,(13)(14)(15)(16)(17)(18)(19)(20)(21)(22). Injection of 1 electron into the fully oxidized O state leads to formation of the 1 electron-reduced E state. During this step of the catalytic cycle, proton uptake was proposed to take place from the N-side of the membrane via the K-pathway and to be linked to the reduction of heme a (23). This idea is based on the fact that the slower electrogenic phase (∼180 μs) observed in voltage measurements showed a clear kinetic deuterium isotope effect, indicative of the transfer of a proton from the N-side toward the P-side. However, the existence of this protonic phase was questioned (24) and is still a matter of debate (22,25). Resolving this debate is of crucial importance not only for proton uptake linkage in the O → E step but also for the determination of the role of proton transfer pathways through COX in the different reactions of the catalytic cycle. Our experiments are designed to directly determine the proton uptake and release in COX by nanosecond time-resolved absorption spectroscopy in combination with pH indicator dyes. To address the role of the D- and K-pathways, COX variants with mutations in the respective pathway were investigated. Our results presented here clearly show that the first proton uptake from the aqueous environment already takes place in the 1-μs time range and coincides with electron input into the CuA center (26,27). At later times, no proton uptake was observed. Instead and most surprisingly, electron transfer to heme a was accompanied by proton release. These data challenge and extend the current models of coupling proton and electron transfer (5,17,(22)(23)(24)28). We present a model that contains important aspects of each of the previous models and is, in a sense, a unification of these hypotheses. Sample Preparation-Enzyme preparation of COX and variants from P. denitrificans strain AO1 was performed as described (29). Proteoliposomes were prepared by the cholate dialysis method as described (30) using asolectin, which was further purified as described (31), at a concentration of 40 mg/ml. Lipids were dried under vacuum and resuspended in 100 mM HEPES/KOH (pH 7.3), 10 mM KCl, and 2% (w/v) cholate. The suspension was stirred on ice under argon for 1-2 h and sonicated to clarity with a Branson sonifier. COX was added to a concentration of 4 μM. Subsequently, the asolectin vesicles were dialyzed against buffer without cholate, with subsequent reduction of the HEPES/KOH concentration (10-kDa cutoff).
In the last dialysis step, no buffer was present. The respiratory control ratio was determined as the ratio of the rates of cytochrome c oxidation in the coupled and uncoupled states, respectively (32). Reduced cytochrome c was used at a concentration of 40 μM in 10 mM HEPES/KOH, 50 mM KCl, and 50 mM sucrose (pH 7.3); the rate of its oxidation was measured by following the change in absorbance at 550 nm after the addition of 1.2 nM reconstituted COX. Uncoupling was achieved by the addition of 5 μM valinomycin and 10 μM carbonyl cyanide m-chlorophenylhydrazone. The turnover number of COX was determined to be ∼500 electrons/s, and the respiratory control ratio was 8.5-9, i.e. the enzyme sustained no damage during preparation. The COX concentration was determined from the reduced-minus-oxidized optical difference spectrum with ε605-630 nm = 11.7 mM-1 cm-1 (33). Flash Spectroscopy-Flash spectroscopy was performed with a homemade flash photolysis spectrometer (34). Prior to the experiments, samples of WT COX and functional variants were incubated overnight with potassium ferricyanide to ensure a fully oxidized enzyme. Ferricyanide was then quickly removed by gel filtration (GE Healthcare PD-10 columns). In general, preparation and measurements were carried out in the dark to prevent the enzyme from prereducing (24). Samples containing 10 μM COX in 0.05% LM, 25 μM Ru2D as an electron donor, 10 mM aniline as a sacrificial donor for ruthenium, 1 mM 3CP to prevent acidification due to proton release from aniline (35), and 50 mM KCl (pH 7.5) were excited with 10-ns pulses of 10-15 mJ of energy at 492 nm. Under these conditions, up to ∼10% of the COX becomes photoreduced. Electron uptake was monitored at 605 nm, as a rise in absorbance at this wavelength indicates the reduction of heme a. To reduce the amount of scattered light from the exciting laser flash, a cutoff filter (OG515) was placed in front of the entrance slit of the monochromator in the monitoring path. The signal-to-noise ratio obtained in a single-flash experiment was sufficient for data analysis. Proton Uptake and Release Measurements-Proton concentration changes were recorded via the absorbance change in the soluble pH indicator dye phenol red (50 μM) at 558 nm with and without 50 mM Tris-HCl. Typically 5-10 time traces from single-flash experiments were averaged for the protonation kinetics accompanying the O → E transition. The pKa of phenol red observed in the presence of COX and 50 mM salt was 7.95 compared with the value of 7.8 in the absence of COX. This small change in pKa indicates that the phenol red molecules may interact, at least partially, with the COX/detergent micelle, leading to a shift in pKa due to the protein surface potential (36,37). The time constant of proton release from bacteriorhodopsin/LM micelles measured with phenol red (∼70 μs) agrees with the time constant for proton release from bacteriorhodopsin (36) measured with a covalently bound pH indicator dye facing the detergent shell or residing at the cytoplasmic surface (opposite the proton release side), thus supporting our assumption. When phenol red interacts with the COX micelle, i.e. resides in the membrane-water interfacial layer, fast proton release and uptake events are detectable, in contrast to measurements with pH indicator dyes residing entirely in the aqueous bulk phase (36). Proton concentration changes in proteoliposomes were observed by the addition of 50 μM phenol red in the medium outside of the liposomes (P-side of COX).
Proton uptake stoichiometry was calculated according to Equation 1, where ΔAH+ signal denotes the absorbance change at 558 nm (phenol red), εred,605 nm is the extinction coefficient of the reduced heme a (21,600 M-1 cm-1 (38)), d is the diameter of the cuvette used, ΔA/ΔcH+ is the proton calibration factor (determined as the absorbance change for a defined proton concentration change), and ΔA605 nm is the absorbance change resulting from the reduction of heme a. RESULTS Spectral Characterization of WT COX-WT COX shows clear spectral differences between the oxidized and reduced forms (Fig. 2A). The shift of the strong absorption band from 425 to 440 nm is attributed to the reduced form of both heme groups (hemes a and a3), whereas the two absorption bands at 598 nm (oxidized form) and 605 nm (reduced form) are derived from heme a to almost 90% (marked by black and red arrows in Fig. 2A). To investigate proton uptake or release, we used the soluble pH indicator dye phenol red, which has a pH-dependent absorption band at 558 nm (marked by an arrowhead in Fig. 2A). At this wavelength, the absorption spectrum of COX displays only marginal changes during reduction or oxidation; thus, the detected pH-dependent absorption change is almost solely due to the pH indicator dye. Time-resolved Measurements of Electron Transfer-To observe the kinetics of electrons transferred to heme a in a single-electron photochemical reduction of the enzyme from the light-reactive electron donor Ru2D, the latter was excited with a single laser flash, and the absorbance change at 605 nm was recorded (Fig. 2B, inset). The lower the ionic strength, the better the binding of the ruthenium complex to COX and the subsequent electron transfer (Fig. 2B). Optimal conditions for maximum electron transfer were found at pH 7.5. However, we observed aggregation below 50 mM salt. Thus, we set the salt concentration to 50 mM. A representative time trace is presented in Fig. 3A. For a better comparison with results in the literature, the trace is presented with a linear time scale, although our data were recorded with 50-ns time resolution and sampled on a logarithmic time scale with 100 data points per decade. The time-dependent absorbance changes at 605 nm were fitted with two time constants of τ1 = 1.5 ± 0.1 μs and τ2 = 13.2 ± 0.7 μs. The first rise time was assigned to the relaxation of Ru2D, which correlates with the kinetics of electron uptake by CuA (26). The second time constant describes the kinetics of the electron transfer from CuA to heme a. The average time constant of electron transfer under these optimal conditions is τ = 13.7 ± 2.4 μs (mean value of five single-flash experiments at 492 nm excitation, 22°C, and 50 mM KCl (pH 7.5)). No further absorbance changes were observed, in agreement with the formation of the 1 electron-reduced E state. Proton Uptake and Release Kinetics of WT COX-Proton uptake from the aqueous bulk phase was detected with the pH indicator dye phenol red in the O → E step of the WT COX catalytic cycle. In Fig. 3 (A and C), the kinetics of electron transfer (ΔA at 605 nm, "electron transfer signal") are compared with the kinetics of proton concentration changes as detected with the pH indicator dye (ΔΔA at 558 nm, "proton signal"). Both measurements were performed under the same conditions (20°C, pH 7.5, and 10 μM COX in 0.05% LM, 25 μM Ru2D, 10 mM aniline, 1 mM 3CP, and 50 mM salt/buffer).
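A minimal, hypothetical sketch of the kind of multi-exponential fitting described above is given below; the synthetic trace, its time constants, amplitudes and noise level are placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a1, tau1, a2, tau2, offset):
    """Two-exponential rise model for an absorbance change, e.g. at 605 nm."""
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2)) + offset

# Synthetic trace sampled logarithmically, as described for the experiment.
t = np.logspace(-7, -3, 400)                         # 0.1 us to 1 ms, in seconds
true = biexponential(t, 5e-3, 1.5e-6, 3e-3, 13e-6, 0.0)
noisy = true + np.random.default_rng(0).normal(0, 2e-4, t.size)

popt, pcov = curve_fit(biexponential, t, noisy,
                       p0=(4e-3, 1e-6, 4e-3, 10e-6, 0.0))
perr = np.sqrt(np.diag(pcov))
print(f"tau1 = {popt[1]*1e6:.2f} +/- {perr[1]*1e6:.2f} us, "
      f"tau2 = {popt[3]*1e6:.2f} +/- {perr[3]*1e6:.2f} us")
```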
The proton signal was calculated as the difference between the two phenol red time traces obtained with and without buffer (Fig. 3B). To discriminate between true proton uptake by COX and possible transient protonation changes caused by the electron donor system, we measured a control proton signal using a covalent ruthenium-cytochrome complex. Under our experimental conditions, no contribution of the electron donor system to the transient proton signal was observed (Fig. 3E). The proton signal (Fig. 3C) contains both proton uptake and release phases as seen by the positive and negative absorption changes. A fit of the proton signal (Fig. 3C) required three exponentials, marked by the respective arrows. The proton uptake time of τ1 = 1.2 ± 0.1 μs correlates with the first phase of the electron transfer signal (τ1 = 1.5 ± 0.1 μs), i.e. with the time constant for the electron transfer to CuA. Surprisingly, the proton uptake is followed by proton release. The decay of the proton signal contains two components, τ2 = 11.5 ± 2.8 μs and τ3 = 249 ± 18 μs. The 11.5-μs proton release component correlates with the electron transfer from CuA to heme a (τ2 = 13.7 ± 2.4 μs). However, the sum of the amplitudes of the decay components (proton release) is smaller than the amplitude of the rise component (proton uptake), therefore resulting in a positive net amplitude, i.e. in a net proton uptake. Proton Kinetics of WT COX Incorporated in Liposomes-To ascertain the sidedness of the above observed proton uptake and release steps, proton measurements were also performed using COX-containing liposomes. The overall shape of the time trace (Fig. 3D) resembles that of the solubilized enzyme (Fig. 3C), with fast proton uptake in the beginning (τ1 = 0.9 ± 0.1 μs), followed by two consecutive proton release steps with τ2 = 6.7 ± 1.4 μs and τ3 = 192 ± 28 μs. In contrast to the kinetics of proton uptake and release observed in solubilized COX, no net proton uptake was detected in COX-containing liposomes with phenol red residing at the P-side of the enzyme (Fig. 3D). Thus, net proton uptake occurs from the N-side. Stoichiometry of Proton Uptake by COX and Its Variants K354M and D124N-The analysis of the two proton signals measured for the solubilized enzyme and COX incorporated in liposomes (Fig. 3, C and D) in terms of the numbers of protons taken up per injected electron using Equation 1 clearly shows that, for each injected electron, more than 1 proton is taken up by the enzyme: ∼1 proton from the N-side and, on average, ∼1.5 H+ from the P-side (Table 1). Simultaneously with electron transfer from CuA to heme a, ∼1 proton is released from the P-side. The last proton release step with a time constant of ∼200 μs takes place on both sides of the enzyme. The number of remaining protons in the enzyme at the N-side is 0.4 H+/COX on average. Time constants and stoichiometries are summarized in Table 1. Measurements of COX variants blocked in either the K-pathway (K354M) or the D-pathway (D124N) showed that the blockage results in reduced initial proton uptake but does not abolish net proton uptake (Table 1). In both variants, ∼1 proton is taken up in the first step, which kinetically correlates with the initial electron injection into CuA as observed in the wild-type enzyme. DISCUSSION The coupling of electron injection and proton uptake is still elusive. Although several studies investigating the proton uptake by COX have been published, the results are not consistent (10,(13)(14)(15)(16)(17)(18)(19)(20)(21)(22).
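Equation 1 itself is not reproduced in the excerpt above. Based only on the quantities it is said to contain, a plausible (assumed, not taken from the paper) form of the protons-per-electron calculation is sketched below: the proton concentration change derived from the dye signal divided by the concentration of photoreduced heme a. All numbers in the example call are placeholders.

```python
def protons_per_electron(dA_proton_signal, dA_per_dcH, dA_605,
                         eps_red_605=21600.0, d_cm=1.0):
    """Hypothetical reconstruction of the stoichiometry calculation.

    dA_proton_signal : absorbance change of phenol red at 558 nm
    dA_per_dcH       : calibration factor, absorbance change per molar H+ change
    dA_605           : absorbance change at 605 nm due to heme a reduction
    eps_red_605      : extinction coefficient of reduced heme a (M^-1 cm^-1)
    d_cm             : optical path length (cm)
    """
    delta_cH = dA_proton_signal / dA_per_dcH        # molar H+ concentration change
    c_heme_a_red = dA_605 / (eps_red_605 * d_cm)    # molar concentration of reduced heme a
    return delta_cH / c_heme_a_red

# Placeholder example: yields roughly 2.6 H+ per injected electron.
print(protons_per_electron(dA_proton_signal=0.021, dA_per_dcH=8000.0, dA_605=0.0216))
```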
In particular, for the O → E transition, where 1 electron is injected into the oxidized enzyme, the published results are controversial (22)(23)(24)(25). Electron transfer from CuA to heme a followed by proton transfer through the K-pathway in the O → E transition with kinetics of ∼200 μs has been found in voltage measurements (23). In disagreement with that, a signal consistent with the absence of proton uptake was observed upon reduction of heme a, also using voltage measurements (24), although a more recent study indicated that proton uptake occurs with a time constant of 150 μs (25). To explain the differences in the various studies, it was suggested (22) that the proton uptake observed previously (23) did not correlate to the O → E transition but rather to electron transfer from heme a to the heme a3-CuB binuclear center due to a not fully oxidized sample or differences in preparation of the fully oxidized COX sample leading to partial transfer of the first electron from heme a to the binuclear center. In our experiments, care was taken to use a fully oxidized COX sample (see "Experimental Procedures"), and the kinetics of the electron transfer steps were monitored. In this study, using time-resolved absorption spectroscopy and pH indicator dyes, we directly observed proton concentration changes equivalent to proton uptake and release by COX during the O → E step. A careful optimization of the parameters affecting the proton signal allowed us to gain fundamental insights into the proton transfer steps associated with the injection of 1 electron into the oxidized enzyme. A striking feature is the observation that proton uptake by COX occurs already in the 1-μs time range, when electron injection into CuA occurs. This fast proton uptake is followed by a gradual proton release. As the amplitude of the proton release signal is smaller than the amplitude of the proton uptake component, a final net proton uptake from the N-side is determined for the O → E step and becomes apparent at times slower than 300 μs (Fig. 3C). The proton uptake stoichiometry was calculated according to Equation 1 and amounts to an initial value of 2.6 H+/COX per electron input into the WT enzyme. Comparison of proton uptake by the solubilized enzyme and that by COX incorporated into liposomes unambiguously shows that protons are taken up from both sides of the enzyme, ∼1.5 H+ from the P-side and 1 H+ from the N-side (Fig. 3 and Table 1). This large number of protons taken up is clearly surprising. It is very unlikely, however, that the electron donor system is responsible for the observed proton excess. Rapid proton release by aniline upon re-reduction of Ru2D, an effect that even opposes the observed decrease in proton concentration in the aqueous bulk phase as a result of proton uptake by COX, was eliminated by the addition of 3CP and further tested in our experiments by monitoring the pH indicator dye absorption changes upon photoreduction of cytochrome c by using a covalent ruthenium complex. (TABLE 1: Time constants of proton uptake and release and proton stoichiometries. Time constants of proton uptake and release as well as of electron transfer in the O → E step, and proton stoichiometries for the individual proton transfer reactions triggered by the injection of 1 electron into the oxidized enzyme, are given in the table.) However, even more surprising is the finding that electron transfer to CuA already leads to proton uptake at the opposite membrane surface.
One might have expected the proton uptake accompanying electron injection into CuA on the P-side by titratable residues nearby to electrostatically balance the negative charge of the extra electron at CuA, but obviously, the organization of the protein in the low membrane dielectric allows pK changes leading to proton uptake at the opposite side of the membrane via long-range interactions. These findings are in accordance with the electroneutrality principle (39), which states that each electron transfer into the hydrophobic interior of COX is charge-compensated by the uptake of a proton. In addition, long-range effects may result in protonation dynamics at the opposite membrane surface. For proton uptake from the N-side, the idea of a proton-collecting antenna has been widely discussed (see Ref. 40 for a detailed review). This idea is supported by our observations: as initial excess proton uptake occurs not only from the P-side but also from the N-side, and net proton uptake accompanying formation of the E state is abolished in neither the D124N nor the K354M variant, some of the residues in the vicinity of the proton transfer pathways probably play a role as the primary proton acceptor(s). Our experimental result differs from an earlier theoretical prediction (41), where all protonation changes upon reduction of CuA are accompanied by proton uptake mostly by residues located on the cytosolic side (N-side) of the membrane. On the basis of our data, we propose that important transmembrane charge compensation occurs already during electron transfer to the CuA center and not primarily during reduction of heme a (Fig. 4). Reduction of CuA may, via long-range interactions, influence and prepare proton transfer via the D-pathway. From the primary proton acceptor sites at the N-side as well as from the P-side, partial proton release occurs within ∼200 μs, leaving ∼0.4 H+ on the N-side of the enzyme (Figs. 3 and 4). This time constant correlates with the time constant observed for the deuterium-sensitive phase (∼180 μs) in voltage measurements (23). Combining these results, the following picture emerges: at the same time as the last proton release step occurs from the enzyme, the remaining proton is transferred toward the hemes, observed as a shift of a positive charge from the N-side toward the membrane interior (Fig. 4). The number of 0.4 H+/COX also agrees with the results for redox-linked proton uptake of ∼0.2-0.4 H+/heme a-CuA pair in carbon monoxide-treated COX (42). In summary, we have shown that multistep proton transfer reactions take place during the single-electron transfer in the O → E step of the catalytic cycle of COX. Excess proton uptake from both sides of the membrane is coupled to electron input into CuA and precedes proton transfer from the N-side to the hemes. The former suggests the existence of efficient proton uptake mechanisms, such as proton-collecting antennae at the protein surface (40), which were also discussed for proton uptake by bacteriorhodopsin (43)(44)(45) and green fluorescent protein (46). The observed crosstalk of the two enzyme surfaces, probably mediated by long-range electrostatic interactions, and the consecutive protonation-deprotonation reactions may constitute a common mechanism linking the proton and electron transfer reactions in the different stages of the catalytic cycle of COX.
NANOPARTICLE-CELL MEMBRANE INTERACTIONS: ADSORPTION KINETICS AND THE MONOLAYER RESPONSE The fast-growing production and utilization of nanomaterials in diverse applications will undoubtedly lead to the release of these materials into the environment. As nanomaterials enter the environment, determining their interaction with biological systems is a key aspect to understanding their impact on environmental health and safety. It has been shown that engineered nanoparticles (ENPs) can interact with cell membranes by adhering onto their surface and compromising their integrity, permeability, and function. The interfacial and biophysical forces that drive these processes can be examined using lipid monolayers or bilayers as model cell membranes. Interfacial interactions between NPs and cell membranes have been proven to be affected by various parameters such as the physicochemical properties of the NPs, cell membrane composition, and the extent of exposure. This study focuses on the effects of NP charge, surface functional groups and interfacial activity on the response of lipid monolayers. Dynamic surface pressure measurements were used to examine the kinetics of nanoparticle adsorption and the monolayer response. Fluorescence and real-time in situ Brewster angle microscopy (BAM) imaging were employed to characterize the morphology and structure of the monolayers. Bulk concentrations of NP and phosphorus were examined to determine the extent of NP binding and lipid extraction. The results of this study will contribute to further understanding of the membrane’s role in ENP cytotoxicity and cellular uptake and aid the design of biocompatible nanomaterials with minimal or controlled membrane activity. INTRODUCTION The production and utilization of engineered nanoparticles (ENPs) in technology and medicine is constantly expanding; 1 however, there are still many uncertainties associated with the potential risks that they pose to environmental health and safety (EHS). 2,3 Fundamental studies that assess the hazard of ENPs are necessary in order to promote safe use and limit risks, and to guide the design of environmentally and biologically compatible materials. 4 Due to their high specific surface area and nanoscale size (<100 nm), ENPs display novel physical and chemical properties that are substantially different from those observed in the bulk materials. 5,6 Hence, ENPs are suitable candidates for a broad variety of commercial applications. For instance, metal NPs such as gold 7 or silver 9,10 exhibit unique optical, electronic and catalytic properties, primarily due to their localized surface plasmon resonance (LSPR) characteristics, 11 and they have been used for environmental remediation, (bio)chemical sensing, and drug delivery. 6,12 However, nanoparticles have been shown to bioaccumulate and exhibit various levels of toxicity. [13][14][15] This can be attributed to their size, shape, surface chemistry, and surface reactivity, which may allow them to penetrate tissues, enter cells, and interact with the compartments of the cell membrane. 16,17 This process can lead to a range of nanoparticle-induced biophysical and/or biochemical changes with the degree of change dependent on a variety of parameters such as cell membrane composition, NPs concentration and physicochemical properties, and the extent of exposure. [18][19][20][21][22] As a result, the safe use of ENPs in biological systems requires evaluation of their possible cytotoxicity. 
Recent toxicological studies conducted in vitro and in vivo have demonstrated that both carbon-based 13,14 and inorganic 23,24 NPs can strongly interact with cell membranes, and cause cytotoxicity through a variety of disruptive mechanisms including (1) adherence of the NPs to the membrane, (2) aggregation around the membrane, (3) removal of lipids from the membrane, and (4) permanent embedding into the membrane. 24 Adhesive forces between nanoparticles and cell surfaces driven by surface interactions, notably electrostatic, hydrophobic, and van der Waals, govern the timescale for nanoparticle-cell association, membrane disruption, and the extent of cellular uptake. [25][26][27] This behavior is independent of well-known cytotoxicity mechanisms related to chemical stability by which inorganic ENPs can release ions into solution or generate reactive oxygen species. 28 Dawson et al. 29 described how the scientific community generally views nanoparticle-cell interactions as occurring through "classical biological processes," but emphasized the importance of physical interactions (thus far neglected), such as those occurring between nanoparticles and membrane barriers. This is further emphasized by observations that greater nanoparticle-lipid interactions correlate with greater cellular uptake. 30,31 Hence, understanding nanoparticle-membrane interactions at the biophysical level will provide new insight into how nanoparticles affect cell function and viability. Understanding these interactions will elucidate the membrane's role in ENP cytotoxicity and cellular uptake and aid the design of biocompatible nanomaterials. This increased understanding may also provide new routes for designing nanoscale assemblies for biomedical applications. NP uptake initiates with attachment of the particle to the cell and subsequent interactions with the lipids and other components of the cell membrane. The interfacial and biophysical interactions that modulate this process can be examined using lipid bilayers or monolayers as model cell membranes. 27,[32][33][34][35][36][37][38][39][40] Cellular membranes are complex, multicomponent systems that contain a variety of charged and uncharged lipids with different degrees of tail saturation. In model cell membranes, attempts to mimic the complexity of real membranes involve adding multiple lipids to achieve a net surface charge and/or coexisting membrane domains (e.g., ordered and disordered). Two main advantages of model membranes are that the lipid composition can be varied, and that membrane organization and disruption can be measured directly using techniques that are not amenable to living cells. These simplified structures can be considered as first-step approaches to investigate real systems due to their ability to mimic some of the most relevant physicochemical features of the real cell membrane. 27,34 The overall objectives of this dissertation were (1) to develop experimental approaches to capture the key parameters that control the duration and extent of nanoparticle adhesion to model cell membranes, and (2) INTRODUCTION Physical interactions between engineered nanoparticles and lipid membranes play an important role in nanotoxicology and nanomedicine.
[1][2][3] Adhesive forces between nanoparticles and cell surfaces driven by surface interactions, notably electrostatic, hydrophobic, and van der Waals interactions, govern the timescale for nanoparticle-cell association, 4 changes in nanoparticle organization at the membrane/water interface, [5][6] membrane disruption, 7 and the extent of cellular uptake. [8][9] The interfacial and biophysical interactions that drive these processes can be examined using lipid bilayers or monolayers as model cell membranes. [10][11][12][13][14][15][16][17][18] The main advantages of model membranes are that the lipid composition can be varied and that membrane organization and disruption can be measured directly using techniques that are not amenable to living cells. Model membranes have been used extensively to examine the adsorption of, and in some cases the resulting disruption caused by, carbonaceous, [19][20][21][22] metal oxide, [23][24][25][26][27][28][29] metallic, 11, 30-32 and polymeric 28,33 nanoparticles. Recent studies have also been conducted to determine how proteins or natural organic matter, adsorbed onto the nanoparticle surface, influence membrane interactions. 20,34 Cellular membranes are complex, multicomponent systems that contain a variety of charged and uncharged lipids with varying degrees of tail saturation. In model cell membranes, attempts to mimic this complexity involve adding multiple lipids to achieve a net surface charge and/or co-existing membrane domains (e.g. ordered and disordered). In the context of nanoparticle-membrane interactions, Ha et al. 19 have shown that fullerene partitioning to lipid bilayers composed of biologically relevant ternary lipid mixtures that can form liquid ordered 'lipid raft' domains is lower below the phase transition temperature than above the transition temperature when the rafts are present. Cationic nanoparticles have been shown to bind quickly and strongly to PC/PG membranes, leaving them intact but causing an increase in membrane rigidity. 36 Cationic nanoparticle binding to PC and PC/PG membranes also led to membrane protrusions and pore formation due to 'steric crowding' within the membrane as the nanoparticles pack on the surface and consume excess area between the lipids. 12 Steric crowding caused the lipids to pack more tightly or compress, which increased the surface tension of the membrane. Finally, we have also shown that anionic and cationic silver nanoparticles (AgNPs) bind to PC/PG membranes (bilayer vesicles) without membrane rupture. 11 However, AgNP binding did lead to membrane deformation and vesicle aggregation due to membrane-AgNP-membrane bridging. 11 Lipid monolayers have been successfully used to examine nanoparticle-lipid interactions based on changes in interlipid interactions that affect the degree of lipid packing and the monolayer phase behavior, and on lipid extraction from the air/water interface due to nanoparticle-lipid binding. 23-31, 33, 37-38 This study focuses on the effects of AgNP charge, provided by anionic and cationic polymer coatings (Fig. 2-1), on the response of PC/PG monolayers (3:1 mol). Dynamic surface pressure measurements were used to examine the duration and extent of nanoparticle adsorption and the monolayer response. Sub-phase Ag and phosphorus (P) concentrations were examined to confirm AgNP binding and the extent of lipid extraction. AgNPs, referred to as Ag-COOH, were coated with a carboxylated amphiphilic polymer formed by hydrolyzing poly-(maleic anhydride-alt-1-octadecane).
[39][40] Cationic AgNPs, referred to as Ag-NH, were prepared by coating Ag-COOH nanoparticles with polyethyleneimine. Sterile, ultra-filtered deionized water was obtained from a Millipore Direct-3Q purification system and adjusted to pH 7. DPPG and DOPG are sodium salts, and the concentration of Na+ counterions within the subphase was equivalent to 3 × 10-5 mM. Isotherms were generated for a single compression/expansion cycle at a barrier rate of 10 cm2 min-1, and π was measured using paper Wilhelmy plates. The total area of the trough during this cycle ranged from roughly 20-70 cm2. Step (2) was used to determine the change in monolayer surface pressure in the presence of AgNPs as a function of time. To measure dynamic changes in surface pressure (∆π), the trough was initially set to maintain a constant surface pressure (π0 = 10, 20, or 30 mN m-1) after the compression/expansion isotherms (step 1). Once the monolayer stabilized and π0 remained constant, the barrier positions were fixed at the corresponding interfacial area or charge density. AgNPs were added to the water subphase by injecting them behind the barriers using a syringe to avoid disrupting the monolayer. The volume and concentration of the AgNP solution that was injected were 100 μL and 5 mg mL-1, respectively. The AgNPs were mixed within the subphase by recycling the solution using a peristaltic pump. Control experiments confirmed that the pumping action did not disturb the monolayers and that water evaporation did not alter the ∆π measurements. The initial AgNP concentration in the subphase was 3.6 mg L-1 or 33.4 μM, which was estimated to provide excess surface coverage based on the AgNP cross-sectional area at a monolayer surface area of 70 cm2. AgNP Characterization. AgNPs were characterized prior to the monolayer experiments to confirm their physicochemical properties and to determine the extent of AgNP dissolution. The average rc was 6 ± 2 nm based on TEM analysis and was common to both Ag-COOH and Ag-NH (TEM, Fig. 2-3A). The polymer coatings surrounding the AgNPs were not observed in the micrographs. Ag-COOH had a hydrodynamic radius, rh, of 14 ± 2 nm (0.02 PDI) and a zeta potential, ζ, of -63 ± 3 mV. Ag-NH had a rh = 20 ± 3 nm (0.02 PDI) and a ζ = +46 ± 2 mV. The average coating thicknesses based on the difference between rh and rc were 8 nm for Ag-COOH and 14 nm for Ag-NH. The increase in coating thickness from Ag-COOH to Ag-NH is consistent with PEI coating of Ag-COOH. The maximum absorbance due to AgNP surface plasmon resonance (SPR) was observed at a wavelength of 410 nm (Fig. 2-3B). The SPR absorbance was measured over 3 months to confirm the stability of the AgNPs and determine the extent of dissolution. There was no shift in the SPR wavelength, indicating that the AgNPs were stable. A slight reduction in SPR absorbance was observed over 3 months, consistent with a ~3% decrease in the AgNP concentration. Given that the monolayer studies were conducted within 1 month of receiving the samples, we did not account for AgNP dissolution in our analyses. Finally, the surface activity of the native AgNPs was examined in the absence of a lipid monolayer (Fig. 2-3C). The π-A isotherm for Ag-COOH and Ag-NH showed a π of 16.9 mN m-1 and 6.3 mN m-1, respectively, with 74% compression (70 to 18 cm2), indicating that the polymer coatings rendered the nanoparticles surface active due to hydrophobic interactions at the air/water interface.
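As a quick, hypothetical sanity check of the subphase concentration quoted above (assuming the molar value refers to total silver rather than to particle number concentration):

```python
AG_MOLAR_MASS = 107.87        # g/mol
mass_conc_g_per_L = 3.6e-3    # 3.6 mg/L injected into the subphase

total_ag_molarity = mass_conc_g_per_L / AG_MOLAR_MASS
print(f"total Ag ~ {total_ag_molarity*1e6:.1f} uM")   # ~33 uM, consistent with the text
```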
Dynamic changes in monolayer surface pressure due to AgNP adsorption. Dynamic changes in monolayer surface pressure, ∆π, were determined as Δπ = π(t) - π0 = γ0 - γ(t), where π(t) is the dynamic surface pressure after AgNP addition and π0 is the initial surface pressure of the air/lipid/water interface. The relationship between ∆π and the initial air/lipid/water interfacial tension, γ0, and the dynamic interfacial tension, γ(t), shows that an increase in ∆π would result from a decrease in γ(t) due to AgNP-lipid monolayer interactions (and vice versa). Changes in ∆π are depicted in Fig. 2 for the proposed AgNP-lipid monolayer interaction mechanisms. Hädicke and Blume 43 have shown that dynamic surface measurements with cationic peptides and anionic DPPG monolayers can be used to differentiate between peptide insertion into the monolayer (increasing Δπ) and lipid condensation due to peptide-lipid binding (decreasing Δπ). This approach has also been used to examine the insertion of gold nanoparticles into DPPC monolayers. 30 Increased initial packing (based on π0) prevented nanoparticle insertion, and the decrease in Δπ indicates that Ag-COOH led to lipid condensation (Fig. 2-2B2). This behavior was independent of phase state. A linear fit of Δπ as a function of π0 at t = 180 min was used to estimate the minimum insertion pressure (MIP) of Ag-COOH, which corresponds to the condition Δπ = 0 (Figures 2-5A2 and 5B2). We refer to insertion as meaning that the nanoparticles breach the plane of the monolayer and occupy area at the air/water interface with or without an adsorbed lipid coating. The MIPs indicate that below this surface pressure the nanoparticles are capable of inserting into the monolayer. Above the MIP, inter-lipid interactions within the monolayer resist nanoparticle insertion. The MIPs determined for Ag-COOH are considerably lower than those reported for 10 and 15 nm diameter anionic gold nanoparticles and zwitterionic DPPC monolayers. 30 It should be noted that the gold nanoparticle concentration was more than an order of magnitude higher than what was used in this work. Several interactions could drive Ag-COOH adsorption onto the monolayers: hydrophobic interactions with lipid tails, counterion-mediated (Na+) binding to PGs, and electrostatic and charge-dipole interactions with PCs. The surface activity of Ag-COOH supports the assertion that Ag-COOH penetrated into loosely packed monolayers at π0 = 10 mN m-1 and resided at the air/water interface. It should be noted that the ability for Ag-COOH to insert into the monolayer might also stem from the nanoparticles being rendered partially hydrophobic due to the adsorption of lipids at the air/water interface and the formation of nanoparticle-lipid complexes. 24 Hydrophobic interactions do not, however, explain the reductions in surface pressure at 20 or 30 mN m-1. With regards to counterion-mediated binding, the Na+ counterions associated with PGs may have facilitated the adsorption of Ag-COOH. This mode of adsorption has been proposed for anionic citrate-coated gold nanoparticles and DPPG monolayers, which caused an increase in surface pressure (or monolayer expansion). 31 Given that PGs comprised only 25 mol% of the monolayers examined herein, and significant decreases in surface pressure were observed consistent with lipid condensation, it is unlikely that counterion-mediated adsorption played a dominant role. Electrostatic and charge-dipole interactions with PCs, which were present at 75 mol% in the monolayers, appear to be a main driving force for Ag-COOH adsorption.
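A minimal sketch of the MIP estimate described above, using hypothetical plateau Δπ values rather than the measured data in Figures 2-5:

```python
import numpy as np

# Hypothetical plateau values of delta-pi (mN/m) at t = 180 min for three initial pressures.
pi0 = np.array([10.0, 20.0, 30.0])          # initial surface pressures, mN/m
dpi = np.array([3.2, -1.5, -4.8])           # placeholder monolayer responses, mN/m

slope, intercept = np.polyfit(pi0, dpi, 1)  # linear fit: dpi = slope*pi0 + intercept
mip = -intercept / slope                    # minimum insertion pressure where dpi = 0
print(f"estimated MIP ~ {mip:.1f} mN/m")
```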
At π0 = 20 and 30 mN m-1, the reductions in surface pressure suggest that the nanoparticles did not penetrate the monolayer, but rather remained bound to the monolayer below the interface and caused lipid condensation (i.e. a reduction in the effective area per lipid). It has been shown that anionic nanoparticles can bind to zwitterionic lipids 44 and pulmonary surfactant monolayers 28 through attractive interactions with the positive choline group of zwitterionic lipids. [23][24][25] Zwitterionic lipids have a dipole moment extending into the aqueous phase that can also lead to attractive short-range ion-dipole interactions. Anionic nanoparticles can reorient the headgroup dipoles of zwitterionic lipids, causing the dipole to orient perpendicular to the lipid/water interface and reducing the area per lipid. 44 Hence, lipid condensation in the monolayers appears to be attributed to the dipole reorientation of DPPC and DOPC. The ability for Ag-COOH to adsorb onto DPPC/DPPG monolayers is consistent with our previous work showing Ag-COOH adsorption onto DPPC/DPPG bilayer vesicles. 11 The role of lipid condensation was examined further using monolayers containing equimolar mixtures of PC and PG lipids (data not shown). Reducing the concentration of DPPC or DOPC from 75 mol% to 50 mol% reduced the magnitude of the Δπ decrease. With less PC lipid there was less lipid condensation. Cationic Ag-NH. In contrast to Ag-COOH, the monolayers responded differently to the oppositely charged Ag-NH, and MIP values could not be determined (Fig. 2-6). A significant increase in π was observed for DPPC/DPPG monolayers at π0 = 10 mN m-1; at π0 = 20 mN m-1, a decrease in π was observed, suggesting that lipid condensation occurred; and at π0 = 30 mN m-1, a two-state response was observed where π increased rapidly up to 10 min (insertion) and then decreased exponentially (condensation). The rapid increase in π observed initially at π0 = 10 and 30 mN m-1 was due to electrostatic attraction between the monolayers and Ag-NH that drove adsorption and insertion. Electrostatic attraction was also present at π0 = 20 mN m-1; however, the surface pressure response reflected competition between lipid condensation and Ag-NH insertion, where at this initial surface pressure, lipid condensation had the greatest impact on π. For DOPC/DOPG, π was unchanged (10 mN m-1) or reduced (20 and 30 mN m-1) and there was no evidence of Ag-NH insertion. Only lipid condensation was observed at high initial surface pressures. Lipid condensation caused by cationic Ag-NH was driven by electrostatic attraction with anionic DPPG or DOPG lipids and inter-lipid charge neutralization. This differs from anionic Ag-COOH, which interacted with the zwitterionic lipids. Previous work has shown that cationic gold nanoparticles have a minimal effect on the surface pressure isotherms of DPPC, 31 which further supports the assertion that PGs were responsible for Ag-NH adsorption. The concentration of AgNPs in the sub-phase (Fig. 2-7) provides a number of insights into the monolayer response. First, there is generally little difference in AgNP concentrations between the two monolayers, the exception being Ag-COOH at the highest monolayer charge density where the standard errors were large. This suggests that AgNP adsorption was primarily driven by lipid headgroup interactions and that the monolayer response was driven by the lipid tail saturation and phase behavior.
Second, the sub-phase concentration of Ag-NH is less than that of Ag-COOH, which means that more Ag-NH was associated with the monolayers. Ag-PEG nanoparticles were characterized using transmission electron microscopy (TEM) and a Malvern Zetasizer Nano ZSX for their core radius, and hydrodynamic radius and zeta potentials, respectively. The average core radius (rc) of Ag-PEG was determined by analyzing multiple TEM images with the ImageJ software (n > 50). 35 To measure the average zeta potentials (ζ) and hydrodynamic radius (rh) of Ag-PEG, the as-received particles were diluted ten-fold in deionized water and analyzed at 25 °C. The values reported are based on triplicate measurements of three different samples. The Langmuir trough had a fully opened area of ∼80 cm2 and a width of 7 cm (Fig. 3-2). Ag-PEG characterization. Ag-PEG nanoparticles were characterized prior to the monolayer experiments for their size, zeta potentials, stability and extent of dissolution. As shown in Fig. 3-3A, the average core radius (rc) was 6 ± 2 nm based on analysis of TEM images. The polymer coatings surrounding Ag-PEG were not observed in the micrographs. The mean hydrodynamic radius (rh) and zeta potential (ζ) were measured to be 15 ± 2 nm (0.04 polydispersity index) and -10.6 ± 0.1 mV, respectively. The average coating thickness based on the difference between rh and rc was 9 nm. The maximum surface plasmon resonance (SPR) absorbance was observed at a wavelength of 425 nm (Fig. 3-3B). Ag-PEG SPR absorbance was measured by UV−vis spectroscopy over 3 months, and there was no significant shift in the SPR wavelength or reduction in absorbance, indicating that the nanoparticles were stable. As in our previous study on anionic (COOH)- and cationic (NH)-coated AgNPs, 26 and considering that the monolayer experiments were conducted within 3 months of receiving the samples, we did not account for NP dissolution in our analysis. In the presence of DOPC/DOPG monolayers at an initial surface pressure of 10 mN m-1 (Fig. 3-4B), Ag-PEG remained surface active and the lipid monolayer did not prevent Ag-PEG adsorption at the interface (Fig. 3-4D). Considering that both Ag-PEG and DOPC/DOPG monolayers exhibit a net negative charge, adsorption can be attributed to hydrophobic interactions. Xi et al. 21 have also demonstrated that Ag-PEG similar to those used in this study bind to DOPC/DOPG bilayer vesicles. In their work, it was proposed that the surface activity of the PEG-polymer coating may have facilitated membrane penetration through hydrophobic interactions despite electrostatic repulsion. Sub-phase phosphorus concentrations measured before and after Ag-PEG addition beneath the monolayer were similar, suggesting that lipid extraction was not a significant factor (Fig. 3-5). Therefore, we conclude that Ag-PEG did not extract lipids from the monolayers and that the lipids remained at the interface to form a mixed Ag-PEG + lipid film. 55 The collapse pressure (πc, mN m-1) and collapse area (Ac, cm2) were determined from π−A isotherms of Ag-PEG at high nanoparticle concentrations (0.71 to 3.55 mg L-1) (Fig. 3-7). The collapse pressure was directly proportional to the Ag-PEG concentration. Based on Ac, and assuming 2D hexagonal packing, an effective Ag-PEG radius of 12.5 ± 3.9 nm was calculated at the interface. The calculated 'interface radius' of the nanoparticles is consistent with the measured hydrodynamic radius. Hence, Ag-PEG assembled as densely packed monolayers at the air/water interface at high concentrations, and the monolayers collapsed once they exceeded hexagonal packing. A comparison between compression/expansion isotherms of Ag-PEG at air/water and air/lipid/water interfaces is shown in Fig. 3-6.
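A hypothetical sketch of the packing estimate mentioned above: if N particles form a 2D hexagonally packed layer that collapses at area Ac, each particle occupies 2·√3·r² of interfacial area, so an effective radius can be back-calculated. The collapse area and particle number below are placeholders; in practice N would follow from the injected mass and the per-particle mass.

```python
import numpy as np

def effective_radius_from_collapse(area_collapse_cm2, n_particles):
    """Effective particle radius assuming 2D hexagonal close packing at collapse:
    each particle occupies an interfacial area of 2*sqrt(3)*r^2."""
    area_m2 = area_collapse_cm2 * 1e-4
    return np.sqrt(area_m2 / (2.0 * np.sqrt(3.0) * n_particles))

# Placeholder values: 30 cm^2 collapse area and 5.5e12 particles at the interface.
r_eff = effective_radius_from_collapse(30.0, 5.5e12)
print(f"effective radius ~ {r_eff*1e9:.1f} nm")
```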
At low Ag-PEG concentrations ([Ag-PEG] ≤ 0.35 mg L-1), the isotherm shifted to smaller area with respect to the isotherm of the lipid mixture alone, noting that more compression was necessary for the Ag-PEG + lipid films to attain the same arbitrary surface pressure compared to the pure lipid film. This behavior is not attributed to the extraction of lipid molecules (Fig. 3-5). INTRODUCTION The environmental concentration of polymeric particles is constantly increasing due to the significant amount of plastic waste that is being disposed of in the oceans and soil. [1][2][3] Recent studies on the size distribution of plastic debris have shown that millimeter-size plastics can be fragmented into even smaller particles, referred to as micro- and nano-plastics, 4-7 which may pose a significant threat both to the environment and to human health. [6][7][8][9][10][11][12][13][14][15] The small size of these particles (<1 µm) makes them susceptible to ingestion by organisms that are at the base of the food chain. 1 The potential adverse effects associated with interactions between these materials and biological systems could be comparable to those observed with engineered nanoparticles (ENPs). [16][17][18] Toxicological studies conducted in vitro and in vivo have demonstrated that polymeric ENPs can translocate across living cells to the lymphatic and/or circulatory system, 19,20 accumulate in secondary organs, 21 and impact the immune system and cell health. [22][23][24] NP cellular uptake begins with an initial adhesion of the particle to the cell and subsequent interactions with the lipids and other components of the cell membrane. The interfacial and biophysical forces that modulate this process can be examined using lipid bilayers or monolayers as model cell membranes. [25][26][27][28][29][30][31][32][33][34] Two main advantages of model membranes are that (1) the lipid composition and structure can be precisely controlled, thereby capturing the essential aspects of real cell membranes, and (2) the membrane organization and disruption can be measured directly using techniques that are not amenable to living cells. 18 Model membranes have been used extensively to study the adhesion of, and in some cases the resulting disruption caused by, both carbon-based and inorganic ENPs. 35 In the work discussed below, we have examined the response of human red blood cell model membranes to the adhesion of polystyrene (PS) nanoparticles, with a particular emphasis on the effect of NP surface chemistry on this process. Physicochemical properties of NPs, such as size, charge and surface chemistry, are the main factors modulating NP durability and solubility in biological media as well as their biocompatibility and membrane interactions. 36 Upon encountering biological fluids (e.g. blood, lymph, cytoplasm, cell culture media), nanoparticles are covered by biomolecules, of which proteins have received the most attention, forming what is described as a "corona". 37,38 Recent research has revealed that in many cases it is the biomolecular corona that interacts with biological systems and thereby constitutes a major element of the biological identity of the nanoparticle. [39][40][41][42][43][44] In particular, the corona is composed of a tightly, but not completely irreversibly, adsorbed layer of biomolecules ("hard" corona), which is surrounded by a more loosely associated and rapidly exchanging layer of biomolecules ("soft" corona).
45 The formation of a corona has been reported for several nanoparticles, including polystyrene, 46 silica, 47 carbon nanotubes, 48 silver, 39 and gold. 49 The amount, composition, and orientation of biomolecules present in the corona strongly influence how the nanoparticles interact with biological systems. HSA (5% in PBS) was added to the microcentrifuge tubes, and the tubes were incubated at 37 °C for one hour. The tubes were subsequently centrifuged three times (18000 rcf, 4 °C) with a PBS solution wash between each centrifugation step. Finally, the sedimented NPs were re-dispersed in PBS to isolate the NPs and associated complexed proteins. Characterization of NPs and NP-HC complexes. NPs and NP-HC complexes were characterized using transmission electron microscopy (TEM; JEOL JEM-2100F) operating at 200 kV and a Malvern Zetasizer Nano ZSX for their core radius, and hydrodynamic radius and zeta (ζ) potentials, respectively. The average size of PS NPs was determined by analyzing multiple TEM images with ImageJ software (n > 50). 70 To measure the average ζ-potentials and hydrodynamic diameter (dh) of NPs, the as-received particles were diluted and analyzed at 25 °C. Exposure of PS NPs to protein led to changes in their hydrodynamic properties. We incubated the PS NPs in human serum albumin (HSA) solution for 60 min to allow NP-HSA complexes to form and separated these complexes from free and weakly complexed HSA via a series of centrifugation and washing steps comparable to those previously used to operationally define the hard corona on nanoparticles (Fig. 4-2A). 1,50,76 The ζ-potential and dh of the particles changed with HSA concentration (Fig. 4-2), with dh reaching a plateau of 123 nm at 300 μM HSA. We infer that an HSA concentration of 300 μM is sufficient to saturate the NP surface and form a close-packed monolayer of protein corona. 71 The increase in dh due to corona formation was about 20 nm and was common to all NP types, corresponding to a hydrodynamic-shell thickness of 10 nm (Fig. 4-2C). Negative-staining TEM of NP-HC complexes confirmed that the HSA shell thickness on the NPs was 7 ± 1 nm (Fig. 4-2B). Dynamic changes in surface tension (DST) for unmodified, carboxylate-modified, and amine-modified PS and PS-HC complexes are depicted in Fig. 4-3A-C, respectively. In general, as NPs diffuse from the bulk and adsorb to the interface, they effectively reduce γ. Early in this process, γ decreases relatively slowly due to the adsorption of single particles to a pristine interface. When the surface concentration of NPs increases, γ drops more rapidly. At long times (t → ∞), where the interface approaches maximum coverage, the rate of NP surface adsorption decreases due to a steric barrier, and γ approaches a plateau reflecting a pseudo-equilibrium condition. As shown in Fig. 4-3A-C, bare NPs were not inherently surface active. Although Brewster angle microscopy images showed the adherence of particles at the air-water interface (Fig. 4-3D-F), the reduction in interfacial tension due to their attachment was negligible, regardless of functional group. HSA corona complexation rendered the NPs surface active due to hydrophobic interactions at the air-water interface, which led to a lower equilibrium surface tension. Results are shown in Fig. 4. Adsorption kinetics at the air-water interface. Dynamic interfacial tension data can be further analyzed using the classical model of Ward and Tordai 81 to quantitatively describe the kinetics of NP adsorption. The following asymptotic equations have been employed to interpret data from the early (t → 0) and late (t → ∞) times of nanoparticle adsorption.
At early times (first-stage), an individual NP that is adsorbing to the interface encounters a bare interface. Assuming there is no barrier to adsorption at this stage, the rate of particle diffusion through the bulk is the rate-limiting factor and the diffusion-controlled Ward and Tordai mechanism can be applied. 81 Bizmark et al. 82 modified the Ward and Tordai model to account for NPs larger than 10 nm with adsorption trapping energy exceeding 10^3 kBT. In their expression, NA is Avogadro's number, ∆E is the trapping energy of a single particle at the interface, D is its diffusion coefficient, and C0 is the molar concentration. The number of NPs adsorbed at the interface is significantly less than that remaining in the bulk, and C0 is assumed to be constant throughout the adsorption process. Surface coverage at any time during the adsorption process can be calculated from the measured surface tension, 82 where θ∞ is the maximum fraction of surface coverage, which is 0.91 for hexagonal close packing of spheres, 83 γ0 is the pristine interfacial tension of water, and γ∞ is the equilibrium interfacial tension. For native NPs, considering that they were not surface active, θ∞ was determined based on calculated excess PS surface concentrations at the end of the adsorption process and was less than 0.5 for all three NP types. We note that for NP-HC complexes, θ∞ = 0.91, as they were surface active and assembled as densely packed monolayers at the air-water interface. The diffusion coefficient was also estimated from the Stokes-Einstein relation, in which rh is the hydrodynamic radius of the particles and η is the viscosity of water at room temperature. As summarized in Table 1, the fitted and estimated diffusion coefficients are within the same order of magnitude, indicating that equation (1) is valid during the early-time adsorption of particles from the bulk to the air-water interface. Using these values, we were able to extract the first-stage adsorption energy, |∆E1|, by fitting the slope of the early-time DST data against t^0.5. As shown in Table 1, there was a clear correlation between the adsorption energy and the ζ-potential of the NPs. Anionic unmodified and carboxylate-modified PS had similar adsorption energies, while greater values were observed for cationic amine-modified PS. Anionic PS NPs were electrostatically repelled from the interface, since the ζ-potential at the air-water interface has been shown to be negative. 84,85 In the first-stage approximation (θ < 0.3) proposed by Bizmark et al., 82 only one slope was observed when DST was plotted against t^0.5. We observed similar behavior for NP adsorption at early times. However, for NP-HC complexes, two distinct stages with clearly different slopes were noted in a plot of early-time DST against t^0.5 (Fig. 4-5B), consistent with the results of recent work by Tian et al. 86 using poly(ethylene oxide) (PEO)-modified polystyrene NPs to study the adsorption kinetics at the air-water interface. 86 The two distinct stages of early-time adsorption were comparable among the NP-HC complexes, as shown in Fig. 4 (Table 1). The observed two-stage transition for NP-HC complexes is attributed to protein denaturation at the interface. 81 HSA has hydrophilic groups on its surface that make it water-soluble, but hydrophobic peptide residues in the core. Proteins denature at a hydrophobic interface wherein the hydrophobic core peptides unfold at the interface, while the hydrophilic peptides orient toward the aqueous phase. The extent of the increase in adsorption energy due to HSA denaturation at the air-water interface was consistent with the extent of HSA associated with the NPs.
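A minimal, hypothetical sketch of the early-time analysis described above: in the diffusion-controlled regime the dynamic surface tension is approximately linear in the square root of time, so the slope of DST versus t^0.5 can be extracted by a simple fit. The data points below are placeholders; converting the slope into |ΔE1| would then use the modified Ward-Tordai expression of Bizmark et al., which is not reproduced here.

```python
import numpy as np

# Hypothetical early-time dynamic surface tension data (t in s, gamma in mN/m).
t = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
gamma = np.array([71.8, 71.5, 71.1, 70.5, 69.6, 68.4])

sqrt_t = np.sqrt(t)
slope, intercept = np.polyfit(sqrt_t, gamma, 1)   # early-time regime: gamma ~ linear in sqrt(t)
print(f"d(gamma)/d(sqrt t) = {slope:.3f} mN m^-1 s^-0.5")
# The fitted slope, together with the particle concentration and diffusion coefficient,
# would feed into the Bizmark et al. expression to estimate the first-stage trapping energy.
```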
The greater increase in |∆ | was observed for PS-NH-HC, while unmodified and carboxylatemodified PS NPS showed similar values. It has been shown that anionic nanoparticles can bind to zwitterionic lipid monolayers and bilayers through attractive interactions with the positive group of zwitterionic lipids (e.g. choline group of POPC and ethanolamine group of POPE). [88][89][90] Moreover, zwitterionic lipids have dipole moments extending into the aqueous phase that can lead to attractive short-range ion-dipole interactions. Both anionic and cationic nanoparticles can reorient the headgroup dipoles of zwitterionic lipids, causing the dipole to orient perpendicularly to the air-water interface and reducing the area per lipid. 90 Hence, lipid condensation in the RBC monolayers can be attributed to the dipoles reorientation of zwitterionic POPC and POPE. We observed similar behaviour in our previous work using carboxylate-and amine-modified silver NPs and PC/PG monolayers. 25 The morphology of the monolayer was visualized in situ using Brewster angle microscopy (BAM) technique. As shown in Fig. 4-8A, the extent of lipid condensation was greater for PS-NH compared to unmodified PS and PS-COOH, suggesting that inclusion of cationic nanoparticles within a monolayer induces more modification in the monolayer lipid packing. The extent of increase in RBC monolayer DST due to the adsorption of NP-HC complexes was smaller compared to that for bare NPs (Fig. 4-7), indicating that NP-HC complexes induced less lipid condensation. These results are consistent with our previous work using cationic and anionic silver nanoparticles and show that hydrophobic interactions were responsible for NP insertion, while electrostatic and charge-dipole interactions were responsible for lipid condensation. Moreover, real time BAM imaging of the film displayed lipid condensations at early time NP-HC complexes adsorption, and the formation of homogenous densely packed monolayer at equilibrium ( Fig. 4-8B1&2). Hence, we infer that at early times, NP-HC complexes penetrate into monolayers through attractive short-range ion-dipole interactions, bind to zwitterionic lipids and cause lipid condensation (increasing ) similar to what we observed for bare NPs adsorption. This process follows by the protein corona partitioning between coexisting membrane domains via attractive hydrophobic interactions (increasing ) and unfolding at the air-water interface. 91,92 This leads to the formation of homogenous densely packed RBC+HSA film at the interface, in which NPs are an integral part of the mixed film. 93 This behaviour was common to all three NP-HC complexes. Excess NP and NP-HC concentrations at the air-lipid-water interface. To further quantify the extent of NP and NP-HC complex adsorption at the air-lipid-water interface, the subphase concentrations of PS were analyzed by UV-vis spectroscopy.
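The depletion analysis just described reduces to a simple mass balance; a minimal sketch, with a hypothetical linear UV-vis calibration and made-up volumes and areas standing in for the actual experimental values, could look like this:

```python
# Hypothetical depletion calculation: estimate the excess (interfacial) PS
# concentration from the loss of absorbance in the subphase after adsorption.
# The calibration slope, volume, and area below are placeholders, not values
# taken from the study.

def subphase_concentration(absorbance, calibration_slope):
    """Beer-Lambert-style linear calibration: concentration = A / slope."""
    return absorbance / calibration_slope

def excess_surface_concentration(c_initial, c_final, subphase_volume, interface_area):
    """Mass lost from the subphase divided by the interfacial area."""
    return (c_initial - c_final) * subphase_volume / interface_area

# Example with made-up numbers (mg/mL, mL, cm^2):
c0 = subphase_concentration(0.52, calibration_slope=1.3)   # before adsorption
c1 = subphase_concentration(0.47, calibration_slope=1.3)   # after adsorption
gamma_excess = excess_surface_concentration(c0, c1, subphase_volume=5.0,
                                             interface_area=8.0)
print(f"Excess PS at the interface: {gamma_excess:.4f} mg/cm^2")
```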
8,040.2
2019-01-01T00:00:00.000
[ "Materials Science", "Biology" ]
On a nonlinear Schr{\"o}dinger equation for nucleons in one space dimension We study a 1D nonlinear Schr{\"o}dinger equation appearing in the description of a particle inside an atomic nucleus. For various nonlinearities, the ground states are discussed and given in explicit form. Their stability is studied numerically via the time evolution of perturbed ground states. In the time evolution of general localized initial data, they are shown to appear in the long time behaviour of certain cases. Introduction This paper is concerned with the study of solutions to a nonlinear Schrödinger (NLS) type equation which, in a specific non-relativistic limit proper to nuclear physics, describes the behavior of a particle inside the atomic nucleus.This equation is, at least formally (see Appendix A), deduced from a relativistic model involving a Dirac operator and, in space dimension d = 1, is given by (1) i∂ where φ ∈ L 2 (R, C) is a function that describes the quantum state of a nucleon (a proton or a neutron), α ∈ N * is a strictly positive integer and a > 0 is a parameter of the model.Note that equation (1) is Hamiltonian and has a conserved energy (2) Solitary wave solutions for this equation can be constructed by taking φ(t, x) = e ibt ϕ(x) with ϕ a real positive square integrable solution to the stationary equation The reasoning that the solution can be chosen to be real is the same as for the standard NLS equation. Positive square integrable solutions of (3) can be seen as ground states solutions of (1) since they are minimizers of (2) among all the functions belonging to (4) where f + denotes the positive part of any function f .As shown in [4], this is the appropriate way to define ground states for this energy.Indeed, on the one hand, by adapting the arguments of [4], it can been shown that the energy E is not bounded from below in the set ϕ ∈ H 1 (R), R |ϕ| 2 = 1 .On the other hand, if ϕ ∈ X, one can show (see [4]) that |ϕ| 2 ≤ 1 a.e. in R. As a consequence, for any ϕ ∈ X, E[ϕ] ≥ − a α+1 . In this paper we will prove the existence of solitary waves solutions to (1) for any value of α ∈ N * , give them in explicit form, and study numerically their stability as well as the time evolution of more general initial data with |φ| < 1. In the physical literature, the most relevant case is given by α = 1 which leads to the cubic nonlinear Schrödinger type equation (5) i∂ Nevertheless, it could be mathematically interesting to investigate also the behavior of solutions for other power nonlinearities as for example the quintic nonlinearity (α = 2) which corresponds to the L 2 critical case for the usual NLS equation.For the latter equation, it is known that initial data with a mass larger than the ground state can blow up in finite time, see for instance [7] and references therein. To our knowledge, the above model was mathematically studied for the first time in [3], where M.J. Esteban and S. 
Rota Nodari consider the equation which is the generalization of (3) for any spatial dimension d ≥ 1 and for α = 1.In particular, the existence of real positive radial square integrable solutions has been shown whenever a > 2b.Note that solutions to (6) do not have a simple scaling property in the parameter b as ground states for the standard NLS equation.This makes it necessary to study several values of b in this context.This result has then been generalized in [8], where the existence of infinitely many squareintegrable excited states (solutions with an arbitrary but finite number of sign changes) of ( 6) was shown in dimension d ≥ 2. In [4] (see also [6]), using a variational approach the existence of solutions to ( 6) is proved without considering any particular ansatz for the wave function of the nucleon and for a large range of values for the parameter a. Finally, in [6], M. Lewin and S. Rota Nodari proved the uniqueness, modulo translations and multiplication by a phase factor, and the non-degeneracy of the positive solution to (6).The proof of this result is based on the remark that equation ( 6) can be written in terms of u = arcsin(ϕ) as simpler nonlinear Schrödinger equation. The same can be done for (3).Indeed, by taking u := arcsin(ϕ α ), one obtains In Appendix B, we generalize the results of [6] for any α ∈ N * in spatial dimension 1 by proving the following theorem. The paper is organized as follows: In Section 2 we derive the explicit form of solutions to (3) for any α ∈ N * whenever a > (α + 1)b > 0, and we show their behavior for various values of the parameters.The computation presented in Section 2 is justified in the Appendix B where the proof of Theorem 1 is done.In Section 3, we outline the numerical approach for the time evolution of initial data according to (1).This code is applied to perturbations of the ground states for various values of the nonlinearity parameter α and for initial data from the Schwartz class of rapidly decreasing functions.In Section 4, we discuss the generalization of the model in higher space dimension.Finally, a formal derivation of the equation ( 1) is presented in Appendix A). Ground states In this section we construct ground state solutions to the equation ( 1) and show some examples for different values of the parameters. First of all, equation ( 3) can be integrated once to give where we have used the asymptotic behavior of ϕ for x → ∞.Putting ψ := ϕ −2α , we get from (9), which has for a = (α + 1)b the solution Here x 0 is an integration constant reflecting the translation invariance in x of the ground state and ψ(x 0 ) is chosen in order to have a C 1 solution to (10) defined for any x ∈ R. Using the translation invariance, we will assume in the following that the maximum of the solution is at x = 0, and then we put x 0 = 0.The solution to equation (10) for a = (α + 1)b leading to the wanted asymptotic behavior of ϕ will not be globally differentiable. Summing up, with (11) we get the ground states for 0 < (α + 1)b < a in the form Let us point out that this construction will be further justified in Appendix B where Theorem 1 is proven. As a concrete example we show the solutions (12) for a = 9 and various values of b < a/(α + 1).The solutions for α = 1 can be seen in Fig. 1.With b → a/2, the solutions become broader and broader and have a larger maximum.The peak near 1 becomes also flatter.For b = 4.499, the maximum is roughly at 0.9999 and almost touches on some interval the line 1. 
For α = 2, 3 we get in the same way the figures in Fig. 2. It can be seen that the higher nonlinearity has a tendency to lead to more compressed peaks as in [1].But due to the missing scaling invariance of the ground states here, it is difficult to compare them. Numerical study of the time evolution In this section we study the time evolution of initial data for the equation (5).We study the stability of the ground state and the time evolution of general initial data in the Schwartz class of smooth rapidly decreasing functions for various parameters. The results of this section can be summarized in the following Conjecture 2. The ground states of equation ( 5) are asymptotically stable if the perturbed initial data satisfy |φ(x, 0)| < 1. 3.1.Time evolution approach.The exponential decay of the stationary solutions makes the use of Fourier spectral methods attractive.Thus we define the standard Fourier transform of a function u, and consider the x-dependence in equation ( 1) in Fourier space. The numerical solution is constructed on the interval x ∈ L[−π, π] where L > 0 is chosen such that the solution and relevant derivatives vanish with numerical precision (we work here with double precision which corresponds to an accuracy of the order of 10 −16 ).The solution φ is approximated via a truncated Fourier series where the coefficients φ are computed efficiently via a fast Fourier transform (FFT).This means we treat the equation, ( 14) and approximate the Fourier transform in ( 14) by a discrete Fourier transform. The study of the solutions to ( 14) is challenging for several reasons: first it is an NLS equation which leads to a stiff system of ODEs if FFT techniques are used.Since a possible definition of stiffness is that explicit time integration schemes are not efficient, the use of special integrators is recommended in this case.But most of the explicit stiff integrators for NLS equations, see for instance [5] and references therein, assume a stiffness in the linear part of the equations.However, here the second derivatives with respect to x appear in nonlinear terms.Since we are interested in a fourth order method (for the accuracy needed for |φ| ∼ 1 as discussed below), we will nonetheless use an explicit approach (the standard explicit fourth order Runge-Kutta method) with very small steps for stability reasons, but also since the accuracy is needed in some of the studied examples.This appears to be more efficient than an implicit scheme where a nonlinear equation has to be solved iteratively at each time step. An additional problem of equation ( 1) is the singular term for |φ| → 1.Since the equation is focusing, it is to be expected that for initial data with modulus close to 1 it will be numerically challenging since the focusing nature of the equation might lead for some time to even higher values of |φ|.The accuracy of the solution is controlled as in [5]: the decrease of the Fourier coefficients indicates the spatial resolution since the numerical error of a truncated Fourier series is of the order of the first neglected Fourier coefficients.The error in the time integration is controlled via conserved quantities.We use the energy (2) which is a conserved quantity of ( 1), but which will numerically depend on time due to unavoidable numerical errors.In the examples below, the relative energy is always conserved to better than 10 −6 since the stability conditions as discussed above enforce the use of very small time steps. 
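As an illustration of this scheme, a minimal sketch of a Fourier pseudo-spectral discretization with explicit fourth-order Runge-Kutta time stepping is given below. The nonlinearity in `rhs` is a placeholder (a plain cubic NLS term), since the quasilinear terms of equation (1) are not reproduced here, and the grid and step parameters are illustrative only.

```python
import numpy as np

# Sketch of a Fourier pseudo-spectral / RK4 integrator of the kind described
# in the text.  The nonlinearity is a PLACEHOLDER, not the model equation.
L, N, Nt, T = 5 * np.pi, 2**10, 2 * 10**4, 0.25
x = np.linspace(-L, L, N, endpoint=False)
dx = 2 * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # Fourier wavenumbers
dt = T / Nt

def rhs(phi):
    """Right-hand side of i*phi_t + phi_xx + |phi|^2 phi = 0 (placeholder)."""
    phi_xx = np.fft.ifft(-(k**2) * np.fft.fft(phi))
    return 1j * (phi_xx + np.abs(phi)**2 * phi)

phi = 0.9 * np.exp(-x**2) + 0j               # Schwartz-class initial datum
for _ in range(Nt):                          # classical explicit RK4 step
    k1 = rhs(phi)
    k2 = rhs(phi + 0.5 * dt * k1)
    k3 = rhs(phi + 0.5 * dt * k2)
    k4 = rhs(phi + dt * k3)
    phi = phi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("max |phi| at t =", T, ":", np.abs(phi).max())
```

In practice, as noted above, the stiffness of the nonlinear second-derivative terms and the accuracy requirements near |φ| ~ 1 force much smaller time steps than this sketch suggests.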
We test the numerical approach at the example of the ground state.Concretely we consider the ground state solution for α = 1, a = 9, b = 4.4 as initial data.We use N = 2 10 Fourier modes in x for x ∈ 5[−π, π] and N t = 10 5 time steps for t ∈ [0, 1].Note that though the ground state solution is stationary, it is not time independent.We compare the numerical and the exact solution, i.e., the solution to (12) times e ibt at the final time t = 1.This difference is of the order of 10 −14 as shown in Fig. 3.The relative conservation of the energy (2) is during the whole computation of the order of 10 −14 .This shows that the ground state can be numerically evolved with an accuracy of the order 10 −14 , and that the conservation of the numerically computed energy indicates at accuracy of the time integration.3.2.Perturbations of ground states.We first consider the stability of the ground states constructed in the previous section.To this end we perturb it first in the form φ(x, 0) = λϕ(x), where λ ∼ 1.We use N = 2 11 Fourier modes and N t = 5 * 10 5 time steps for t ∈ [0, 0.25], i.e., more than a whole period of the perturbed ground state.In Fig. 4 we show the solution for the perturbed ground state with λ = 0.99.It can be be seen that after a short phase of focusing a ground state with slightly larger maximum than the initial data is reached.In addition there is some radiation towards infinity.The reaching of a ground state is even more obvious from the L ∞ norm of the solution shown on the left of Fig. 5.Note that a comparison with the ground state corresponding to the presumed final state of the time evolution in Fig. 4 is difficult since in contrast to ground states of the standard NLS equation, the ground states here do not have a simple scaling property in b.The Fourier coefficients of the solution at the final time on the right of Fig. 5 indicate that the solution is fully resolved in x.If we perturb the same ground state as in Fig. 4 with a factor λ > 1 (such that ||λϕ|| ∞ < 1), we observe a similar behavior as can be seen in Fig. 6.As is more obvious from the L ∞ norm in Fig. 7 on the left, a ground state with slightly lower maximum than the initial data is quickly reached.The decrease of the modulus of the Fourier coefficients on the right of Fig. 7 indicates that the numerical error in the spatial resolution is of the order of 10 −8 .This shows that there are stronger gradients to resolve in this case than in Fig.The same initial data as above are perturbed with a localized perturbation of the form φ(x, 0) = ϕ(x) ± 0.001 exp(−x 2 ).The resulting L ∞ norms of the solutions to (5) for these initial data are shown in Fig. 8.In both cases the L ∞ norm appears to approach a slightly smaller ground state (for the − sign in the initial data) respectively slightly larger ground state (for the + sign).In any case the ground states appear to be stable also in this case. We repeat the experiments of Fig. 4 and Fig. 6 for α = 2, i.e., a higher nonlinearity.The L ∞ norms for the perturbed ground states can be seen in Fig. 9. Again a ground state with slightly smaller maximum state is reached for λ = 0.99, and with a slightly larger maximum for λ = 1.001.The oscillations which appear for larger times are due to us studying the perturbations in a periodic setting and not on R. 
of the computational domain and interacts after some time with the ground state which leads to periodic excitations of the latter.Note that the quintic nonlinearity is L 2 critical in the standard NLS equation, i.e., solutions to initial data of sufficient mass blow up in finite time.Here no blow-up is observed, ||φ|| ∞ < 1 for all times.But in addition the ground state appears to be stable also for the Gaussian perturbations of Fig. 8 The same experiments as above are shown for an even higher nonlinearity α = 3 in Fig. 10.Once more the ground states appear stable (also for Gaussian perturbations not shown here).Note that for the standard NLS a septic nonlinearity would be supercritical which again would lead to a blow-up of initial data of sufficiently large mass. 3.3. Schwartz class initial data.An interesting question in this context is whether these stable ground states appear in the long time behavior of solutions to generic localized initial data.To address this question we consider initial data of the form φ(x, 0) = µ exp(−x 2 ) with 0 < µ < 1, again for a = 9.We use N = 2 12 Fourier modes for x ∈ 5[−π, π] and N t = 5 * 10 5 time steps for the indicated time intervals.In Fig. 11 it can be seen that the L ∞ norm of the solution appears to oscillate around some asymptotic values, and that some radiation is emitted towards infinity. The former effect is more visible on the left of Fig. 12 where the L ∞ norm of the solution is shown.Since there is no dissipation in the system, the final ground state will be only reached asymptotically.The small oscillations in the L ∞ norm for larger times are again due to the fact that the problem is treated as being periodic.Radiation emitted towards infinity reenters because of this periodicity on the other side of the computational interval if it leaves on one side.This 5) with a = 9 for the initial data φ(x, 0) = 0.9 exp(−x 2 ). is more visible on the right of Fig. 12, where the solution at the final time of the computation is shown.The situation is similar for higher nonlinearity.In Fig. 13 we show the case α = 2. On the left for µ = 0.9, the L ∞ norm of the solution appears to be decreasing.The initial data seem to be simply radiated towards infinity.On the right we show the case µ = 0.96 where the L ∞ norm seems to oscillate around some final state with non-vanishing L ∞ norm, presumably a ground state.This would indicate that the ground state can appear in the long time behavior of generic solutions. For even higher nonlinearity α = 3, we consider in Fig. 14 the case µ = 0.9 on the left and µ = 0.99 on the right.In both cases the initial data appear to be simply radiated away to infinity.Note that this does not imply that ground states cannot be observed in the long time behavior of solutions in this case, it just shows that this is not the case in the studied examples.For instance it is possible that Gaussian initial data do not have a sufficiently broad maximum in this case.1) for α = 3 and the initial data φ(x, 0) = 0.9 exp(−x 2 ) on the left, and φ(x, 0) = 0.99 exp(−x 2 ) on the right. Outlook: analysis of the model in higher space dimension It seems interesting to investigate the model described in this paper in higher space dimension, d = 3 being the most relevant case from a physical point of view. 
On the one hand, to generalize the model for any dimension d > 1, one can simply replace ∂ x in equation ( 1) by the operator ∇.This leads to a quasilinear Schrödinger equation of the form ).On the other hand, at least in dimension d = 2 and d = 3, another possibility is to formally derive the equation of the model by following the arguments presented in Appendix A and by taking as starting point the Dirac equation in dimension 1 < d ≤ 3.This will lead to a slightly more complicated quasilinear Schrödinger equation. In both cases, solitary wave solutions for any α ∈ N * are expected to exist and one can investigate their behavior analytically and numerically.However, the study of the time-dependent equation from an analytical point view seems much more involved. From a numerical point of view, a stiff time integrator would be recommended for higher dimensions in order to overcome stability constraints.A straight forward, but computationally expensive approach would be an implicit scheme.More interesting would be Rosenbrock-type integrators based on a linearization of the equation after the spatial discretisation, and an eponential integrator for the Jacobian of the resulting system.This can be done efficiently by using so-called Leja points, see for instance [2] and references therein. We leave this generalization to future work. Appendix A. Formal derivation of the non-relativistic model Consider the following nonlinear Dirac equation in one space dimension with κ 1 and κ 2 positive constants and α ∈ N * .Here Ψ = (ψ, ζ) is a 2-spinor that describes the quantum state of a nucleon of mass m, and σ 1 and σ 3 are the Pauli matrices given by In nuclear physics, the interesting regime is when the parameters κ 1 and κ 2 behave like m, whereas κ 1 − κ 2 stays bounded.More precisely, let κ 1 = θm and κ 1 − κ 2 = λ with θ and λ positive constants.As a consequence, the nonlinear Dirac equation ( 15) can be written as ( 16) Hence, by writing φ(t, x) = e imt ψ(t, x) and χ(t, x) = e imt ζ(t, x), we obtain As usual, in the non-relativistic regime, the lower spinor χ is of order 1/ √ m.Hence, we have to perform the following change of scale with a = 2λ θ , and F, G defined by Finally, denoting ε = 1 m the perturbative parameter, we obtain In particular, when ε = 0, we have which leads at least formally to the time-dependent quasilinear Schrödinger equation Appendix B. Existence of positive solutions to the stationary equation In this appendix, we prove Theorem 1 and we make rigorous the construction of ground states presented in Section 2. As in [3], we write the stationary equation (3) as a system of first order ODEs for any strictly positive integer α. The existence of solutions to (19) is an immediate consequence of the Cauchy-Lipschitz theorem.More precisely, we have the following lemma. A key ingredient of the proof is to remark that system (19) is the Hamiltonian system associated with the energy As a consequence, to have a complete description of the dynamical system, it is enough to analyze the energy levels of (20), i.e. the curves in the (χ, ϕ)-plane defined by Γ c = {(χ, ϕ)|H(χ, ϕ) = c}.As a consequence of Remark 6, we have the following lemma. Hence, let a > (α + 1)b.In this case we are able to prove the existence and uniqueness (modulo translations) of a positive solution ϕ to of (3) such that lim x→±∞ ϕ(x) = 0. 
Remark 11.As shown in Section 2, a straightforward computation leads to the explicit formula for ϕ.In particular, for any fixed x 0 ∈ R, Finally, we conclude by proving the non-degeneracy of the solution ϕ.The linearized operator at our solution ϕ is defined by Our goal is to prove that in L 2 , ker L = span{ϕ }. Figure 3 . Figure 3. Difference of the numerical solution to the equation (5) for initial data being the ground state solution for a = 9 and b = 4.4, and the exact solution for t = 1. Figure 5 . Figure 5. On the left the L ∞ norm of the solution of Fig. 4, on the right the modulus of the Fourier coefficients of the solution at the final time. Figure 7 . Figure 7. On the left the L ∞ norm of the solution of Fig. 4, on the right the modulus of the Fourier coefficients of the solution at the final time. Figure 8 . Figure 8. L ∞ norms of the solution of (5) for the initial data φ(x, 0) = ϕ(x) ± exp(−x 2 ), on the left for the minus sign, on the right for the plus sign in the initial data. Figure 12 . Figure 12.On the left the L ∞ norm of the solution of Fig. 11, on the right the solution at the final time.
5,398.4
2020-02-07T00:00:00.000
[ "Mathematics", "Physics" ]
Convolutional Neural Networks vs. Convolution Kernels: Feature Engineering for Answer Sentence Reranking In this paper, we study, compare and combine two state-of-the-art approaches to automatic feature engineering: Convolution Tree Ker-nels (CTKs) and Convolutional Neural Networks (CNNs) for learning to rank answer sentences in a Question Answering (QA) setting. When dealing with QA, the key aspect is to encode relational information between the constituents of question and answer in learning algorithms. For this purpose, we propose novel CNNs using relational information and combined them with relational CTKs. The results show that (i) both approaches achieve the state of the art on a question answering task, where CTKs produce higher accuracy and (ii) combining such methods leads to unprecedented high results. Introduction The increasing use of machine learning for the design of NLP applications pushes for fast methods for feature engineering. In contrast, the latter typically requires considerable effort especially when dealing with highly semantic tasks such as QA. For example, for an effective design of automated QA systems, the question text needs to be put in relation with the text passages retrieved from a document collection to enable an accurate extraction of the correct answers from passages. From a machine learning perspective, encoding the information above consists in manually defining expressive rules and features based on syntactic and semantic patterns. Therefore, methods for automatizing feature engineering are remarkably important also in the light of fast prototyping of commercial applications. To the best of our knowledge, two of the most effective methods for engineering features are: (i) kernel methods, which naturally map feature vectors or directly objects in richer feature spaces; and more recently (ii) approaches based on deep learning, which have been shown to be very effective. Regarding the former, in (Moschitti et al., 2007), we firstly used CTKs in Support Vector Machines (SVMs) to generate features from a question (Q) and their candidate answer passages (AP). CTKs enable SVMs to learn in the space of convolutional subtrees of syntactic and semantic trees used for representing Q and AP. This automatically engineers syntactic/semantic features. One important characteristic we added in (Severyn and Moschitti, 2012) is the use of relational links between Q and AP, which basically merged the two syntactic trees in a relational graph (containing relational features). Although based on different principles, also CNNs can generate powerful features, e.g., see (Kalchbrenner et al., 2014;Kim, 2014). CNNs can effectively capture the compositional process of mapping the meaning of individual words in a sentence to a continuous representation of the sentence. This way CNNs can efficiently learn to embed input sentences into low-dimensional vector space, preserving important syntactic and semantic aspects of the input sentence. However, engineering features spanning two pieces of text such as in QA is a more complex task than classifying single sentences. Indeed, only very recently, CNNs were proposed for QA by Yu et al. (2014). Although, such network achieved high accuracy, its design is still not enough to model relational features. In this paper, we aim at comparing the ability of CTKs and CNNs of generating features for QA. 
For this purpose, we first explore CTKs applied to shallow linguistic structures for automatically learning classification and ranking functions with SVMs. At the same time, we assess a novel deep learning architecture for effectively modeling Q and AP pairs generating relational features we initially modeled in (Severyn and Moschitti, 2015;Severyn and Moschitti, 2016). The main building blocks of our approach are two sentence models based on CNNs. These work in parallel, mapping questions and answer sentences to fixed size vectors, which are then used to learn the semantic similarity between them. To compute question-answer similarity score we adopt the approach used by Yu et al. (2014). Our main novelty is the way we model relational information: we inject overlapping words directly into the word embeddings as additional dimensions. The augmented word representation is then passed through the layers of the convolutional feature extractors, which encode the relatedness between Q and AP pairs in a more structured manner. Moreover, the embedding dimensions encoding overlapping words are parameters of the network and are tuned during training. We experiment with two different QA benchmarks for sentence reranking TREC13 (Wang et al., 2007) and WikiQA (Yang et al., 2015). We compare CTKs and CNNs and then we also combine them. For this purpose, we design a new kernel that sum together CTKs and different embeddings extracted from different CNN layers. Our CTK-based models achieve the state of the art on TREC 13, obtaining an MRR of 85.53 and an MAP of 75.18 largely outperforming all the previous best results. On Wik-iQA, our CNNs perform almost on par with tree kernels, i.e., an MRR of 71.07 vs. 72.51 of CTK, which again is the current state of the art on such data. The combination between CTK and CNNs produces a further boost, achieving an MRR of 75.52 and an MAP of 73.99, confirming that the research line of combining these two interesting machine learning methods is very promising. Related Work Relational learning from entire pieces of text concerns several natural language processing tasks, e.g., QA (Moschitti, 2008), Textual Entailment (Zanzotto and Moschitti, 2006) and Paraphrase Identification (Filice et al., 2015). Regarding QA, a referring work for our research is the IBM Watson system (Ferrucci et al., 2010). This is an advanced QA pipeline based on deep linguistic processing and semantic resources. Wang et al. (2007) used quasi-synchronous grammar to model relations between a question and a candidate answer with syntactic transformations. (Heilman and Smith, 2010) applied Tree Edit Distance (TED) for learning tree transformations in a Q/AP pair. (Wang and Manning, 2010) designed a probabilistic model to learn tree-edit operations on dependency parse trees. (Yao et al., 2013) applied linear chain CRFs with features derived from TED to automatically learn associations between questions and candidate answers. Yih et al. (2013a) applied enhanced lexical semantics to build a word-alignment model, exploiting a number of large-scale external semantic resources. Although the above approaches are very valuable, they required considerable effort to study, define and implement features that could capture relational representations. In contrast, we are interested in techniques that try to automatize the feature engineering step. 
In this respect, our work (Moschitti et al., 2007) is the first using CTKs applied to syntactic and semantic structural representations of the Q/AP pairs in a learning to rank algorithm based on SVMs. After this, we proposed several important improvement exploiting different type of relational links between Q and AP, i.e., (Severyn and Moschitti, 2012;Tymoshenko et al., 2014;Tymoshenko and Moschitti, 2015). The main difference with our previous approaches is usage of better-preprocessing algorithms and new structural representations, which highly outperform them. Recently, deep learning approaches have been successfully applied to various sentence classification tasks, e.g., (Kalchbrenner et al., 2014;Kim, 2014), and for automatically modeling text pairs, e.g., (Lu and Li, 2013;Hu et al., 2014). Additionally, a number of deep learning models have been recently applied to question answering, e.g., Yih et al. (2014) applied CNNs to open-domain QA; Bordes et al. (2014b) propose a neural embedding model Iyyer et al. (2014) applied recursive neural networks to factoid QA over paragraphs. (Miao et al., 2015) proposed a neural variational inference model and a Long-short Term Memory network for the same task. Recently (Yin et al., 2015) proposed a siamese convolutional network for matching sentences that employ an attentive average pooling mechanism, obtaining state-of-the-art results in various tasks and datasets. The work closest to this paper is (Yu et al., 2014) and (Severyn and Moschitti, 2015). The former presented a CNN architecture for answer sentence selection that uses a bigram convolution and average pooling, whereas in the latter we used convolution with k-max pooling. However, these models only partially captures relational information. In contrast, in this paper, we encode relational information about words that are matched betweem Q and AP. Feature Engineering for QA with CTKs Our approach to learning relations between two texts is to first convert them into a richer structural representation based on their syntactic and semantic structures, and then apply CTKs. To make our approach more effective, we further enriched structures with relational semantics by linking the related constituents with lexical and other semantic links. Shallow Representation of Short Text Pairs In our study, we employ a modified version of the shallow structural representation of question and answer pairs, CH, described in Tymoshenko and Moschitti, 2015). We represent a pair of short texts as two trees with lemmas at leaf level and their part-of-speech (POS) tags at the preterminal level. Preterminal POS-tags are grouped into chunk nodes and the chunks are further grouped into sentences. Figure 1 provides an example of this structure. We enrich the above representation with the in-formation about question class and question focus. Questions are classified in terms of their expected answer type. employed coarse-grained classes from (Li and Roth, 2002), namely HUM (person), ENTY (an entity), DESC (description), LOC (location), and NUM (number). In this work, we split the NUM class into three subcategories, DATE, QUANTITY, CURRENCY and train question classifiers as described in . Differently from before, we add the question class node as the rightmost child of the root node both to the question and the answer structures. We detect question focus using a focus classifier, FCLASS, trained as in . 
However, in our previous model, we classified all words over the chunks in the question and picked the one with the highest FCLASS prediction score as a focus even if it is negative. In this work, if FCLASS assigns negative scores to all the question chunks, we consider the first question chunk, which is typically a question word, to be a focus. We mark the focus chunk by prepending the REL-FOCUS tag to its label. In previous work, we have shown the importance of encoding information about the relatedness between Q and AP into their structural representations. Thus, we employ lexical and question class match, described hereafter. Lexical match. Lemmas that occur both in Q and AP are marked by prepending the REL tag to the labels of the corresponding preterminal nodes and their parents. Question class match. We detect named entities (NEs) in AP and mark the NEs of type compatible 1 with the question class by prepending the REL-FOCUS-QC label to the corresponding prepreterminals in the trees. The QC suffix in the labels is replaced by the question class in the given pair. For example, in Figure 1, the Dumbledore lemma occurs in both Q and AP, therefore the respective POS and chunk nodes are marked with REL. The named entities, Harris, Michael Gambon and Dumbledore have the type Person compatible with the question class HUM, thus their respective chunk nodes are marked as REL-FOCUS-HUM (overriding the previously inserted REL tag for the Dumbledore chunk). Reranking with Tree Kernels We aim at learning reranker that can decide which Q/AP pair is more probably correct than others, where correct Q/AP pairs are formed by an AP containing a correct answer to Q along with a supporting justification. We adopt the following kernel for reranking: Feature Engineering for QA with CNNs The architecture of our convolutional neural network for matching Q and AP pairs is presented in Fig. 2. Its main components are: (i) sentence matrices s i ∈ R d×|i| obtained by the concatenation of the word vectors w j ∈ R d (with d being the size of the embeddings) of the corresponding words w j from the input sentences (Q and AP) s i ; (ii) a convolutional sentence model f : R d×|i| → R m that maps the sentence matrix of an input sentence s i to a fixed-size vector representations x s i of size m; (iii) a layer for computing the similarity between the obtained intermediate vector representations of the input sentences, using a similarity matrix M ∈ R m×m -an intermediate vector representation x s 1 of a sentence s 1 is projected to ax s 1 = x s 1 M, which is then matched with x s 2 (Bordes et al., 2014a), i.e., by computing a dot-productx s 1 x s 2 , thus resulting in a single similarity score x sim ; (iv) a set of fullyconnected hidden layers that model the similarity between sentences using their vector representations produced by the sentence model (also integrating the single similarity score from the previous layer); and (v) a sigmoid layer that outputs probability scores reflecting how well the Q-AP pairs match with each other. The choice of the sentence model plays a crucial role as the resulting intermediate representations of the input sentences will affect the successive step of computing their similarity. Recently, convolutional sentence models, where f (s) is represented by a sequence of convolutional-pooling feature maps, have shown state-of-the-art results on many NLP tasks, e.g., (Kalchbrenner et al., 2014;Kim, 2014). 
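The bilinear similarity layer described above can be written compactly, and the reranking kernel, whose explicit formula is not given in the text, is assumed below to take the standard preference-pair form used in kernel-based rerankers (an assumption, not a verbatim reproduction):

```latex
% Bilinear question--answer similarity, as described in the architecture above:
\tilde{\mathbf{x}}_{s_1} \;=\; \mathbf{x}_{s_1}\,\mathbf{M},
\qquad
x_{\mathrm{sim}} \;=\; \tilde{\mathbf{x}}_{s_1}\,\mathbf{x}_{s_2}^{\top},
\qquad \mathbf{M} \in \mathbb{R}^{m \times m}

% Assumed preference-reranking kernel over two Q/AP candidate pairs
% \langle a_1, a_2 \rangle and \langle b_1, b_2 \rangle, with K a tree kernel
% (possibly summed with a polynomial kernel over feature vectors):
P_K\!\big(\langle a_1, a_2 \rangle, \langle b_1, b_2 \rangle\big)
 \;=\; K(a_1, b_1) + K(a_2, b_2) - K(a_1, b_2) - K(a_2, b_1)
```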
In this paper, we opt for a convolutional operation followed by a k-max pooling layer with k = 1 as proposed in (Severyn and Moschitti, 2015). Considering recent applications of deep learning models to the problem of matching sentences, our network is most similar to the models in (Hu et al., 2014) applied for computing sentence similarity and in (Yu et al., 2014) (answer sentence selection in QA) with the following difference. To compute the similarity between the vector representation of the input sentences, our network uses two methods: (i) computing the similarity score obtained using a similarity matrix M (explored in (Yu et al., 2014)), and (ii) directly modelling interactions between intermediate vector representations of the input sentences via fully-connected hidden layers (used by (Hu et al., 2014)). This approach, as proposed in (Severyn and , results in a significant improvement in the task of question answer selection over the two methods used separately. Differently from the above models we do not add additional features in the join layer. Representation Layers It should be noted that NNs non-linearly transform the input at each layer. For instance, the output of the convolutional and pooling operation f (s i ) is a fixed-size representation of the input sentence s i . In the reminder of the paper, we will refer to these vector representations for the question and the answer passage as the question embedding (QE) and the answer embedding (AE), respectively. Similarly, the output of the penultimate layer of the network (the hidden layer whose output is fed to the final classification layer) is a compact representation of the input Question and Answer pair, which we call Joint Embedding (JE). Injecting Relational Information in CNNs Sec. 3 has shown that establishing relational links (REL nodes) between Q and A pairs is very important for solving the QA task. Yih et al. (2013b) also use latent word-alignment structure in their semantic similarity model to compute similarity between question and answer sentences. Yu et al. (2014) achieve large improvement by combining the output of their deep learning model with word count features in a logistic regression model. Differently from (Yu et al., 2014;Severyn and Moschitti, 2015) we do not add additional features such as the word count in the join layer. We allow our convolutional neural network to capture the connections between related words in a pair and we feed it with an additional binary-like input about overlapping words (Severyn and Moschitti, 2016). In particular, in the input sentence, we associate an additional word overlap indicator feature o ∈ {0, 1} with each word w, where 1 corresponds to words that overlap in a given pair and 0 otherwise. To decide if the words overlap, we perform string matching. Basically this small feature vector plays the role of REL tag added to the CTK structures. Hence, we require an additional lookup table layer for the word overlap features LT Wo (·) with parameters W o ∈ R do×2 , where d o ∈ N is a hyper-parameter of the model, which indicates the number of dimensions used for encoding the word overlap features. Thus, we augment word embeddings with additional dimensions that encode the fact that a given word in a pair is overlapping or semantically similar and let the network learn its optimal representation. Given a word w i , its final word embedding w i ∈ R d (where d = d w + d o ) is obtained by concatenating the output of two lookup table operations LT W (w i ) and LT Wo (w i ). 
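A minimal sketch of this relational input encoding, written in PyTorch with illustrative dimensions (it is not the implementation used in the experiments), is shown below: each word vector is concatenated with a small trainable embedding indexed by the binary overlap flag.

```python
import torch
import torch.nn as nn

# Sketch (not the authors' code): word embeddings augmented with a trainable
# d_o-dimensional "overlap" embedding, indexed by a binary flag per token.
d_w, d_o, vocab_size = 50, 5, 10_000           # illustrative sizes

word_table    = nn.Embedding(vocab_size, d_w)  # LT_W
overlap_table = nn.Embedding(2, d_o)           # LT_Wo, one row per flag value

def encode(word_ids, overlap_flags):
    """word_ids, overlap_flags: LongTensors of shape (sentence_length,)."""
    w = word_table(word_ids)                   # (len, d_w)
    o = overlap_table(overlap_flags)           # (len, d_o)
    return torch.cat([w, o], dim=-1)           # (len, d_w + d_o)

# Toy example: the third token overlaps with the paired sentence.
ids   = torch.tensor([12, 407, 3981, 55])
flags = torch.tensor([0, 0, 1, 0])
sentence_matrix = encode(ids, flags)           # fed to the convolutional layers
print(sentence_matrix.shape)                   # torch.Size([4, 55])
```

Because the overlap rows are parameters of the network, the model is free to learn how strongly a lexical match should influence the convolutional feature maps, rather than relying on a fixed word-count feature in the join layer.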
Experiments In these experiments, we compare the impact in accuracy of two main methods for automatic feature engineering, i.e., CTKs and CNNs, for relational learning, using two different answer sentence selection datasets, WikiQA and TREC13. We propose several strategies to combine CNNs with CTKs and we show that the two approaches are complementary as their joint use significantly boosts both models. Experimental Setup We utilized two datasets for testing our models: TREC13. This is the factoid open-domain TREC QA corpus prepared by (Wang et al., 2007). The training data was assembled from the 1,229 TREC8-12 questions. The answers for the training questions were automatically marked in sentences by applying regular expressions, therefore the dataset can be noisy. The test data contains 68 questions, whose answers were manually annotated. We used 10 answer passages for each question for training our classifiers and all the answer passages available for each question for testing. WikiQA. TREC13 is a small dataset with an even smaller test set, which makes the system evaluation rather unstable, i.e., a small difference in parameters and models can produce very different results. Moreover, as pointed by (Yih et al., 2013b), it has significant lexical overlap between questions and answer candidates, therefore simple lexical match models may likely outperform more elaborate methods if trained and tested on it. WikiQA dataset (Yang et al., 2015) is a larger dataset, created for open domain QA, which overcomes these problems. Its questions were sampled from the Bing query logs and candidate answers were extracted from the summary paragraphs of the associated Wikipedia pages. The train, test, and development sets contain 2,118, 633 and 296 questions, respectively. There is no correct answer sentence for 1,245 training, 170 development and 390 test questions. Consistently with (Yin et al., 2015), we remove the questions without answers for our evaluations. Preprocessing. We used the Illinois chunker (Punyakanok and Roth, 2001), question class and focus classifiers trained as in and the Stanford CoreNLP (Manning et al., 2014) toolkit for the needed preprocessing. CTKs. We used SVM-light-TK 2 to train our models. The toolkit enables the use of structural kernels (Moschitti, 2006) in SVM-light (Joachims, 2002). We applied (i) the partial tree kernel (PTK) with its default parameters to all our structures and (ii) the polynomial kernel of degree 3 on all feature vectors we generate. Metaclassifier. We used the scikit 3 logistic regression classifier implementation to train the metaclassifier on the outputs of CTKs and CNNs. CNNs. We pre-initialize the word embeddings by running the word2vec tool (Mikolov et al., 2013) on the English Wikipedia dump and the jacana corpus as in (Severyn and Moschitti, 2015). We opt for a skipgram model with window size 5 and filtering MRR MAP P@1 State of the art CNNc (Yang et al., 2015) 66.52 65.20 n/a ABCNN (Yin et al., 2015) 71.27 69.14 n/a LSTMa,c (Miao et al., 2015) 70.41 68.55 n/a NASMc (Miao et al., 2015) 70 words with frequency less than 5. The dimensionality of the embeddings is set to 50. The input sentences are mapped to fixed-sized vectors by computing the average of their word embeddings. We use a single non-linear hidden layer (with hyperbolic tangent activation, Tanh), whose size is equal to the size of the previous layer. The network is trained using SGD with shuffled mini-batches using the Adam update rule (Kingma and Ba, 2014). 
The batch size is set to 100 examples. The network is trained for a fixed number of epochs (i.e., 3) for all the experiments. We decided to avoid using early stopping, in order to do not overfit the development set and have a fair comparison with the CTKs models. QA metrics. We used common QA metrics: Precision at rank 1 (P@1), i.e., the percentage of questions with a correct answer ranked at the first position, the Mean Reciprocal Rank (MRR) and the Mean Average Precision (MAP). Experiments on WikiQA State of the art. CH, VJE, CNNR 74.01 72.31 62.14 n/a n/a n/a CH, VJE 73.95 72.15 62.14 n/a n/a n/a CH+VJE, CNNR 73.43 71.58 60.49 n/a n/a n/a Table 3: Performance on the WikiQA on the development set ABCNN is the Attention-Based CNN, LSTM a,c is the long short-term memory network with attention and word count, and NASM c is the neural answer selection model with word count. CNN R is the relational CNN described in Section 4. CH 4 is a tree kernel-based SVM reranker trained on the shallow pos-chunk tree representations of question and answer sentences (Sec. 3.1), where the subscript coarse refers to the model with the coarsegrained question classes as in (Tymoshenko and Moschitti, 2015). V is a polynomial SVM reranker, where the subscripts AE, QE, JE indicate the use of the answer, question or joint embeddings (see Sec. 4.1) as the feature vector of SVM and + means that two embeddings were concatenated into a single vector. The results show that our CNN R model performs comparably to ABCNN (Yin et al., 2015), which is the most recent and accurate NN model and to CH coarse . The performance drops when the embeddings AE, QE and JE are used in a polynomial SVM reranker. In contrast, CH (using our tree structure enriched with fine-grained categories) outperforms all the models, showing the importance of syntactic relational information for the answer sentence selection task. Combining CNN with CTK on WikiQA We experiment with two ways of combining CTK with CNN R : (i) at the kernel level, i.e., summing tree kernels with the polynomial kernel over different embeddings, i.e., CH+V, and (ii) using the predictions of SVM and CNN R models (computed on the development set) as features to train logistic regression meta-classifiers (again only on the development set). These are reported in the last three lines of Table 1, where the name of the classifiers participating with their outputs are illustrated as a comma-separated list. The results are very interesting as all kinds of combinations largely outperform the state of the art, e.g., by around 3 points in terms of MRR, 2 points in terms of MAP and 5 points in terms of P@1 with respect to the strongest standalone system, CH. Directly using the predictions of the CNN R as features in the meta-classifier does not impact the overall performance. It should be noted that the meta-classifier could only be trained on the development data to avoid predictions biased by the training data. Using less training data Since we train the weights of CNN R on the training set of WikiQA, to obtain the embeddings minimizing the loss function, we risk to have overfitted, i.e., "biased", JE, AE and QE on the questions and answers of the training set. Therefore, we conducted another set of experiments to study this case. We randomly split the training set into two equal subsets. We train CNN R on one of them and in the other subset, (referred to as TRAIN50) we produce the embeddings of questions and answers. 
Table 2 reports the results on the WikiQA test set which we obtained when training SVM on TRAIN50 and on the development set, DEV. We trained the meta-classifier on the predictions of the standalone models on DEV. Consistently with the previous results, we obtain the best performance combining the CNN R embeddings with CTK. Even when we train on the 50% of the training data only, we still outperform the state of the art, and our best model CH+V JE performs only around 2 points lower in terms of MRR, MAP and P@1 than when training on the full training set. Finally, Table 3 reports the performance of our models when tested on the development set and demonstrates that the improvement obtained when combining CTK and CNN R embeddings also holds on it. Note, that we did not use the development set for any parameter tuning and we train all the models with the default parameters. Experiments on TREC13 dataset TREC13 corpus has been used for evaluation in a number of works starting from 2007. Table 4 reports our as well as some state-of-the-art system results on TREC13. It should be noted that, to be consistent with the previous work, we evaluated our models in the same setting as (Wang et al., 2007;Yih et al., 2013a), i.e., we (i) remove the questions having only correct or only incorrect answer sentence candidates and (ii) used the same evaluation script and the gold judgment file as they used. As pointed out by Footnote 7 in (Yih et al., 2014), the evaluation script always considers 4 questions to be answered incorrectly thus penalizing the overall system score. We note that our models, i.e., CNN R , V JE , Discussion The main focus and novelty of this paper is comparing and combining CTKs and CNETs. We showed that the features they generate are complementary as their combination improve both models. For the combinations, we used voting and our new method of combining network layers embedded in a polynomial kernels added to tree kernels. We would like to stress that to the best of our knowledge we are the first to merge CNNs and CTK together. We showed that kernels based on different embedding layers learned with our CNNs, when used in SVMs, deliver the same accuracy of CNNs. This enables an effective combination between TK and CNNs at kernel level. Indeed, we experimented with different kernel combinations built on top of different CNN layers, improving the state of the art, largely outperforming all previous systems exactly using the same testing conditions. These results are important for developing future research as they provide indications on features/methods and referring baselines to compare with. Finally, we generated modified structures and used better parsers outperforming our initial result in ) by more than 10 points. Efficiency An interesting question is the practical use of our models, which require the discussion of their efficiency. In this respect, our framework combines CTKs and CNNs by generating a global kernel. Thus, the time complexity during training is basically given by (i) training CNNs, (ii) extracting their embeddings and (iii) use these embeddings during the CTK training. The time for computing steps (i) and (ii) is linear with respect to the number of examples as the architecture and the number of optimization steps are fixed. In practice, the bottleneck of training our CNN architecture is in the number of weights. 
Regarding Step (iii), since the embeddings just feed a polynomial kernel, which is slightly more efficient than CTKs, the overall complexity is dominated by the one of the CTK framework, i.e., O(n 2 ). In practice, this is rather efficient, e.g., see the discussion in (Tymoshenko and Moschitti, 2015). The testing complexity is reduced to the number of kernel operations between the support vectors and the test examples (the worst case is O(n 2 )), which are also parallelizable. Conclusions This paper compares two state-of-the-art feature engineering approaches, namely CTKs and CNNs, on the very complex task of answer reranking in a QA setting. In order to have a meaningful comparison, we have set the best configuration for CTK by defining and implementing innovative linguistic structures enriched with semantic information from statistical classifiers (i.e., question and focus classifiers). At the same time, we have developed powerful CNNs, which can embed relational information in their representations. We tested our models for answer passage reranking in QA on two benchmarks, WikiQA and TREC13. Thus, they are directly comparable with many systems from previous work. The results show that our models outperform the state of the art achieved by more complex networks. In particular, CTKs outperform our CNNs but use more information, e.g., on TREC 13, CTKs obtain an MRR and MAP of 85.53 and 75.18 vs. 77.93 and 71.09 of CNNs. On WikiQA, CNNs combined with tree kernels achieves an MRR of 75.88 and an MAP of 74.17 largely outperforming the current state of the art, i.e., MRR of 71.27 and MAP 69.14 of ABCNN by Yin et al. (2015). It should be noted that CTK models use syntactic parsing, two statistical classifiers for focus and question classification and a named entity recognizer whereas CNNs only use words and two additional unsupervised corpora. In the future, we would like to embed CNN similarity in CTKs. A straightforward methods for achieving this is to use the Smoothed Partial Tree Kernel by Croce et al. (2011). Our preliminary experiments using word2vec were not successful. However, CNNs may provide a more effective similarity. Finally, it would be also very interesting to exploit structural kernels in the network layers.
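For completeness, the three ranking metrics used throughout the evaluation (P@1, MRR, MAP) can be computed from per-question candidate lists with a few lines of code. The sketch below assumes each question is represented by its candidates' 0/1 relevance labels sorted by system score; it is not tied to the official evaluation scripts used in the experiments.

```python
def precision_at_1(rankings):
    """rankings: per-question label lists, sorted by system score (best first)."""
    return 100.0 * sum(r[0] for r in rankings if r) / len(rankings)

def mean_reciprocal_rank(rankings):
    rr = []
    for labels in rankings:
        rank = next((i + 1 for i, rel in enumerate(labels) if rel), None)
        rr.append(1.0 / rank if rank else 0.0)
    return 100.0 * sum(rr) / len(rr)

def mean_average_precision(rankings):
    aps = []
    for labels in rankings:
        hits, precisions = 0, []
        for i, rel in enumerate(labels, start=1):
            if rel:
                hits += 1
                precisions.append(hits / i)
        aps.append(sum(precisions) / max(hits, 1))
    return 100.0 * sum(aps) / len(aps)

# Toy example: two questions, candidates already sorted by model score.
ranked = [[0, 1, 0, 0], [1, 0, 1]]
print(precision_at_1(ranked), mean_reciprocal_rank(ranked), mean_average_precision(ranked))
```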
6,767.6
2016-06-01T00:00:00.000
[ "Computer Science" ]
Decomposition-Based Multiobjective Optimization with Invasive Weed Colonies In order to solve the multiobjective optimization problems efficiently, this paper presents a hybrid multiobjective optimization algorithm which originates from invasive weed optimization (IWO) and multiobjective evolutionary algorithm based on decomposition (MOEA/D), a popular framework for multiobjective optimization. IWO is a simple but powerful numerical stochastic optimization method inspired from colonizing weeds; it is very robust and well adapted to changes in the environment. Based on the smart and distinct features of IWO andMOEA/D, we introducemultiobjective invasive weed optimization algorithm based on decomposition, abbreviated as MOEA/D-IWO, and try to combine their excellent features in this hybrid algorithm. The efficiency of the algorithmboth in convergence speed and optimality of results are comparedwithMOEA/D and some other popular multiobjective optimization algorithms through a big set of experiments on benchmark functions. Experimental results show the competitive performance of MOEA/D-IWO in solving these complicated multiobjective optimization problems. Introduction Multiobjective optimization problems (MOPs) widely exist in applications [1], such as design [2], scheduling [3][4][5], path planning [6], retrieval [7], and cloud computing [8].These problems usually have two or more objectives, which often conflict with each other.Traditional mathematical methods often cannot deal with them well.Evolutionary algorithms present unique superiority in handling this type of problems.Due to the wide application scenes of MOPs, research on multiobjective evolutionary algorithms (MOEAs) remains prosperous [9][10][11]. Multiobjective evolutionary algorithms that have been proposed in literatures can be classified into three categories [9,12]: the dominance-based approach, the indicator-based approach, and the decomposition-based approach. (1) Dominance-based approach: in this type of approach, Pareto-dominance selection principle plays an important role in convergence process, among which the Pareto-based nondominated sorting approach is the most popular, where solutions having better Pareto ranks are selected.Besides, often a diversity maintaining strategy is needed for achieving an even distribution of the Pareto optimal solutions.Improved strength Pareto EA (SPEA2) [13] and nondominated sorting genetic algorithm II (NSGA-II) [14] are two representative Pareto-based MOEAs, which perform effectively in solving 2objective or 3-objective MOPs.However, when the number of objectives becomes large, selection pressure will reduce sharply and optimization process will become ineffective [15][16][17]. (2) Indicator-based approach: i.n this type of approach, a performance indicator such as hypervolume indicator or R2 indicator is used to measure the fitness of solutions by assessing their contributions.The used indicator needs the capability of measuring both convergence and diversity of an optimization algorithm.R2 indicator based evolutionary algorithm (R2-IBEA) [18], hypervolume-based evolutionary algorithm (HypE) [19], and hybrid Multiobjective Particle Swarm Optimization Algorithm Based on R2 Indicator (R2HMOPSO) [20] are three well-known indicator-based optimization algorithms. 
(3) Decomposition-based approach: in this type of approach, an MOP is transformed into a series of singleobjective optimization subproblems through decomposition method, weighted sum approach, for example, and solves these subproblems simultaneously in a single run by an optimization algorithm.Decomposition-based method will utilize aggregated fitness value of solutions in selection process.Multiobjective genetic local search (MOGLS) [21], cellular genetic algorithm for multiobjective optimization (C-MOGA) [22], and MOEA/D [15] etc. are some of well famous representative MOEAs based on decomposition. MOEA/D first proposed in 2007 [15] is a milestone in the development of MOEAs; it is a classical decomposition-based algorithm.MOEA/D defines a framework of multiobjective optimization; its improved version has won first on CEC 2009 [23].Since being proposed, MOEA/D and its variants have solved many complex MOPs, which demonstrates that MOEA/D has lower computation complexity and performs better than NSGA-II in dealing with complex MOPs in a sense [24][25][26][27].Therefore, its research is worthy of attention. Invasive Weed Optimization (IWO) [28] first proposed in 2006, is a derivative-free metaheuristic algorithm mimicking the ecological behavior of colonizing weeds and distribution and is able to efficiently handle general linear, nonlinear, and multidimensional optimization problems [28,29].Since its proposal, IWO has been successfully applied in many practical optimization problems, such as developing a recommender system [30], many kinds of antenna configuration optimization [2,31], and DNA computing [32]. Kundu et al. [33] proposed multiobjective invasive weed optimization (IWO) in 2011 and applied it on solving CEC 2009 MOPs.In their work, fuzzy dominance mechanism, instead of nondominated sorting, was carried out to sort the promising weeds in each iteration.Y. Liu et al. developed multiobjective invasive weed optimization for synthesis of phase-only reconfigurable linear arrays [2].In addition, as far as we know, there has not much research on the multiobjective invasive weed optimization.Then in this work, we extend the classical IWO algorithm and integrate it into the framework of MOEA/D for well handling multiobjective problems.Based on the smart and distinct features of IWO and MOEA/D, we propose multiobjective invasive weed optimization algorithm based on decomposition (MOEA/D-IWO) and try to combine their excellent features in this extended hybrid algorithm.MOEA/D-IWO decomposes an MOP into a series of single-objective subproblmes and solves them in parallel in each generation.The population consists of the best solutions searched so far for each subproblem, and each subproblem utilizes an extended IWO algorithm for evolution in each generation.The performance of the proposed MOEA/D-IWO in both convergence speed and optimality of results are compared with those of NSGA-II, MOEA/D, and some other multiobjective evolutionary algorithms on a big set of MOPs.Comparison results indicate the feasibility of IWO as a very hopeful metaheuristic candidate in the domain of multiobjective optimization. The remaining parts of this paper are organized as follows.Section 2 formally describes the background knowledge on multiobjective optimization, the basic framework of MOEA/D, and an overview of IWO.Section 3 provides an adaptive modification of IWO and then integrates it into MOEA/D deducing our proposed MOEA/D-IWO. Experiments are carried out and discussed in Section 4. 
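To make the two core ingredients above concrete, the sketch below illustrates the Pareto-dominance check used by dominance-based methods and the weighted-sum and Tchebycheff aggregations used by decomposition-based methods such as MOEA/D. This is a minimal sketch under a minimization convention; all function names and the toy objective matrix are illustrative.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(F):
    """Indices of the nondominated rows of the objective matrix F."""
    return [i for i, p in enumerate(F)
            if not any(dominates(q, p) for j, q in enumerate(F) if j != i)]

def weighted_sum(f, w):
    """Weighted-sum aggregation of objective vector f with weight vector w."""
    return float(np.dot(w, f))

def tchebycheff(f, w, z_star):
    """Tchebycheff aggregation: max_i w_i * |f_i - z_i*| with reference point z*."""
    return float(np.max(w * np.abs(f - z_star)))

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(nondominated(F))                      # [0, 1, 3]; (3, 3) is dominated by (2, 2)
w, z = np.array([0.5, 0.5]), np.zeros(2)
print(weighted_sum(F[1], w), tchebycheff(F[1], w, z))
```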
Finally, Section 5 concludes this paper and prospects our further research. There are three goals for MOEAs in handling an MOP: (1) good convergence, obtaining a set of approximations as close as possible to the PF, (2) good diversity, obtaining a set of evenly distributed approximations, and (3) good coverage, which can cover the entire PF. MOEA/D: an Overview. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) is a representative of the decomposition-based method, proposed by Zhang and Li in 2007 [15].Large sets of experiments have illustrated that MOEA/D and its improved versions show superiority over other popular MOEAs on solving MOPs with complicated Pareto set shapes [16,36].The basic idea behind MOEA/D is to transform an MOP into a series of singleobjective optimization subproblems through decomposition method and coevolve these subproblems in each generation. The framework of MOEA/D is formally described in Algorithm 1. Tchebycheff method is used for decomposing an MOP into subproblems in this framework; is the aggregated scalar function after decomposition.There have been other decomposition methods, such as weighted sum (WS) and penalty-based boundary intersection (PBI) and can be used for decomposing.Detailed descriptions of these decomposition methods can refer to [15].Just as the optimal solution of each subproblem has been proved to be Pareto optimal to the MOP under consideration, then the solutions set of all subproblems can be considered as a good approximation of PF. Compared with other MOEAs, MOEA/D has the following three important features. (1) MOEA/D transforms an MOP into a series of singleobjective optimization subproblems through decomposition and solves these subproblems simultaneously; it does not directly solve the MOP as a whole.Different decomposition methods often have different effects on solving problems.Furthermore, many kinds of optimization strategies used in single-objective optimization algorithm can also be integrated into MOEA/D. (2) MOEA/D implements the coevolution of subproblems.With the solutions information of adjacent subproblems, multiple subproblems can be optimized simultaneously.Then the computational complexity of MOEA/D is lower than that of NSGA-II. (3) MOEA/D can solve the MOPs with complicated Pareto set shapes very well, which we often encounter in practical engineering optimization, synthesis of phase-only reconfigurable linear arrays for example [2].In addition, MOEA/D can solve the problem with multiple objects (especially when the number of objects is greater than four).When the number of objects is large, the performance of MOEA tends to decline, which requires a larger population for optimization.However, the performance of MOEA/D does not significantly decrease. Invasive Weed Optimization (IWO): an Overview. Weeds are plants whose vigorous and invasive habits of growth make them very robust and adaptive to changes in environment.Thus, capturing their properties and imitating their behaviors would lead to a powerful optimization method.This is the main idea behind IWO, which was originally proposed in [28]; it is a simple but effective meta-heuristic algorithm.To fulfill the IWO process, the following steps are needed. Step 1 (initialization).A number of weeds are uniformly generated in the feasible decision space, where each weed represents a trial solution of the optimization problem under consideration. 
Step 2 (fitness evaluation and ranking). Each weed grows into a plant. A fitness evaluation function then assigns each plant a fitness value and ranks the plants according to these values.

Step 3 (reproduction). Every plant produces seeds based on its rank or assigned fitness value. In other words, the number of seeds a plant is permitted to produce, s, is decided by its fitness (or rank) f and by the permissible maximum and minimum numbers of seeds, s_max and s_min. It is formulated as

s = floor( s_min + ((f − f_min) / (f_max − f_min)) · (s_max − s_min) ),    (2)

where f_max and f_min are the highest and the lowest fitness of the population. Generally speaking, plants with high fitness or rank have the chance of producing more seeds. This step also provides an important property: it allows all plants to participate in the reproduction contest, i.e., it gives all plants the chance of surviving and reproducing based on their rank or fitness.

Step 4 (spatial distribution). The produced seeds are randomly distributed over the search space according to a Gaussian distribution with zero mean and varying variance. This ensures that the produced seeds are generated around their parent plant, conducting a local search around each plant. However, the standard deviation of the random function is designed to decrease with iterations. At the current iteration iter, the standard deviation is described as

σ_iter = ((iter_max − iter)^n / (iter_max)^n) · (σ_initial − σ_final) + σ_final,    (3)

where iter_max is the upper limit of iterations and n is a nonlinear regulatory factor; σ_initial and σ_final denote the initial and final standard deviations, respectively. It can be observed from (3) that the probability of dropping a seed in a remote area reduces nonlinearly with iterations, leading to the grouping of fitter plants and the elimination of inappropriate plants. Therefore, this step can be considered as the selection mechanism of IWO.

Step 5 (repeat and terminate). After the above steps are carried out for all of the plants, the process is repeated from Step 2 until the stop conditions are met. It should be noted that weeds with lower fitness have a high probability of being eliminated once all plants have reproduced up to the maximum number allowed in the colonizing process.

Multiobjective Invasive Weed Optimization Algorithm Based on Decomposition In this part, we present a multiobjective invasive weed optimization algorithm based on decomposition, abbreviated as MOEA/D-IWO. We first adapt IWO for multiobjective optimization and then integrate it into MOEA/D, providing a decomposition-based multiobjective optimization algorithm with invasive weed colonies. The main aspects of our motivation are as follows: IWO is a population-based stochastic optimization technique for solving continuous optimization problems. In the case of nonlinear multidimensional continuous optimization problems, IWO outperforms PSO, GA, memetic algorithms, and shuffled frog leaping [28]. However, in conventional IWO, fitness is used not only to compare two solutions but also in the reproduction process, unlike in PSO, GA, etc. Compared with other EAs, the fitness assignment of each solution in IWO is more difficult for MOPs than for single-objective optimization. Kundu et al.
developed multiobjective invasive weed optimization [33], where fuzzy dominance mechanism, instead of nondominated sorting, is carried out to sort the promising weeds in each iteration.However, with the number of objectives becoming large, selection pressure will reduce sharply and optimization process becomes ineffective.To avoid this difficulty, the framework of decomposition-based multiobjective algorithm [15] can be considered as a reliable candidate.The mentioned advantages and disadvantages of IWO motivate us to propose a new hybrid version of IWO with MOEA/D framework to solve MOPs. Adaptive Modification of IWO. In IWO, only individuals with high fitness values are permitted to reproduce offsprings, and the number of offsprings is determined by the normalized fitness value.Therefore, IWO is able to avoid wasting time on searching the less feasible region in a constrained optimization problem.However, as a local search algorithm, IWO is sensitive to the initial values of the parameters and easily gets trapped into local optima. An adaptive modification of IWO in this study is for the aim of acquiring the balance between effective exploration and efficient exploitation utilizing neighborhood information for multiobjective optimization.The original IWO leads to a coarse-grained local search because the offsprings have the same dispersal degree in all dimensions at a certain iteration.In detail, it can be clearly seen from ( 3) that iter decreases with the increase of iterations; however, the value of iter for each parent seed in one iteration is the same, which is not conducive to exploration and efficient exploitation.Furthermore, we plan to integrate IWO into MOEA/D for multiobjective optimization.MOEA/D decomposes an MOP into a big set of scalar subproblems and coevolves these subproblems through neighborhood relationship.In the process of coevolution, we plan to utilize IWO fulfilling optimization for each subproblem.However, the current best solution and its neighbors have obviously different fitness for each subproblem; then the same setting of iter for them is not proper.In other words, iter influences the distance between parents and their produced children weeds, though they are under the same iteration.Different parent should have its own iter differing from those of other parent weeds.Thus, in this study we improve IWO and propose an adaptive standard deviation iter , where the value of iter varies not only with the iteration but also with the rank of the individual's fitness in the subproblem, as described in where is the aggregated scalar function value of the weed (Tchebycheff method is used for example), min , max , and mean , respectively, represent the minimum, maximum, and average scalar function value among all weeds (current solution and its neighbors) in current iteration for each subproblem, and is a regulatory factor of adjusting the variation range of standard deviation; its value is generally set from 0 to 0.5. 
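To make Steps 3-4 and the adaptive modification concrete, the sketch below implements the seed-count mapping, the nonlinearly decreasing dispersal standard deviation, and one plausible reading of the per-weed scaling into [1 − ε, 1 + ε]·σ_iter. The exact form of Eq. (4) is not reproduced in the text, so the linear scaling, the parameter defaults, and all function names here are assumptions rather than the authors' implementation.

```python
import numpy as np

def seed_count(g, g_min, g_max, s_min, s_max):
    """Number of seeds for a weed with aggregated value g (minimization:
    a lower g is better and yields more seeds); floor as described in the text."""
    if g_max == g_min:
        return s_max
    frac = (g_max - g) / (g_max - g_min)
    return int(np.floor(s_min + frac * (s_max - s_min)))

def sigma_iteration(it, it_max, sigma_init, sigma_final, n=3):
    """Nonlinearly decreasing dispersal standard deviation (Eq. 3)."""
    return ((it_max - it) ** n / it_max ** n) * (sigma_init - sigma_final) + sigma_final

def sigma_adaptive(g, g_min, g_max, sigma_iter, eps=0.3):
    """Per-weed sigma scaled into [1 - eps, 1 + eps] * sigma_iter according to the
    weed's aggregated value within its neighborhood; this linear scaling is an
    assumption, since Eq. (4) itself is not shown in the text."""
    if g_max == g_min:
        return sigma_iter
    t = (g - g_min) / (g_max - g_min)          # 0 for the best weed, 1 for the worst
    return (1.0 - eps + 2.0 * eps * t) * sigma_iter

def disperse(parent, n_seeds, sigma, rng):
    """Drop seeds around the parent with zero-mean Gaussian perturbations."""
    return parent + rng.normal(0.0, sigma, size=(n_seeds, parent.size))

rng = np.random.default_rng(0)
sig = sigma_adaptive(0.2, 0.2, 0.8, sigma_iteration(10, 100, 1.0, 0.01))
print(disperse(np.array([0.4, 0.7]), seed_count(0.2, 0.2, 0.8, 1, 5), sig, rng))
```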
It can be found from ( 4) that iter of weed consists with its scalar function value; the lower the scalar function value is, the smaller the standard deviation of the weed iter will be, which ensures the children seeds produced by better parents distribute relatively near around their parents, and the children seeds produced by worse parents distribute relatively far away from their parents.Moreover, the variable range of iter is extended to [1 − , 1 + ] iter strengthening the diversity of the seeds, and the standard deviation of producing weeds decreases with iterations on the whole.This will accelerate the convergence rate and meanwhile can escape from local optimum.Global and local search capabilities can be well balanced through this mechanism. On the other hand, the number of seeds produced by parent plant is described as where max and min in (5) are the largest and smallest number of seeds each parent is permitted to produce, respectively, ( * ) means the floor function of ' * ' .It is very evident that better individual will produce more seeds.Figure 1 visually illustrates the procedure. The dispersal degree of offsprings in adaptive IWO variant is determined by the estimation of the neighborhood information around their parents based on the neighborhood topology, which is more powerful in subproblem local search compared with the original IWO. MOEA/D-IWO-Algorithm Description. From the previous two sections we conclude that MOEA/D provide a good framework for multiobjective optimization while adaptive IWO novelly offers good exploration and diversity.In this part, we combine the two algorithms and present a novel algorithm: MOEA/D-IWO for handling multiobjective optimization problems.Based on the smart and distinct features of IWO and MOEA/D, we propose MOEA/D-IWO and try to combine their excellent features in this extended hybrid algorithm. Under the framework of MOEA/D, MOEA/D-IWO decomposes a multiobjective problem into a big set of scalar optimization subproblems and solves them simultaneously.In each subproblem, adaptive IWO is adopted for search, where the objective is to minimize the aggregation function of all the objects under consideration.Each subproblem has its own aggregation weight vector constructing its aggregation function, which is different from any of the others; i.e., all these aggregation weight vectors of the decomposed subproblems differ with each other.At each generation, the population is composed of the best solutions searched so far for each subproblem; then the number of the decomposed subproblems is also the population size.If the population size is set to , then, we need to optimize these subproblems simultaneously. An MOP can be transferred into a series of scalar optimization subproblems through decomposition [34].Tchebycheff decomposition approach is mainly employed in our experiments.Let 1 , . . ., be a set of uniformly distributed weight vectors, and * = ( of the -th subproblem can be described as the following [34]: where = ( 1 , . . ., ) ⊤ , is the number of objects.MOEA/D-IWO optimizes all those objective functions simultaneously.Each subproblem is optimized by adaptive IWO using information only from its neighbors.Neighborhood relations among subproblems here are defined based on the distance between their aggregation coefficient vectors.Detailed description of MOEA/D-IWO is provided in Algorithm 2. In the line labeled 7 of the Algorithm 2, ← IWO ( , iter ) describes the procedure of producing seeds. 
consists of all children seeds produced by .Suppose = { 1 , 2 , . . ., }; then is the total number of children seeds produced by , its value is determined by (5), where , i.e., ( | , * ), and min , max are obtained by the following equations, respectively: iter stands for the adaptive standard deviation of ; its value can be got through (3) and ( 4), where mean is calculated by where |()| is the number of neighbors for subproblem . Likewise, the same computing model is applied on the neighbors of in the line (labeled 8) of Algorithm 2. Experiments For illustrating the performance of MOEA/D-IWO in handling MOPs, in this part MOEA/D-IWO is experimented on a big set of benchmark test instances.Firstly, MOEA/D-IWO is tested on nine problems with complex Pareto set shapes chosen from [16] and compared with other two classical algorithms: NSGA-II and MOEA/D on these problems.This set of nine complex functions was proposed by professors Zhang and Li [16].Many experimental results have shown that this kind of complicated PSs as well as PFs could seriously affect the performance of MOEAs [23].Besides, MOEA/D-IWO is also tested on ten of CEC 2009 problems UF1-UF10 in this part for further comparing with other hybrid or outstanding algorithms including MOEA/D-DE [16], MOEA/D-PSO, dMOPSO [37], I-MOEA/D [17], and R2HMOPSO [20].Among these ten problems UF1-UF10, the first seven UF1-UF7 are problems with two objectives while the last three UF8-UF10 are problems with three objectives.Each of the test problems UF1-UF10 has a decision space composed of 30 variables.Detailed descriptions of these ten test problems can be found in [23]. All those nineteen test problems are for minimization of the objectives. Performance Metric. In multiobjective optimization, there are two basic aims that all the multiobjective algorithms pursue; i.e., the obtained solutions set must be as close as possible to the Pareto front, while the diversity of the solutions set needs to be maintained.In order to evaluate and compare the different algorithms quantitatively, we use the following performance metrics in experiments. (i) Inverted generational distance (IGD) [38]: suppose * is a large set of uniformly distributed points along the PF representing it well, and is the solutions set obtained by multiobjective algorithm.IGD ( * , ) represents the average distance from * to described as where (, ) is the minimum Euclidean distance between and the points in .If | * | is large enough to represent the PF very well, IGD( * , ) could measure both the diversity and convergence of in a sense.To have a low value of IGD( * , ), must be very close to the PF and cannot miss any part of the whole PF. (ii) Spacing (S): [39] proposed the spacing metric which measures the variance of distance of each solution in to its closest neighbour: A lower variance is preferred as this indicates a better distribution of solutions in the Pareto set.The idea value is 0 as this indicates that the distances from one solution to its closest neighbour is the same for every solution in the Pareto set which means a uniform distribution of solutions in the Pareto set.(iii) Hypervolume (HV) [40]: the hypervolume metric measures the size of the region which is dominated by the solutions in .Therefore a higher value of the HVmetric is preferred.Mathematically, the HV-metric is described as where V(⋅) is the Lebesgue measure, and = ( 1 , . . ., ) ⊤ is an antioptimal reference point in the objective space that is dominated by all Paretooptimal objective vectors. 
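The three performance metrics described above can be computed as follows. This is a minimal sketch: the hypervolume routine handles only the bi-objective minimization case, and the reference front, solution set, and reference point are placeholders.

```python
import numpy as np

def igd(pf_ref, A):
    """Inverted generational distance: mean Euclidean distance from each point of the
    reference Pareto front pf_ref to its nearest obtained solution in A (lower is better)."""
    d = np.linalg.norm(pf_ref[:, None, :] - A[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def spacing(A):
    """Spread metric: deviation of nearest-neighbour distances within A (0 = perfectly even)."""
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return float(np.sqrt(((nn - nn.mean()) ** 2).sum() / (len(A) - 1)))

def hypervolume_2d(A, ref):
    """Hypervolume for a bi-objective minimization problem w.r.t. reference point ref
    (higher is better); assumes A contains only mutually nondominated points."""
    A = A[np.argsort(A[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in A:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

A = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
pf_ref = np.stack([np.linspace(0, 1, 101), 1 - np.linspace(0, 1, 101)], axis=1)
print(igd(pf_ref, A), spacing(A), hypervolume_2d(A, ref=np.array([1.2, 1.2])))
```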
Parameters used in algorithms are set as follows. (1) Population Size and Number of Evaluations (i) For F1 -F9, population size is set to 300 and 595 for the problems with two objectives and three objectives, respectively, in all compared algorithms.The maximal number of generations is set to 250 for F1 -F9. (ii) For UF1 -UF10, population size is set to 300 and 600 for the test instances with two objectives and three objectives, respectively.The total number of evaluations = 300000 for all UF1 -UF10. (iii) Each algorithm runs 30 times independently on each test instance F1 -F9 and UF1 -UF10.All those algorithms stop running after getting a given maximal number of function evaluations or generations. MOEA/D-IWO Is Compared with NSGA-II [14] and MOEA/D [16] on F1 -F9.MOEA/D-IWO is compared with MOEA/D and NSGA-II in terms of performance metrics values.The statistical results of performance metrics obtained by MOEA/D-IWO and the other two algorithms are summarized in Tables 1-3.The three statistical results are based on 30 independent runs for each test problem, including the mean, the minimum, and the standard deviation (std) of the performance metrics values.The best performance on the same test problem is highlighted by bold font.Besides, in the fifth and seventh columns of Tables 1-3, the statistical significance (ss) of the advantage of MOEA/D-IWO in the mean IGD-metric, S-metric, and HV-metric value is reported.+/=/-, respectively, represents that MOEA/D-IWO is statistically superior to, equal to, and inferior to MOEA/D and NSGA-II in terms of mean performance metric value. As listed in Table 1, for almost all the test problems, the mean and the best IGD-metric values obtained by MOEA/D-IWO are smaller than those obtained by MOEA/D and NSGA-II, respectively, which demonstrates that MOEA/D-IWO behaves better than MOEA/D and NSGA-II in pursuing PF on both the convergence and diversity. The spacing-metric numerically describes the spread of the solutions on the objective space.Table 2 clearly shows that the spacing-metric values obtained by MOEA/D-IWO are smaller than other two algorithms for almost all the test problems, which indicates that the solutions obtained by MOEA/D-IWO are spaced more evenly than those obtained by MOEA/D and NSGA-II in general. The HV-metric measures the size of the region which is dominated by the obtained Pareto front, i.e., the region of coverage of the obtained Pareto front.Therefore the higher value of the HV-metric is preferred.As described in Table 3 for almost all the test problems MOEA/D-IWO has better performance than MOEA/D and NSGA-II in terms of HVmetric. To verify the convergence trend of the proposed algorithm, convergence graphs of the three algorithms on F1 -F9 are shown in Figure 2 plotting the evolution of the average IGD-metric values.It can be clearly seen from Figure 2 that MOEA/D-IWO converges much faster than NSGA-II and MOEA/D in minimizing the IGD-metric values for almost all the problems, which indicates that in most cases the adaptive IWO is effective in accelerating the convergence, and the proposed hybrid MOEA/D-IWO is feasible in improving the accuracy of the Pareto solutions. 
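The per-problem significance labels (+/=/-) can be produced by comparing the 30 per-run metric values of two algorithms. The sketch below uses a two-sample t-test from SciPy; the paper does not state the exact test variant or significance level, so both are assumptions here, and the sample values are synthetic.

```python
import numpy as np
from scipy.stats import ttest_ind

def significance_label(igd_a, igd_b, alpha=0.05):
    """Compare two samples of IGD values (e.g., 30 independent runs each) of
    algorithm A vs. algorithm B. Returns '+' if A is significantly better
    (lower mean IGD), '-' if significantly worse, and '=' otherwise."""
    stat, p = ttest_ind(igd_a, igd_b, equal_var=False)
    if p >= alpha:
        return "="
    return "+" if np.mean(igd_a) < np.mean(igd_b) else "-"

rng = np.random.default_rng(1)
runs_a = rng.normal(0.010, 0.002, size=30)   # e.g., MOEA/D-IWO IGD values
runs_b = rng.normal(0.014, 0.003, size=30)   # e.g., a baseline's IGD values
print(significance_label(runs_a, runs_b))
```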
Figures 3-5 plot the distribution of the final population in the objective space obtained by three algorithms on F1 -F9.It can be observed from these three figures that MOEA/D-IWO can obtain good approximations to F1, F3, F4 -F6.However, it fails within the given number of generations, to approximate the PFs of the problems F8 and F9 satisfactorily, perhaps for the reason that incorporating IWO (using the random Gaussian reproduction mechanism for optimization) into an MOEA would, in some sense, spoil the diversity of the algorithm, and the current mechanism is not good enough for well solving the concave or problems with many local Pareto solutions.However, as evidenced from Tables 1-3 and Figures 2-5, in general MOEA/D-IWO performs well and preferably.Table 4 provides the mean and the standard deviation (std) of the IGD performance metrics of all compared algorithms, where the best performance on each problem is highlighted by bold font.In order to validate the statistical significance of the advantages of MOEA/D-IWO over other algorithms, -test is carried out on the obtained IGD performance metric values and the results are shown in the rows labeled 'ss' .+/=/-shows that MOEA/D-IWO is superior to, similar to, or inferior to the compared algorithm, respectively.Total comparing results are summed up in the last row. From Table 4 we can observe that MOEA/D-IWO performs the best in all those six multiobjective algorithms.Among those ten test problems UF1-UF10, MOEA/D-IWO behaves better than MOEA/D-PSO on all of them; Mathematical Problems in Engineering MOEA/D-IWO behaves better than dMOPSO on nine problems; MOEA/D-IWO behaves better than MOEA/D-DE and I-MOEA/D on eight problems and better than R2HMOPSO on seven problems.All these results demonstrate the preferable performance of MOEA/D-IWO. Figure 6 visually plots the approximate Pareto fronts of UF1 -UF10 searched by MOEA/D-IWO.It can be seen from the figure that MOEA/D-IWO could find good approximations to UF1, UF2, UF3, and UF7.Its approximations to UF6, UF8, and UF9 are acceptable.However, it fails to approximate satisfactory PFs of UF4, UF5, and UF10 under the present given stop conditions.For well solving these problems with discontinuous or concave PFs, MOEA/D-IWO needs to be further improved.Nevertheless, as evidenced from Tables 1-4, the hybrid of MOEA/D and IWO can generally improve the performance of MOEA/D.Compared with other hybrid MOEA/D algorithms, MOEA/D-IWO is efficient and competitive. In a word, through comparing with other popular MOEAs we validate the great potential of MOEA/D-IWO in dealing with this kind of complicated multiobjective problems. Additional Experimental Discuss. MOEA/D-IWO decomposes an MOP into a big set of scalar subproblems and coevolves these subproblems simultaneously.In the process of coevolution, an adaptive IWO is proposed and utilized for each subproblem.As described in Section 3. 
is an important regulatory factor in adaptive IWO for balancing effective exploration and efficient exploitation.Trial experiments observed that its value is properly set from 0 to 0.5.To investigate the impact of on the performance of MOEA/D-IWO, different settings of have been tested in this part.7, MOEA/D-IWO performs relatively stable with from 0 to 0.5 on the two problems, and it will deteriorate when is greater than 0.5 on UF7.It is evident that MOEA/D-IWO is not very sensitive to the setting of under the range considered [0, 0.5].When is relatively large, the reason for the poor performance of MOEA/D-IWO may be that the search diversity of the algorithm is enhanced but the exploration ability is weakened. Conclusion IWO is a smart algorithm mimicking the ecological behavior of colonizing weeds and distribution and is able to efficiently handle general linear, nonlinear, and multidimensional optimization problems.Since its proposal, IWO has been successfully applied in many practical optimization problems.However, to our knowledge, there has been little research on IWO for multiobjective optimization.Actually, MOEA/D-IWO is still faced with some challenges in solving the MOPs with discontinuous or concave PFs.Strengthening the performance of the algorithm remains to be studied further.MOEA/D-IWO may be improved by using better and newer variants of IWO in future.Meanwhile, we also intend to study the ability of MOEA/D-IWO in solving high-dimensional multiobjective optimization problems in the future. Figure 1 : Figure 1: Seed production procedure in a colony of weeds. F7 and F8 are two complicated MOPs with many local Pareto solutions; MOEA/D-IWO performs not well on them in experiments.We choose the two problems as examples to test the impact of .Different settings of in the implementation of MOEA/D-IWO for UF7 and UF8 have been tested.All the other parameters settings are the same as in Section 4.2 except the setting of .For each setting of , MOEA/D-IWO runs 30 times independently.Figures 7(a) and 7(b) box plot the IGD-metric values of the obtained solutions for UF7 and UF8 based on those 30 independent runs, while Figure 7(c) depicts the variation trend of the mean IGD-metric values under different .As clearly shown in Figure For well solving the complex multiobjective problems, in this work, we broaden the use of classical IWO and integrate it into the frame of MOEA/D for handling MOPs.Based on the smart and distinct features of IWO and MOEA/D, we introduce MOEA/D-IWO and try to combine their excellent features in this extended hybrid algorithm.MOEA/D-IWO decomposes an MOP into a big set of single-objective subproblmes and handles them simultaneously in each generation.The population consists of the best solutions found so far for each subproblem, and each subproblem adopts an adaptive IWO for evolution in each generation.The performance of the proposed MOEA/D-IWO in both convergence speed and optimality of results are compared with those of NSGA-II, MOEA/D, and some other MOEAs on a big set of MOPs.Comparison results indicate the feasible and competitive performance of MOEA/D-IWO in the field of multiobjective optimization.
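The sensitivity sweep over the regulatory factor described above can be organized as in the sketch below. `run_moead_iwo` is a hypothetical placeholder standing in for the full algorithm (it is not the authors' code); the grid of settings and the 30 independent runs mirror the description in the text.

```python
import numpy as np

def sweep_epsilon(run_moead_iwo, problem, eps_values, n_runs=30, seed=0):
    """Run the optimizer n_runs times per epsilon setting and collect the final
    IGD values, e.g., for box plots and mean-IGD curves."""
    rng = np.random.default_rng(seed)
    results = {}
    for eps in eps_values:
        results[eps] = [run_moead_iwo(problem, eps=eps, seed=int(rng.integers(1e9)))
                        for _ in range(n_runs)]
    return results

def run_moead_iwo(problem, eps, seed):
    """Placeholder for the real algorithm: returns a synthetic final IGD value."""
    rng = np.random.default_rng(seed)
    return 0.01 + 0.01 * abs(eps - 0.3) + abs(rng.normal(0, 0.001))

eps_values = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
igd_by_eps = sweep_epsilon(run_moead_iwo, problem="UF7", eps_values=eps_values)
print({eps: round(float(np.mean(v)), 4) for eps, v in igd_by_eps.items()})
```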
6,854.4
2019-08-06T00:00:00.000
[ "Computer Science" ]
Decomposing reflectance spectra to track gross primary production in a subalpine evergreen forest Photosynthesis by terrestrial plants represents the majority of CO2 uptake on Earth, yet it is difficult to measure directly from space. Estimation of gross primary production (GPP) from remote sensing indices represents a primary source of uncertainty, in particular for observing seasonal variations in evergreen forests. Recent vegetation remote sensing techniques have highlighted spectral regions sensitive to dynamic changes in leaf/needle carotenoid composition, showing promise for tracking seasonal changes in photosynthesis of evergreen forests. However, these have mostly been investigated with intermittent field campaigns or with narrow-band spectrometers in these ecosystems. To investigate this potential, we continuously measured vegetation reflectance (400–900 nm) using a canopy spectrometer system, PhotoSpec, mounted on top of an eddy-covariance flux tower in a subalpine evergreen forest at Niwot Ridge, Colorado, USA. We analyzed driving spectral components in the measured canopy reflectance using both statistical and processbased approaches. The decomposed spectral components covaried with carotenoid content and GPP, supporting the interpretation of the photochemical reflectance index (PRI) and the chlorophyll/carotenoid index (CCI). Although the entire 400–900 nm range showed additional spectral changes near the red edge, it did not provide significant improvements in GPP predictions. We found little seasonal variation in both normalized difference vegetation index (NDVI) and the nearinfrared vegetation index (NIRv) in this ecosystem. In addition, we quantitatively determined needle-scale chlorophyllto-carotenoid ratios as well as anthocyanin contents using full-spectrum inversions, both of which were tightly correlated with seasonal GPP changes. Reconstructing GPP from vegetation reflectance using partial least-squares regression (PLSR) explained approximately 87 % of the variability in observed GPP. Our results linked the seasonal variation in reflectance to the pool size of photoprotective pigments, highPublished by Copernicus Publications on behalf of the European Geosciences Union. 4524 R. Cheng et al.: Decomposing reflectance spectra to track gross primary production lighting all spectral locations within 400–900 nm associated with GPP seasonality in evergreen forests. Abstract. Photosynthesis by terrestrial plants represents the majority of CO 2 uptake on Earth, yet it is difficult to measure directly from space. Estimation of gross primary production (GPP) from remote sensing indices represents a primary source of uncertainty, in particular for observing seasonal variations in evergreen forests. Recent vegetation remote sensing techniques have highlighted spectral regions sensitive to dynamic changes in leaf/needle carotenoid composition, showing promise for tracking seasonal changes in photosynthesis of evergreen forests. However, these have mostly been investigated with intermittent field campaigns or with narrow-band spectrometers in these ecosystems. To investigate this potential, we continuously measured vegetation reflectance (400-900 nm) using a canopy spectrometer system, PhotoSpec, mounted on top of an eddy-covariance flux tower in a subalpine evergreen forest at Niwot Ridge, Colorado, USA. We analyzed driving spectral components in the measured canopy reflectance using both statistical and process-based approaches. 
The decomposed spectral components covaried with carotenoid content and GPP, supporting the interpretation of the photochemical reflectance index (PRI) and the chlorophyll/carotenoid index (CCI). Although the entire 400-900 nm range showed additional spectral changes near the red edge, it did not provide significant improvements in GPP predictions. We found little seasonal variation in both normalized difference vegetation index (NDVI) and the nearinfrared vegetation index (NIRv) in this ecosystem. In addition, we quantitatively determined needle-scale chlorophyllto-carotenoid ratios as well as anthocyanin contents using full-spectrum inversions, both of which were tightly correlated with seasonal GPP changes. Reconstructing GPP from vegetation reflectance using partial least-squares regression (PLSR) explained approximately 87 % of the variability in observed GPP. Our results linked the seasonal variation in reflectance to the pool size of photoprotective pigments, high- Introduction Terrestrial gross primary production (GPP), the gross CO 2 uptake through photosynthesis, is the largest uptake of atmospheric CO 2 (Ciais et al., 2013), yet the uncertainties are large, hampering our ability to monitor and predict the response of the terrestrial biosphere to climate change (Ahlström et al., 2012). Hence, accurately mapping GPP globally is critical. In contrast to unevenly distributed ground-level measurements such as Fluxnet (Baldocchi et al., 2001), satellites can infer GPP globally and uniformly. Remote sensing techniques are based on the optical response of vegetation to incoming sunlight, which can track photosynthesis via the absorption features of photosynthetic and photoprotective pigments (Rouse et al., 1974;Liu and Huete, 1995;Gamon et al., 1992Gamon et al., , 2016. Progress is particularly important for evergreen forests, which can have large seasonal dynamics in photosynthesis but low variability in canopy structure and color. However, these promising techniques still lack a comprehensive evaluation/validation using both continuous in situ measurements and process-based simulations. GPP can be expressed as a function of photosynthetically active radiation (PAR), the fraction of PAR absorbed by the canopy (fPAR), and light-use efficiency (LUE): with LUE representing the efficiency of plants to fix carbon using absorbed light (Monteith, 1972;Monteith and Moss, 1977). The accuracy of remote-sensing-derived GPP is limited by the estimation of LUE, which is more dynamic and difficult to measure remotely than PAR and fPAR, particularly in evergreen ecosystems. There have been many studies inferring the light absorbed by canopies (i.e., fPAR) from vegetation indices (VIs) that estimate the "greenness" of canopies (Running et al., 2004;Zhao et al., 2005;Robinson et al., 2018;Glenn et al., 2008), such as the normalized difference vegetation index (NDVI; Rouse et al., 1974;Tucker, 1979), the enhanced vegetation index (EVI; Liu and Huete, 1995;Huete et al., 1997), and the near-infrared vegetation index (NIRv; Badgley et al., 2017). Current GPP data products derived from Eq. (1) rely on the modulation of abiotic conditions to estimate LUE (Xiao et al., 2004). LUE is derived empirically by defining a general timing of dormancy for all evergreen forests with the same plant functional type (e.g., Krinner et al., 2005) or the same meteorological thresholds (e.g., Running et al., 2004). 
However, within the same climate region or plant functional type, forests are not identical -leading to uncertainties in estimated LUE (Stylinski et al., 2002;Gamon et al., 2016;Zuromski et al., 2018), which propagate to the estimation of GPP. Because evergreen trees retain most of their needles and chlorophyll throughout the entire year , LUE in evergreens is regulated by needle biochemistry. As LUE falls with the onset of winter due to unfavorable environmental conditions and seasonal downregulation of photosynthetic capacity, evergreen needles quench excess absorbed light via thermal energy dissipation that involves the xanthophyll cycle and other pigments (Adams and Demmig-Adams, 1994;Demmig-Adams and Adams, 1996;Verhoeven et al., 1996;Zarter et al., 2006). Thermal energy dissipation is a primary de-excitation pathway measured by pulse-amplitude fluorescence as non-photochemical quenching (NPQ; Schreiber et al., 1986). At the same time, a small amount of radiation, solar-induced fluorescence (SIF), via the de-excitation of absorbed photons is emitted by photosystem II (Genty et al., 1989;Krause and Weis, 1991). Some vegetation indices are sensitive to photoprotective pigments (e.g., carotenoids) and can characterize the seasonality of evergreen LUE with some success. For instance, the photochemical reflectance index (PRI; Gamon et al., 1992Gamon et al., , 1997 and chlorophyll/carotenoid index (CCI; Gamon et al., 2016) both use wavelength regions that represent carotenoid absorption features around 531 nm at the leaf level (Wong et al., 2019;Wong and Gamon, 2015a, b) and show great promise for estimating photosynthetic seasonality (Hall et al., 2008;Hilker et al., 2011a). Due to the relatively invariant canopy structure in evergreen forests, CCI and PRI have been applied at the canopy level as well (Gamon et al., 2016;Garbulsky et al., 2011;Middleton et al., 2016). In addition, the green chromatic coordinate (GCC; Richardson et al., 2009Richardson et al., , 2018Sonnentag et al., 2012), an index derived from the brightness levels of RGB canopy images, is also capable of tracking the seasonality of evergreen GPP . However, the full potential of spectrally resolved reflectance measurements to explore the photosynthetic phenology of evergreens has not been comprehensively explored at the canopy scale. The evaluation of pigment-driven spectral changes in evergreen forests over the course of a season is necessary to determine where, when, and why certain wavelength regions could advance our mechanistic understanding of canopy photosynthetic and photoprotective pigments. However, this has not been done with both empirical and process-based methods using continuously measured canopy hyperspectral reflectance and in situ pigment samples. In addition to seasonal changes in pigment concentrations, canopy SIF was found to correlate significantly with the seasonality of photoprotective pigment content in a subalpine coniferous forest (Magney et al., 2019a). Steady-state SIF is regulated by NPQ and photochemistry (Porcar-Castell et al., 2014), and it provides complementary information on canopy GPP. Yang and van der Tol (2018) justified that the relative SIF, SIF normalized by the reflected near-infrared radiation, is more representative of the physiological variations in SIF as it is comparable to a SIF yield (Guanter et al., 2014;Genty et al., 1989). 
Our continuous optical measurements make it possible to differentiate mechanisms undergoing seasonal changes by comparing the decomposed reflectance spectrum against relative far-red SIF. Additionally, using relative SIF can effectively correct for incoming irradiance and account for the sunlit and shade fractions within the observation field of view (FOV) of PhotoSpec (Magney et al., 2019a). In the present study, we analyzed continuous canopy reflectance data from PhotoSpec at a subalpine evergreen forest at the Niwot Ridge AmeriFlux site (US-NR1) in Colorado, US, and sought to understand the mechanisms controlling the seasonality of photosynthesis using continuous hyperspectral remote sensing. We first explored empirical techniques to study all seasonal variations in reflectance spectra, identified specific spectral regions that best explained the seasonal changes in GPP, and then linked these spectral features to pigment absorption features that impacted both biochemical and biophysical traits. We also used full-spectral inversions using a canopy RTM to infer quantitative estimates of leaf pigment pool sizes. Finally, we compared the spring onset of photosynthesis captured by different methods, VIs, and relative SIF to determine the underlying mechanisms that contributed to photosynthetic phenology. Study site The high-altitude (3050 m above sea level) subalpine evergreen forest near Niwot Ridge, Colorado, US, is an active AmeriFlux site (US-NR1, lat: 40.0329 • N, long: 105.5464 • W; tower height: 26 m; Monson et al., 2002;Burns et al., 2015Burns et al., , 2016Blanken et al., 2019). Three species dominate: subalpine fir (Abies lasiocarpa var. bifolia), Engelmann spruce (Picea engelmannii), and lodgepole pine (Pinus contorta) with an average height of 11.5 m, a leaf area index of 4.2 (Burns et al., 2016), and minimal understory. The annual mean precipitation and air temperature are 800 mm and 1.5 • C, respectively (Monson et al., 2002). The high elevation creates an environment with cold winters (with snow present more than half the year), while the relatively low latitude (40 • N) allows for year-round high solar irradiation (Monson et al., 2002). Thus, trees have to dissipate a considerable amount of excess sunlight during winter dormancy, which makes this forest an ideal site for studying seasonal variation in NPQ including the sustained component of it during dormancy Magney et al., 2019a). Continuous tower-based measurements of canopy reflectance PhotoSpec (Grossmann et al., 2018) is a 2D scanning telescope spectrometer unit originally designed to measure SIF. It also features a broadband Flame-S spectrometer (Ocean Optics, Inc., Florida, USA), used to measure reflectance from 400 to 900 nm at a moderate (full width at half maximum = 1.2 nm) spectral resolution with a FOV of 0.7 • (more details in Grossmann et al., 2018;Magney et al., 2019a). In the summer of 2017, we installed a PhotoSpec system on the top of the US-NR1 eddy-covariance tower, from where we can scan the canopy by changing both viewing azimuth angle and zenith angles. On every other summer day and every winter day, PhotoSpec scans the canopy by changing the view zenith angle with small increments at fixed view azimuth angles, i.e., elevation scans. Only one azimuth position is kept after 18 October 2017 to protect the mechanism from potentially damaging winter conditions at the site. 
Spectrally resolved reflectance was calculated using direct solar irradiance measurements via a cosine diffuser mounted in the upward nadir direction (Grossmann et al., 2018) as well as reflected radiance from the canopy. The reflectance data used in this study are from 16 June 2017 to 15 June 2018. Here, we integrated all elevation scans to daily-averaged reflectance (every other day before 18 October 2017) by using all scanning viewing directions with vegetation in the field of view over the course of a day, filtering for both lowlight conditions and thick clouds by requiring PAR to be both at least 100 µmol m −2 s −1 and 60 % of theoretical clear-sky PAR. A detailed description of data processing can be found in Appendix B. To further test whether bidirectional reflectance effects impacted our daily averages, we compared the NDVI and NIRv at various canopy positions given a range of solar zenith and azimuth angles (Figs. A1-A3). Neither of the daily averaged VIs was substantially impacted by the solar geometry supporting the robustness of daily averaged canopy reflectance. An additional analysis (Fig. A4) has also shown the variation in phase angle at a daily time step is not a critical factor for the change in reflectance. About 49 winter days exhibited significantly higher reflectances, attributable to snow within the field of view, which we corroborated with canopy RGB imagery from the tower. After removing data strongly affected by snow and excluding the days of instrument outages, 211 valid sample days remained, among which 96 valid sample days were between DOY 100 and 300. The daily-averaged reflectance was computed as the median reflectance from all selected scans for a single day, which was then smoothed by a 10-point (3.7 nm) box-car filter over the spectral dimension (400-900 nm) to remove the noise in the spectra. Figure 1a shows the seasonally averaged and spectrally resolved canopy reflectances measured by PhotoSpec. To further emphasize the change in reflectance as a result of changes in pigment contents, we transformed the reflectance (shown as R λ ) using the negative logarithm (Eq. 1), as light intensity diminishes exponentially with pigment contents (Horler et al., 1983). Here σ is the absorption cross section of pigments. Therefore, the log-transformed reflectance (Fig. 1b) should correlate more linearly with pigment contents (shown as C). We also considered a variety of typical VIs using the reflectance data from PhotoSpec. (Richardson et al., 2009). (3e) In order to calculate GCC, we convolved the reflectance using the instrumental spectral response function ( . For comparison, we normalized the reflectance by the value at 800 nm on each day. Here, we referred to 13 November-18 April as dormancy, and 2 June-21 August as the main growing season. The seasonal averaged canopy reflectance is composed of 39 daily-average reflectance in the growing season and 113 daily-averaged reflectance in the dormancy. In addition to the reflectance measurements, we also included relative SIF, far-red SIF normalized by the reflected near-infrared radiance at 755 nm. The far-red SIF (745-758 nm, Grossmann et al., 2018) was measured simultaneously with reflectance with a QEPro spectrometer (Ocean Optics, Inc., Florida, USA). The daily relative SIF was processed in the same fashion as the reflectance. 
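A condensed sketch of the daily reflectance processing and of the vegetation indices used in this study is given below (NDVI, NIRv, PRI, and CCI). The PAR filter, median averaging, 10-point box-car smoothing, and negative-log transform follow the description above; the exact wavelength windows for the index bands are not listed in the text, so the windows in the code are assumptions based on the cited literature, and the input spectra are synthetic placeholders.

```python
import numpy as np

def daily_reflectance(scans, par, par_clear, window=10):
    """Daily reflectance: keep scans with PAR >= 100 umol m-2 s-1 and >= 60% of
    clear-sky PAR, take the median over retained scans, then apply a box-car
    filter along the spectral dimension."""
    keep = (par >= 100) & (par >= 0.6 * par_clear)
    med = np.median(scans[keep], axis=0)
    return np.convolve(med, np.ones(window) / window, mode="same")

def neg_log(refl):
    """Negative-log transform so reflectance relates more linearly to pigment content."""
    return -np.log(refl)

def band(wl, refl, lo, hi):
    """Mean reflectance within the wavelength window [lo, hi] nm."""
    m = (wl >= lo) & (wl <= hi)
    return float(refl[m].mean())

def ndvi(wl, refl):
    red, nir = band(wl, refl, 620, 670), band(wl, refl, 780, 850)
    return (nir - red) / (nir + red)

def nirv(wl, refl):
    return ndvi(wl, refl) * band(wl, refl, 780, 850)

def pri(wl, refl):
    r531, r570 = band(wl, refl, 529, 533), band(wl, refl, 568, 572)
    return (r531 - r570) / (r531 + r570)

def cci(wl, refl):
    g, r = band(wl, refl, 526, 536), band(wl, refl, 620, 670)
    return (g - r) / (g + r)

# Synthetic demonstration: 40 scans x 500 spectral samples between 400 and 900 nm.
rng = np.random.default_rng(2)
wl = np.linspace(400, 900, 500)
scans = rng.uniform(0.02, 0.5, size=(40, wl.size))
par = rng.uniform(50, 1800, size=40)
refl = daily_reflectance(scans, par, par_clear=np.full(40, 1600.0))
print(ndvi(wl, refl), nirv(wl, refl), pri(wl, refl), cci(wl, refl))
```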
Eddy covariance measurements and LUE Observations of net ecosystem exchange (net flux of CO 2 , NEE), PAR, and meteorological variables made at the US-NR1 tower are part of the official AmeriFlux Network data (Burns et al., 2016). GPP was estimated in half-hourly intervals (Reichstein et al., 2005) using the REddyProc package (Wutzler et al., 2018), allowing us to compute LUE (Goulden et al., 1996;Gamon et al., 2016) at half-hourly intervals. According to the light response curves, GPP is a nonlinear function of PAR ( Fig. 2; Harbinson, 2012). Magney et al. (2019a) showed that fPAR does not significantly vary with seasons. We started to observe a photosynthetic saturation between 500 and 1000 µmol m −2 s −1 of PAR (Fig. 2), when the carboxylation rate, driven by maximum carboxylation rate (V cmax ), became the limiting factor (Farquhar et al., 1980). Thus, we defined the light-saturated GPP (GPP max ), as the mean half-hourly GPP at PAR levels between 1000 and 1500 µmol m −2 s −1 , a range which was covered throughout the year (Fig. 2), even in winter. Therefore, GPP max was less susceptible to short-term changes in PAR. Yet, due to the lower light intensity during storms, GPP max was not always available. As suggested by the low PAR value at which light saturation happened, plants remained in a light-saturated condition for most of the daytime. A higher GPP max indicates a greater V cmax and maximum electron transport rate (J max ) when the variation in GPP max is independent of stomatal conductance and intercellular CO 2 concentration (Leuning, 1995). Therefore, GPP max was closely correlated with daily LUE driven by physiology (see Sect. S2.4 in the Supplement). We refrained from normalizing GPP max by absorbed photosynthetically active radiation (APAR) due to some of the APAR measurements (see Sect. S2.1 in the Supplement) not available in the beginning of growing season. GPP max was significantly linearly correlated with normalized GPP max by APAR (Fig. S2c). We also included air temperature (T air ) and vapor pressure deficit (VPD) provided from the AmeriFlux network data. Daytime daily mean T air and VPD were computed from averaging the half-hourly T air and VPD when PAR was greater than 100 µmol m −2 s −1 . Pigment measurements To link canopy reflectance with variations in pigment contents, we used pigment data Bowling and Logan, 2019;Magney et al., 2019a) at monthly intervals over the course of the sampling period. Here, we focused on the xanthophyll cycle pool size (violaxanthin + antheraxanthin + zeaxanthin, V + A + Z), total carotenoid content (car), and total chlorophyll content (chl) measured on Pinus contorta and Picea engelmannii needles with units of moles per unit fresh mass. Car includes V + A + Z, lutein, neoxanthin, and beta-carotene. We also computed the ratio of chlorophyll to carotenoid contents (chl : car), because CCI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) can track chl : car (Gamon et al., 2016). Overall, we can match 10 individual leaf-level sampling days for both pine and spruce samples with reflectance measured within ±2 d. Among these 10 valid sample days, 6 sample days are between DOY 100 and 300. Data-driven spectral decomposition We assumed that the spectrally resolved reflectance is a result of mixed absorption processes by different pigments. 
This allowed us to apply an independent component analysis (ICA; Hyvärinen and Oja, 2000) to decompose the logtransformed reflectance matrix (day of the year in rows and spectral dimension in columns) into its independent components. An advantage of the ICA is that it can separate a multivariate signal into additive subcomponents that are maximally independent, without the condition of orthogonality (Comon, 1994). We extracted three independent components, which explained more than 99.99 % of the variance, using the ICA algorithm (FastICA, Python package scikitlearn v0.21.0; Sect. S4 in the Supplement), such as where i is the ith component in spectral space. The decomposed spectral components revealed characteristic features that explain most of the variance in the reflectance matrix, which dictated the time-independent spectral shapes of pigment absorption features based on Eq. (1). The corresponding temporal loadings showed temporal variations in these spectral features, i.e., the variations in pigment contents. We will introduce the method of extracting pigment absorption features in a quantitative model-driven approach in Sect. 2.6. In addition to analyzing the transformed reflectance alone, we empirically correlated the reflectance with GPP max using partial least-squares regression (PLSR, Python package scikit-learn v0.21.0). PLSR is a predictive regression model which solves for a coefficient that can maximally explain the linear covariance of the predictor with multiple variables (Wold et al., 1984;Geladi and Kowalski, 1986). PLSR has been used to successfully predict photosynthetic properties using reflectance matrices in previous studies from the leaf to canopy scales (e.g., Serbin et al., 2012Serbin et al., , 2015Barnes et al., 2017;Silva-Perez et al., 2018;Woodgate et al., 2019). Applying the PLSR to the hyperspectral canopy reflectance and GPP max resulted in a time-independent coefficient that emphasizes the key wavelength regions which contribute to the covariation of reflectance and GPP max , such as We implemented another set of PLSR analyses on the reflectance with individual pigment measurement as the target variable, such as the mean values of V + A + Z, car, and chl : car, such as We did not include chl as one of the target variables in this PLSR analysis since Bowling et al. (2018) and Magney et al. (2019a) have already shown chl did not vary seasonally in our study site. Fitting the minimal variance in chl will lead to overfitting the PLSR model. Comparing the PLSR coefficient of pigment measurements at the leaf level with the PLSR coefficient of GPP max connected the changes in GPP max to the pool size of photoprotective pigments, because the reflectance is regulated by the absorption of pigments. We used PROSAIL (with PROSPECT-D; Féret et al., 2017) to compute the derivative of the daily-averaged negative logarithm transformed reflectance with respect to individual pigment contents, namely chlorophyll content (chlorophyll Jacobian, ∂−log(R) ∂C chl ) and carotenoid content (carotenoid Jacobian, ∂−log(R) ∂C car ) (Dutta et al., 2019). This helped explain the decomposed spectral components from the empirical analysis. We also used PROSAIL to infer pigment contents (i.e., C chl , C car , C ant ) by optimizing the agreement between PROSAIL-modeled reflectance and measured canopy daily-mean reflectance from PhotoSpec. 
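The decomposition and regression steps described above map directly onto scikit-learn. The sketch below uses random placeholder matrices in place of the PhotoSpec reflectance and GPPmax series, and the number of PLSR latent components is a tuning choice assumed here, not a value from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import PLSRegression

# X: (n_days, n_wavelengths) matrix of -log(reflectance); y: daily GPPmax.
# Random placeholders here; real inputs come from PhotoSpec and the flux tower.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = rng.normal(size=200)

ica = FastICA(n_components=3, random_state=0)
loadings = ica.fit_transform(X)            # temporal loadings, shape (n_days, 3)
components = ica.components_               # spectral components, shape (3, n_wavelengths)

pls = PLSRegression(n_components=5)        # latent-variable count is an assumption
pls.fit(X, y)
y_hat = pls.predict(X).ravel()
r2 = np.corrcoef(y, y_hat)[0, 1] ** 2      # squared Pearson correlation, as reported
print(components.shape, round(float(r2), 3))
```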
We fixed canopy structural parameters (e.g., the leaf area index (LAI) to 4.2, as reported in Burns et al., 2015) and fitted leaf pigment compositions as well as a low-order polynomial for soil reflectance (Appendix C), similar to Vilfan et al. (2018) and Féret et al. (2017). The cost function J in Eq. (7) represents a leastsquares approach, whereR is the modeled reflectance. We used the spectral range between 450 and 800 nm, which encompasses most pigment absorption features. 3 Results and discussion Seasonal cycle of GPP max and environmental conditions As can be seen in Fig. 3, the subalpine evergreen forest at Niwot Ridge exhibits strong seasonal variation in GPP, T air , VPD, GPP max , and PAR. GPP and GPP max dropped to zero while sufficient PAR, required for photosynthesis, was still available in the dormancy, which suggests that the abiotic environmental factors impact photosynthesis seasonality nonlinearly and jointly. Abiotic factors played a strong role in regulating GPP max in this subalpine evergreen forest over the course of the season. For instance, there was a strong dependence of GPP max with T air . However, photosynthesis completely shut down during dormancy, even when the T air exceeded 5 • C (Fig. 3). During the onset and cessation periods of photosynthesis, GPP max rapidly increased with temperature ( Fig. S3a left panel), potentially because needle temperature co-varied with T air , and needle temperature controls the activity of photosynthetic enzymes which affect V cmax . Spring warming approaches the optimal temperature for photosynthetic enzymes, leading to activation of photosynthesis, while cooling in the early winter inhibits these enzymes (Rook, 1969). Warming in spring melted frozen boles and made them available for water uptake , and thus caused the recovery of GPP max (Monson et al., 2005). Once the temperature was around the optimum (in the growing season), T air was no longer the determining factor for photosynthesis. Higher VPD caused by rising T air can stress the plants such that stomata closed, intercellular CO 2 reduced, and photosynthesis decreased (Fig. S3a right panel). When intercellular CO 2 concentration was not a limiting factor, GPP max was more representative of V cmax and did not vary T significantly. Seasonal cycle of reflectance In Fig. 4, the Jacobians show the maximum sensitivity of the reflectance spectral shape to carotenoid content at 524 nm, and near 566 and 700 nm for chlorophyll. The first peak of the chlorophyll Jacobian covers a wide spectral range in the visible range, while the second peak around the red edge is narrower. It can be seen that the first spectral ICA component has a similar shape as the chlorophyll Jacobian. The corresponding temporal loading has a range between −0.2 and 0.2 without any obvious seasonal variation, consistent with a negligible seasonal cycle in chlorophyll content as shown in the pigment analysis. However, there is a gradual increase before DOY 50 in the first temporal loading, which appears to be anti-correlated with the temporal loading of the second ICA structure. Two major features in the second spectral component can be observed. One is a negative peak centered around 530 nm, which aligns with the carotenoid Jacobian. At the negative logarithm scale, the negative values resulting from the negative ICA spectral peak multiplied by the positive ICA temporal loadings (growing season in Fig. 4 middle plots) indicate there were fewer carotenoids during the growing season ( Eqs. 1 and 4). 
Conversely, positive values resulting from a negative spectral peak multiplied by the negative temporal loadings (dormancy in Fig. 4 middle plots) indicate there were more carotenoids during dormancy (i.e., sustained photoprotection via the xanthophyll pigments; Bowling et al., 2018). Another feature is the valley-trough shape, which is co-located with the chlorophyll Jacobian center at the longer wavelength in the red-edge region. The center of this feature occurs at the shorter-wavelength edge of the chlorophyll Jacobian but does not easily explain changes in total chlorophyll content, which should show equal changes around 600 nm. The corresponding temporal loading apparently varied seasonally with GPP max . The second temporal loading transitioned more gradually from dormancy to the peak growing season than GPP max . Unfortunately, we were missing data to evaluate the relative timing of GPP max cessation. The third spectral component is similar to the mean shape of reflectance spectra. Its temporal loading remained around zero throughout the year. Overall, the second ICA spectral component is more representative of the seasonal variation in the magnitude of total canopy reflectance than the other spectral components. The spectral changes around the red edge in the second component are interesting and might be related to structural needle changes in chlorophyll-a and chlorophyll-b contributions (de Tomás Marín et al., 2016;Rautiainen et al., 2018), which are not separated in PROSPECT. CCI and PRI (Fig. 5a-b) followed the seasonal cycle of GPP max closely. CCI and PRI use reflectance near the center of the 530 nm valley feature (Eqs. 3c-3d), the spectral range that is most sensitive to the change of carotenoid content, so that they matched changes in GPP max very well. PRI was the smoothest throughout the year, without any significant fluctuations within the growing season, as opposite to what was observed in GPP max , which co-varied with T air and VPD ( Fig. S3a and b). This performance is intriguing given that PRI was originally developed to track short-term variations in LUE (Gamon et al., 1992), such as day-to-day and subseasonal scales. GCC (Fig. 5c) also correlated well with GPP max , but less than CCI and PRI. As can be seen in Fig. S1, the peak of the green channel used for GCC is close to the carotenoid Jacobian peak, while the red channel feature covers a part of the chlorophyll Jacobian feature. This explained the sensitivity of the GCC to changes in both carotenoid content and chlorophyll. The bands used in GCC are broader than the ones used by PRI and CCI; however it still captured these variations and can be computed using RGB imagery. Gentine and Alemohammad (2018) found that the green band helps to reconstruct variations in SIF using reflectances from MODIS. While they speculated that most variations in SIF are related to variations in PAR · fPAR (Gentine and Alemohammad, 2018), we suggest here that the green band indeed captures variations in LUE as well. NDVI (Fig. 5e) and NIRv (Fig. 5f) did not show an obvious seasonal variability. Similar to the ICA components, all VIs were quite noisy during dormancy, especially prior to DOY 50. This noise may be due to snow because we only removed the reflectance when the canopy was snow covered. Scattered photons possibly still reached the telescope when there was snow on the ground, which is true for our study site as snowpack exists in winter . 
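As a concrete illustration of how these indices relate to the underlying bands, the sketch below computes PRI, CCI, GCC, NDVI, and NIRv from a single reflectance spectrum using commonly used index definitions. The band centers, window widths, and the nearest-band averaging are assumptions for illustration; the study's exact band definitions are those of its Eqs. (3a)-(3e), which are not reproduced here.

```python
import numpy as np

def band(wl, refl, center, half_width=5.0):
    """Mean reflectance in an assumed window of +/- half_width nm around `center`."""
    mask = np.abs(wl - center) <= half_width
    return refl[mask].mean()

def vegetation_indices(wl, refl):
    """Commonly used index forms; band choices are illustrative, not the paper's exact definitions."""
    r531, r570 = band(wl, refl, 531), band(wl, refl, 570)   # PRI bands (Gamon et al., 1992)
    r530, r645 = band(wl, refl, 530), band(wl, refl, 645)   # assumed CCI-style green/red bands
    red, nir = band(wl, refl, 670), band(wl, refl, 800)
    green, blue = band(wl, refl, 550), band(wl, refl, 470)
    ndvi = (nir - red) / (nir + red)
    return {
        "PRI": (r531 - r570) / (r531 + r570),
        "CCI": (r530 - r645) / (r530 + r645),
        "GCC": green / (red + green + blue),
        "NDVI": ndvi,
        "NIRv": ndvi * nir,
    }
```

The normalized-difference structure of PRI and CCI is what ties them to the 530 nm carotenoid feature discussed above, while GCC mixes information from both the carotenoid and chlorophyll absorption regions.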
PLSR coefficients of reflectance with GPP max and pigment measurements The spectral shape of the PLSR coefficient with GPP max highlighted a peak (centering at 532 nm) near that of the carotenoid Jacobian with the same valley-trough feature observed near the second peak of the chlorophyll Jacobian (Fig. 6a). The reconstructed GPP max captured the onset and cessation of growth, while the day-to-day noise in reflectance during dormancy propagated to the reconstructed GPP max (−2 to 5 µmol m −2 s −1 ). During the growing season, the day-today variations in GPP max were not captured by any of the methods using pigment absorption features (Figs. 5a-c and 6b), which indicates those variations were not related to pigment content, but rather changes in environmental conditions that lead to day-to-day changes in photosynthesis (Fig. S3a). Overall, the observed GPP max was significantly correlated with the PLSR reconstruction (Pearson r 2 = 0.87), but very similar compared to CCI and PRI. A similar PLSR model of reflectance but with pigment measurements (Fig. 7) showed a direct link between pigment contents and reflectance. It can be seen that the PLSR coefficients of reflectance are very similar, irrespective of the target variable. They feature a valley near the peak of the carotenoid Jacobian and a valley-trough feature near the peak at the longer wavelength of the chlorophyll Jacobian. This spectral shape is also very similar to the second ICA spectral component and PLSR coefficients of GPP max . V + A + Z, chl : car, and car were all nicely reconstructed by using the PLSR coefficients and reflectance (Fig. 7b). The reconstructed V + A + Z, car, and chl:car are correlated with the measured ones with Pearson r 2 values of 0.84, 0.71, and 0.93, respectively. The second ICA component and PLSR empirically showed the seasonality of reflectance using two different empirical frameworks. ICA only used the reflectance, while the PLSR model accounts for variations in both reflectance and GPP max or pigment content. Yet, both ICA and PLSR agreed on similar spectral features that co-varied seasonally with GPP max . This indicates that the resulting spectral features were primarily responsible for representing this seasonal cycle. The overlap of these features with the chlorophyll/carotenoid absorption features showed that the seasonality of GPP max was related to variation in pigment content at the canopy scale, which was directly validated with a similar PLSR coefficient of reflectance and pigment contents. These results are consistent with leaf-level measurements of a higher ratio of chlorophyll to carotenoid content during the growing season in this forest (Fig. 7). The highlighted spectral feature around 530 nm from ICA and PLSR closely overlaps with one of the bands used in CCI, PRI, and GCC (Eqs. 3a-3e), which provides a justification that these VIs can remarkably capture the LUE seasonality. The comparable Pearson r 2 values of PLSR, CCI, and PRI with GPP max suggest the pigment-driven seasonal cycle of GPP max is sufficiently represented by CCI and PRI. The spectral feature around the red edge does not make PLSR significantly more correlated with GPP max than CCI or PRI, which implies the feature is not driven by total chlorophyll or carotenoid contents. Process-based estimation of pigment content PROSAIL inversion results further supported the link between canopy reflectance, pigment contents, and GPP max . 
Figure 8 shows a continuous time series of C chl , C car , anthocyanin content (C ant ), and C chl C car derived from the PROSAIL canopy RTM inversion model. Examples of simulated and measured reflectance spectra shown are in Fig. C1. Anthocyanins are another type of photoprotective pigment (Pietrini et al., 2002;Lee and Gould, 2002;Gould, 2004) that protects the plants from high light intensity (Hughes, 2011). The pigment inversions closely matched the seasonality of GPP max . C chl C car showed the greatest sensitivity in capturing the seasonal cycle, with the strongest correlation to leaf level measurements (Fig. 8c). The inverted C chl had the weakest empirical relationship with the measured one ( Fig. 8a right panel). Apparently, some of the inversion errors of individual C car and C chl contents canceled out in the ratio, making the ratio more stable. C ant performed similarly to C car , since they both are photoprotective, and the anthocyanins absorb at 550 nm (Sims and Gamon, 2002), which is close to the center of carotenoid absorption feature. Even though we lacked field measurements of anthocyanins to validate anthocyanin retrievals, the inversions showed that more than just carotenoid content can be obtained from full-spectral inversions. Strictly speaking, the complex canopy structure of evergreens makes the application of 1D canopy RTMs such as PROSAIL difficult Zarco-Tejada et al., 2019). Yet, Moorthy et al. (2008), Ali et al. (2016), andZarco-Tejada et al. (2019) reasonably discussed the pigment retrieval in conifer forests with careful applications. In our study, the reflectance was collected from needles with a very small FOV, and our study site has a very stable canopy structure throughout the year (Burns et al., 2016). Thus, the inversion results are meaningful for discussing the seasonality of pigment contents. In the future, radiative transfer models that properly describe conifer forests, such as LIBERTY (Dawson et al., 1998), could be used. Comparison across methods Although decomposing the hyperspectral canopy reflectance and using relative SIF (Fig. 5d) both successfully tracked the seasonal cycle of evergreen LUE, they underlie different de-excitation processes. During the growing season, environmental conditions primarily drove the day-to-day variations in GPP max . Relative SIF responded to such environmental stresses so that it appeared to track sub-seasonal variations better than reflectance, particularly during the growing season (Fig. S5f). Yet, reflectance decompositions and VIs were less sensitive to such day-to-day variations (Figs. 6, S3b). There was also some variability between reflectance-based methods and relative SIF during the transition periods between the growing season and dormancy. We focused on the growing season onset since the reflectance measurements were not available during the cessation period. The onset (DOY 60 to 166) described by all the methods mentioned above as well as the relative SIF are compared in Fig. 9, using a sigmoid fit to available data (Fig. D1). The observed GPP max had the most rapid yet latest growing onset. The methods and VIs derived from or related to the pigment contents increased earlier than GPP max -such as the ICA component, PLSR coefficient, PROSAIL C chl C car , and CCI. However, they built up slowly to reach the maximum, which sug-gests that reduction of the carotenoid content is a slower process than the recovery of LUE. Reflectance-based VIs (Fig. 5) and decomposing methods (Figs. 
4 and 8b, c) had a slower growing season onset than GPP max, as found in Bowling et al. (2018) as well. On the other hand, relative SIF started the onset at almost the same time as GPP max, and it quickly reached the maximum. Therefore, using both SIF and reflectance to constrain the LUE prediction can further improve the prediction accuracy. Figure 8. The left panels are the estimations of (a) C chl, (b) C car and C ant, and (c) C chl /C car from the PROSAIL inversion, overlaid with GPP max. We normalized the two metrics because they report the pigment contents in different units. The vertical dashed line divides the observations from the years 2017 and 2018. The plots on the right compare the pigment contents from leaf-level measurements and using PROSAIL: (a) chl vs. C chl, (b) car vs. C car, and (c) chl : car vs. C chl /C car. The correlations are statistically significant except for C chl. Figure 9. Temporal evolution of the growing season onset using sigmoid fits (scaled) of PLSR, ICA, CCI, chlorophyll-to-carotenoid ratio, and relative SIF. Conclusion and future work In this study, we analyzed seasonal co-variation in GPP and the spectrally resolved visible and near-infrared reflectance signal, as well as several commonly used VIs. The main spectral feature centered around 530 nm is most important for inferring the seasonal cycle of reflectance (400-900 nm) and LUE, which corresponds to changes in carotenoid content. This explains why CCI, PRI, and GCC track GPP seasonality so well, as most variations are driven by carotenoid pool changes. Our analysis included RTM simulations and in situ pigment measurements throughout the season, confirming the link between reflectance/VIs and pigment contents. The comparison of reflectance/VIs and relative SIF reveals differences in the timing of the growing season onset, pigment changes, and SIF, indicating the potential of using both reflectance and SIF to track the seasonality of photosynthesis. However, the close correspondence between SIF and reflectance suggests that hyperspectral reflectance alone provides mechanistic evidence for a robust approach to track the photosynthetic phenology of evergreen systems. Because seasonal variation in pigment concentration plays a strong role in regulating the seasonality of photosynthesis in evergreen systems, our work will help to inform future studies using hyperspectral reflectance to achieve accurate monitoring of these ecosystems. While indices like PRI and CCI perform as well as our methods that use full-spectrum analysis at the canopy scale, the application of the full spectrum might be more robust for space-based measurements. In addition, we found seasonal changes of canopy reflectance near the red-edge region, which could be related to leaf structural changes or changes in chlorophyll a and b. Our PLSR coefficients are good references for customizing VIs to infer photosynthetic seasonality in evergreen forests when there are restrictions on using the specific bands from currently existing VIs (such as PRI and CCI). While our current study is limited to a subalpine evergreen forest and canopy-scale measurements, applications to other regions, vegetation types, and observational platforms will be a focus for future research.
Appendix A: Bidirectional reflectance effect A1 NDVI and NIRv The impact of geometry and the small FOV is relatively negligible. First, our method only used the scans when the FOV was on the needles, by setting an NDVI threshold. Second, we plotted the NDVI and NIRv against the solar geometry at each individual tree target throughout a year. NDVI and NIRv are quite homogeneous regardless of various solar geometries, as shown in the following figures. A2 PLSR on phase angle and reflectance We did a PLSR analysis on individual measurements of phase angle and reflectance for 3 summer days (1 to 3 July 2017). The results are the same for other sample days. Indeed, the reflectance has different sensitivities to the phase angle. However, the poor correlation between the PLSR-reconstructed phase angle and the measured one suggests that variations in phase angle are not the critical factor for the change in reflectance. In our study, we primarily removed the bidirectional impact by averaging all the individual reflectance spectra that were measured at different solar and viewing geometries. Appendix B: Detailed processes on integrating daily-averaged canopy reflectance First, we chose scans targeting vegetation only by requiring an NDVI greater than 0.6. Second, it is important to ensure that the solar irradiation did not change between the acquisition of the solar irradiance and the reflected radiance measurement. To achieve this, we matched the timestamps of a PAR sensor (LI-COR LI-190SA, LI-COR Environmental, Lincoln, Nebraska, US) to the timestamps of PhotoSpec, and we compared the PAR value from the PAR sensor during the PhotoSpec irradiance acquisition with PAR during the actual target scan of the reflected radiance from vegetation. We only used the scans when the ratio of the two was 1.0 ± 0.1, ensuring stable PAR conditions. Third, in order to avoid unstable PAR because of clouds (Dye, 2004), we also removed cloudy scenes by requiring PAR to be at least 60 % of a theoretical maximum driven by solar geometry (Fig. B1). Further, only data when PAR was greater than 100 µmol m−2 s−1 were considered, to eliminate the impact of low solar angles on reflectance data. The VIs shown in Fig. 5 were extracted in the same fashion as above. Figure B1. The distribution of the ratio of the measured PAR to the PAR at theoretical maximum from all individual scans. Appendix C: PROSAIL fits We used the following range constraints for variables included in the state vector of the PROSAIL inversion (constraint table not shown). Figure D1. Individual sigmoid fits of the onset of growth from different methods and more VIs. The fitted curve has been expressed as in the derivation above. The Pearson r2 and p values listed in each subplot were calculated from the correlation of observed and fitted variables. The residual was calculated as the average L2 norm of the difference between the observed (y) and fitted (ŷ) variables, normalized by the observation, i.e., (1/n) Σ_i ((y − ŷ)/y)². The fits are overall good. Because the ICA loading lacks a clear sigmoid shape, ICA has a larger residual. At the half-maximum point (x = x_half), the fitted sigmoid equals y = (a + b)/2; solving this condition for x gives x_half = d.
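The sketch below makes the sigmoid fit and the half-maximum timing explicit, assuming a standard four-parameter logistic y(x) = a + (b − a)/(1 + exp(−c(x − d))) with lower and upper asymptotes a and b, steepness c, and midpoint d; the exact parameterization used in the paper is not reproduced here, but any logistic of this family gives y = (a + b)/2 at x = d. The input array is a hypothetical placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c, d):
    """Assumed four-parameter logistic: asymptotes a and b, steepness c, midpoint d."""
    return a + (b - a) / (1.0 + np.exp(-c * (x - d)))

# Illustrative inputs: day of year over the onset window and a scaled onset
# metric (e.g., GPP_max, a VI, or an ICA loading), aligned element-wise.
doy = np.arange(60, 167)                        # DOY 60-166
metric = np.load("onset_metric.npy")            # hypothetical array, same length as doy

p0 = [metric.min(), metric.max(), 0.1, 110.0]   # rough initial guess
(a, b, c, d), _ = curve_fit(sigmoid, doy, metric, p0=p0)

# At the half-maximum value y = (a + b) / 2 the logistic sits exactly at x = d,
# so the fitted midpoint d is the onset (half-maximum) day.
x_half = d

# Residual as described above: mean squared relative difference.
residual = np.mean(((metric - sigmoid(doy, a, b, c, d)) / metric) ** 2)
```

Fitting each method's scaled time series with the same functional form is what allows the onset days (x_half) of GPP max, the VIs, the decompositions, and relative SIF to be compared directly.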
9,711.8
2020-02-17T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Inherited Thrombocytopenia Caused by Variants in Crucial Genes for Glycosylation Protein glycosylation, including sialylation, involves complex and frequent post-translational modifications, which play a critical role in different biological processes. The conjugation of carbohydrate residues to specific molecules and receptors is critical for normal hematopoiesis, as it favors the proliferation and clearance of hematopoietic precursors. Through this mechanism, the circulating platelet count is controlled by the appropriate platelet production by megakaryocytes, and the kinetics of platelet clearance. Platelets have a half-life in blood ranging from 8 to 11 days, after which they lose the final sialic acid and are recognized by receptors in the liver and eliminated from the bloodstream. This favors the transduction of thrombopoietin, which induces megakaryopoiesis to produce new platelets. More than two hundred enzymes are responsible for proper glycosylation and sialylation. In recent years, novel disorders of glycosylation caused by molecular variants in multiple genes have been described. The phenotype of the patients with genetic alterations in GNE, SLC35A1, GALE and B4GALT is consistent with syndromic manifestations, severe inherited thrombocytopenia, and hemorrhagic complications. Introduction Glycosylation is a key process by which carbohydrates or saccharides bind to proteins, lipids, and other biomolecules. It is a highly prevalent, conserved, and complex post-translational alteration [1]. Glycosylation influences a wide range of cellular processes, including control of protein secretion and degradation, cell signaling, adhesion and migration, host-pathogen interactions, or immune defense including both innate and acquired immunity [2][3][4][5]. Glycosylation is a highly modular process, whereby carbohydrate building blocks are repeatedly linked and assembled in varying lengths and branches. It is an unplanned process that gives rise to a wide and diverse repertoire of functional molecules [6] and plays a crucial role in the correct folding of proteins, their stability, and the formation of mature and functional proteins [7]. The presentation of glycans on cell surfaces is governed by more than 200 glycosyltransferases, sugar-nucleotide synthesis, and transport proteins, mainly located in the endoplasmic reticulum and Golgi apparatus [8,9]. The glycoconjugate forms are generally based on nine monosaccharides (Figure 1A). The glycan residues can be conjugated to asparagine (N-glycan) or serine/threonine (O-glycan) residues to form the glycoproteins. Two N-acetylglucosamine (GlcNAc) and three mannose (Man) residues usually constitute the core of N-glycans, which are generally highly branched. We can distinguish high mannose, hybrid, and complex N-linked glycans [10,11]. In contrast, O-glycans are linked with N-acetylgalactosamine (GalNAc) and are, in general, less branched than N-glycans (Figure 1B). Between the N- and O-branches, traces of galactose (Gal), GalNAc, GlcNAc, fucose (Fuc) and the final sialic acid (Sial) can be detected (Figure 1B). Glycosphingolipids are conjugated at the plasma membrane, whereas glycosaminoglycans are mainly composed of an initial xylose (Xyl) followed by glucuronic acid (GlcA) and GlcNAc or GalNAc branches (Figure 1B) [10].
Figure 1. Addition of N-acetylgalactosamine (GalNAc, yellow square) to serine/threonine (Ser/Thr) residues initiates O-glycan synthesis, while two N-acetylglucosamine (GlcNAc, blue square) and three mannose (Man, green circle) constitute the N-glycan core, and the glycosylation branches are formed by galactose molecules (Gal, yellow circle), GalNAc, GlcNAc, fucose (Fuc, red triangle) and the final sialic acid (Sial, purple rhombus). Glycosphingolipids are conjugated to the plasma membrane, whereas glycosaminoglycans are mainly composed of an initial xylose (Xyl, white star) followed by glucuronic acid (GlcA, blue and white rhombus) and GlcNAc or GalNAc. More than half of the proteins in human cells and 50-70% of serum proteins are glycosylated [12]. Platelets express highly glycosylated proteins on their surface, which are involved in platelet hemostasis and function, as well as in their interaction with other cells [13]. To maintain a normal circulating platelet count between 150-400 × 10⁹/L, about 10¹¹ of them are cleared daily, highlighting the importance of the balance between production and removal of these cells. Glycosyltransferases and synthesis and transport proteins are involved in both processes, and their dysregulation leads to variations in platelet counts and/or functional alterations [14]. In this review, we will focus on the role of glycosylation for proper platelet formation and clearance, and on the genes involved in platelet physiology whose molecular alterations are associated with inherited thrombocytopenia (IT). Role of Glycosylation in Thrombopoiesis and Platelet Clearance Thrombopoietin (TPO) is a hematopoietic growth factor essential for thrombopoiesis that is produced predominantly by the liver [15]. Binding of TPO to the c-Mpl receptor (encoded by MPL) on platelets and megakaryocytes (MKs) activates a cascade of signaling molecules driving MK development and platelet formation [16].
The plasma concentration of TPO correlates inversely with platelet number, and circulating levels are determined as a function of its binding to platelets and MK, leading to its internalization and degradation along with the c-Mpl receptor [17]. Decreased platelet turnover rate or reduced platelet number results in increased levels of free TPO, which induces a compensatory response dependent on bone marrow MK concentration to increase platelet production [18]. N-linked and O-linked glycans play an essential role in the stability of major MK and platelet surface glycoproteins, including the GPIb-IX-V complex, GPIIb-IIIa (integrin αIIbβ3) and GPVI. Alteration of their glycosylation negatively influences glycoprotein functions, leading to abnormal morphology, defective platelet activation and excessive bleeding [19]. In addition, platelet GPIbα is responsible for the maintenance of steadystate hepatic TPO production [20]. It has been described that the absence of GPIbα in the MK membrane leads to reduced thrombopoiesis due to aberrant membrane development during MK maturation, impaired formation of the membrane demarcation system (DMS), and disruption of the microtubule cytoskeleton, as described in Bernard Soulier syndrome (BSS) [21]. In addition, our group has recently described a defect in GPIbα glycosylation that affects thrombopoiesis and actin cytoskeleton remodeling [22]. These findings highlight the essential role of protein glycosylation during megakaryopoiesis and thrombopoiesis. Desialylated and/or senescent platelets increase TPO production. Loss of the final sialic acid is responsible for platelet clearance, as exposure of the penultimate Gal residue is recognized by hepatic Ashwell-Morell receptors (AMR), whereas exposure of the Glc-NAc residue is recognized by resident hepatic macrophages (Kupffer cells), via the αMβ2 integrin. Consequently, AMR activation drives hepatic TPO mRNA expression through Janus kinase 2 (JAK2) and signal transducer and activator of transcription 3 (STAT3) signaling, triggering a feedback mechanism to increase TPO levels and promote platelet formation [23,24] (Figure 2A). AMR preferentially binds to complex branched glycans, suggesting that N-glycans are the main site of ligand recognition [25]. However, desialylation of O-glycans on GPIbα is known to favor receptor signaling and surface expression of neuraminidase which, by desialylating platelet N-glycans, would allow AMR-mediated clearance. On the other hand, Kupffer cells also play an important role in the clearance of aged platelets and during immune-mediated thrombocytopenia [20,26]. Platelet clearance by aging (senescence) induces signals including loss of sialic acid mediated by up-regulation of platelet sialidases Neu1 and Neu3, which are expressed in the granular and plasma membrane compartments, respectively [27]. Neu1 and Neu3 usually impact sialic acid binding on GPIbα, leading to its degradation [27]. In addition, antibodymediated platelet destruction occurs via Fc receptors on primarily splenic macrophages and it is frequent in primary immune thrombopenia (ITP) [28]. In this disease, circulating autoantibodies with specificity for membrane glycoproteins, such as GPIIb-IIIa or GPIbα, can bind to platelets, thus triggering platelet desialylation by secretion of active Neu1, and additionally favoring their clearance by cytotoxic CD8 T lymphocytes [29,30]. 
Platelet survival also depends on the interplay between antiapoptotic and proapoptotic factors of the Bcl-2 family, which are critical regulators of the intrinsic apoptotic pathway [31] (Figure 2B). However, it is still unclear whether Bcl-2 family members alter the sialic acid content on the surface of platelets. Platelet loss of function and death is governed by unclear mechanisms that share some similarity to those used by nucleated cells for programmed cell death [32]. In addition, platelets express certain components of the extrinsic pathway of apoptosis, including caspase 8, but the limited data available to date do not support their critical role in regulating platelet lifespan [33]. The consequences of platelet death include the formation of new platelet-platelet interactions between nonviable platelets, and the shedding of the collagen receptor GPVI and of GPIbα. Both processes appear to be regulated by metalloproteinase activity [34]. Although it is unclear how senescent platelets are removed from circulation, many cells undergoing apoptosis shift the redistribution of phosphatidylserine (PS) from the inner to the outer lamella of the plasma membrane, which serves as a molecular signal for removal by phagocytes [35] (Figure 2B). Overall, it remains to be elucidated whether loss of sialic acid triggers the intrinsic apoptotic machinery in platelets during the clearance mechanisms that regulate platelet counts. Thousands of enzymes regulated by glycosylation processes are involved in platelet formation and clearance. Alterations in any of them could result in an imbalance between the two processes and consequently impact platelet counts. Until relatively recently, a very limited number of molecular variants had been described in only a few genes related to IT [36]. Disorders of Glycosylation Associate with Syndromic Thrombocytopenia Congenital disorders of glycosylation (CDG) include a rapidly growing group of metabolic diseases that are caused by molecular defects in genes involved in glycoprotein synthesis. To date, more than 100 types of CDG have been described [37,38]. These inherited disorders are associated with a wide variety of multiorgan symptoms, although the molecular alterations associated with IT and/or other hematologic manifestations involve a small number of genes that have been described recently [10]. Disorders of Glycosylation Described in Patients with Thrombocytopenia The first documented evidence of IT associated with a molecular alteration in an enzyme involved in glycosylation occurred in 2014; Izumi et al.
reported two siblings with myopathy, rimmed vacuoles, and inherited thrombocytopenia harboring two compound heterozygous GNE mutations, p.Val603Leu and p.Gly739Ser, in accordance with autosomal recessive inheritance of the disease. The authors speculated that decreased GNE activity would lead to decreased sialic acid content in platelets [39], as GNE encodes UDP-N-acetylglucosamine 2-epimerase, a bifunctional enzyme that catalyzes the initial two steps in sialic acid biosynthesis and regulates total levels of N-acetylneuraminic acid, a precursor of sialic acids [40] (Figure 3). In the same year, Zhen et al. reported two adult siblings with thrombocytopenia and compound heterozygous GNE mutations (p.Tyr217His and p.Asp515Glnfs*2). These patients showed mild to moderate thrombocytopenia and no overt bleeding [41]. The GNE-related disorder was expanded in 2018, with the publication and characterization of several unrelated pedigrees [42,43]. One of the pedigrees was an inbred family carrying GNE p.Gly416Arg in homozygosis, in which the patients had severe macrothrombocytopenia with a high immature platelet fraction [42]. Similarly, Revel-Vilk et al. reported nine affected individuals from three unrelated families with severe macrothrombocytopenia, bleeding tendency, and a high proportion of reticulated and desialylated platelets [43]. Of note, none of the patients in the different pedigrees had myopathy. These studies suggest that several mechanisms of platelet clearance and production may be affected by desialylation. Patients have a rapid platelet clearance associated with loss of the platelet surface GPIb/IX receptors and changes in surface sialylation, suggesting a strong link between sialylation, altered surface GPIb/IX, increased platelet size, and platelet clearance [42,43]. However, it is still unclear whether platelets from patients with GNE-related disorder without sialic acid are cleared from the circulation by AMR, nor is it known whether variants in GNE are associated with alterations in MK maturation and platelet formation. It has been hypothesized that mutations in GNE cause thrombocytopenia only when co-segregated with other genetic factors, such as ANKRD18A, FRMPD1, FLNB, and PRKACG, which have been described in other cases [44]. Recently, new patients carrying biallelic variants of GNE have been published [45,46]; however, it is still unclear why some patients present with isolated thrombocytopenia while others present with myopathy. Considering that GNE-related myopathy usually appears in the third decade of life, we cannot exclude that patients presenting with only thrombocytopenia develop myopathy later in life [40]. Further studies are needed to better understand why variants in GNE are associated with three distinct clinical phenotypes: myopathy, sialuria, or isolated thrombocytopenia.
SLC35A1-Related Disorder In 2011, the biallelic genetic mutation in the SLC35A1 gene, which encodes the cytidine-5 -monophosphate [CMP]-sialic acid transporter that transfers CMP sialic acid from the nucleus to the Golgi apparatus for sialylation, was described for the first time ( Figure 3). Sialyltransferases constitute a family of glycosyltransferases that transfer sialic acid from the donor substrate to acceptor oligosaccharide substrates. Thus, impaired transporter function results in a defect of α2,3-sialylation, causing thrombocytopenia in patients due to decreased platelet sialylation and increased clearance [47]. In addition, the authors demonstrated the presence of giant platelets with morphological abnormalities, such as open canalicular membrane system of platelets, and showed an increased number of small MKs. These results suggest defective megakaryopoiesis based on hyposialylation that may interfere with membrane-forming processes. However, in 2018, Kauskot et al. reported the congenital deficiency in SLC35A1 in two siblings born to consanguineous parents, who presented with delayed psychomotor development, epilepsy, ataxia, microcephaly, choreiform movements, and mild macrothrombocytopenia. In fact, they had a high proportion of immature platelets, suggesting that platelet formation may also be impaired, as previously reported. The authors speculated that SLC35A1 is relevant for platelet life span but not for proplatelet formation, and that the giant platelets could correspond to a compensatory mechanism in a context of thrombocytopenia, as suggested by elevated levels of reticulated platelets and an increased MK count in the bone marrow [48]. Recently, Ma et al. provided new insights into the role of sialylation in platelet homeostasis and the mechanisms of thrombocytopenia in SLC35A1-related disorder by generating a mouse model of the disease. They demonstrated that the number of bone marrow MK in Slc35a1-/mice was reduced, and their maturation was also impaired. In addition, the authors reported an increased number of desialylated platelets that were removed by Küpffer cells in the liver of Slc35a1-/mice [49]. Overall, further studies are needed to demonstrate the exact role of SLC35A1 in megakaryopoiesis. Although thrombocytopenia is known to be associated with increased clearance, the mechanisms are still unclear, and there is great controversy about its role in MK maturation and proplatelet formation. Further studies in new patients are mandatory in order to clarify the discrepancies detected between the patient described by Kauskot et al. [48], and the animal model generated by Ma and colleagues [49]. GALE-Related Disorder The GALE gene encodes uridine diphosphate [UDP]-galactose-4-epimerase, which catalyzes the bidirectional interconversion of UDP-glucose to UDP-galactose, and of UDP-N-acetyl-glucosamine to UDP-N-acetyl-galactosamine ( Figure 3). Thus, GALE balances, by reversible epimerization, the pool of four sugars that are essential during the biosynthesis of glycoproteins and glycolipids [50,51]. The first evidence of UDP-galactose-4-epimerase deficiency associated with hematological alterations was reported in 1995, in a four-year-old girl presenting with bruising, thrombocytopenia, and dysplastic cells in the bone marrow. However, the molecular diagnosis was not performed, and the underlying GALE variants are unknown [52]. In 2019, Seo et al. 
reported six members of a consanguineous family carrying the GALE variant p.Arg51Trp in homozygosis, all affected by anemia, febrile neutropenia, and severe thrombocytopenia, associating increased hemorrhagic tendency, without symptoms of systemic galactosemia [53], providing the first evidence of GALE variants and hematologic alterations. In 2020, Febres-Aldana et al. described a child with bone marrow dysfunction and complex congenital heart disease associated with compound heterozygosity in GALE (p.Arg51Trp and p.Gly237Asp) [54]. In addition, in 2021, Markovitz et al. reported a patient with pancytopenia and immune dysregulation due to the previously described homozygous p.Thr150Met variant of GALE [55]. Although three pedigrees carrying GALE variants associated with hematological abnormalities and different phenotypes had been reported, there was no evidence on the mechanism leading to disease in patients carrying GALE variants. In 2022, we unveiled four GALE variants associate with reduced glycosylation of GPIbα and β1 integrin causing impaired externalization to the surface of MK and platelets, altering the distribution of F-actin and filamin A in MKs, and affecting platelet production. In addition, hypoglycosylated and non-functional platelets prone to apoptosis were observed. Overall, these findings demonstrated the essential role of GALE in glycosylation, platelet formation, function and clearance, providing new clues to understand the biological mechanisms underlying the biology and pathophysiology of the β1 integrin and GPIb-IX-V complex [22]. Notwithstanding, the nature and severity of symptoms in epimerase deficiency remain unclear, as do the mechanisms by which some variants are associated with severe syndromic disorders that include hematological manifestations, while others are not [56]. Extending the analysis to additional receptors or other crucial glycoproteins may open new avenues toward understanding the impact of glycosylation on megakaryopoiesis. In addition, further studies are needed to provide new insights into the mechanisms associated with platelet clearance to elucidate a possible link between hypoglycosylated platelets, clearance by AMR or Kupper cells, and mechanisms of apoptosis. β4GALT1-Related Disorder The β-1,4-galactosyltransferase 1 (β4GALT1) is an enzyme that transfers galactoses from UDP-Gal to terminal N-acetylglucosamine (GlcNAc) (Figure 3). To date, only a few cases of inherited disorders of glycosylation by β4GALT1 have been described. Until 2020, these comprised three patients, all with clinical features including hypotonia, coagulopathy, elevated serum transaminases and a type 2 biochemical pattern on serum transferrin isoform analysis [57][58][59]. Staretz-Chacham et al. described three additional patients homozygous for a novel mutation in β4GALT1 (p.Arg21Trp), located within its transmembrane domain. These patients showed a uniform clinical presentation with intellectual disability, profound pancytopenia requiring chronic treatment, and novel features including pulmonary hypertension and nephrotic syndrome [60]. In addition, Giannini et al. generated a B4galt1-/mouse and observed that β4GALT1 deficiency increases the number of differentiated MKs. The resulting lack of glycosylation potentiates β1 integrin signaling, resulting in the differentiation of dysplastic MKs with severe alterations in the formation of the demarcation system and thrombopoiesis. 
Impaired thrombopoiesis also led to increased plasma TPO levels and defective hematopoietic stem cells (HSCs), justifying the observed thrombocytopenia [61]. These finding were in agreement with those published by Di Buduo et al. who reported increased B4GALT1 gene expression and plasma TPO levels in patients with myeloproliferative neoplasms (MPNs) [62]. Here, the altered B4GALT1 expression in MPN MKs led to the production of platelets with aberrant galactosylation, which in turn promoted hepatic TPO synthesis independently of platelet count [62]. The characterization of a larger number of patients with β4GALT1 deficiency is required for a better understanding of the pathophysiological mechanisms underlying the disease, and to establish a correlation between the molecular alteration and the disease manifestations. Other CDGs with Potential Relation to Inherited Thrombocytopenia in Patients ALG1-CDG: This autosomal recessive disorder is caused by the deficiency of the 1,4-mannosyltransferase 1 enzyme, encoded by ALG1 gene. Patients commonly suffer from severe neurological manifestations, developmental and psychomotor delay, with variable affectation of other organs (nephrotic syndrome, ascites, hepatomegaly, cardiomyopathy, ocular manifestations, and immunodeficiency). Hematological abnormalities, including thrombocytopenia, were found in approximately 50% of the patients, but detailed platelet analyses have not been reported yet [63]. ALG8-CDG: The ALG8 encodes the α-1,3-glucosyltransferase. The dysfunction of the enzyme leads to a severe disease characterized by gastrointestinal and cognitive impairment, edema, and dysmorphism, resulting in the death of patients within the first year of life. In addition, most patients presented thrombocytopenia, but mechanisms were not characterized [64]. MPI-CDG: The mannose phosphate isomerase (MPI) is involved the first step of the GDP-mannose synthesis (i.e., the conversion of fructose-6-phosphate to mannose-6phosphate). It plays a critical role in maintaining the supply of D-mannose derivatives required for most glycosylation reactions [65]. MPI-CDG does not cause as significant neurologic and multi-systemic involvement, but patients show a hepatic-intestinal presentation comprising life-threatening gastrointestinal bleeding. Pancytopenia, including moderate thrombocytopenia, has been reported in one adult, but it is still essential to confirm the role of MPI in thrombopoiesis to rule out a different etiology for the inherited thrombocytopenia [66]. PMM2-CDG: the phosphomannomutase 2 (PMM2) catalyzes the isomerization of mannose 6-phosphate to mannose 1-phosphate, which is subsequently converted into GDP-mannose (the source of mannose for the glycosylation branches). It is by far the most common N-glycosylation disorder. Biallelic pathogenic variants associate with a multisystem disease with highly variable phenotype. In the infantile multisystem presen-tation, infants show axial hypotonia, hyporeflexia, esotropia, and developmental delay. During late-infantile and childhood, they display ataxia-intellectual disability stage (ataxia, severely delayed language and motor development, inability to walk, among others). In the adult stable, the peripheral neuropathy is variable, and it is common to diagnose progressive retinitis pigmentosa and myopia, thoracic and spinal deformities with osteoporosis worsen, and premature aging. Moreover, females may lack secondary sexual development and males may exhibit decreased testicular volume [67]. 
Although an increased risk of deep venous thrombosis is a common characteristic of the disease, patients with unusual thrombocytopenia have also been reported [68,69]. However, additional studies are required to establish the causality of thrombocytopenia and thrombosis in these patients. MAGT1-CDG: The magnesium transporter 1 (MAGT1) critically mediates magnesium homeostasis. Its alteration results in X-linked immunodeficiency; thus, most patients develop chronic EBV-associated B cell lymphomas, caused by altered homeostasis in T-helper cells, cytotoxic T lymphocytes, and natural killer cells. Moreover, these patients present with a phenotype that is mainly characterized by intellectual and developmental disability [70]. Some patients develop mild to moderate thrombocytopenia, although the mechanisms of pathogenicity affecting megakaryocytes and platelets have not been reported. Magt−/y mice have normal platelet count and size but altered ploidy of megakaryocytes [71]. It is important to mention that, in MAGT1-deficient cells, Mg²⁺ supplementation increased the free intracellular Mg²⁺ levels, most likely through TRPM7 (transient receptor potential cation channel subfamily M member 7), whose molecular alteration has been related to thrombocytopenia [72]. Therefore, it is necessary to further investigate the role of MAGT1 in platelet formation and its association with TRPM7. ST3GAL4-Related Disorder The ST3GAL4 gene encodes the ST3Gal-IV enzyme, a sialyltransferase that transfers sialic acid in α2,3 linkage to acceptor oligosaccharide substrates, i.e., glycans with terminal Galβ1-4GlcNAc, Galβ1-3GlcNAc, and Galβ1-3GalNAc sequences. A recent study published in 2022 by Wiertelak and colleagues demonstrated that ST3GAL4 associates with SLC35A1, forming a complex essential for N-glycan α2,3 sialylation [73]. Therefore, it is expected that molecular alterations in ST3GAL4 are associated with a phenotype similar to that observed in patients with SLC35A1-RD, where platelets have an increased clearance from the bloodstream, leading to thrombocytopenia. Moreover, the investigations performed on knock-out (KO) mice for ST3Gal-IV (ST3Gal-IV −/−) demonstrated that platelets were removed rapidly from circulation, following biphasic kinetics with a fast initial clearance and a prolonged clearance phase [23]. These ST3Gal-IV −/− platelets were removed in the liver by asialoglycoprotein receptors on macrophages and hepatocytes. Among the major desialylated proteins in ST3Gal-IV −/− lysates, the authors revealed the presence of GPIbα with increased exposure of βGlcNAc residues (thus, desialylated platelets). Finally, the authors showed that megakaryopoiesis was not increased in ST3Gal-IV −/− mice despite accelerated platelet clearance [23]. These results are in accordance with those observed in Slc35a1 KO mice [49]. In addition, Qi et al. revealed that the α2,3-sialylation levels of β1 integrin were clearly suppressed in ST3GAL4 KO cell lines, supporting another target of molecular defects in genes involved in congenital disorders of glycosylation [74]. However, no patients with alterations in ST3GAL4 have been reported to date, so it cannot be ruled out that patients may have a defect in thrombopoiesis, as these same discrepancies between humans and mice remain unresolved for alterations in SLC35A1. ST3GAL1-Related Disorder ST3GAL1, encoded by the ST3GAL1 gene, is a sialyltransferase that transfers sialic acid to the galactose residue of type III disaccharides (Galβ1,3GalNAc).
The conditional KO mice model in the MK lineage (St3gal1 MK−/−) displayed a 50% reduction in platelet counts vs. control, with increased mean platelet volume (MPV) and immature platelet fraction (IPF). Erythrocytes and leukocytes counts were normal. Moreover, St3gal1 MK−/− platelet life span and expression of the platelet surface receptors glycoprotein IIb (GPIIb), GPIIIa, GPIbα, GPIX, GPV, and GPVI were comparable to controls, in contrast to alterations in the GNE, GALE or ST3GAL4 genes, where we detected reduced levels of GPIbα and/or β1 integrin. Lastly, transfused ST3Gal1 MK−/− platelets were not recognized by the AMR, as evidenced by similar survival in WT, and hepatic TPO production was also indistinguishable between ST3Gal1 MK−/− mouse and control livers [75]. Conversely, recent research revealed that both ST3GAL1 and ST3GAL2 became highly expressed during the differentiation of human-induced pluripotent stem cells (iPSCs) into hematopoietic progenitor cells (HPCs), but their expression decreased markedly upon differentiation into MKs. Interestingly, the HPC markers CD34 and CD43, as well as the MK membrane marker GPIbα, were identified as major GP substrates for ST3GAL1 [76], contrary to what has been described in the animal model ST3Gal1 MK−/− [75]. The authors concluded that disruption of ST3GAL1 had little impact on MK production, but its absence resulted in dramatically impaired MK proplatelet formation [76]. C1GALT1-Related Disorder Core 1 β1,3-galactosyltransferase (C1GalT1) catalyzes the formation of core 1 O-glycan structures, a common precursor for mucin-type O-glycans. Impaired C1GalT1 activity has been associated with different disorders in humans, such as the Tn syndrome (a rare autoimmune disease in which subpopulations of blood cells in all lineages carry an incompletely glycosylated membrane) and IgA nephropathy, a common primary glomerulonephritis [77,78]. The murine model expressing very low residual enzymatic activity (C1GalT1 mice) revealed a 40% reduction of platelet counts compared to WT mice and increased platelet volume. Other blood cells counts were unaffected. There was no reduction in megakaryocyte numbers and DNA ploidy, and the electron microscopic evaluation of MKs and platelets from C1GalT1 mice vs. WT suggested no major obvious ultrastructural abnormalities. Moreover, the half-life of platelets in C1GalT1 mice was similar to control mice, but the generation of unlabeled platelets after pulse labeling occurred at a slower rate. Thus, authors suggested that the thrombocytopenia in C1GalT1 mice is not caused by impaired megakaryocyte production or accelerated clearance of platelets but seems to be caused by compromised thrombopoiesis [79]. In accordance, Kudo T and colleagues exploited an interferon-inducible Mx1-Cre transgene to conditionally ablate the C1galt(flox) allele (Mx1-C1). Mx1-C1 mice exhibit severe thrombocytopenia, giant platelets, and prolonged bleeding times. Both the number and DNA ploidy of megakaryocytes in Mx1-C1 bone marrow were normal. However, they found very few proplatelets in Mx1-C1 primary megakaryocytes. Protein levels revealed a reduced expression of GPIbα in Mx1-C1 mice and circulating Mx1-C1 platelets exhibited an increase in the number of microtubule coils, despite normal levels of αand β-tubulin [80]. Results in both mice models of C1GalT1 deficiency demonstrate that O-glycan is required for terminal megakaryocyte differentiation and platelet production. 
Considering that the biological importance of O-glycans in platelet clearance was unclear, Li Y and colleagues generated mice with a hematopoietic cell-specific loss of O-glycans (HC C1galt1-/-). These mice also exhibit reduced peripheral platelet numbers with reduced levels of α-2,3-linked sialic acids and increased platelet accumulation in the liver compared to WT platelets, demonstrating that hepatic AMR promotes preferential adherence and phagocytosis of desialylated platelets by the Kupffer cell through its C-type lectin receptor CLEC4F [81]. Conclusions and Perspectives Glycoconjugates are major components of animal cells with an essential role in many physiological processes. Advances in glycobiology and the development of mass spectrometry-based proteomics and glycomics have uncovered the mechanism of aberrant glycosylation in a wide spectrum of congenital disorders and elucidated the functions of specific glycans and related genes [84,85]. In recent years, these approaches have led to the discovery of novel genes involved in different pathologies. In the field of hematology, no gene involved in glycosylation affecting megakaryopoiesis was known until 2014 [39]. To date, only alterations in the GNE, SLC35A1, GALE and B4GALT1 genes causing inherited thrombocytopenias have been described and probed in patients. However, the mechanism of thrombocytopenia and platelet clearance associated with variants in these genes remains to be fully elucidated. Studies in both human patients and animal models of the disease reveal that altered N-and O-glycosylation of essential platelet proteins such as GPIbα and the β1 integrin underlie the mechanism of pathogenicity causing an increased platelet clearance, mainly mediated by the liver, and abnormal thrombopoiesis with no remarkable changes in megakaryocyte maturation. Further research of these mechanisms is essential, as well as to understand why not all patients carrying biallelic mutations in these genes develop thrombocytopenia and severe syndromic manifestations. The emerging increase over the last few years in the study of glycosylation disorders is allowing the discovery of novel genes involved in platelet formation and function. So far, alterations in genes such as ST3GAL4, ST3GAL1, C1GALT1 or COSMC have only been reported in animal models, but it is expected that in the coming years, and with the rise of high-throughput sequencing techniques, patients with these alterations will be reported. The discovery of aberrant glycans and exploration of the underlying mechanisms would broaden their applications as diagnostic markers or therapeutic targets, improving patient care. It is important to mention that disorders of glycosylation affect people from birth, though symptoms may manifest later. Considering the serious syndromic manifestations, an accurate and early diagnosis is essential for treatment of these patients. TPO receptor agonist could be an alternative to platelet transfusion as described in other Its [86] and, in selected severe patients, the hematopoietic stem cell transplantation (HSCT) may be an option. A recent case study was published documenting the first HSCT in a patient with an inherited defect of GNE resulting in a normal platelet count [87], raising the horizon in the field of congenital disorders of glycosylation. Finally, gene therapy may be a promising approach for the future of these patients by ex vivo correction of variants detected in patients by the wild-type form of the protein.
7,466.2
2023-03-01T00:00:00.000
[ "Biology", "Medicine" ]
Can Model Checking Assure, Distributed Autonomous Systems Agree? An Urban Air Mobility Case Study Advancement in artificial intelligence, internet of things and information technology have enabled the delegation of execution of autonomous services to autonomous systems for civil applications. It is envisioned, that with an increase in the demand for autonomous systems, the decision making associated in the execution of the autonomous services will be distributed, with some of the responsibility in decision making, shifted to the autonomous systems. Thus, it is of utmost importance that we assure the correctness of distributed protocols, that multiple autonomous systems will follow, as they interact with each other in providing the service. Towards this end, we discuss our proposed framework to model, analyze and assure the correctness of distributed protocols executed by autonomous systems to provide a service. We demonstrate our approach by formally modeling the behavior of autonomous systems that will be involved in providing services in the Urban Air Mobility framework that enables air taxis to transport passengers. Keywords—Formal methods; autonomous systems; distributed algorithms; assurance for distributed protocols; distributed protocol modeling and verification; distributed autonomous systems I. INTRODUCTION Advancement in technologies associated with autonomous systems have significantly increased the use of autonomous systems in day to day activities. Additionally, communication capabilities have enabled the use of multiple autonomous systems to be used for executing autonomous missions. Unmanned Aerial Systems (UAS) are used across diverse applications, such as structural health monitoring [1], data driven path planning [2], and object classification [3]. Research by Cesare and Hollinger presented in [4] explores execution of multi-UAS missions under unreliable communication and limited battery life, for search and rescue applications that include urban search and rescue, military reconnaissance, and underground mine rescue operations. With the increase in UAS applications several research efforts have started focusing on handling contingency scenarios such as investigating emergency landing for UAS by evaluating data available from population census and occupancy estimates from mobile phone activity [5]. Additionally, Automatic Supervisory Adaptive Control (ASAC) method enables the UAS to fly with a damaged wing [6]. As the applications start focusing on safety critical operations it becomes evident that we need to develop and deploy methods and frameworks for assuring multiple autonomous systems working together can complete the operations successfully. One of the essential elements of an intelligent system design is in the formulation of the logic to intelligently respond to the environment. We in this research effort, focus on representing the logic as in artificial intelligence that enables automated reasoning to verify the correctness of the design. The automated reasoning involves the utilization of theories in formal methods, which is a branch of artificial intelligence that allows the design of logic as models on which we can execute queries, that prove through automated searches if the design satisfies the required properties. This paper describes work on the verification and assurance of agreement among UASs by designing and implementing a distributed protocol with a case study for Urban Air Mobility (UAM) [7]. 
The implementation of the logic involved in distributed reasoning and its verification is done using Uppaal [8], a real time model checking tool. In order to accomplish the goal we present a mapping of requirements as identified from UAM model, that is implemented as queries in Uppaal [8]. The rest of this paper is organized as follows. Section II of this paper talks about the previous work that has been done in the area of formal methods and distributed protocols. Section III specifically discusses the framework for the formal modeling and analysis of the behavior of autonomous systems for UAM. It also discusses the expected architecture of distributed autonomous agents providing service in UAM. In Section IV, formal modeling paradigm is discussed in detail. This section elaborates upon the mathematical representation within the modeling paradigm and formal modeling tool Uppaal [8], which is used to build the formal model for the logic involved in distributed protocol for multiple autonomous systems to cooperatively provide a service. This section also states the behavioral model of autonomous systems in Uppaal [8] and the various verification properties used to verify the model. Experimental results are presented in Section V and finally the conclusion along with future work is inferred in Section VI. A. Formal Methods or Assurance Methods There has been considerable previous work done in the area of formal methods for assurance [9], [10], [11]. In [9] the research discusses a method to perform run-time assurance for learning systems with an assurance architecture designed in Architecture Analysis and Design Language (AADL) and formal contracts for each of the components modeled and verified in Assume Guarantee (AGREE) annex. In [12] Davis discusses an approach to use architectural analysis to prove that the protocol designed for multi-agents satisfies the specified properties. This effort also emphasized the use of AADL and AGREE for formal assurance. For formal assurance of cooperative agents [10], discusses the development of a framework to represent cognitive architecture which is then translated into a formal environment Uppaal to verify that the autonomous agent along with interaction with the human achieves the objective. These studies emphasizes on the fact that how the use of formal methods can greatly increase our understanding of a system by revealing inconsistencies, ambiguities, and incompleteness that might otherwise go undetected. Further, Kern and Greenstreet utilize the emergence of formal methods as an alternative approach to ensuring the quality and correctness of hardware designs [13]. Also, they emphasize the two main aspects to the application of formal methods in a design process which are modeling a formal framework that specifies the desired properties of a design. The second and more important aspect is the verification process and tools that are used to reason about the relationship between a specification and a corresponding relationship. In [14], Devillers et al. present a formal modeling and verification approach for a leader election algorithm. It describes how formal methods is used to formally model the leader election algorithm as an I/O automaton, and then it describes the verification process to prove that the implementation matches the specification. The authors emphasize the importance and use of formal methods to increase confidence in the correctness of protocols, hardware and software systems [14]. 
The above-cited works depict the evolution of formal methods as a formal modeling technique over the years and why it is of utmost importance to model any hardware or software specification before deploying them in a real-world environment. Formal methods have been used over the years not only for modeling designs and software but also for verification and validation of these complex designs that help in identifying subtle errors during the design process which can be later eliminated during the implementation stage. In formal methods, model checking or theorem proving are two of the prominent methods, that are used to verify satisfaction of properties within a designed system, where model checking is automated. Model checking is a method for checking whether a model of a system meets a given specification (correctness). This is mainly associated with hardware or software systems, where we want to check liveliness requirements, as well as safety requirements. To algorithmically solve this, both the model as well as its specifications are formulated in a precise mathematical language. A model, is generally a graph such as a state machine diagram, representing the behavior of a system. The state machine diagram includes, states, transitions, condition checks and actions associated with the transitions. The main purpose of model checking is to examine whether the evolving traces of a model, generated as an execution tree satisfies the user-given property specification. Model checking for formal verification has been used as a successfully adjunct to simulation-based verification and testing. B. Distributed Protocols and Analysis Phillips in [15] describes the characteristics of distributed systems and their protocols. It specifically focuses on the client-server model which is used to develop a set of requirements for a distributed system along with a description of the architecture [15]. With the advancement of networking technologies such distributed systems have significantly grown in numbers so, it has become really important to apply formal methods to the field of distributed protocols [16] to prove that the distributed systems correctly operate to achieve the required functionality. In [17], Bhattacharyya et al. discuss the formal modeling and verification of distributed systems modeled with quasisynchrony. It mainly provides an intuitive modeling environment that allows specification of high-level architecture and synchronization logic of quasi-synchronous systems [17]. As an example a leader selection problem is discussed where the objective had been to verify a leader is elected among a set of autonomous systems. A more elaborate explanation of verification of quasi synchronous systems is described by Miller et al. in [18] where they discuss the importance of distributing critical systems to make them redundant and faulttolerant so that they can meet the reliability requirements. The authors specifically describe the integration and enhancement of distributed systems with innovative formal verification tools such as Satisfiability Modulo Theories (SMT) based model checkers for timed automata to provide system engineers with immediate feedback on the correctness of their designs. This work mainly focuses on the design of distributed complex systems using formal method techniques, but our approach proposes the modeling and verification of the distributed logic required for successfully executing distributed operations autonomously. 
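Before continuing the survey, a brief illustration of the model-checking idea introduced above may help: a model is a state graph, and the checker explores its reachable states (or execution traces) against a property, returning a counterexample trace when the property fails. The sketch below is only an explicit-state safety check written in Python; it is not the symbolic algorithm Uppaal applies to timed automata, and the toy state encoding and names are invented for illustration.

```python
# Minimal explicit-state safety checking: breadth-first search over a finite
# transition system; returns a counterexample trace if a reachable state
# violates the given invariant.
from collections import deque

def check_invariant(initial, successors, invariant):
    """initial: iterable of start states; successors(s): iterable of next states;
    invariant(s): True iff state s is safe. Returns (True, None) or (False, trace)."""
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            trace = []
            while s is not None:          # rebuild the counterexample path
                trace.append(s)
                s = parent[s]
            return False, list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return True, None

# Toy system: a state is (drone1_busy, drone2_busy, pending_request).
def succ(state):
    d1, d2, req = state
    nxt = [(d1, d2, True)]                # a new request may arrive
    if req and not d1: nxt.append((True, d2, False))
    if req and not d2: nxt.append((d1, True, False))
    if d1: nxt.append((False, d2, req))   # a busy drone may finish
    if d2: nxt.append((d1, False, req))
    return nxt

# Toy property: "both drones are never busy at the same time"; the checker
# reports a shortest trace showing how the system can violate it.
ok, cex = check_invariant([(False, False, False)], succ,
                          lambda s: not (s[0] and s[1]))
```

In this toy run the property fails and a trace is returned, mirroring the counterexample-driven debugging loop described later for the Uppaal verifier.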
Also, [18] uses examples of quasi-synchronous systems to model and verify the Pilot Flying System, the Leader Selection Case, the Active-Standby System, and the Wheel Breaking System (WBS). In a presentation [19] by Thomas Ball from Microsoft at the NUS university recently, he explains the importance of formal methods as model checking tool for distributed systems. The presentation mainly focuses on automated checking of the complex design implementation using formal methods for infinite-state systems. It also shows the importance of automatically verifying distributed systems before they can be deployed so that they are provably correct. It also talks about how formal methods find bugs in system designs that cannot be found through any other known technique. The work in [20] exhibits a methodology to develop mathematically checkable parameterized proofs of the correctness of fault-tolerant round-based distributed algorithms. It focuses on how to replace informal and incomplete pseudo code by syntax-free formal and complete definitions of a global-state transition system. In [21], Fakhfakh et al. discuss various formal verification approaches for distributed algorithms. The study shows how there has been a rapid increase in the field of distributed algorithms due to the advances in networking technologies. It also provides information for researchers and developers to understand the contributions and challenges of the existing formal verification technologies for distributed algorithms and paves the way to enhance the reliability of these distributed algorithms [21]. In [22], the work focuses on how formal methods can be used to analyze, design, and verify security protocols over open networks and distributed systems. As we can see that there has been considerable work done in the field of distributed protocols and formal methods [16]. But none of the work specifically focuses on modeling the logic of distributed autonomous systems using formal methods for UAM. Our contribution has been in the design of a framework that can be applied to the formal modeling and verification of logic designed for distributed autonomous systems to successfully execute services. We have also formally mapped the requirements for autonomous services to prove that the distributed autonomous systems have a consensus among themselves. We also propose an architectural representation of how autonomous services can be designed and verified before deployment. III. FRAMEWORK Fig. 1 shows the process flow diagram for formal verification of distributed protocol for multiple distributed autonomous systems. The process starts with stating the requirements i.e. the goal that needs to be satisfied by the distributed autonomous systems. The requirements in our research flow from the emerging services provided by autonomous systems such as, Last Mile Delivery [23], Air Taxi and Air Metro [7]. Among these services Urban Air Mobility [7] is a futuristic concept that is being researched and developed all around the world. As a result, there is an immediate need for research thoroughly investigating possible scenarios for such emerging technologies which are agnostic to the actual implementation, but helps the process of identifying the infrastructure and correctly specifying the logic involved in the successfully deploying distributed autonomous operations. 
Our framework describes such an approach to design and implement behavioral models for autonomous systems, that can be formally verified and is independent of the technology to implement it. The formally verified models will help to deploy trusted, secure and reliable autonomous systems in real-world environment. These requirements led to the generation of a Formal Model designed as automata and formal properties defined or stated in temporal logic. The formal model is developed using a formal verification tool called Uppaal [8]. Uppaal has an inbuilt simulator and verifier to simulate and verify the behavior of models (in our case autonomous system). Verifier aids the process of identifying errors in the model by executing properties that generate a counterexample along with a simulation trace if the property is violated, which helps to rectify the generated errors. This process is repeated until all the errors along are identified and corrections made. The verifier also helps to list and model many path properties that help in verifying various behavioral characteristics of the stated model, which otherwise is hard to identify and verify. Once the final verification is done and all the errors have been removed, we have a model that is formally verified and the logic of which can be trusted based on the formal verification. It is envisioned that this model can then be translated into a graphical simulation environment in order to see the exact behavior of autonomous system and also to generate real-time data. The simulation environment can be any environment that supports the integration of multiple autonomous agents, multiple drones or VTOL Planes such as X-Plane [24], AirSim [25] or Robot Operating System (ROS) [26]. This translation from formal model to simulation environment is not realized in this research. Fig. 2 below graphically shows a hypothesized distributed architecture to support services as expected in UAM. Fig. 2(a) shows how a city can be decomposed into zones supported by multi agent environments and it's various components. Each zone is further composed of several drones that are distributed in nature, managed by a server. There is constant interaction between the drone modules and the server within a zone. Each zone interacts with other zones present within the city with the help of servers present inside each zone. There can be multiple servers based on the requirements but, for simplicity we have defined only one in the model. This constant interaction between various zones makes it a multi agent distributed environment. The Distributed Autonomous Agent Environment (DAAE, Zone) consists of various components such as buildings or nodes from where service requests are generated, drones or agents that serve requests that are generated and server. Each individual drone also comprises of it's internal server and a module that can interact with the simulation environment. The building or the nodes are responsible for generating a service request which is then passed on to the server. Along with the request, the X and Y coordinates of the building are also sent to the server, which will later be passed on to each individual drone to compute the linear distance. The server is responsible for validating the request which is then broadcasted to all the drones in the zone. Once the request is received by all the drones, they go through various checks such as verification of sensor values, battery level, authenticity of the request etc. 
After successful validation of the checks, all the available drones calculate their linear distance from the node that generated the request. After the calculation, each drone exchanges its distance to the requesting node with all other drones. All drones then mutually agree upon the drone that is closest to the requesting node. This mutual agreement, without the interference or involvement of any kind of central observer, makes the whole model distributed and decentralised, where the decisions are taken by the drones present in the distributed system. The drone module is responsible for all the communication with the server and also carries out the various checks and the linear distance calculation. After a drone has been mutually selected to serve the request, the other drones return to the start location and are available to serve any other request in the network. The described architecture of a zone is modeled in Uppaal, where each of the components is a template/process. The behavior of each component is modeled in the formal verification tool Uppaal [8], and the verification of the logic is carried out using the Uppaal verifier. This verified logic can then be translated and used to simulate the model in any realistic simulation environment that supports the interaction of multiple autonomous agents, such as AirSim [25], X-Plane [24], or ROS [26]. This real-time simulation will help to generate real-time data of the scenarios for a targeted service, which can be stored and later processed to improve the efficiency of the whole system. The data can also be used to develop distributed learning models for autonomous systems that will be more robust and much more efficient. The simulation is part of future work, while this paper mainly focuses on proposing the architecture, generating formal models for the embedded components, and finally verifying them using temporal logic to represent the requirements. IV. FORMAL MODELING PARADIGM The modeling paradigm was selected after looking at several possible modeling techniques, including Markov chains and architectural representations. We decided that the most appropriate method of representing the behavior is through the use of a Finite-State Automaton (FSA), because it allows us to visualize the graphical diagram of the behavior easily. It enables the use of well-defined tools to perform automated analysis early in the design phase, which empowers us to reason about the logical representation of the behavior and to evaluate alternative design options in case there are profound implications. We developed the models representing our knowledge base by following the principles of Finite-State Automata (FSA) [27]. In order to choose the correct platform for designing and verifying the formal model of the behavior, several formalisms, such as NuSMV [28], Uppaal [8], PVS [29], and Z3 [30], were considered carefully. We chose Uppaal [8], [31] due to its ability to model timing aspects that are critical for these systems, as well as its ability to generate and visualize counterexamples. Uppaal represents models as timed automata, and the Uppaal formalism enables compositionality and supports model checking over networks of timed automata using temporal logic. This modeling paradigm allows the execution of requirements as temporal logic queries to check the satisfaction of relevant safety properties exhaustively. 
We next describe the timed automata formalism used by Uppaal. A. Modeling Paradigm for Timed Automata The modeling paradigm is an extension of finite automata with clocks, more popularly known as Timed Automata [32]. One of the tools implementing this formalism is Uppaal [8], which allows the modeling of networks of Timed Automata. Clock values, and other relevant variable values, are used in guards on the transitions within the automaton. Based on the result of the guard evaluation, a transition may be enabled or disabled. Clocks can be reset on transitions, and constraints on them can be imposed as invariants at locations. Modeling timed systems with a timed-automata approach is symbolic rather than explicit. It allows the consideration of a finite subset of the infinite state space on demand (i.e., using an equivalence relation that depends on the safety property and the timed automaton), which is referred to as the region automaton. There also exists a variety of tools to input and analyze timed automata and their extensions, including the model checkers Uppaal and Kronos. • Timed Automaton (TA) A timed automaton is a tuple (L, l0, C, A, E, I), where: L is a set of locations; l0 ∈ L is the initial location; C is the set of clocks; A is a set of actions, co-actions, and unobservable internal actions; E ⊆ L × A × B(C) × 2^C × L is a set of edges between locations, each with an action, a guard, and a set of clocks to be reset; and I : L → B(C) assigns invariants to locations. We define a clock valuation as a function u : C → R≥0 from the set of clocks to the non-negative reals. Let R^C be the set of all clock valuations, and let u0(x) = 0 for all x ∈ C. • Timed Automaton Semantics Let (L, l0, C, A, E, I) be a timed automaton TA. The semantics of the TA is defined as a labelled transition system (S, s0, →), where S ⊆ L × R^C is the set of states, s0 = (l0, u0) is the initial state, and → ⊆ S × (R≥0 ∪ A) × S is the transition relation such that: (l, u) → (l, u + d) for a delay d ∈ R≥0 if u ∈ I(l) and u + d ∈ I(l); and (l, u) → (l′, u′) on action a if there exists an edge e = (l, a, g, r, l′) ∈ E such that u ∈ g, u′ = [r → 0]u, and u′ ∈ I(l′). Here, for d ∈ R≥0, u + d maps each clock x in C to the value u(x) + d, and [r → 0]u denotes the clock valuation which maps each clock in r to 0 and agrees with u over C \ r. Note that a guard g of a TA is a simple condition on the clocks that enables the transition (or edge e) from one location to another; the enabled transition is not taken unless the corresponding action a occurs. Similarly, the set of reset clocks r for the edge e specifies the clocks whose values are set to zero when the transition on the edge executes. Thus, a timed automaton is a finite directed graph annotated with resets of, and conditions over, non-negative real-valued clocks. Timed automata can then be composed into a network of timed automata over a common set of clocks and actions, consisting of n timed automata TAi = (Li, li0, C, A, Ei, Ii), 1 ≤ i ≤ n. This enables us to check reachability, safety, and liveness properties, expressed as temporal logic formulas, over this network of timed automata. An execution of the TA, denoted by exec(TA), is a sequence of consecutive transitions, while the set of execution traces of the TA is denoted by traces(TA). B. Uppaal Uppaal [8], an acronym based on a combination of UPPsala and AALborg universities, is an integrated tool environment for modeling, simulation, and verification of real-time systems as networks of timed automata, extended with data types (bounded integers, arrays, etc.). It is used to model the logic of real-time systems. 
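To make the tuple definition above concrete, a minimal sketch of one timed automaton as a Python data structure is given below. This is only an illustrative encoding, not Uppaal's internal representation; the location and clock names are hypothetical. Guards and invariants are predicates over clock valuations, a delay transition advances all clocks uniformly, and an action transition fires an enabled edge and resets its listed clocks.

```python
# Illustrative encoding of the timed-automaton tuple (L, l0, C, A, E, I).
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List, Tuple

Valuation = Dict[str, float]          # clock name -> non-negative value

@dataclass
class Edge:
    src: str
    action: str
    guard: Callable[[Valuation], bool]
    resets: FrozenSet[str]
    dst: str

@dataclass
class TimedAutomaton:
    locations: List[str]              # L
    initial: str                      # l0
    clocks: List[str]                 # C
    edges: List[Edge]                 # E
    invariant: Dict[str, Callable[[Valuation], bool]]  # I

    def delay(self, loc: str, u: Valuation, d: float) -> Valuation:
        """Delay transition: all clocks advance by d while staying in loc's invariant."""
        v = {x: u[x] + d for x in self.clocks}
        assert self.invariant[loc](v), "delay would leave the location invariant"
        return v

    def fire(self, loc: str, u: Valuation, e: Edge) -> Tuple[str, Valuation]:
        """Action transition over edge e, if enabled in (loc, u)."""
        assert e.src == loc and e.guard(u), "edge not enabled"
        v = {x: (0.0 if x in e.resets else u[x]) for x in self.clocks}
        assert self.invariant[e.dst](v), "target invariant violated"
        return e.dst, v

# Hypothetical example: a drone that must emit 'serve' within 5 time units of 'request'.
ta = TimedAutomaton(
    locations=["Ready", "Serving"],
    initial="Ready",
    clocks=["x"],
    edges=[Edge("Ready", "request?", lambda u: True, frozenset({"x"}), "Serving"),
           Edge("Serving", "serve!", lambda u: u["x"] <= 5, frozenset(), "Ready")],
    invariant={"Ready": lambda u: True, "Serving": lambda u: u["x"] <= 5},
)
```

The invariant on the Serving location is what forces progress here: once the clock x reaches 5, the automaton can no longer delay and must take the serve! edge, which is the same mechanism Uppaal uses to bound response times.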
For our work, we have used Uppaal to model the behavior of the components of a UAM architecture [7]. We further use Uppaal [8] to verify the modeled logic as timed automata and then draw meaningful insights and results. The tool consists of three main features. The first is the editor window, where we model the behavioral logic for each of the modules described in detail below. Next is the simulator window, where we run a step-by-step simulation of the modeled logic. This helps to understand the real-time functioning of the behavior of each module and further helps to refine our logic. The last and most important part is the verifier. The verifier utilizes a model checker to perform an exhaustive exploration of the dynamic behavior of the system to prove safety and bounded liveness properties. Properties are written in temporal logic to verify the developed logic. The verifier helps verify important aspects of the model and gives a deep understanding of the functioning of the model in a real-time scenario. It also helps to find flaws in the model that can be rectified in the editor window. As a result, we are able to model logic that has been verified and can be deployed in real-time scenarios. The implemented model for UAM services as a case study is described in detail in the next subsection, along with a detailed description and the functionality of each module. C. Model in Uppaal In this subsection we elaborate on our approach to address distributed modeling and analysis for the DAAE in UAM. For now, we consider three drones, represented by a Drone module (Section 3), serving in a zone inside a city that has many such similar zones, along with a Server module (Section 2), a Sensor module (Section 4), and an Input module (Section 1). All these modules have specific roles and functions in the UAM architecture, and their behavioral logic has been modeled in Uppaal. Three instances have been created for the drones in the system declaration, since all three drones are assumed to have similar behavior for now. Algorithm 1 maps the step-by-step behavioral logic for the drone module in Uppaal. The request is generated by a random function in the Input module. The request is sent as a synchronisation event by the Input module to the Server module. Along with the synchronisation action, the coordinates of the requesting node or building are also sent to the Server module. The Server module then processes the request and broadcasts it to all the individual drones available to serve a request. Before receiving a request, each drone checks all sensor values with the help of the Sensor module (Section 4). After all checks have been performed, the drones process the received request and mutually elect a drone that will serve the request without the interference of the Server module. This process of mutual selection makes the whole UAM architecture distributed and decentralised in nature. The design and functionality of each individual module, along with its role in the whole behavioral model, is described below: 1) Input Module The instances of requests are generated by the Input module. It is responsible for generating a random request, which then goes as a synchronisation event to the server, where it is processed and broadcast to all the drones in the environment. As seen in Figure 3, the Input module makes a random transition from the Start state to the Generate Request state. 
This transition generates a random integer less than 100 and based on the integer generated it further makes a transition to one of the buildings in the environment i.e Building A, B or C. Through this process, we have tried to depict a random request generator which sends a request for service synchronisation command to the Server module. Along with the request f rom building synchronisation, the Input module also sends the coordinates of the building from where the request is generated. These coordinates are further sent to each individual drone by the Server module. These coordinates are used in distance calculation of each drone from the building. After generating a random request, the Input module makes a transition back to the start state to generate a new request for service. This process continues and random requests are generated which are then served by the drone. 2) Server Module The Server module describes the behavioral logic for Server which is responsible for routing the request generated by the Input module. As seen in Fig. 4, the Server module transitions from Start state to W ait F or Request state when it receives a request for service from the Input module. It immediately sends a synchronisation request to the Drone module. This request goes as a synchronisation event and is received by each drone that is available to serve a request. The request is broadcasted to all the available drones along with the location coordinates of the building from where the request is generated. Only after the drones have received an authentic request from the Server module, they proceed further to calculate linear distance in-order to mutually elect the nearest drone to serve the request. At this point, the server module waits until it receives a synchronisation serve! event from the drone that is chosen to serve the request. The drone which is chosen to serve the request sends a synchronisation action to the server module indicating that, the request generated is being served by one of the drones present in the environment. Only after receiving the serve! synchronisation, it transitions from W ait F or Communication state to Repeat Request state. During this transition, the time taken from the moment a request is sent and until it is accepted by the nearest drone is stored in a variable called time server. After this state, the server modules makes a transition back to the Start state to process and send any other request if available to the Drone module. This process continues repeatedly until there are no more service requests. 3) Drone Module The Drone Module defines the behavioral logic of drone architecture in the UAM model. There are many instances of the drone module that can be defined in the system declaration of UPPAAL editor environment. Fig. 5 below graphically shows the drone module in the UPPAAL editor window. Initially every drone is in the Start state. Once the drones are ready, they go to Ready state. While making the transition from Start to Ready,certain counters are initialized. The variable i in the module represents the identification number of the drone that is being referenced. Whilst in the Ready state, Fig. 4. Behavioral Model in UPPAAL for Server Module each drone waits for every other available drone and also waits for the Server to generate a request. Once the request has been generated, it is decrypted by each drone to check if the request is coming form authentic server or not and if proved, the drones make transition from Ready to Sensor Check state. 
The Sensor Check state is where each individual drone checks whether the various parameters are working normally and whether the drone is in good condition to fly and serve a request. If the sensors are normal and the drone is healthy enough to fly, it makes a transition to the Availability Check state. If any instrument or parameter is not working properly, the drone exits the loop by making a transition to the Report Error state. In the next state, each drone performs a linear distance calculation to compute its distance from the node where the request was generated. After calculating the distance, each drone updates a global list with its respective distance along with its specific identification number. After updating the distances, all drones mutually agree upon the drone which is closest to the requesting node and select that nearest drone to serve the request. Here the drones also perform a check for the principle of quasi-synchrony [15], [17], i.e., no drone should serve more than twice while others have not served once. This way, all the drones get a chance to serve requests even if they are not the closest to the requesting node. This process, where the drones mutually agree upon the one to serve the request without the interference of the server module or any other central module, makes the architecture distributed [17] and decentralized in nature. After completing all these steps and mutually selecting the drone to serve a request, all drones wait at the Make Decision state, where the decision is made by each individual drone according to the mutual agreement. The drone chosen to serve the request makes a transition to the Serving Request state, while the others make a transition to Ready To Serve. All the other drones are available again to serve any new request generated by the server module. The drone serving the request updates certain variables and makes itself unavailable for any new request. It also sends a serve synchronisation command to the server to indicate that the request generated by the server is being served. After serving the request, the drone calculates the total time taken to serve the request and makes itself available again to serve any new request. This process continues for each and every request generated at the server side. Every time a new request is generated, all the available drones perform sensor checks, an authenticity check, and the shortest-distance calculation. The drone closest to the requesting node is always chosen to serve the request, keeping in mind that the principle of quasi-synchrony is satisfied [15], [17]. A sketch of this mutual-selection rule is given after the Sensor Module description below. 4) Sensor Module The Sensor Module, as shown in Figure 6, consists of the Start and Get Sensor states. Every time a drone transitions from the Start state to the Ready state, a synchronisation event start is sent to the Sensor module. The Sensor module then synchronises and makes a transition from the Start state to the Get Sensor state. While transitioning, it gets the latest real-time sensor values, such as altitude, fuel, and temperature, of the respective drone that sent the synchronisation and returns them to the Drone module. These values are later used by the drone to check whether it is healthy enough to serve the request and whether all parameters are above the safe threshold limit. 
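The mutual-selection logic just described (eligibility checks, distance calculation, exchange of distances, deterministic agreement, and the quasi-synchrony fairness check) can be sketched as follows. This is a hedged Python illustration of the decision rule, not the Uppaal model itself; the exact eligibility checks, the 50% battery threshold, the tie-breaking on drone id, and the fairness threshold of two serves are assumptions made for the sketch.

```python
import math

def linear_distance(drone_xy, node_xy):
    """Euclidean distance from a drone to the requesting building/node."""
    return math.hypot(drone_xy[0] - node_xy[0], drone_xy[1] - node_xy[1])

def eligible(drone):
    """Local checks each drone runs before bidding (sensors, battery, authenticity)."""
    return drone["sensors_ok"] and drone["battery"] >= 50 and drone["request_authentic"]

def fairness_ok(candidate_id, serve_count):
    """Quasi-synchrony-style check: the candidate may not get two serves ahead of
    the least-served drone (the threshold of 2 is an assumption of this sketch)."""
    return serve_count[candidate_id] - min(serve_count.values()) < 2

def elect_server(drones, node_xy, serve_count):
    """Every drone evaluates this same deterministic rule on the shared distance
    list, so all of them independently reach the same decision."""
    bids = {d["id"]: linear_distance(d["xy"], node_xy)
            for d in drones if eligible(d)}
    # closest drone first; the drone id breaks ties deterministically
    for drone_id, _ in sorted(bids.items(), key=lambda kv: (kv[1], kv[0])):
        if fairness_ok(drone_id, serve_count):
            return drone_id
    return None  # no eligible drone, so the request stays pending

# Example with three drones and a request from a building at (10, 2).
drones = [{"id": i, "xy": xy, "sensors_ok": True, "battery": b, "request_authentic": True}
          for i, (xy, b) in enumerate([((0, 0), 90), ((8, 3), 80), ((20, 20), 95)], start=1)]
serve_count = {1: 0, 2: 0, 3: 0}
chosen = elect_server(drones, (10, 2), serve_count)   # drone 2 is closest
if chosen is not None:
    serve_count[chosen] += 1
```

Because every drone runs the same deterministic rule over the same exchanged data, agreement follows without a central coordinator, which is the property the verification requirements in the next subsection are meant to confirm.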
D. Formal Verification Requirements Uppaal allows requirements to be verified as modeled properties, which are useful for ensuring correctness and for detecting inconsistencies as well as flaws in the design, according to the proposed modeling and analysis framework for the UAM model. For example, Uppaal is capable of detecting whether there is a deadlock in the model, the results of which can further be used to find logical flaws in the behavior of the developed model. In this subsection, we present various requirements modeled as properties that one may want to verify with respect to the UAM model, and we also present meaningful insights into them along with a brief description of each. The verification helps to check for any inconsistencies or flaws that may be present in the behavioral logic. After identifying the flaws, they are corrected, and a consistent and robust UAM model is presented through this work. Requirement 1: The existence of deadlock within the system should be verified. This requirement is stated to check whether there exists any deadlock in the system. The requirement is modeled as a property in the UPPAAL verifier, which proves and presents a simulation trace of the state where a deadlock exists. After examining the particular case and scenario, we determine whether the deadlock reflects intended behavior or a fault. Similarly, for every specific system, this scenario will have to be examined to figure out whether the deadlock is necessary or whether it is a reflection of faults and inconsistencies in the system. For example, if the service provider does not want to provide any service during night time, then a deadlock at night indicates correct and consistent logic. Therefore, specific to the model, deadlocks have to be examined to see whether they are needed or whether the logic has to be changed in order to remove them. Requirement 2: All the drones present in the system shall be able to provide service at the same time. The requirement checks whether all the drones present in the model can be busy at the same time providing service to different requests, i.e., all of them are servicing three individual requests simultaneously. For our model, this requirement proves, indicating that there exists a path where, eventually, all three drones can be serving at the same time, which shows that each drone functions independently of the other drones, while the decisions are taken with mutual agreement. Requirement 3: Every request sent by the server shall be served. This requirement also helps to justify the distributed nature of each autonomous agent in the environment, which functions independently of the other agents. The requirement checks whether there exists a path where, when a request is sent by the server, it is always served by the available drones. This helps us to know that there are no neglected requests and that whenever a request is sent, it is served and not ignored. This requirement helps to verify whether any requested service is left unattended in the environment. Requirement 4: All the drones shall mutually agree upon who should be the service provider. This requirement, modeled as a property, proves, indicating that for all paths eventually, all the drones mutually agree on the drone that will provide the service. The global list selected drone contains the same elements, which tells us that the service provider has been chosen with mutual consent without the interference of any external or central server. This specific requirement helps to verify that, even though the proposed framework is distributed in nature, the drones take certain decisions by mutual consent. The above stated requirement is modeled as a property in the Uppaal verifier; a sketch of how such requirements can be phrased as Uppaal queries is given below. 
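The requirements above are passed to the Uppaal verifier as temporal-logic queries (A[] for invariance, E<> for reachability, --> for leads-to, A<> for inevitability). The snippet below collects plausible query shapes as Python strings; the template, location, and variable names (Drone1, Serving_Request, Wait_For_Communication, selected_drone, and so on) are assumptions about the model's declarations, not the paper's exact queries.

```python
# Hedged examples of Uppaal queries corresponding to the requirements above.
# Identifiers such as Drone1, Serving_Request and selected_drone are assumed
# template, location and variable names, not the paper's exact declarations.
UAM_QUERIES = {
    # Requirement 1: is the model deadlock-free?
    "no_deadlock": "A[] not deadlock",
    # Requirement 2: all three drones can be serving requests at the same time.
    "all_busy_possible":
        "E<> Drone1.Serving_Request and Drone2.Serving_Request and Drone3.Serving_Request",
    # Requirement 3: a broadcast request eventually leads to the server seeing it served.
    "request_served":
        "Server.Wait_For_Communication --> Server.Repeat_Request",
    # Requirement 4: all drones record the same service provider.
    "mutual_agreement":
        "A<> (selected_drone[0] == selected_drone[1] and selected_drone[1] == selected_drone[2])",
}
```

In practice each string would be entered in the Uppaal verifier pane (or passed to the command-line verifier); a failed query yields the kind of counterexample trace discussed in the results section.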
We get a simulation trace indicating that for all paths eventually, all drones are able to decide upon the drone that is closest to the requesting node or building. All drones individually update the global list shortest distance indicating the distance of the drone that is closest to the requesting building and is available to serve the request. Using the above stated requirement, we implemented the principle of quasi synchrony. As a result, we try to check, if there exists a path where either of the drones have served more than two times, while others have not served even once. This requirement modeled as a property keeps processing and does not indicate a yes or no since it is an unbounded system. This implies it is a liveliness property and hence it does not find a state where the following condition holds true. We run the execution for almost 11,700 states till we get server connection lost error and until that time it does not hold true. In a way this implies that there isn't any path where this property holds true (i.e principle of quasi synchrony holds till the time we don't lose connection with the server) but we cannot say that for sure. Yes, for all paths eventually, all drones check if the request is coming from the authorised server or not. The server while sending a synchronisation service request, sends an encrypted key along with it. Each drone individually decrypts the key and compares it with the existing shared key. Only if the request is authentic, it will be served by the available drones otherwise it will be ignored. The above requirement tries to find if there exists a path eventually where a drone is already in the process of serving a request, goes to serve a request again i.e an unavailable drones serves a new incoming request. The requirement modeled as property keeps on running for approximately 12,065 states until connection to the server is lost. This indicates that till 12,065 states, there is no state where the above condition holds true. To prove or disprove the property we need to consider a bounded automata. Therefore, for now we cannot say for sure if the above property is true since it keeps on running in search for a simulation trace without generating a counterexample. Property 9 : A drone with poor health shall not be chosen to serve an incoming request The above requirement tries to find if there exists a path eventually where the battery of a drone is less than 50% and it is chosen to serve the request. We assume a threshold of 50% and do not want any drone with a battery value of less than the threshold to serve a request. This threshold value can be changed if needed. The property keeps on running to find a path until the server connection is lost. We need to make the model bounded in order to prove the following liveliness property. We can say that for at-least 12,458 states there doesn't exist any such path, but cannot guarantee for the whole model since it keeps on running without generating a counterexample. Through this requirement we try to investigate if there exists a path eventually, where a drone whose sensor has been malfunctioned or is not working properly, is chosen to serve the request. The requirement stated as property keeps on running until server connection is lost indicating it is unable to find such path for the number of states it runs. We need to make the model bounded to accurately indicate if it holds true or not. 
As of now, we cannot say for sure that the property holds true for the whole model since it keeps on running infinitely without generating a counterexample. V. RESULTS This section evaluates the results of the various properties that are mentioned above. In general, we are able to verify that the distributed drones in the autonomous environment mutually agree and take decisions without the interference of any central server or module. Table 1 below evaluates the experimental results for each property. The first two columns of the Table 1 show the time taken (in seconds) by each property to execute and the total run-time memory (in megabytes) consumed. The next column indicates if the property proves or not. As we can see, some of the properties prove and some do not. Few properties keep running in loop until we get a server connection error. For these properties, we can't say for sure if they hold true or not since it is an unbounded system. The next column describes the number of states each property iterates through. Some properties that prove, iterate through all reachable states. If the verifier finds a counterexample for a particular property, it gives a simulation trace and indicates that the property does not hold true. These properties also iterate through all possible reachable states to look for a counterexample. The properties that keep on running without proving, iterate through many states as listed in the table until we get server connection error. The next column indicates if a simulation trace is generated while verifying a property. It is noteworthy that in Uppaal a simulation trace is generated when a property does not hold true i.e. the model checker finds a counterexample. Some properties which keep on running, do not generate any simulation trace and we get server connection error. An automated simulation trace is also generated when the "There exists (E<>)" property proves. Through this verification process, we are able to verify the formal behavioral logic and develop a model which is consistent and free of errors. During the verification process, a counterexample was generated along with a simulation trace which showed that the above stated property was not satisfied. As seen in Fig. 7 the two available drones (Drone1 and Drone2) are not initialized yet since they are at S0 Start state. The Server (Server1) receives request f rom building! synchronisation event from the Input Module (Input Request) indicating that a request for service has been generated, which needs to be sent to all the available drones. The server module then broadcasts the request to all the available drones available by sending a request! synchronisation. As observed, the broadcasted request! synchronisation is not received by the drones since they are still at S0 Start state and hence, the service request goes unattended. The property verification process helped to identify the flaw in the logic design that the request generated by the server would sometimes go unattended and will never be served. This identification of flaw led to redesign of the logic and later we were able to rectify the logic and were able to verify the above stated property. This specific counterexample shows how formal verification and formal model checking helps in identifying and removing flaws and inconsistencies in proposed logic during design time of complex automated systems. Fig. 7 depicts one among several counterexamples which we encountered during model checking process. 
The property stated during model checking intends to verify that all available drones in the system are ready to receive a request when the server sends it. Through this study, we proposed a formally verifiable framework to logically represent the behavior that must be satisfied by the components of the infrastructure required for distributed autonomous agents to successfully provide services. Through the property verification, we were able to prove that the distributed autonomous agents mutually agree without the interference of any central server or module. The autonomous agents are able to take decisions independently, and also in synchronisation when needed. The representation is formally verified and is free of flaws and inconsistencies. We plan to further extend this work by adding scalability and heterogeneity analyses to the present study, to see how heterogeneous autonomous systems behave in a distributed environment. We also plan to model and formally verify similar distributed models with other model checking tools, such as nuXmv and PRISM, to see how the results vary, and to further generalise our model. Finally, we envision mapping the logic from the formal model to a simulation environment.
10,539.2
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
A Fractional-Order Mathematical Model of Banana Xanthomonas Wilt Disease Using Caputo Derivatives : This article investigates a fractional-order mathematical model of Banana Xanthomonas Wilt disease while considering control measures using Caputo derivatives. The proposed model is numerically solved using the L1-based predictor-corrector method to explore the model’s dynamics in a particular time range. Stability and error analyses are performed to justify the efficiency of the scheme. The non-local nature of the Caputo fractional derivative, which includes memory effects in the system, is the main motivation for incorporating this derivative in the model. We observe varied model dynamics when checking various fractional-order values. Introduction Banana Xanthomonas Wilt (BXW), a destructive bacterial disease caused by the bacterium Xanthomonas campestris pv. musacearum (Xcm), has been identified as the major disease threatening banana farming in East Africa [1]. Vectors such as bats, birds, and flying insects (e.g., bees) spread the Xcm bacteria from an infected banana plant to a susceptible banana plant. The long-distance spread of Xcm is mainly caused by birds and bats [2]. The common symptoms of BXW are yellowing and withering of leaves, untimely ripening and rotting of the fruit, shriveling and blackening of the male gusset bloom, yellow drip presented on the cross-cut of the banana plant pseudo-trunk, and lastly, plant death [3,4]. The authors in [5] observed that community mobilization is key to BXW disease management. In [6], the authors analyzed the possibilities of removing the infected plant and leaving the uninfected plant to grow. The authors in [7] also found that removing infected plants from the mat as they appear is the best control, compared to removing the complete mat, which is costly, time-consuming, and requires more labor. In [8], the authors explored BXW control techniques in Rwanda. Several mathematical models have been derived by researchers to understand the transmission dynamics of BXW disease and provide possible control techniques. In [9], the authors derived a model to understand the transmission structure of the BXW epidemic by vectors with control measures. In [10], the authors proposed a non-linear model to analyze the role of contaminated measures in the reiteration of BXW. In [11], the researchers proposed a model considering roguing and debudding controls for the BXW transmission. Nakakawa et al. [12] considered the vertical and vector mode transmissions in the BXW dynamical model. Kweyunga et al. [13] developed a model of BXW considering both horizontal and vertical modes of transmission. In [14], the authors derived a non-linear model to analyze the role of neglected control measures in the BXW transmission. 
Nowadays, fractional calculus [15][16][17] is being applied to solve various real-world problems in terms of mathematical modeling. Different types of fractional derivatives [18,19] have been successfully used to model various problems. More specifically, several deadly epidemics have been modeled by using mathematical models in a fractional-order sense. It is a well-known fact that fractional-order operators are non-local in nature and may be more effective for modeling history-dependent systems. Moreover, a fractional order can be fixed as any positive real number that better fits the real data. So, by using such an operator, an accurate adjustment can be made in a model to fit with real data for better predicting the outbreaks of an epidemic. Recently, several applications of fractional derivatives have been recorded in epidemiology. In [20][21][22][23][24][25], the authors have studied the dynamics of the COVID-19 disease by using fractional-order models. In [26], the authors proposed the mathematical modeling of typhoid fever in terms of fractional-order operators. In [27], a fractional-order model of the Chlamydia disease is proposed. In [28], the dynamics of the Chagas-HIV epidemic model using various fractional operators are explored. In [29], the authors derived a novel non-linear model for the dynamics of tooth cavities in the human population. In [30], the authors performed an analysis of the stability and bifurcation of a delay-type fractional-order model of HIV-1. In [31], the authors solved a fractional-order HIV-1 infection of CD4+ T-cells model considering the impact of antiviral drug treatment. In [32], the authors defined a fractal-fractional model of the AH1N1/09 virus. In [33], the authors studied the dynamics of a fractional-order host-parasitoid population model describing insect species. In [34], the authors used a wavelet-based numerical method for a fractional-order model of measles using Genocchi polynomials. In [35], some theoretical analyses of the Caputo-Fabrizio fractional-order model for hearing loss due to the mumps virus with optimal controls were proposed. Several numerical methods have been proposed by researchers to solve fractional-order problems. In [36], the authors derived a new generalized form of the predictor-corrector (PC) scheme to investigate fractional initial value problems (IVPs). Kumar et al. [37] introduced a new method to simulate fractional-order systems with various examples. In [38], the PC method was derived to simulate delayed fractional differential equations. A modified form of the PC scheme in terms of the generalized Caputo derivative to solve delay-type systems has been introduced in [39]. Odibat et al. 
[40] have derived the generalized differential transform method for solving fractional impulsive differential equations. The authors in [41] introduced a novel finite-difference predictor-corrector (L1-PC) scheme to solve fractional-order systems in the sense of the Caputo derivative. In [42], the authors proposed a new form of the L1-PC scheme to solve multiple delay-type fractional-order systems. In [43], a novel numerical scheme to solve fractional differential equations in terms of Caputo-Fabrizio derivatives was proposed. In [44], the authors derived a difference scheme for the time-fractional diffusion equations. In [45], a second-order scheme for the fast evaluation of the Caputo-type fractional diffusion equations has been derived. In [46], the authors defined a fractional clique collocation method for numerically solving the fractional Brusselator chemical model. In [47], the researchers derived efficient matrix techniques for solving the fractional Lotka-Volterra population model. To date, the aforementioned studies of mathematical modeling of the BXW disease [9][10][11][12][13][14] have yet to be analyzed using fractional derivatives. In this paper, we generalize the non-linear control-based model of BXW [14] by using Caputo fractional derivatives. The motivation behind this generalization is that fractional derivatives are non-local and may be more effective at including memory effects in the model. The rest of this paper is designed as follows: In Section 2, some preliminaries are recalled. The model description in the Caputo sense is given in Section 3. The numerical analysis containing the solution algorithm, error estimation, and stability is given in Section 4. The graphical simulations are performed in Section 5. Concluding remarks are given in Section 6. Preliminaries The preliminaries are as follows: Definition 1. A real function f(s), s > 0, is said to belong to the space C_a, a ∈ ℝ, if there exists a real number p > a such that f(s) = s^p f_1(s), where f_1(s) ∈ C[0, ∞). Definition 2. [16] The Riemann-Liouville (R-L) fractional integral of order ω > 0 of f ∈ C_a, a ≥ −1, is defined as J^ω f(t) = (1/Γ(ω)) ∫_0^t (t − s)^(ω−1) f(s) ds, t > 0. Definition 3. [16] The R-L fractional derivative of order ω > 0 is given by D^ω f(t) = (d^m/dt^m) J^(m−ω) f(t), where m = [ω] + 1 and [ω] is the integer part of ω. Definition 4. [16] The Caputo fractional derivative of f of order ω, with m − 1 < ω < m, is defined as ^C D^ω f(t) = (1/Γ(m − ω)) ∫_0^t (t − s)^(m−ω−1) f^(m)(s) ds. Remark 1. The most common difference between the R-L and Caputo fractional derivatives is that R-L derivative problems contain fractional initial conditions, whereas Caputo's definition uses classical conditions. Also, the derivative of a constant function is zero by the Caputo derivative but not by the R-L definition. 
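To make Definition 4 concrete numerically, the sketch below evaluates the Caputo derivative of a sampled function with the classical L1 finite-difference approximation on a uniform grid, which is also the building block the L1-based predictor-corrector scheme of Section 4 relies on. This is the textbook L1 formula, not the full scheme of [41], and the test function and step size are chosen only for illustration.

```python
import math

def caputo_l1(values, h, omega):
    """L1 approximation of the Caputo derivative of order 0 < omega < 1 at the
    last grid point, given samples values[k] = f(k*h), k = 0..n."""
    n = len(values) - 1
    # classical L1 weights b_j = (j+1)^(1-omega) - j^(1-omega)
    b = [(j + 1) ** (1 - omega) - j ** (1 - omega) for j in range(n)]
    acc = sum(b[j] * (values[n - j] - values[n - j - 1]) for j in range(n))
    return acc * h ** (-omega) / math.gamma(2 - omega)

# Sanity check on f(t) = t^2, whose Caputo derivative is 2 t^(2-omega) / Gamma(3-omega).
omega, h, T = 0.8, 0.001, 1.0
grid = [k * h for k in range(int(T / h) + 1)]
approx = caputo_l1([t ** 2 for t in grid], h, omega)
exact = 2 * T ** (2 - omega) / math.gamma(3 - omega)
# approx and exact agree to a few decimal places for this step size
```

The quadratic test function is convenient because its Caputo derivative is known in closed form, so the convergence of the L1 weights can be checked directly by shrinking h.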
Model description Here, we define the Caputo-type fractional-order generalization of a BXW disease model, including some control measures, which was given in [14]. We know that fractional derivatives are non-local differential operators that allow memory effects in the system, which is a very important feature for studying disease outbreaks more accurately. The model contains two population sizes: the banana population (N_p) and the insect vector population (N_v). The population of banana plants involves three different classes: susceptible plants (S_p), asymptomatic infectious plants (A_p), and symptomatic infected plants (I_p). The population of vectors involves two classes: susceptible vectors (S_v) and vectors contaminated with Xcm bacteria (I_v). The environment contaminated with Xcm bacteria is denoted by E_b. The model is a system of six coupled Caputo fractional differential equations in (S_p, A_p, I_p, E_b, S_v, I_v), written with the control parameters ξ, δ, and ψ (system (2)), subject to the initial conditions S_p(0) ≥ 0, A_p(0) ≥ 0, I_p(0) ≥ 0, E_b(0) ≥ 0, S_v(0) ≥ 0, I_v(0) ≥ 0, where ^C D^ω is the Caputo fractional derivative operator of order ω. To keep the same dimension t^(−ω) on both sides of the fractional-order model, the power ω is applied to those parameters whose unit is t^(−1). The compartmental diagram of the model is given in Figure 1, and the model parameters with numerical values are defined in Table 1. The positivity and boundedness of the model solution can be explored by considering the invariant region of the model: summing the plant equations of the system bounds the total plant population N_p, and summing the vector equations bounds N_v, which yields invariant regions for N_p and N_v. Considering the aforementioned non-negative initial conditions, the proposed model (2) is positively invariant, and solutions remain positive and bounded in this region. The disease-free equilibrium E_0 of the model (2) is the equilibrium in which only the susceptible compartments are non-zero, E_0 = (S_p^0, 0, 0, 0, S_v^0, 0). Solution existence Here, we check the existence and uniqueness of the solution with the application of some well-known mathematical results. In this regard, consider the above given IVP written abstractly as ^C D^ω ζ(t) = Φ(t, ζ(t)) with ζ(0) = ζ_0 (equation (17)), and consider the Volterra integral equation of the given IVP (equation (18)). Using the iterative scheme methodology on the non-linear kernel Φ, a sequence of successive approximations is defined. Theorem 1. The given IVP in equation (17) has a unique solution under the contraction condition for Φ. Proof. From equation (18), the successive iterations are formed and their norms estimated; as n → ∞, the right-hand side of the resulting bound (19) converges to zero, which gives the existence of the solution ζ(t). Now, for the uniqueness, consider two different solutions ζ(t) and ζ_1(t); the contraction property forces their difference to vanish. Hence, there exists a unique solution for the proposed IVP (17), and therefore the proposed fractional-order model (2) has a unique solution. For the stability result, suppose the kernel satisfies the stated growth condition with ρ(t) a positive, non-decreasing, and continuous function. Then there exists a solution ζ* of equation (17) that remains within a bound involving the positive constant δ. Proof. We define a metric d on a suitable function space and define an operator associated with the Volterra form; this operator is strictly contractive, which can be seen because ρ is non-decreasing. By using Theorem 2, there is a solution ζ* of IVP (17) close to the approximate solution. Hence, the solution of the proposed model is stable. 
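For reference, the abstract initial value problem used in the existence and stability arguments above, and the Volterra integral equation it is equivalent to (equations (17) and (18) in the text), have the classical form shown below for 0 < ω ≤ 1. This is the standard Caputo IVP/Volterra equivalence written in generic notation, not a reconstruction of the paper's exact displays.

```latex
% Caputo IVP and its equivalent Volterra integral form (0 < \omega \le 1)
\begin{aligned}
{}^{C}D^{\omega}\zeta(t) &= \Phi\bigl(t,\zeta(t)\bigr), \qquad \zeta(0)=\zeta_{0},\\[4pt]
\zeta(t) &= \zeta_{0} + \frac{1}{\Gamma(\omega)}\int_{0}^{t}(t-s)^{\omega-1}\,
            \Phi\bigl(s,\zeta(s)\bigr)\,\mathrm{d}s .
\end{aligned}
```

The iterative (Picard-type) scheme on the kernel Φ mentioned above is obtained by repeatedly substituting the right-hand side of the integral form into itself, and the contraction estimate in Theorem 1 is carried out on exactly this integral operator.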
Numerical analysis on the model In this section, we perform the necessary numerical analysis (solution derivation, error estimation, and stability) to derive the solution of the proposed fractional-order model (2) by using the L1-PC scheme [41]. Derivation of the solution According to the L1-PC method, the Caputo fractional derivative of ζ on a uniform grid t_k = kh is numerically approximated by the L1 formula (33), ^C D^ω ζ(t_k) ≈ (h^(−ω)/Γ(2 − ω)) Σ_{j=0}^{k−1} a_j [ζ(t_{k−j}) − ζ(t_{k−j−1})], with weights a_j = (j + 1)^(1−ω) − j^(1−ω). We approximate ^C D^ω ζ(t) by the formula (33) and substitute it into the IVP (32) to get a discrete relation in which ζ_k denotes the approximate value of the solution of (32) at t = t_k (equation (34)). After rewriting the terms of (35) and using the properties of the weights a_k recorded in (38), equations (37) and (38) take the form (39), which has the structure of a fixed-point equation for ζ_n. Hence, using the scheme of the Daftardar-Gejji-Jafari (DGJ) method gives an approximate value of ζ_n (equation (40)). This approximated solution of the DGJ scheme yields the predictor-corrector algorithm called the L1-PC method. Using the above methodology, the approximation equations of the proposed model (2) in terms of the L1-PC method are derived for each of the six compartments. Error analysis A brief analysis of the error estimation of the L1-PC scheme has been given in the studies [41,49,50] and is investigated below. The error estimate bounds the difference between the exact and approximate solutions in terms of the step size h and a positive constant C that depends on ω and ζ. The residual r_n is derived from (44) and estimated using the lemmas given below. Lemma 1. [51] For 0 < ω < 1, the weights a_k given in equation (38) are positive and decreasing. Lemma 2. Let ζ be the approximate solution calculated from the algorithm (40). Then, for 0 < ω < 1, the approximation error is controlled in terms of the weights a_k given in equation (38). Lemma 3. [41] Using Lemma 3 in equation (48), we obtain the final error bound, where C_1 is the constant defined above. Stability analysis Here, h is the step size given in equation (32). Using the discrete form of Gronwall's inequality and equation (38), a perturbation of the data propagates through the scheme with a bounded amplification, where c is a constant; using (50) and (51) in (49) then gives the stability bound, where C is a constant. Graphical simulations In this section, we perform the graphical simulations to understand the behavior of the proposed model in the time range t ∈ [0, 20]. The initial conditions are used as follows: S_p(0) = 4,000, A_p(0) = 500, I_p(0) = 200, E_b(0) = 500, S_v(0) = 3,500, and I_v(0) = 500. The parameter values are taken from Table 1, along with the control measures: the participatory community education programs (ξ = 0.7), vertical transmission control (δ = 0.6), and the clearance of Xcm bacteria in the soil (ψ = 0.5). In Figure 2, the variations in the susceptible plants S_p and susceptible vectors S_v are plotted at the fractional-order values ω = 0.9 and ω = 0.8, along with the integer-order case ω = 1. Here, we notice that as the fractional order decreases, the susceptible plant and vector populations also decrease. In Figure 3, the changes in the populations of asymptomatic infected plants A_p and infected plants I_p are plotted at the same orders: ω = 1, 0.9, and 0.8. Here, we notice the variations at the given fractional orders after the time range [0, 5]. Within the time range [5, 20] months, when the fractional order decreases, the infection slightly increases. 
In Figure 4, the variations in the Xcm bacteria in the soil Eb and the infected vectors Iv are plotted at the given fractional-order values. From Figure 4(b), we notice that, toward the end of the time range at t = 20, all fractional-order outputs nearly converge.
From the given graphical simulations, we notice that the fractional-order values result in variations in the behavior of the model dynamics. Such effects cannot be captured by using integer-order derivatives, which justifies the advantage of fractional derivatives. The graphs are plotted using MATLAB R2021a.
Conclusion
A fractional-order mathematical model of the BXW disease using Caputo derivatives has been considered in this study. The proposed model has been numerically solved using an L1-based predictor-corrector scheme. The stability and error analysis of the proposed method has been established to justify the efficiency of the scheme. The graphical simulations confirmed that fractional-order values produce variations in the model dynamics that cannot be captured by the integer-order model. In the future, other fractional-order operators can be incorporated to analyze the proposed model's dynamics, and further fractional-order models can be proposed to forecast the outbreaks of BXW.
In the scheme (40), zn denotes the predictor and ζn^c the corrector, and Λn(t) denotes the error term, with Λn(t) → 0 as n → ∞.
Theorem 5. Suppose Φ(t, ζ) satisfies the Lipschitz property with respect to the variable ζ with a constant L, and let the approximate solution be established from the scheme (40); then the scheme (40) is stable.
Figure 1. Compartmental diagram of the model (flow labels include bv, αpAp, (αp + d + r)Ip, (1 − ξ)βaSp, qAp, (1 − δ)θIp, ϕIp, μvIv, μvSv, αγ3Sv, and (ψ + μb)Eb).
Figure 2. Variations in the susceptible populations Sp and Sv at the fractional-order values ω.
Figure 3. Variations in the infected plant populations Ap and Ip at the fractional-order values ω.
Figure 4. Variations in the contaminated environment Eb and infected vectors Iv at the fractional-order values ω.
Figure 5.
Table 1. Parameters with numerical values [14]: rate of recruitment of susceptible suckers; birth rate of susceptible vectors; rate of harvesting of old plants; vertical transmission rate from an infected plant; removal rate of infected plants; death rate caused by BXW; rate of infection caused by contaminated farming measures from asymptomatic infected plants; rate of infection caused by contaminated farming measures from symptomatic infected plants; contact rate between vectors and banana plants; probability of Xcm bacteria transmission from an infected vector to a susceptible plant when in contact; probability of Xcm bacteria transmission from contaminated soil to a susceptible plant; probability of Xcm bacteria transmission from an infected plant to a susceptible vector; death rate of the vectors; rate of recovery of infected vectors; transition rate from the asymptomatic infectious class to the symptomatic infectious plant class; spreading rate of Xcm bacteria from symptomatic infectious plants to the soil; half-saturation constant of Xcm bacteria in the environment; rate of natural clearance of bacteria in the environment.
4,011
2024-01-09T00:00:00.000
[ "Mathematics", "Agricultural and Food Sciences" ]
Cross-Layer Throughput Optimization in Cognitive Radio Networks with SINR Constraints Recently, there have been some research works in the design of cross-layer protocols for cognitive radio (CR) networks, where the Protocol Model is used to model the radio interference. In this paper we consider a multihop multi-channel CR network. We use a more realistic Signal-to-Interference-plus-Noise Ratio (SINR) model for radio interference and study the following cross-layer throughput optimization problem: (1) Given a set of secondary users with random but fixed location, and a set of traffic flows, what is the max-min achievable throughput? (2) To achieve the optimum, how to choose the set of active links, how to assign the channels to each active link, and how to route the flows? To the end, we present a formal mathematical formulation with the objective of maximizing the minimum end-to-end flow throughput. Since the formulation is in the forms of mixed integer nonlinear programming (MINLP), which is generally a hard problem, we develop a heuristic method by solving a relaxation of the original problem, followed by rounding and simple local optimization. Simulation results show that the heuristic approach performs very well, that is, the solutions obtained by the heuristic are very close to the global optimum obtained via LINGO. Introduction Cognitive radio technology [1][2][3] provides a novel way to solve the spectrum underutilization problem.In cognitive radio (CR) networks, there are two types of users: primary users and secondary users.A primary user is the rightful owner of a channel, while a secondary user periodically scans the channels, identifies the currently unused channels, and accesses the channels opportunistically.The secondary users organize among themselves an ad hoc network and communicate with each other using these identified available channels.As a result, a multihop multichannel CR network is formed.How to efficiently share the spectrum holes among the secondary users, therefore, is of interest. In this paper, we are interested in studying the opportunistic spectrum sharing problem among the secondary users, but our concern is on a cross-layer design of spectrum sharing and routing with SINR constraints.The main issues we are going to address include the following. (1) Given a set of secondary users with random but fixed location, and a set of traffic flows, what is the max-min achievable throughput? (2) To achieve the optimum, how to choose the set of active links, how to assign the channels to each active link, and how to route the flows? There have been some research works on cross-layer protocols in CR networks.Hou et al. 
[4] characterized the behaviors and constraints for a cognitive radio network from multiple layers, including modeling the spectrum sharing and subband division, scheduling and interference constraints, and flow routing.Shi and Hou [5] developed a formal mathematical model for scheduling feasibility under the influence of power control; the formulation is a crosslayer design optimization problem encompassing power control, scheduling, and flow routing.Subsequently, on the basis of the work in [5], Shi and Hou [6] implemented their cross-layer optimization framework in a distributed manner and compared the performance of the distributed optimization algorithm with the upper bound and validated the efficacy.The work in [4][5][6] assume that the links are unidirectional, and to avoid collision only the designated receiver is need to be out of the interference caused by another transmitter.Ma and Tsang [7] proposed a crosslayer design on spectrum sharing and power control, where bidirectional links were considered and all nodes were operated at an optimal common power level at which the total spectrum utilization is maximized.Ma and Tsang [8] also proposed a cross-layer design on spectrum sharing and routing, where the channel heterogeneity (which is a unique feature for cognitive radio) was considered and modeled. In the previous work, however, a common limitation exists since all such cross-layer protocols [4][5][6][7][8] are designed on the basis of the Protocol Model for radio interference, where the interference range is assumed to be limited and no interference is caused beyond the interference range.As a result, in the Protocol Model the conflict relationships among the wireless links are binary.However, in reality the aggregate interference of a large number of far transmitters could be significant and may cause interference on a receiver, and a near transmitter may not necessarily cause interference on a receiver if the transmitter properly controls its transmission power.Therefore, a definite criticism of the Protocol Model is that interference is not a binary relationship [9][10][11]. In order to solve the above realistic problems, the Signalto-Interference-plus-Noise Ratio (SINR) model is adopted.The rationale of SINR model is to compare the SINR with the additive interference calculation at the receiver with a threshold.Some researchers have adopted the SINR model when they consider the link scheduling, power control, or throughput improvement and and so forth. in wireless networks.For example, Brar et al. [11] investigated throughput improvements in wireless mesh networks by replacing CSMA/CA with an STDMA scheme where transmissions were scheduled according to the SINR model.Chafekar et al. [12] studied a cross-layer latency minimization problem in wireless networks with SINR model for interference.Behzad and Rubin [13] developed a new mathematical programming formulation for minimizing the schedule length in multihop wireless networks while meeting the requirements on the SINR at intended receivers. 
In this paper, we consider a multihop multi-channel CR network.We adopt the (more realistic) Signal-to-Interference-plus-Noise Ratio (SINR) model to study the wireless channel interference.Different from the work in [4][5][6], we consider the links being bidirectional because we believe the link level acknowledgments in an ad hoc network are a must.We propose a cross-layer optimization framework which jointly considers the spectrum sharing and routing with SINR constraints.The optimization problem is in the forms of a mixed integer nonlinear programming (MINLP) and the objective is to maximize the minimum end-to-end flow throughput.Since the MINLP formulation is NP-hard in general, we present a heuristic methodology by solving a relaxation of the original problem, followed by rounding and simple local optimization.Simulation results show that the heuristic approach works very well; that is, the solutions obtained by the heuristic are very close to the global optimum obtained via LINGO [14]. The rest of this paper is organized as follows.In Section 2, we describe the assumptions and system model.Section 3 introduces two interference models: one is protocol model and the other is SINR model.Section 4 presents the crosslayer design of spectrum sharing and routing with SINR constraints, and the formulation is in the forms of a mixed integer nonlinear programming (MINLP) problem.The heuristic approach is proposed in Section 5 to solve the MINLP problem.Section 6 presents the simulation results.Finally, Section 7 concludes the paper. Assumptions and System Model We consider a cognitive radio (CR) network with n secondary users, denoted by the set V and the cardinality |V | = n.There are M orthogonal channels in the network, denoted by the set C and the cardinality |C| = M.Each secondary user individually detects the available channels, and the set of available channels that can be used for communication is different from node to node.Let C i denote the set of available channels observed by node i, and we have Each secondary user i (where 1 ≤ i ≤ n) has a programmable number of radio interfaces, denoted by γ i .We assume that the radio interface is able to tune in a wide range of channels, but at a specific time each radio interface can only operate on one channel [15]. Static Node Location with a Centralized Server.We assume that the node locations are static.We also assume the set of available channel at each secondary user is static.This corresponds to the applications with a slow varying spectrum environment (e.g., TV broadcast bands).We assume that there exists a centralized server in the CR network.Each secondary user reports its location and the set of available channels to the spectrum server.The spectrum management and flow routing, therefore, is simple and coordinated.Note that the formulations in the work [4,5,8] are also centralized and for static scenario (i.e., both node location and set of available channels at each node are static).Table 1 lists the notations used in this paper. Bidirectional Links. We consider bidirectional links, rather than unidirectional links, due to two reasons [16]. (1) Wireless medium is lossy.We cannot assume that a packet can be successfully received by a neighbor unless the neighbor acknowledges it.In an ad hoc network, the link level acknowledgments are necessary. (2) Medium access controls such as IEEE 802.11 implicitly rely on bi-directionality assumptions.For example, a RTS-CTS exchange is usually used to perform virtual carrier sensing. 
Thus, if node i can transmit data to node j and vice versa, then we represent this by a (bidirectional) link, denoted by Source node for session q d(q) Destination node for session q f q i, j Traffic flow from i to j for session q f q j,i Traffic flow from j to i for session q e = (i, j), between node i and node j.Moreover, we let C e denote the set of available channels for the link e, and we have C e = C i ∩ C j . Common Transmission Power. According to the study by Narayanaswamy et al. [16], to ensure that links are bidirectional, the simplest approach is to assume that nodes are homogeneous; that is, nodes transmit at the same power.In this paper we assume that each secondary user is equipped with an omnidirectional antenna.Similar to [16], we also assume that each secondary user transmits at the same power.Note that this assumption is used in [4] as well. A bidirectional link, denoted by e = (i, j), can be established between nodes i and j if there exists a transmission power P under which the Signal-to-Noise Ratio (SNR) in the absence of cochannel interference at nodes i and j is not less than a threshold β, that is, where β is signal-to-noise Ratio (SNR) threshold, G i j (and G ji ) denotes the channel propagation gain from i to j (and from j to i), and N j (and N i ) denotes the noise power at node j (and node i).Since it has been commonly assumed that G i j is equal to G ji [4][5][6][7][8][17][18][19][20][21], and N i is equal to N j , we make the same assumptions here and thus we have We let E i denote the set of links incident on node i, which can be obtained by Let E denote the union of E i ; we have As a result, we obtain an undirected connectivity graph G = (V , E) to represent the CR network, where V is the set of secondary users denoted by the vertices of the graph, and E is the set of edges between two vertices (i.e., secondary users). The Interference Model In wireless networks, there are three types of interference: duplexing interference, primary interference, and secondary interference.In this paper, we assume that links using different channels do not interfere with each other.Interference only occurs among the links sharing the same channel. The duplexing interference constraint [22] only prohibits any node from simultaneously transmitting and receiving on any frequency band (i.e., the case in Figure 1 is not allowed). The primary interference constraint prohibits any node from simultaneously transmitting or receiving on any band (i.e., neither case in Figures 2(a), 2(b) nor 2(c) is allowed).In other words, links that shared a common node cannot transmit or receive simultaneously on any channel.Obviously, the duplexing constraint is less stringent than the primary interference constraint.And also, the duplexing and primary interference constraints are applicable to the links which share a common node (see Figures 1 and 2), and particularly, these constraints hold irrespective of the interference model. The secondary interference constraint, on the other hand, further prohibits any node from transmitting when a neighbor node within its interference range is receiving from another node.Different from the duplexing and primary interference constraints, the secondary interference constraints are applicable to those links which do not share a common node (see Figure 8 shown in the appendix for better understanding). 
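As a concrete illustration of how the connectivity graph G = (V, E) and the per-link channel sets C_e = C_i ∩ C_j described above could be built, the following Python sketch assumes the isotropic path-loss model G_ij = (d0/d_ij)^η, a common transmit power, and symmetric gains. The function name and the default values (3 mW transmit power, −100 dBm noise, η = 4, d0 = 0.1 m, β = 2.3 dB, taken from the simulation settings reported later in the paper) are illustrative only, not a prescribed implementation.

import math

def build_links(nodes, channels, P=3e-3, beta_db=2.3, noise_dbm=-100.0, eta=4, d0=0.1):
    """Enumerate bidirectional links e = (i, j) satisfying the SNR condition
    P * G_ij / N >= beta under a common transmit power, and attach the
    per-link channel set C_e = C_i intersect C_j to each link.
    `nodes` maps node id -> (x, y); `channels` maps node id -> set of channels."""
    beta = 10 ** (beta_db / 10.0)                    # dB -> linear
    noise = 10 ** (noise_dbm / 10.0) * 1e-3          # dBm -> watts
    links = {}
    ids = sorted(nodes)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = ids[a], ids[b]
            (xi, yi), (xj, yj) = nodes[i], nodes[j]
            d = math.hypot(xi - xj, yi - yj)
            gain = (d0 / d) ** eta                   # symmetric: G_ij = G_ji
            if P * gain / noise >= beta:             # SNR threshold in both directions
                c_e = channels[i] & channels[j]      # C_e = C_i ∩ C_j
                if c_e:
                    links[(i, j)] = c_e
    return links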
For the purpose of modeling the secondary interference, there are two models [23, 24]: the Protocol Model and the Physical Model. Since the rationale of the Physical Model is based on the SINR at the receiver, we refer to the Physical Model as the SINR model hereinafter. The relationship of the three types of interference is shown in Figure 3.
3.1. The Protocol Model. Gupta and Kumar proposed the Protocol Model [23], which implicitly assumed that links are unidirectional. With this assumption, collisions only occur when the designated receiver is interfered with by another transmitter. Basically, the Protocol Model assumes that the interference range is limited and no interference will be caused beyond the interference range. We let r_i and R_i denote the transmission range and interference range of any node i, respectively; then we have R_i = (1 + Δ)r_i, where Δ is the guard zone to prevent a neighboring node from transmitting on the same channel at the same time [23]. The Protocol Model claims that a transmission from node i to node j is successful if and only if any node k which may cause interference on node j (i.e., if d_kj ≤ R_k, where d_kj denotes the distance between k and j) is not simultaneously transmitting.
A more realistic version, however, assumes that the IEEE 802.11 MAC is employed and thus the links are bidirectional (due to the RTS-CTS and ACK exchange). We usually call this version the 802.11-style Protocol Model [7, 8]. Suppose that link e = (i, j) and link e' = (k, h) are established, both are bidirectional, and they are active on a same channel. The 802.11-style Protocol Model states that a transmission on link e between nodes i and j is successful if and only if, for any such link e' = (k, h), neither of its endpoints lies within the interference range of node i or node j. Note that the Protocol Model leads to binary conflict relationships among the wireless links. In other words, any two links either interfere with each other or can be active simultaneously, regardless of the other ongoing signal transmissions.
3.2. The SINR Model. As we mentioned before, in reality the aggregate interference of a large number of far transmitters could be significant and may cause interference at the receiver, and a near transmitter may not necessarily cause interference at the receiver if the transmitter properly controls its transmission power. Thus, the main limitation of the Protocol Model is that interference is not a binary relationship. These problems can be overcome by means of the SINR model, whose rationale is as follows.
3.2.1. Unidirectional Links. Before we consider bidirectional links, let us first consider unidirectional links. For clarification, we let e = i → j and e' = k → h denote two unidirectional links and suppose that they are active on a same channel. The transmission from node i is successfully received by node j if and only if the SINR at the receiving node j is not less than a threshold β, where I_j denotes the interference power at node j. To calculate I_j, we sum over all the links {e' = k → h} that have transmissions concurrent with link e on the same channel. The SINR model accurately captures the fact that interference is caused by the aggregate effect of the simultaneously active links. 3.2.2. Bidirectional Links.
Next, we extend the SINR model from unidirectional link to bidirectional link.To distinguish from the unidirectional links, we let e = (i, j) and e = (k, h) denote two bidirectional links and suppose that they are active on a same channel.Because the interference raised by node k and node h might be different, we need to choose the maximum one.To ensure the transmission on link e between nodes i and j to be successful, the SINR at both nodes i and j is not less than a threshold β.We also need to sum all the links {e = (k, h)} that have simultaneous transmissions with link e on a same channel.To the end, we obtain Note that the SINR model is more accurate than the Protocol Model since it better captures the physical propagation.Moreover, in the SINR model a correct packet reception is allowed even in the presence of (moderate) interference, and the cumulative character of interference is taken into account.The main drawback of this model lies in its high complexity, as the interference is described as the complex mathematical relationships. Cross-Layer Design of Spectrum Sharing and Routing In this section, we present a cross-layer optimization framework which jointly designs the spectrum sharing and routing.Spectrum sharing can be done either in time domain or in frequency domain.In this paper, we consider frequency domain channel assignment.Spectrum sharing is to determine which link is going to be active and which channel will be assigned to each active link, and our target is to form a conflict-free topology.Routing is to determine which path each traffic flow is going to travel from the source node to the destination node.We allow multipath for each traffic flow.Different from the previous work, in this paper, we adopt the SINR model for radio interference and consider links being bidirectional. 4.1.Link Assignment.We say that link e is active only if there is a transmission on channel m over link e.We define a 0-1 binary variable x m e as follows: x m e = ⎧ ⎨ ⎩ 1 if link e is active on channel m, 0 otherwise. (8) Interference Constraints. In this paper, we consider both primary and secondary interference constraints.We term the secondary interference constraints as the SINR constraints hereinafter in this paper, since we use the SINR model to model them. In the remaining part of this subsection, we let e = (i, j) denote a link and let e = (k, h) denote another link.Both links are active and use a same channel m for transmission. Primary Interference Constraints . By using a same channel, each node can either transmit or receive but not both, at a given time.In other words, links that share a common node cannot transmit or receive simultaneously on any channel. For ease of presentation and also for notational convenience, each link is also understood as a set of two nodes; then we define Clearly, we use the notation e ∩ e = ∅ to denote that the two links e and e do not share a common node, and the notation e ∩ e / = ∅ to denote that the two links e and e share a common node. 
Thus, the primary interference constraint can be expressed as follows: where E m contains all links that have transmissions concurrent with link e by using channel m.Similar to [13], we introduce a sufficiently large positive number Υ in the constraint (11), where the constraint becomes "redundant" when link e is not active (i.e., x m e = 0).Notice that we only sum the interference caused by those active links because link e will not cause any interference on link e whenever link e is not active (i.e., x m e = 0).And also note that when we calculate the interference caused by the active link e = (k, h), we choose the maximum interference caused from either node k or node h to either node i or node j (due to bidirectional link). Node-Radio Constraint. A node can establish multiple links with its neighboring nodes if it can tune each of its radio interface to a different channel.However, the number of established links at each node is constrained by the number of its radio interfaces.This leads to the following constraint: 4.4.Multipath Routing Constraints.We consider multiple traffic flows in the network.We term the traffic flow for each source-destination pair as a communication session and use q (q = 1, 2, . . ., Q) to index each session.Let s(q) and d(q) represent the source node and destination node for session q.Because the links are bidirectional, the traffic flow on each link can be in either direction.Thus, for any link e = (i, j), we let f q i, j (and f q j,i ) denote the traffic flow traveling from i to j (and from j to i) for the session q, where (i, j) ∈ E, i / = j.For each traffic flow, we allow multi-path routing. Our definition of the maximum throughput is maxmin flow rate [25].That is, our target is to maximize the minimum end-to-end flow throughput that can be achieved International Journal of Digital Multimedia Broadcasting in the network.Therefore, the multi-path routing constraints are listed as follows: where T is the minimum end-to-end throughput for every session. The constraint (13) restricts the amount of flow on each link to be nonnegative.The constraint (14) states that at each node, except the source node and destination node, the amount of incoming flow is equal to the amount of outgoing flow.The constraint (15) represents that the minimum outgoing flow from each source node is at least T. The constraint (16) states that the minimum incoming flow to the destination node is at least T. The constraint (17) indicates that the sum of the flows over all sessions traversing a link cannot exceed the link capacity. To calculate the link capacity, we let W denote the bandwidth of each channel, and let B m e denote the capacity of link e by using channel m.Assuming Gaussian noise and interference, we have 4.5.Problem Formulation.We aim to maximize the minimum end-to-end throughput, and this optimization problem can be formulated as max T (19) Subject to: x m e + x m e ≤ 1 (m ∈ C e ∩ C e , e ∩ e / = ∅, e / = e, e ∈ E, e ∈ E), (21) e∈Ei m∈Ce x m e ≤ γ i (i ∈ V ), ( 23) where γ i is constant and B m e can be obtained by (18).x m e (binary integer) and f q i, j and f q j,i (rational number) are decision variables.The objective function is a linear function; however, (28) is a nonlinear constraint.The optimization problem is in the form of mixed integer nonlinear programming (MINLP) problem and can be solved by LINGO. 
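To make the SINR constraint and the link-capacity expression (18) above more tangible, here is a small Python sketch that evaluates, for one candidate set of links active on the same channel, the worst-direction SINR of a bidirectional link and a Shannon-type capacity W log2(1 + SINR). Treating the bidirectional link conservatively through its worse direction, and the helper names themselves, are our own assumptions rather than the paper's notation.

import math

def worst_sinr(link, active_same_channel, gains, P, noise):
    """SINR of bidirectional link e = (i, j): the smaller of the two endpoint SINRs,
    with interference from each concurrent link e' = (k, h) taken as the stronger
    of its two endpoints (the max in the constraint above).
    `gains` maps ordered pairs (tx, rx) to propagation gains."""
    i, j = link
    def sinr_at(rx, tx):
        interference = sum(max(gains[(k, rx)], gains[(h, rx)]) * P
                           for (k, h) in active_same_channel if (k, h) != link)
        return P * gains[(tx, rx)] / (noise + interference)
    return min(sinr_at(j, i), sinr_at(i, j))

def link_capacity(link, active_same_channel, gains, P, noise, W):
    """Shannon-type capacity of the link on this channel, using the worse direction."""
    return W * math.log2(1.0 + worst_sinr(link, active_same_channel, gains, P, noise))

A candidate channel assignment is then feasible on that channel when worst_sinr(...) >= beta for every active link, which mirrors the SINR constraint used in the formulation.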
Finding the Optimal Solution by Heuristic We are interested in finding the optimal solution under which the minimum end-to-end flow throughput is maximized.However, the original problem is in the form of MINLP since the constraint (28) is nonlinear due to the logarithm function.But if we investigate the formulation more carefully, we find that the complexity of the MINLP problem formulation does not lie in the nonlinear logarithm function in constraint (28).Instead, the complexity comes from the existence of the binary variable x m e .The reason is that as long as the values of x m e are determined (i.e., the set of active links and the channel assignment on each active link are determined), then this MINLP reduces to an LP, which can be solved in polynomial time.To this end, we develop a heuristic method by solving a relaxation of the original problem, followed by rounding and simple local optimization [26].constraints ( 21)-( 28). ( That is, we allow the variables x m e to take values between 0 and 1.The relaxed problem can be solved in polynomial time.By solving the relaxed problem, we obtain an upper bound of the optimal value of the original problem, and we let X * denote the relaxed solution that produces the upper bound. Getting Independent Sets. In order to determine the set of active links and form a conflict-free topology, we need to obtain the independent sets (i.e., the set of links that can be simultaneously active on a channel).The reason is that we can significantly speed up the search process by combining the independent sets together with the rounding and local optimization (introduced below). To obtain the independent sets, Karnik et al. [25] proposed a smart enumerative technique.In this paper, we extend this technique into a more general case from the following two aspects.(1) In [25], all nodes are assumed to transmit at a single channel.But in our formulation we consider a more realistic scenario where each node (i.e., secondary user) is able to access a set of available channels, and especially, the set of available channels is different from node to node.(2) In [25] the links are unidirectional, but in our formulation the bidirectional links are considered. For this technique, similar to [25] we make the following additional assumptions.( Interested readers are advised to refer [25] for details on why the above three assumptions are reasonable.)(A1) The propagation gains are modeled by isotropic path loss.That is, the propagation gain from node i to node j is where d 0 is the far-field crossover distance and η denotes the path loss exponent. (A2) The minimum distance for any pair of nodes is d min . (A3) The nodes are located in a square size L × L area. Theorem 1. Under the assumptions (A1)-(A3), the number of simultaneous transmissions on a same channel (i.e., the size of maximum independent sets on a channel) is upper bounded by Proof.Please see the appendix. Section 6 will show the extent of complexity reduction by using this technique together with rounding and local optimization. Rounding. The next step is to round the relaxed problem solution X * to a valid binary integer solution X.To create X, we can simply round the one (say x m e ) with the largest value to 1.According to the independent set, with x m e = 1 we can immediately decide some variables which share the same channel m with the link e to be 1 or 0. 
After fixing some decision variables to 1 or 0 in the first iteration, we update a new relaxed LP for the second iteration.We can solve this new LP, then again round the one with the largest value to 1, and set some additional variables to 0 accordingly.The iteration continues and eventually we can determine all {x m e } to either 0 or 1. Upon fixing all the x m e values, the original MINLP reduces to an LP problem, which can be solved in polynomial time.It is worth emphasizing that, unlike the solutions obtained by relaxation, the final solution obtained here is a feasible solution since all x m e values are binary instead of rational numbers. Local Optimization. Further improvement can be obtained by a local optimization method, starting from X. Suppose that for channel m there are n m independent sets, and we use 1, 2, . . ., n m to index each independent set.Since, in the initial solution X, one of n m independent sets is active on channel m, then we use v (1 ≤ v ≤ n m ) to index the active independent set.Then for channel m, we observe its independent sets and cycle through k = 1, 2, . . ., (n m − 1) while k / = v, and at the kth step replacing the kth independent set as 1.If this change leads to an improvement for the objective function, we accept the change and continue.Otherwise we go on to the next independent set of channel m.We continue until we have tried all the independent sets for channel m.The same process repeats for all the channels.Numerical experiments show that this local optimization method can lead to significant improvement on the objective function. Simulation Results In this section, we present simulation results for our heuristic method and compare it with the upper bound and the global optimum.The upper bound is obtained by solving a relaxation of the original problem, while the global optimum is obtained by LINGO which is a mathematical software package.The default settings for the simulations are as follows.The noise power at every receiver is equal to −100 dBm.η and d 0 are taken to be 4 and 0.1 m, respectively.The minimum threshold (β) is set to 2.3 dB. We consider two scenarios: one is regular topology and the other is random topology.We make no claims that these topologies are representative of typical cognitive radio networks.The reason that we have chosen these two simple topologies is to facilitate detailed discussion of the results and for the illustration purpose.For the propagation model, we adopt the isotropic path loss shown in (31).However, we stress that the validity of the conclusions drawn in the following holds for any scenario and also when more complicated propagation models are used to determine G i j parameters. Performance in Regular Topology. 
We first look at the performance of the proposed approach in the regular topology, as illustrated in Figure 4(a).A total number of n = 9 nodes are placed in a 3 × 3 grid, and the deployment area is a square size of 80 × 80.The unit grid separation (i.e., distance between adjacent nodes along the grid-side) is 20 m.All nodes use a common transmit power of 3 mW, which results in a transmission range of 23.4 m.The transmission range is greater than the unit grid separation but is less than the unit grid diagonal (i.e., distance between adjacent nodes along the diagonal).This results in a simple topology where all nodes can only communicate with their physical onehop neighbors on the grid.Figure 4(b) shows the connection graph.There are M = 6 channels that can be used for the entire network.Every node has 3 radio interfaces (γ i ).The set of available channels at each node is randomly generated; see Table 2.Note that the set of available channels is different from node to node. Complexity Reduction. For this scenario, there are 12 potential links and 24 binary variables (i.e., {x m e }).By using the enumerative technique, we obtain 22 independent sets and the size of the maximum independent set is 2. The exhaustive search space to determine the binary variable is 2 24 ; however, combining the independent sets together with the rounding and local optimization, the search space is significantly reduced from 2 24 to 1800. Throughput. Regarding the traffic flow, we consider |Q| = 1, 2, or 3 active sessions and run 3 experiments, respectively.In each experiment, the source node and destination node for each session are randomly generated.Figure 5(a) shows the results of the throughput obtained by our heuristic, upper bound and global optimum.Since the regular topology is simple, the heuristic method includes rounding technique only.It is observed that such heuristic results (obtained by rounding technique only) are equal to the global optimum, therefore no need to carry out local optimization.It is also found that there are gaps between the heuristic results and the relaxation bound. For comparison purpose Figure 5(b) shows the results of the optimality ratio (which is defined as the normalized throughput over the global optimum) obtained by our rounding technique.It is found that the optimality ratio obtained by rounding is 1, while the optimality ratio of the relaxation bound is within (1,1.6).Simulation results show that the rounding technique performs very well in this scenario. Performance in Random Topology.We next relax the regularity of node placement and look at the performance of the proposed approach in the random topology.As Figure 6(a) shows, we assume that n = 10 nodes are uniformly distributed in a square size of 40 × 40 area.All By using the enumerative technique, we get 29 independent sets and the size of the maximum independent set is 2. The exhaustive search space to determine the binary variable is 2 32 ; however, combining the independent sets together with the rounding and local optimization, the search space is significantly reduced from 2 32 to 13500. Throughput. 
Regarding the traffic flow, we consider |Q| = 1, 2, 3, 4, or 5 active sessions and run 15 experiments, respectively.In each experiment, the source node and destination node for each session are randomly generated.Different from the results obtained in the regular topology, in this random topology we show not only the heuristic results obtained by rounding technique but also the heuristic solutions obtained by rounding and local optimization.Table 4 shows the results.It is observed that there are some minor gaps between global optimum and the heuristic results obtained by rounding technique.However, by further using local optimization method, we find that the heuristic results are very close to the the global optimum.This observation demonstrates that the local optimization can lead to significant improvement on the objective function.Also note that there are some moderate gaps between the global optimum and the bounds obtained by relaxation. For comparison purpose Table 5 shows the optimality ratio obtained by our heuristic and relaxation.It is observed that the heuristic results obtained by rounding and local optimization are very close to 1, while the heuristic results obtained by only rounding are within (0.48,1.0) and the ratio of the relaxation is within (1,1.8).The simulation results show that the combination of rounding and local optimization performs very well in this scenario. Spectrum Sharing and Routing. For illustration purpose, we show the results of spectrum sharing and routing when there are 5 communication sessions.The source node and destination node for each communication session are randomly generated; see Table 6. By solving the MINLP problem by heuristic, we obtain that the optimal achievable throughput for each traffic flow is 111.3 (which is the 15th experiment shown in Figure 7).Figure 7(a) illustrates the optimal spectrum sharing.It is noticed that there are 10 active links in total, and channels 4 and 8 are reused.The nodes form themselves as an ad hoc network and all links can be active simultaneously (i.e., the topology is conflict-free).Figures 7(b)-7(f) illustrate the routing path(s) for each traffic flow.Figure 7(b) shows that the traffic flow generated by node 8 first travels to node 5, and then the traffic is split into 2 paths: one is 5 → 3 → 6 → 9 → 10 and the other path is 5 → 2 → 1 → 10. Figure 7(c) indicates that the traffic from node 7 to node 4 is via a single path, that is, 7 → 1 → 10 → 9 → 4. Similarly, as shown in Figures 7(d) and 7(e), the routing path for traffic flow from node 5 to node 9 is 5 → 3 → 6 → 9, while the traffic flow from node 3 to node 1 is via the path 3 → 5 → 2 → 1.Finally, the traffic flow generated from node 2 travels through 2 paths, one is through 2 → 5 → 3 → 6, and the other is via 2 → 1 → 10 → 9 → 6. Conclusion In this paper, we consider a multihop multi-channel CR network.We present a cross-layer optimization framework by jointly designing the spectrum sharing and routing with the SINR constraints.Distinguished from the previous studies, we adopt a more realistic SINR model to capture the conflict relationships among the links, rather than using the Protocol Model.Our objective is to maximize the minimum end-to-end flow throughput, and our study addresses the following two cross-layer throughput optimization problem. 
(1) Given a set of secondary users with random but fixed location, and a set of traffic flows, what is the max-min achievable throughput?(2) To achieve the optimum, how to choose the set of active links, how to assign the channels to each active link, and how to route the flows?We answer these questions via a formal mathematical formulation in the forms of mixed integer nonlinear programming (MINLP). Since the MINLP formulation is generally an NP-hard problem, we develop a heuristic method by solving a relaxation of the original problem, followed by rounding and simple local optimization.Simulation results show that the heuristic approach performs very well; that is, the solutions obtained by the heuristic are very close to the global optimum. For the future work, we need to consider how to design a distributed algorithm for a multihop CR network.Since in reality, there may not exist a centralized server, and also, the available channels are highly dynamic, in such situation, how to choose the set of active links and how to allocate channels and route the flows to obtain the max-min achievable throughput is a highly desirable and challenging work. Figure 3 : Figure 3: Relationship of three types of interferences. 5. 1 . Relaxation.We start by relaxing the MINLP problem to the following format.max T (29) Subject to: 0 ≤ x m e ≤ 1 (m ∈ C e , e ∈ E) Figure 5 : Figure 5: Comparison between heuristic, upper bound and global optimum for regular topology. Figure 7 : Figure 7: Spectrum sharing and routing for random topology. For the links that do not share a common node but share a common channel, they are applicable to the SINR constraints if the links are active simultaneously.That is, a transmission on a bidirectional link e = (i, j) is successful if and only if the SINR at either node i or node j is not less than the minimum required threshold β.This leads to the following constraint: e ≤ 1 (m ∈ C e ∩ C e , e ∩ e / = ∅, e / = e, e ∈ E, e ∈ E). Table 2 : Set of available channels at each node (i.e., C i ) for regular topology. Table 3 : Set of available channels at each node (i.e., C i ) for random topology.There are M = 8 channels that can be used for the entire network.Every node has 4 radio interfaces (γ i ).The set of available channels at each node is shown in Table3.Again, the set of available channels is different from node to node.6.2.1.Complexity Reduction.For this scenario, there are 21 potential links and 32 binary variables (i.e., {x m e }). Table 4 : Throughput comparison for random topology. Table 5 : Optimality ratio comparison for random topology. Table 6 : Rate requirements of 5 sessions for random topology.
8,832.6
2010-07-05T00:00:00.000
[ "Computer Science", "Engineering" ]
RESIDENTIAL COMMUNITY MICRO GRID LOAD SCHEDULING AND MANAGEMENT SYSTEM USING COOPERATIVE GAME THEORY
This paper proposes a residential community based microgrid using cooperative game theory to share excess energy among a community's neighboring homes for optimal load scheduling and management. The proposed model is a grid-connected residential community where smart homes are connected through a central energy management system (EMS) to share the benefits of excess energy from distributed energy resources (DERs) such as solar PV or wind turbines by selling it to other community residents at a price lower than the utility grid's price but higher than the feed-in tariff. The community smart homes are categorized as Externally Importing Homes, Internally Exporting Homes, and Externally Exporting Homes, which are further classified as passive consumers, active prosumers, and proactive prosumers based on the facilities they possess in the form of DERs and battery storage (BS). With the cooperative energy transaction mechanism, the smart community homes, after fulfilling their own load requirements, can place the excess energy on the community pool using a decentralized or centralized approach, through peer-to-peer trading or a smart community manager (SCM), respectively. The excess energy can be sold to or purchased from other community homes according to defined preferences and priorities. This benefits the entire community in terms of cost compared to the utility grid's Time of Use (ToU) pricing. The proposed system will not only share, schedule, and manage the community load optimally but will also reduce the overall energy cost and system operational stress, improve system operational efficiency, and reduce carbon emissions.
KEYWORDS Residential Microgrid, Distributed Energy Resources, Cooperative Game Theory, Load Scheduling, Energy Management System.
INTRODUCTION
Considering the scarcity of conventional energy sources, increasing environmental carbon emissions, and the requirement of improved operational efficiency, diverse and smarter solutions are needed to meet energy efficiency and conservation at the same time (Halepoto, Uqaili, & Chowdhry, 2014). The residential sector contributes almost a one-third share of energy consumption (Sahito, et al., 2015), and unfortunately this sector mostly relies on conventional energy sources. There is a strong need to shift the residential load by utilizing small DERs to minimize the concerns about polluting carbon emissions and to meet the ever-growing energy requirements and operational stability (Basu, Chowdhury, Chowdhury & Paul, 2011). Recently, the electric power industry has seen an acceleration in DER installation and utilization. Microgrids are the most complex and dynamic form of DERs. A microgrid is set up by integrating interconnected electric loads and DERs acting as a single controllable entity with respect to the grid (Planas, Gil-de-Muro, Andreu, Kortabarria & de Alegría, 2013). In recent years, microgrids have evolved from a growing concept to a significant source of opportunity; however, they are still in the developing phase, as there is no one-size-fits-all microgrid development system. In the microgrid industry, immense focus has been given to the institutional campus, commercial, or industrial segments, but now there is a growing trend to expand these applications to serve broader needs. Residential communities can serve this purpose, as it is broadly accepted that the electric utility's future is only sustainable and reliable with resilient communities that supplement the existing energy infrastructure with microgrids. Since DERs are intermittent in nature and their availability is subject to natural concerns and climatic variations, these resources must be operated in a coordinated manner. For the interactive operation of DERs in a coordinated approach, the introduction of a multi-agent system (MAS) can provide a hierarchical control architecture for the optimization of resources and to avoid any operational uncertainty (Halepoto, Sahito, Uqaili, Chowdhry & Riaz, 2015). This can provide a framework for efficient load sharing, load scheduling, and EMS, especially for the residential sector, by developing a community-based residential microgrid where the residents cooperate with each other in a game theory setting. In game theory terms, instead of each user utilizing its DERs individually for its own load usage and management, a better approach can be to use DERs in a cooperative load management scheme (Parisio, Wiezorek, Kyntäjä, Elo, Strunz & Johansson, 2017). The community microgrid can potentially serve the needs of both community residents and utility grids by selling or purchasing the excess amount of energy optimally, as every residential community home consumer can be a prosumer (producer and consumer) at the same time. The remainder of the paper is organized as follows. In Section 2 a residential smart home system model is proposed and discussed, being the mandatory requirement of a community microgrid. A residential smart home community based microgrid is developed in Section 3, which categorizes three types of community homes based on the facilities they possess. In Section 4 a cooperative game theory based energy management system for the community microgrid is proposed using EMS. A prosumer-centric residential community microgrid system using decentralized and centralized designs is proposed and analyzed in Section 5.
Section 6 concludes the work and point to the future work directions. RESIDENTIAL SMART HOME SYSTEM MODEL The residential community microgrid is strongly dependent on residential smart home system (RSH) as proposed in Figure 1. The smart homes are utility grid connected and are equipped with RESs (PV solar system, and wind turbine), solar charge controller, wind charge controller; advance metering infrastructure (AMI) based smart meter, energy scheduling unit (ESU), energy management controller (EMC), DC/AC inverters, battery storage (BS) and home appliances. Through smart meter, not only the bi-directional communication between utility grid and consumers can be established but the consumer can get real time information about load demand, energy consumption data and electricity prices especially ToU pricing. The solar PV system generates the electricity in DC form which is then converted into AC form via DC/AC converter. The BS is utilized for both source and sink purpose to store DERs produced energy. The ESU is responsible to generate and schedule the appliance energy consumption data and send the scheduling patterns to the EMC. According to the generated schedules by ESU, the EMC controls the BS and appliance's operation. The home appliances are categorized into three types by end consumers according to schedulability, flexibility and interruptibility; (i) Partially Flexible Appliances (PFA), (ii) Totally Flexibly Interruptible Appliances (TFA), (iii) Always Running Appliances (ARA). With PFA, the consumers are partially flexible to shift or schedule the appliances load to another time slots. The starting and finishing time slots are defined already with mostly one hour time interval. Once any appliance is started, it will complete the one hour operating time slot; after that, the consumer will follow the utility request to schedule or curtail the load for next time slot. With TFA, as per defined agreement between the utility and consumers, the end consumer must cutoff, curtail or schedule the electric load as per utility request at any time. The ARA is most inflexible type as the consumer's home appliances are not non-interruptible, non-deferrable and non-shiftable. These types of appliance are always run type of appliances. RESIDENTIAL COMMUNITY HOMES BASED MICROGRID This paper aims to model a residential community based microgrid to generate, utilize and serve multiple residential home prosumers in cooperate way to share or sell excessive DERs energy to the other community residents or even to main grid at a price lower than the utility gird's price but higher than the feed-in tariff. The community homes are grid connected and are classified into three types; Externally Importing Homes, Internally Exporting Homes and Externally Exporting Homes . The considered model homes and their parameters are shown in Figure 2. Externally Importing Homes: The residents of does not possess any DERs (either cannot afford DERs or not willing to install) and are totally dependent to utility grid supply and on community homes which one offers low electricity prices. Such residents are a greedy and passive consumer whose only focus is low electricity prices, even not on the energy availability. Internally Exporting Homes: The residents of are active prosumers. They have DERs installed to meet their energy demands but they do not possess any battery storage. 
After meeting their energy demands, the owners become part of residential community through a SCM to sell excessive energy to their neighbor homes especially to external importing homes on priority. In case, the neighboring homes do not require energy at that time since these homes do not have any backup battery system, they try to sell energy to the grid rather than wasting it. Externally Exporting Homes: The residents of are proactive prosumers. These are ideal community homes possessing both the DERs and BS. Such homes after filling their own energy requirements store the additionally available energy through BS. After that, they try to share or sell the excessive energy to the neighbors especially to through the smart community manager. As externally importing homes are without DERs so they always need energy either to be purchased from the utility grid or from other community homes, depending which one is offering lowest prices. The offered price from utility will base on ToU price, so for case of high peak price periods, they can purchase energy form community homes through SCM using cooperative game theory. Being the part of community microgrid, they may get the energy at a price lower than the utility gird price but still higher than the feed-in tariff. On the other hand, the internally exporting homes will put their excessive energy into community poll for sell through the SCM at a very low feed-in tariff during the day. Since such homes do not have BS, so during nights they may also need to purchase the energy either from externally exporting homes or from the utility grids depending on the ToU pricing periods. As a special case, since externally exporting homes have both DERs and BS, so they can easily store the energy which can be utilized during nights. Even with storage, if they sell out the stored energy to community homes at some earlier time of the day, they may also face power shortage during night's occasionally. COOPERATIVE GAME THEORY BASED ENERGY MANAGEMENT SYSTEM Game theory is a multi-agent based concept where different autonomous rational players interact with each other for mutual benefits (Nguyen, Kling & Ribeiro, 2013). This game theory concept can be very effective to energy related applications especially in community based micro grids for optimizing the energy resources (Mei, Chen, Wang & Kirtley, 2019). The game theory is classified into cooperative and non-cooperative game theory (Stevanoni, Grève, Vallée & Deblecker, 2019). In non-cooperative game theory, different players which are the part of the system, partially interact with each other to achieve their own individual objective(s) which can be contradictory to overall system objectives. On the other hand, cooperative game theory is based on a coalitional game theory, where all the set of players are always ready to cooperate, coordinate and communicate with each other without any conflict of interests to achieve one common goal. Figure 3 illustrate the community based microgrid configuration connectivity of different homes using EMS. The prime objective of cooperative game in a community microgrid is to schedule and optimize the electricity load requirements within the community smart homes. Being the prosumer, the locally generated electricity from DERs is used by homes to fill their load demands and after meeting the requirements the excess energy is either sold back to the community resident or to utility grid. 
Although this is an attractive solution for both utility and community prosumers, this can be more effective if managed optimally through community microgrid manager using MAS to develop the communication at different layers. Figure 4 shows the MAS configuration for a central EMS which comprises of three communication levels; primary, secondary, and tertiary EMS. The community smart homes are considered as the primary EMSs. At the primary level, every smart community home having its own EMS, communicates its ongoing energy status with the secondary EMS, which on receiving the information shares the energy status and places an excess amount of energy on community poll for other community homes less than the utility grid's prices according to priorities defined by community homes for sell or purchase. Finally, the tertiary EMS being the overall decision maker, based on secondary EMS information, accumulates the overall energy status and decide(s) for buying or selling energy to and from the utility grid. PROPOSED PROSUMER-CENTRIC RESIDENTIAL COMMUNITY MICROGRID In this paper, we have proposed two stage prosumer-centric residential cooperative community microgrid; decentralized cooperative community microgrid design and centralized cooperative community microgrid design. We have molded six community homes, two from each category being passive consumer, active prosumer and proactive prosumers. Decentralized Cooperative Community Microgrid Design: This is a decentralized or distributed agent based design approach where the community homes' residents can directly negotiate with each other in form of grouping or peer to peer (P2P) to make energy transactions (selling or purchasing) without involving any centralized supervisory mechanism like smart community manager, as shown in Figure 5. ) with other one. The information is only shared to those community homes that are willing to trade the energy (sell or purchase) in P2P form. The price of energy transaction is kept in secrecy. Figure 6 illustrates the model example microgrid consisting of 6 community homes trading in P2P. The peers 1 and 2 are passive consumers; peers 3 and 4 are active prosumers while peers 5 and 6 are proactive prosumers. For the illustration purpose we have chosen prosumer 3 is one community home which has some excessive amount of energy. Since it equipped only with DERs but does not possess ant battery back, so he desperately wants to sell the excessive energy. Since this is a decentralized approach so community home by itself tries to find out the target homes. This example illustration of prosumer 3 is shown in Figure 6. Consumers have prosumers as energy transaction trading partners and while prosumers have both consumers and producers as trading partners. Considering the negotiation process, the pricing priorities for bilateral trade can be defined and may vary accordingly for trade between the peers. Considering the prosumer 3 case scenario, the bilateral trading between four prosumers can be made by using reciprocity property as shown in Figure 6. This confirms that all four prosumers have equal opportunities of balance trade during bilateral trading but with opposite sign. This design is truly a consumers' preference centric where the owner has total freedom to whom energy transaction is to be made. At the same time the negotiation process (price, time horizon) scalability and limitations are the main issues. 
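As a toy illustration of the decentralized trading just described, the following Python sketch greedily pairs homes with surplus and homes with deficit at a single community price between the feed-in tariff and the utility ToU price, so that both sides gain relative to trading with the grid. The home labels, prices, and split-the-difference pricing rule are illustrative assumptions and not the negotiation mechanism proposed in the paper.

def match_trades(surplus, deficit, feed_in, tou_price):
    """Greedy bilateral matching sketch: pair sellers (kWh surplus) with buyers
    (kWh deficit) at a community price strictly between the feed-in tariff and
    the utility ToU price."""
    assert feed_in < tou_price
    price = 0.5 * (feed_in + tou_price)              # illustrative split-the-difference price
    sellers = sorted(surplus.items(), key=lambda kv: -kv[1])
    buyers = sorted(deficit.items(), key=lambda kv: -kv[1])
    surplus, deficit = dict(sellers), dict(buyers)
    trades, s, b = [], 0, 0
    while s < len(sellers) and b < len(buyers):
        seller, buyer = sellers[s][0], buyers[b][0]
        qty = min(surplus[seller], deficit[buyer])
        if qty > 0:
            trades.append((seller, buyer, qty, price))
            surplus[seller] -= qty
            deficit[buyer] -= qty
        if surplus[seller] == 0:
            s += 1
        if deficit[buyer] == 0:
            b += 1
    return trades  # leftover surplus goes to the grid at feed_in; leftover deficit is met at tou_price

# Example: two prosumers with surplus, two passive consumers with deficit.
print(match_trades({"H5": 4.0, "H3": 2.5}, {"H1": 3.0, "H2": 2.0}, feed_in=0.05, tou_price=0.20))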
The negotiation process can become more complex in scenarios where a larger number of community homes is involved, as the seller and the buyer are unaware of each other's requirements and priorities.

Centralized Cooperative Community Microgrid Design: This is a centralized, pool-based design approach in which all community homes put their excess energy into the community energy pool, which is managed, allocated and shared through an SCM acting as an intermediary between all the community homes. This system design is shown in Figure 7. Based on the three categories of community homes, centralized cooperation through the SCM can be organised according to different priorities or preferences, mainly the electricity prices and the available time horizons defined by the home owners. Based on these priorities, the SCM is responsible for deciding to whom the excess energy is sold, or from whom it is purchased, on the basis of auction schemes in which the demands of energy sellers and buyers are met. It is not necessary for the seller and the buyer to know each other, as energy transactions are handled through the SCM. This design is more structured and optimized: most community members can not only maintain a social relationship by helping other community residents but can also earn good revenue, since energy can also be sold to the utility grid at higher prices through the smart community manager when the utility grid is under system stress. This scenario becomes more realistic if the aggregated energy is sold to the utility grid through the SCM and the total collected revenue is shared (e.g., in a proportional way) among all community members. At the same time, the SCM carries the responsibility of being fair and unbiased, so that every community home gets equal opportunities for energy transactions.

CONCLUSION
In this paper, a grid-connected residential community-based microgrid is proposed that uses cooperative game theory to share, manage and schedule the excess energy generated through DERs among neighbouring community homes. Three types of community home residents, namely passive consumers, active prosumers and proactive prosumers, are used as the main agents, which trade energy with other community residents after fulfilling their own load requirements. This energy trading is based on a two-stage prosumer-centric residential cooperative approach, in either a centralized or a decentralized form, to sell or purchase energy from neighbouring community homes at a price lower than the utility grid price but higher than the feed-in tariff. In the decentralized approach, community residents negotiate directly with each other, in groups or P2P, to make energy transactions; in the centralized approach, all community homes put their excess energy into the community energy pool, which is managed, allocated and shared through a smart community manager acting as an intermediary between all the community homes. It is concluded that the proposed system not only shares, schedules and manages the community load optimally, but also reduces the overall energy cost and system operational stress, improves the system operational efficiency, and reduces carbon emissions. This work can be extended further to support rural electrification using community microgrids, especially in energy-deficit countries such as Pakistan, to relieve grid system stress.
4,429.6
2019-05-17T00:00:00.000
[ "Engineering", "Economics", "Environmental Science", "Computer Science" ]
ON THE EKR-MODULE PROPERTY

Abstract. We investigate permutation groups satisfying the EKR-module property. This property gives a characterization of the maximum intersecting sets of permutations in the group. Specifically, the characteristic vector of a maximum intersecting set is a linear combination of the characteristic vectors of cosets of stabilizer subgroups. In [22], the authors showed that all 2-transitive groups satisfy the EKR-module property. In this article we find a few more infinite classes of permutation groups satisfying this property.

Introduction
The Erdős-Ko-Rado (EKR) theorem [12] is a classical result in extremal set theory. This celebrated result considers collections of pairwise intersecting k-subsets of an n-set. The result states that if n ≥ 2k, then for any collection S of pairwise intersecting k-subsets, the cardinality satisfies |S| ≤ $\binom{n-1}{k-1}$. Moreover, in the case n > 2k, if |S| = $\binom{n-1}{k-1}$, then S is a collection of k-subsets containing a common point. (When n = 2k, the collection of k-subsets that avoid a fixed point is also a collection of $\binom{n-1}{k-1}$ pairwise intersecting subsets.) From a graph-theoretic point of view, this is the characterization of maximum independent sets in Kneser graphs.

There are many interesting generalizations of this result to other classes of objects with respect to a certain form of intersection. One such generalization, given by Frankl and Wilson [14], considers collections of pairwise non-trivially intersecting k-subspaces of a finite n-dimensional vector space, which corresponds to independent sets in q-Kneser graphs. The book [17] is an excellent survey, including many generalizations of the EKR theorem.

In this article, we are concerned with EKR-type results for permutation groups. The first result of this kind was obtained by Deza and Frankl [13], who investigated families of pairwise intersecting permutations. Two permutations σ, τ ∈ S_n are said to intersect if the permutation στ⁻¹ fixes a point. A set of permutations is called an intersecting set if στ⁻¹ fixes a point for any two members σ and τ of the set. Clearly the stabilizer in S_n of a point, or any of its cosets, is a canonically occurring family of pairwise intersecting permutations, of size (n − 1)!. In [13], it was shown that if S is a family of pairwise intersecting permutations, then |S| ≤ (n − 1)!. In the same paper, it was conjectured that if the equality |S| = (n − 1)! is met, then S has to be a coset of a point stabilizer. This conjecture was proved by Cameron and Ku (see [8]). An independent proof was given by Larose and Malvenuto (see [20]). Later, Godsil and Meagher (see [15]) gave a different proof. A natural next step is to ask similar questions about general transitive permutation groups.

Let G be a finite group acting transitively on a set Ω. An intersecting subset of G with respect to this action is a subset S ⊂ G in which any two elements intersect. Obviously, a point stabilizer G_α, its left cosets gG_α, and right cosets G_α g are intersecting sets, which we call canonical intersecting sets. An intersecting set of maximum possible size is called a maximum intersecting set. Noting that the size of a canonical intersecting set is |G_α| = |G|/|Ω|, we see that the size of a maximum intersecting set is at least |G|/|Ω|. It is now natural to ask the following: (A) Is the size of every intersecting set in G bounded above by the size of a point stabilizer?
(B) Is every maximum intersecting set canonical?As mentioned above, the results of Deza-Frankl, Cameron-Ku, and Larose-Malvenuto show that the answer to both these questions in positive for the natural action of a symmetric group.However, not all permutation groups satisfy similar properties, although there are many interesting examples that do.We now formally define the conditions mentioned in the above questions. Definition 1.1.A transitive group G on Ω is said to satisfy the EKR property if every intersecting set has size at most |G|/|Ω|, and further said to satisfy the strict-EKR property if every maximum intersecting set is canonical. When the action is apparent, these properties will be attributed to the group.We have already seen that the natural action of S n satisfies both the EKR and the strict-EKR property.EKR properties of many specific permutation groups have been investigated (see [1,3,22,25,26,27]).In particular, it was shown that all 2-transitive group actions satisfy the EKR property, see [25,Theorem 1.1], but not every 2-transitive group satisfies the strict-EKR property; for instance, with respect to the 2-transitive action of PGL(n, q) (with n ≥ 3) on the 1-spaces, the stabilizer of a hyperplane is also a maximum intersecting set.However, it is shown in [25] that 2-transitive groups satisfy another interesting property called the EKR-module property, defined below. For a transitive group G on Ω and a subset S ⊂ G, let v S = s∈S s ∈ CG, the characteristic vector of S in the group algebra CG.For α, β ∈ Ω and g ∈ G with g • α = β, we write the characteristic vector of the canonical intersecting set gG α , which we call a canonical vector for convenience.The next definition was first introduced in [23]. Definition 1.2.A finite transitive group G on a set Ω is said to satisfy the EKR-module property if the characteristic vector of each maximum intersecting set of G on Ω is a linear combination of canonical vectors, that is, the vectors in {v α,β | α, β ∈ Ω}. The name is from the so called "module method" described in [2].We remark that (a) a group action satisfying the strict-EKR property also satisfies the EKR-module property, but the converse statement is not true; (b) and there exist group actions that satisfy the EKR-module property but not the EKR property, see [ gHg −1 = N.This shows that any maximum intersecting set must be coset of N. Any coset of N is a union of two disjoint cosets of H, and thus this action satisfies the EKR-module property. We will now construct an example of a group action that satisfies the EKR property, but not the EKR-module property.Prior to doing so, we mention a well-known result.Consider a transitive action of a group G on a set Ω. A subset R ⊂ G is said to be a regular subset, if for any (α, β) ∈ Ω 2 , there is a unique r ∈ R such that r • α = β.Corollary 2.2 of [3] states that permutation groups which contain a regular subset, satisfy the EKR property. 
For a group G and a subgroup H G, let We first show that this action satisfies the EKR property, by demonstrating the existence of a regular subset.We consider the cyclic subgroup C := (1, 2, 3, 4, 5) and the 4-cycle t := (2, 3, 5, 4).We claim that R := C ∪ tC is a regular set.As |R| = |Ω|, this claim will follow by showing that for r, s ∈ R with r s, we have rgH sgH, for all g ∈ G.This is equivalent to showing that g −1 r −1 sg H, for all g ∈ G.It is easy to verify that for any two distinct r, s ∈ R , r −1 s is either a 4-cycle or a 5-cycle, and thus g −1 r −1 sg H.We can now conclude that R is a regular subset.Therefore, by [3, Corollary 2.2], G satisfies the EKR property.Thus the size of any intersecting set is bounded above by |H| = 12.Now we consider subgroup K A 4 of G.It is easy to check that KK −1 = K ⊂ ∪gHg −1 and thus K is a maximum intersecting set.In this case, canonical intersecting sets are cosets of a conjugate of H. Every canonical intersecting set contains exactly 3 even permutations.Also every permutation in K is even.Now consider the sign character λ.For every canonical intersecting set S, we have λ(v S ) = 0. On the other hand, we have λ(v K ) 0. From this, we see that v K cannot be a linear combination of the characteristic vectors of the canonical intersecting sets.Therefore this action does not satisfy the EKR-module property. We will now describe the main results of our paper. 1.1.Main Results.Our first result is a characterization of the EKR-module property of a group action, in terms of the characters of the group in question.Given a group G, a complex character χ of G, and a subset A ⊆ G, by χ(v A ), we denote the sum a∈A χ(a).We now describe our first result, a characterization of the EKR-module property in terms of character sums.We note that Example 1.4 can be viewed as an application of the above result.In Example 1.4, the sign character λ is a character such that λ(v H ) = 0.However, for the maximum intersecting set K, we have λ(v K ) 0. Thus by Theorem 1.5, the action in Example 1.4 does not satisfy the EKR-module property. Given an action of G on Ω, the derangement graph Γ G, Ω is the graph whose vertex set is G, and vertices g, h ∈ G are adjacent if and only if gh −1 does not fix any point in Ω.Then a set S ⊆ G is an intersecting set if and only if it is an independent set in Γ G, Ω .Therefore, the study of intersecting sets could benefit from the various results from spectral graph theory about independent sets.Many authors (for instance, see [26], [27]) have studied the EKR and strict-EKR properties of various group actions from this point of view.Theorem 3.4 is a characterization of the EKR-module property of a group action in terms of spectra of weighted adjacency matrices of the corresponding derangement graph. It is well known (see [3,Corollary 2.2]) that permutation groups with regular subsets satisfy the EKR property.However, as observed in example 1.4, such groups do not necessarily satisfy the EKR-module property.The following theorem shows that every permutation group with a regular normal subgroup satisfies the EKR-module property. Theorem 1.6.Transitive groups actions with a regular normal subgroup satisfy the EKRmodule property. A few classes of permutation groups with a regular normal subgroup are Frobenius groups, affine groups, primitive groups of type HS, HC, and TW (for a description of these, we refer the reader to [28]). 
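The claim that R = C ∪ tC behaves as required can be checked mechanically. The short Python sketch below verifies, using 0-indexed points and the composition convention (pq)(x) = p(q(x)), that r⁻¹s is a 4-cycle or a 5-cycle for every pair of distinct r, s ∈ R; the subgroup H itself is not needed for this particular check, and the helper functions are ad hoc rather than taken from any library.

from itertools import product

def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations of {0,...,4} stored as tuples
    return tuple(p[q[x]] for x in range(5))

def inverse(p):
    inv = [0] * 5
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def cycle_type(p):
    seen, lengths = set(), []
    for x in range(5):
        if x not in seen:
            y, n = x, 0
            while y not in seen:
                seen.add(y)
                y, n = p[y], n + 1
            lengths.append(n)
    return tuple(sorted(lengths, reverse=True))

identity = (0, 1, 2, 3, 4)
c = (1, 2, 3, 4, 0)   # the 5-cycle (1,2,3,4,5) on points 0..4
t = (0, 2, 4, 1, 3)   # the 4-cycle (2,3,5,4): 2->3, 3->5, 5->4, 4->2

C = [identity]
while compose(c, C[-1]) != identity:
    C.append(compose(c, C[-1]))          # C = <c> = {e, c, c^2, c^3, c^4}
R = C + [compose(t, x) for x in C]       # R = C ∪ tC, |R| = 10

ok = all(cycle_type(compose(inverse(r), s)) in {(4, 1), (5,)}
         for r, s in product(R, R) if r != s)
print("r^{-1}s is a 4-cycle or 5-cycle for all distinct r, s in R:", ok)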
After showing that all 2-transitive groups satisfy the EKR-module property, the authors of [25] mention that the next natural step is to consider rank 3 permutation groups.As a first step, we consider this problem for the class of primitive rank 3 permutation groups.The next theorem reduces the problem to almost simple groups.(Recall that a finite group is called almost simple if it has a unique minimal normal subgroup, which is non-abelian and simple.)Theorem 1.7.Let G be a primitive permutation group on Ω of rank 3. Then either G has the EKR-module property, or G is an almost simple group. We would like to mention that when n is sufficiently large, the rank 3 action of S n on 2-subsets of [n], satisfies the strict-EKR property, and thereby the EKR-module property.This was proved in [11].Example 1.4 shows that this fails when n = 5. In [6], a finite group G is defined to satisfy the weak EKR property, if every transitive action of G satisfies the EKR property.A finite group G is defined to satisfy the strong EKR property, if every transitive action of G satisfies the strict-EKR property.Theorem 1 of [6] shows that nilpotent group satisfies the weak EKR property.This result was extended to supersolvable groups in [21].It is easy to check that every abelian group satisfies the strong EKR property.Theorem 3 of [6] states that a finite non-abelian nilpotent group satisfies the strong EKR property if and only if it is a direct product of a 2-group and an abelian group of odd order.We now make the following analogous definition.Definition 1.8.A finite group G is said to satisfy the EKR-module property if every transitive action of G satisfies the EKR-module property. As mentioned before, groups with the strict-EKR property have the EKR-module property.It is then natural to ask whether the converse statement is true or not.It is shown in [6,Theorem 3] that there are infinitely many nilpotent groups of nilpotency class 2 that do not satisfy the strict-EKR property.The next result then answers the question in negative. It is shown in [6,Theorem 2] that a group G satisfying the EKR property for every transitive action is necessarily solvable.However, it is shown in Lemma 6.2 that there do exist non-solvable groups which have the EKR-module property. 
An analogue of the EKR-module property has been observed in other generalizations of the EKR theorem.Consider a graph X and a prescribed set of "canonically" occurring cliques.We say that the graph satisfies the EKR-module property, if the characteristic vector of any maximum clique is a linear combination of the characteristic vectors of the canonical cliques.In the context of permutation groups satisfying the EKR-module property, the complement of the corresponding derangement graph satisfies the EKR-module property.In Chapter 5 of [17], there are a few examples of strongly regular graphs satisfying the EKR-module property.In [4], the authors show that Peisert-type graphs satisfy the EKR-module property.Let q be an odd prime power.Let F and E be finite fields of order q 2 and q respectively.A Peisert-type graph of type (m, q) is a Cayley graph of the form Cay(F, S ), where the "connection" set S is a union of m distinct cosets of the multiplicative group E × in F × .It is clear that any set of the form sE + b, with s ∈ S and b ∈ F, is a clique.We deem these to be the canonical cliques.In [4], the authors show that characteristic vector of any maximum clique in a Peisert-type graph, is a linear combination of the characteristic vectors of canonical cliques.In § 7, we give a shorter independent proof of the same. EKR-module property and character theory. In this section, we gather some tools which are used to prove our main results, and then prove Theorem 1.5. Let K = G (Ω) be the kernel of G on Ω. Here Ω = [G : H] for some H ≤ G.The following simple lemma shows that we may assume without loss of generality that K is trivial. Lemma 2.1. Let π : G → G/K be the natural quotient map. Then a subset S ⊂ G is a maximum intersecting set of G if and only if π(S) is a maximum intersecting set of G/K. Proof.Given an intersecting set A ⊂ G, we note that AK := {ak : a ∈ A & k ∈ K} is also an intersecting set.So any maximum intersecting set in G must be a union of K-cosets.Let s 1 , s 2 , . . .s r ∈ G be such that S = s i K is a maximum intersecting subset of G.We see that π(S) As an immediate consequence, we get the following corollary. Corollary 2.2. Let G be a finite transitive group on Ω with kernel K = G (Ω) , and let π : G → G/K be the natural quotient map. Then the following hold: (i) G satisfies the EKR (respectively strict-EKR) property if and only if G/K satisfies the EKR (respectively strict-EKR); (ii) G satisfies the EKR-module property if and only if G/K satisfies the EKR-module property. For any g ∈ G, we denote by H g the subgroup gHg −1 .Given α = aH ∈ Ω, we have G α = H a .Thus, with respect to this action, we see that the set {aH b : a, b ∈ G} is the set of canonical intersecting sets.By I G (Ω), we denote the subspace of CG spanned by the set {v aH b : a, b ∈ G} of the characteristic vectors of the canonical intersecting sets.By the definition of the EKR-module property, the action of G on Ω satisfies the EKR-module property if and only if v S ∈ I G (Ω) for every maximum intersecting set S in G. We observe that for every a, b, g, h ∈ G, we have gv aH b h = v gahH h −1 b .Therefore, I G (Ω) is a two-sided ideal of the group algebra CG.The two-sided ideals of complex group algebras are characterized by the Artin-Wedderburn decomposition. 
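For reference, the standard group-algebra facts recalled in the next passage read, in the usual notation (these are textbook statements, e.g. along the lines of [19], rather than results of this paper):

\[
\mathbb{C}G \;=\; \bigoplus_{\chi \in \mathrm{Irr}(G)} M_{\chi},
\qquad
\dim_{\mathbb{C}} M_{\chi} = \chi(1)^{2},
\qquad
e_{\chi} \;=\; \frac{\chi(1)}{|G|} \sum_{g \in G} \chi(g^{-1})\, g ,
\]

where $e_{\chi}$ is the primitive central idempotent generating the minimal two-sided ideal $M_{\chi}$, and the orthogonality relations give $e_{\chi} e_{\psi} = \delta_{\chi\psi}\, e_{\chi}$ and $\sum_{\chi \in \mathrm{Irr}(G)} e_{\chi} = 1$.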
We will now recall some basic facts on group algebra, proofs of which can be found in any standard text on representation theory such as [19].Let Irr(G) be the set of irreducible complex characters of G.For χ ∈ Irr(G), we define are the right submodules of CG that afford the character χ.By Maschke's theorem, we have the decomposition For each χ ∈ Irr(G), we have dim C (M χ ) = χ(1) 2 and that M χ is a minimal two-sided ideal containing the primitive central idempotent By orthogonality relations among characters, we have Using the fact that CG is a semi-simple algebra, we now get the following description of two-sided ideals of CG Lemma 2.3.Given a two-sided ideal J of CG, there is a subset Our investigation of the EKR-module property of the action of G on Ω, will benefit from the description of I G (Ω) as a direct sum of simple ideals of CG.We recall that I G (Ω), is the subspace of CG spanned by the set {v aH b : a, b ∈ G}.We also showed that it is a two-sided ideal.Therefore for any χ ∈ Y H , we have 0 . As e χ is a minimal ideal, we conclude that e χ ⊂ I G (Ω). Now consider θ ∈ Irr(G) \ Y H .In this case, we have θ(v 1) (C) be a unitary representation affording θ as its character.Given a subset S ⊂ G, we define M S := s∈S Θ(s).As Θ is a unitary representation, we have M S −1 = M † S , that is, M S −1 is the conjugate transpose of M S .Now given a, g ∈ H, we have aH g , this can only happen when M aH g = 0. Therefore, θ(v aH g ) = 0, and we conclude that e θ v aH g = 0. Thus e θ annihilates I G (Ω).As e 2 θ = 1 0, e θ cannot be an element of the ideal I G (Ω). Now the result follows by applying Lemma 2.3 and equation (1). As an immediate application, we obtain a significantly shorter proof of Lemma 4.1 and Lemma 4.2 of [2].The content of these two results is presented as the following corollary.We will use the following technical result in the proof of Theorem 1.7.We would like to mention that it was a key result that led to the "Module Method" described in [2].(2) and the set is a basis set of I G (Ω) for any ω ∈ Ω. We observe that for α ∈ Ω, we have v α, β .Therefore, every vector of the form v γ, δ is in the linear span of the elements of B ω , and thus B ω spans I G (Ω). Linear independence follows as We are now ready to prove Theorem 1.5. Proof of Theorem 1.5: , by Lemma 2.4 and (1), we have e χ x = 0, for all x ∈ I G (Ω).By the definition of the EKR-module property, for any maximum intersecting set S, we have v S ∈ I G (Ω).The equality χ(v S ) = 0 follows from e χ v S = 0. We now prove the other direction.Suppose that for any χ ∈ C and any maximum intersecting set S, we have χ(v S ) = 0. Fix a maximum intersecting set S. If S is a maximum intersecting set, then so is Sg −1 , for all g ∈ G. Therefore, χ(v Sg −1 ) = 0, for all χ ∈ C and all g ∈ G. Thus e ψ = 1, we have By Lemma 2.4, ψ C e ψ ∈ I G (Ω).Since I G (Ω) is an ideal, we have v S ∈ I G (Ω). Thus the EKR-module property is satisfied.We note that if S is a maximum intersecting set, then for any t ∈ S, the set St −1 is a maximum intersecting set that contains the identity element.So every maximum intersecting set is a "translate" of an intersecting set containing the identity.The following corollary shows that, as far as the EKR-module property is concerned, we can restrict ourselves to maximum intersecting sets containing the identity. Corollary 2.6.Let G be a finite group with the identity 1 G , H < G, and Then G on Ω satisfies the EKR-module property if and only if χ(v S 0 ) = 0 for any S 0 ∈ S 0 and any χ ∈ C. 
Proof.At first, we assume that χ(v S 0 ) = 0, for all S 0 ∈ S and χ ∈ C. Fix a χ ∈ C and a maximum intersecting set S .Let P : G → GL n (C) be a unitary representation affording χ as its character.Given a set X ⊂ G, define M X := x∈X P(x).We observe that For any t ∈ S, the set St −1 is a maximum intersecting set containing the identity 1 G .Therefore, we have χ(v St −1 ) = χ(v H ) = 0, for all t ∈ S. Thus by (2), we have T r(M S M † S ) = 0.As M † S is the conjugate transpose of M S , the matrix M S M † S is a diagonal matrix whose entries are the norms of rows of M S .Thus, T r(M S M † S ) = 0 implies that M S = 0. We can now conclude that χ(v S ) = T r(M S ) = 0.By Theorem 1.5, the action of G on Ω satisfies the EKR-module property. The other direction follows directly from Theorem 1.5. Results from spectral graph theory have proved useful in characterizing maximum intersecting sets in some permutation groups (for instance, see [25], [26], [27]).Let G be a group acting on Ω = [G : H], for some H ≤ G.An element g ∈ G is called a derangement if it does not fix any point in Ω.Let Der(G, Ω) denote the set of derangements in G.It is easy to see that Der(G, Ω) = G \ g∈G gHg −1 .By Γ G,Ω , we denote the Cayley graph on G, with Der(G, Ω) as the "connection set".We now observe that intersecting sets in G are the same as independent sets/co-cliques in Γ G,Ω .This observation enables us to use some popular spectral bounds on sizes of independent sets in regular graphs.Before describing these, we recall some standard definitions. For graph X on n vertices, a real symmetric matrix M whose rows and columns are indexed by the vertex set of X, is said to be compatible with X, if M u,v = 0 whenever u is not adjacent to v in X.Clearly, the adjacency matrix of X is compatible with X.Given a subset S of the vertex set, by v S , we denote the characteristic vector of S .We now state the following famous result which is referred to as either the Delsarte-Hoffman bound or the ratio bound. The application of the above lemma on clever choices of Γ G,Ω -compatible matrices, proved useful in characterization of maximum intersecting sets for many permutation groups (for instance see [26] and [27]).We will now describe these in detail.Definition 3.2.Let G be a group acting transitively on a set Ω. A (G, Ω)-compatible class function is a real valued class function f : G → R such that: (i) f (g) = 0 for all g Der(G, Ω); and (ii) We now describe the spectra of such matrices.The description of spectra of matrices of the form M f is a special case of well-know results by Babai ([5]) and Diaconis-Shahshahani ( [10]).The following lemma, which is a special case of Lemma 5 of [10], describes the spectra of matrices of the form M f . We are now ready to give a sufficient condition for EKR-module property in terms of spectra of Γ G,Ω -compatible matrices.Let f : G → R be a (G, Ω)-compatible class function.Then the row sum of M f is r f := g∈G f (g).Let τ be the least eigenvalue of M f .By Lemma 3.1, for any intersecting set S , we have Let us assume that equality holds for some intersecting set S. By Lemmas 3.1 and 3.3, if S is an maximum intersecting set, then v S is in the 2-sided ideal At this point, we remark that the proofs of EKR ( [27]) and EKR-module properties ( [25]) of 2-transitive groups, involved finding a class function that satisfies the conditions of Theorem 3.4. 
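For reference, the ratio bound invoked above (Lemma 3.1, quoted from [17, Theorem 2.4.2]) reads as follows: if $M$ is a real symmetric matrix compatible with a graph $X$ on $n$ vertices, with constant row sum $d$ and least eigenvalue $\tau$, then every independent set $S$ of $X$ satisfies

\[
|S| \;\le\; \frac{n\,(-\tau)}{d - \tau},
\]

and when equality holds the vector $v_{S} - \frac{|S|}{n}\mathbf{1}$ lies in the $\tau$-eigenspace of $M$.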
Proof of Theorem 1.6 In this section, we prove Theorem 1.6.By Corollary 2.2, we can restrict ourselves to permutation groups that contain a regular normal subgroup.Let A be a finite group and H Aut(A).We consider the permutation action of G := A ⋊ H on A, defined by (a, σ) • b = aσ(b), for all a, b ∈ A and σ ∈ H.It is well-known that any permutation group with a regular normal subgroup, is of the form G Sym(A).By [1, Corollary 2.2], permutation groups which contain a regular subgroup, satisfy the EKR property.Thus the action of G on A satisfies the EKR property. Before starting the proof, we prove an elementary result that we will use later.Every element of G is of the form (a, σ), where a ∈ A and σ ∈ H.Note that (a, σ)(b, π) = (aσ(b), σπ).We need the following well-known result for technical reasons.We will now prove the theorem by using Corollary 2.6.Let S 0 be any maximum intersecting set with 1 G ∈ S 0 .As S 0 is an intersecting set, for all s ∈ S 0 , the element s = s1 −1 G fixes some point.Thus by Lemma 4.1, given s ∈ S 0 , there exists a unique element σ s ∈ H and an element a s ∈ A, such that a −1 s sa s = σ s ∈ H.We now claim that {σ s : s ∈ S 0 } = H.Since G satisfies the EKR property, we have Suppose that for some s, r ∈ S 0 , we have σ s = σ r .Then, we have As S 0 is an intersecting set, sr −1 ∈ A fixes a point.Since A acts regularly, we must have sr −1 = 1.Thus s → σ s is injective, and {σ s : s ∈ S 0 } = H.As s ∈ S 0 is conjugate to σ s , for any ψ ∈ Irr(G), we have and S 0 is a maximum intersecting set containing 1 G , we have χ(v S 0 ) = 0. Now, by Corollary 2.6, Theorem 1.6 is proved. EKR-module property for primitive rank 3 group actions. In this section, we study the EKR-module property for primitive permutation groups of rank 3, and prove Theorem 1.7.Let G be a primitive permutation group on Ω of rank 3. To prove Theorem 1.7, we may assume that G is not an almost simple group.Then either (a) G is affine, so that G has a regular normal subgroup, or (b) G is in product action, and G T ≀ S 2 on Ω 2 , where T Sym(Ω) is 2-transitive. If G is affine, then G indeed has the EKR-module property by Theorem 1.6.We thus assume further that G is in product action in the rest of this section. In view of Corollary 2.6, it is beneficial to obtain descriptions of the set Irr(G) of irreducible characters of G, and of the maximum intersecting sets in G containing the identity.As one would expect, the 2-transitive action T on Ω plays a major role.Before going any further, we establish some notation.In G = T ≀ S 2 = (T × T ) ⋊ S 2 , by π, we denote the unique 2-cycle in S 2 .Elements of G \ (T × T ) are of the form (s, r)π, where s, r ∈ T .By (s, r)π, we denote the product of elements (s, r) and π of G. We start by describing Irr(G).The subgroup N := T × T of G is a normal subgroup of index 2.By Clifford theory ([19, 6.19]), restriction of any irreducible character ν ∈ Irr(G) to N is either an irreducible G-invariant character of N, or the sum of two G-conjugate irreducible characters of N. From well-known results on characters of direct products, we have Irr(N) = {χ × λ : χ, λ ∈ Irr(T )}. 
Let χ, λ be two distinct irreducible characters of T , then the inertia subgroup in G of χ × λ is N, and therefore σ χ,λ := Ind G N (χ × λ) is an irreducible character of G, with Res G N (σ χ,λ ) = χ × λ + λ × χ.Now consider an irreducible character of N, of the form χ × χ.Let P : T → GL(V) be a representation affording χ as its character.Then P ⊗ P : N → GL(V ⊗ V) is a representation of N that affords χ × χ as its character.Let π be the unique 2-cycle in S 2 .Define Ψ : G → GL(V ⊗ V) to be the representation such that Ψ| N = P ⊗ P and Ψ(π)(v ⊗ w) = w ⊗ v for all v, w ∈ V.The character ρ χ afforded by Ψ is an irreducible character of G that extends χ × χ.We also have ρ χ ((s, r)π)) = χ(rs) for all r, s ∈ T .By a result of Gallagher ([19, 6.17]), there is exactly one other irreducible character of G whose restriction to N is χ × χ, namely βρ χ , where β is the unique non-trivial linear character with kernel N. Therefore by Clifford theory any irreducible character is one of the characters defined above.We now describe the permutation character for the action G on Ω 2 .As T is a 2-transitive group, there is ψ ∈ Irr(T ) be such that 1+ψ is the permutation character for T .Computation shows that Λ := 1 + ρ ψ + σ ψ,1 is the permutation character for G. The next lemma follows from the proof of Lemma 3.5 of [18], which is essentially the same as Lemma 4.2 of [3]. Lemma 5.2. Every maximum intersecting set for the action of T × T on Ω 2 is of the form S × R, where S and R are maximum intersecting sets with respect to the action of T on Ω We now give the following characterization of maximum intersecting sets in G = T ≀ S 2 . Lemma 5.3.The action of G on Ω 2 satisfies the EKR property.If S is a maximum intersecting set in G that contains the identity, then there are maximum intersecting sets X, W, Z in T such that: )π, and (ii) W and Z contain the identity of T . Proof.As T is a 2-transitive group, by the main results of [27] and [25], the action of T on Ω satisfies both EKR and EKR-module properties.By Lemma 5.2, a maximum intersecting set for the action of N := T × T on Ω 2 is of the form S × R, where S and R are maximum intersecting sets in T .Therefore, the action of N on Ω 2 also satisfies the EKR property.The subgroup N of G is a transitive subgroup satisfying the EKR property, and so by Lemma 3.3 of [27], we see that the action of G also satisfies the EKR property.We consider a maximum intersecting set S with respect to the action of G on Ω 2 .We further assume that S contains the identity element.With this assumption, every element of S must fix a point in Ω 2 .Now S ∩ N and (S ∩ Nπ)π −1 are intersecting sets with respect to the action of N on Ω 2 .We note that H × H ≤ N is a point stabilizer for this action.Given x ∈ X and y ∈ Y, consider the element (x, y)π ∈ (S ∩ Nπ) ⊂ S. As we assume that S contains the identity, (x, y)π must fix a point.That is to say, 0 Λ((x, y)π) = 1 + ψ(xy), where Λ and ψ are as described prior to the statement of the lemma.As 1 + ψ is the permutation character for the action of T on Ω, 1 + ψ(xy) 0 if and only if xy ∈ T fixes a point of Ω.Thus for for a given y ∈ Y, the set X ∪ {y −1 } is an intersecting set in T .As X is a maximum intersecting set in T , we must have y −1 ∈ X.This shows that Y = X −1 . 
We recall that Λ = 1 + ρ ψ + σ ψ,1 is the permutation character for the action of G on Ω 2 , where ψ ∈ Irr(T ) is such that 1 + ψ is the permutation character for the action of T on Ω.By Corollary 2.6, EKR-module property of G is equivalent to showing that ν(v S ) = 0 for all maximum intersecting sets S that contain the identity and ν ∈ Irr(G) \ {1, σ ψ,1 , ρ ψ }.Let S 0 be a maximum intersecting set in G such that 1 G ∈ S 0 .By Lemma 5.3, there are maximum intersecting sets X, W, Z in T such that : Z and W contain the identity of T ; and S 0 = W ×Z ∪ Z × Z −1 π.For any distinct pair χ, λ ∈ Irr(T ), we can compute the following character sums: We need to compute χ(v S ), for all χ ∈ Irr(T ) and all maximum intersecting sets S in T .To do so, we use the EKR and EKR-module properties of 2-transitive groups. Proof.As T is 2-transitive, it satisfies both the EKR and EKR-module properties.By Corollary 2.5, v S is in the ideal J = e 1 + e ψ .By the orthogonality relations among primitive central idempotents, we see that left multiplication by e 1 + e ψ is a projection onto J.That is (e 1 + e ψ )(v S ) = v S .Writing both sides as a linear combination of the elements in the basis set {t ∈ T } of the group algebra CT , and equating the coefficients of the identity element on both sides, yields the first two formulae.Part (iii) is a direct consequence of Corollary 2.6. Pick a maximum intersecting set S 0 in G such that 1 G ∈ S 0 , and let ν ∈ Irr(G) \ {1, σ ψ,1 , ρ ψ , βρ ψ }.Now applying Lemma 5.4 (ii) and the character sum formulas (I) (II) and (III) given above yields that ν(v S 0 ) = 0.As 1, σ ψ,1 , ρ ψ are the only irreducibles that contribute to the permutation character for the action of G on Ω 2 , in view of Corollary 2.6, we need to show that βρ ψ (v S 0 ) = 0.This is indeed true by Lemma 5.4, and then Theorem 1.7 follows from an application of Corollary 2.6. 6. Some groups satisfying the EKR-module property. In this section, we study groups satisfying the EKR-module property.Recall (from Definition 1.8) that a finite group G satisfies the EKR-module property if every transitive action of G satisfies the EKR-module property.We first prove Theorem 1.9, and then prove the smallest non-abelian simple group A 5 satisfies the EKR-module property. 6.1.Proof of Theorem 1.9.In this subsection, we consider transitive actions of nilpotent groups of nilpotency class 2. By [6, Theorem 3], all transitive actions of nilpotent groups satisfy the EKR property.In the same paper, it was also shown that there are examples of class-2 nilpotent groups that do not satisfy the strict-EKR property.We will show that all transitive actions of class-2 nilpotent groups satisfy the EKR-module property.Our proof is a proof by contradiction. Recall the following well-known result from character theory.Lemma 6.1.Let G be a group, ψ an irreducible complex character of G, and z an element of the centre of G. Then for all g ∈ G, we have ψ(gz) = ψ(g)ψ(z). Assume that Theorem 1.9 is false.Let N be a class-2 nilpotent group N, and H N such that the action of N on Ω = [N : H] does not satisfy the EKR-module property.We may further assume that |N| + |Ω| is as small as possible.By the minimality of (N, Ω) and by Corollary 2.2, the action of N on Ω must be a permutation action.In other words, H is core-free, that is, As the action of N on Ω does not satisfy the EKRmodule property, by Theorem 1.5, there is a character χ ∈ {ψ : ψ ∈ Irr(N) & ψ(v H ) = 0} and a maximum intersecting set S such that χ(v S ) 0. We fix one such pair χ, S. 
As N is nilpotent, it has a non-trivial centre, which we denote by Z.Given a character ψ of N, we denote its kernel, {n ∈ N : ψ(n) = ψ(1)}, by ker(ψ).As every non-trivial normal subgroup of a nilpotent group intersects non-trivially with the centre, we either have ker(χ) ∩ Z {1 N }, or that χ is a faithful character. We first assume that χ is faithful.As N is a class-2 nilpotent group, we have Z ⊃ [N, N].Since N is non-abelian, given y ∈ N \Z, we can pick x ∈ N be such that z := xyx −1 y −1 1 N .As χ is faithful, we have χ(z) 1.Since xyx −1 = zy, we have χ(y) = χ(xyx −1 ) = χ(yz).As z is a central element, by Lemma 6.1, we have χ(y) = χ(y)χ(z).As χ(z) 1, we must have χ(y) = 0 for all y ∈ N \ Z. Recall that H is core-free, and thus H ∩ Z = {1 N }.We can now conclude that χ(v H ) = χ(1) 0. This contradicts our initial condition that χ(v H ) = 0, and therefore χ cannot be faithful.Now we are left with the case when χ is not faithful.By ker(χ), we denote the kernel of a corresponding representation.We set Z χ = ker(χ) ∩ Z.We note that Z χ is a nontrivial normal subgroup of N. As H is a core-free subgroup, we have Z χ ∩ nHn −1 = {1 N }, for all n ∈ N. Thus the action of Z χ on Ω is semi-regular.If the action of Z χ is regular, then it is a regular normal subgroup, and thus by Theorem 1.6, the action of N on Ω must satisfy the EKR-module property.As this contradicts our assumption, the action of Z χ on Ω must be semi-regular and intransitive.As Z χ ⊳ N, the set Ω of Z χ orbits on Ω, is a block system for the action of N on Ω.Since We can now see that SZ χ is a maximum intersecting set with respect to the action of N on Ω.As Z χ ≤ ker(χ) is a central subgroup, by using Lemma 6.1, we have . By our choice of χ and S, χ(v H ) = 0 and χ(v S ) 0. Therefore SZ χ is a maximum intersecting set with respect to the action of N on Ω and χ is a character in {ψ ∈ Irr(G) : ψ(v HZ χ ) = 0}, such that χ(v SZ χ ) 0. So by Theorem 1.5, the action of N on Ω does not satisfy the EKR-module property.Now since |N| + | Ω| |N| + |Ω|, this conclusion contradicts the minimality of (N, Ω).Therefore our assumption that χ is not faithful must be false. Both cases return contradictions, and hence our initial assumption that Theorem 1.9 fails, must be false.This concludes the proof. Theorem 1.9 and Theorem 3 of [6] establish the existence of infinitely many groups that satisfy the EKR and EKR-module property, but not the strict-EKR property.Theorem 2 of [6] shows that groups that satisfy the EKR property are necessarily solvable.However, the EKR-module property is not so restrictive. 6.2. A group satisfying the EKR-module property is not necessarily solvable.Lemma 6.2.The simple group A 5 satisfies the EKR-module property. Proof.Let H be a subgroup of A 5 .We need to show that the action of A 5 on Ω H = [A 5 : H] satisfies the EKR-module property.Assume that H is a subgroup satisfying (3) {χ ∈ Irr(A 5 ) : χ(v H ) 0} = Irr(A 5 ). Then by Theorem 1.5, the action of A 5 on Ω H satisfies the EKR-module property.Computation shows that the relation (3) fails if and only if H is isomorphic to one of the groups: Z 2 2 , Z 5 , S 3 , D 10 , A 4 .(Here D 10 denotes the dihedral group of order 10.)We will deal with groups separately. When H is isomorphic to one of D 10 or A 4 , the action of G on Ω H is 2-transitive.Hence by the main result of [25], these group actions satisfy the EKR-module property. 
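The character table of $A_5$, referred to below as Table 1, is standard. With $\varphi = (1+\sqrt{5})/2$, class representatives in the first row and class sizes in the second, it reads (the two classes of 5-cycles may be interchanged, depending on the chosen representatives):

\[
\begin{array}{c|ccccc}
\text{class} & 1 & (12)(34) & (123) & (12345) & (13524) \\
\text{size}  & 1 & 15       & 20    & 12      & 12      \\
\hline
\chi_{1}  & 1 & 1  & 1  & 1          & 1          \\
\chi_{3}  & 3 & -1 & 0  & \varphi    & 1-\varphi  \\
\chi_{3'} & 3 & -1 & 0  & 1-\varphi  & \varphi    \\
\chi_{4}  & 4 & 0  & 1  & -1         & -1         \\
\chi_{5}  & 5 & 1  & -1 & 0          & 0
\end{array}
\]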
Consider a subgroup H 1 Z 5 .We will use Theorem 3.4 to show that the action of A 5 on Ω H 1 satisfies the EKR-module property.For this, we need the character table of A 5 , which is given as Table 1.In this case, the set Der(G, Ω H 1 ), of derangements, is the union Table Finally, we consider a subgroup H 3 S 3 in A 5 and the action of A 5 on Ω H 3 .The set Der(G, Ω H 2 ), of derangements, is the union of conjugacy classes C 3 and C 4 .Let f 3 be be the (G, Ω H 3 )-compatible class function satisfying we see that K is an intersecting set with respect to this action.Now, setting S = K and f = f 3 , Theorem 3.4 yields that this action satisfies the EKR-module property. EKR-module property in Strongly Regular Graphs In the section, we consider maximum cliques in the Peisert-type strongly regular graphs defined in [4].These are a subclass of strongly regular graphs found in [7].Consider a strongly regular graph X, with a prescribed set C of "naturally" occurring cliques.Cliques in C will be called canonical cliques.We say that X satisfies the EKR-module property with respect to C if the characteristic vector of every maximum clique in X is a linear combination of characteristic vectors of the cliques in C. We now define Peisert-type graphs.Definition 7.1.Let q be an odd prime power.Then a Peisert-type graph of type (m, q) is a Cayley graph on the additive group of F q 2 with its "connection" set S being a union of m cosets of F × q in F × q 2 such that F × q ⊂ S . Given a Peisert-type graph of type (m, q), with connection set S .For any s ∈ S and x ∈ F q 2 , the set sF q + x is a naturally occurring clique.By a canonical clique in a Peisettype graph, we mean a clique of the form sF q + x, where s ∈ S and x ∈ F q 2 .The main result of [4] is the following shows that Peiser-type graphs satisfy EKR-module property.In this section, we give a shorter and different proof of the same.We now collect some results about Peisert-type graphs and some general results about strongly regular graphs.The main result of [7] shows that Peisert-type graphs are strongly regular.A different proof of the same is given in [4].We will give a proof using a standard technique of finding eigenvalues of an abelian Cayley graph.Lemma 7.3.Peisert-type graph of type (m, q) is strongly regular with eigenvalues k := m(q − 1) with multiplicity 1, q − m with multiplicity m(q − 1), and −m with multiplicity q 2 − 1 − m(q − 1). Proof.Let X a Peisert-type graph of type (m, q) whose connection set is S = m−1 i=0 c i F × q (with c 0 = 1).Let A be the adjacency matrix of X. Considering an additive character of χ of F q as a column vector, we see that Aχ = χ(v S )χ, where χ(v S ) = s∈S χ(s). If χ is not the trivial character, Ker(χ) F q 2 , and so at most one of {c i F q : 0 ≤ i ≤ m−1} can be a subgroup of Ker(χ).Thus if c i F q ⊂ Ker(χ) for some i, then the restriction χ| c j F q of χ onto the subgroup c j F q , is a non-trivial character whenever j i.Otherwise, Ker(χ) will have two 1-dimensional subspaces of F q 2 and thus must be equal to F q 2 .Assume that χ is a non-trivial character with c i F q ⊆ Ker(χ).As the sum of values of a non-trivial character are zero, in this case, we have The set on non-trivial characters χ with c i F q ⊂ Ker(χ) is in one-one correspondence with the non-trivial characters of F q 2 /c i F q .Thus there are atleast m(q − 1) characters χ such that Aχ = (q − m)χ.As distinct characters are orthogonal the dimension of the (q − m)eigenspace of A is atleast m(q − 1). 
If χ is the trivial character χ 0 , we have Aχ 0 = |S |χ 0 .With this, we have found all the eigenvalues of A and their corresponding eigenspaces.As A has exactly three distinct eigenvalues, it is a strongly regular graph. Let X be a strongly regular graph with parameters (v, k, λ, µ), which is a k-regular graph on v vertices such that (i) any two adjacent vertices have exactly λ common neighbours, and (ii) any two non-adjacent vertices have exactly µ common neighbours.We further assume that X is primitive, that is, both X and its complement are connected.It is well known ([16, Lemma 10.2.1]) that the adjacency matrix A of X has exactly three distinct eigenvalue.As X is k-regular and connected, k is an eigenvalue of A with multiplicity 1.Let r, s with r > s be the other eigenvalues. Our proof uses some results on graphs in association schemes.For a quick introduction to the preliminaries on graphs in association schemes, we refer the reader to Chapter 3 of [17].We first recall the following well-known result linking strongly regular graphs with association schemes.By J and I, we denote the all-one matrix and the identity matrix respectively. Lemma 7.4.([17, Lemma 5.1.1])Let X be a graph with A as its adjacency matrix.Then X is strongly regular if and only if A X := {I, A, A := J − I − A} is an association scheme. By C[A X ], we denote the linear span of matrices in A X .This is referred to as the Bose-Mesner algebra.By a well-known result ( [17,Theorem 3.4.4]), the projections onto eigenspaces of A is an orthogonal basis of idempotents of C[A X ].The matrix J is the projection onto the k-eigenspace.We denote E r and E s to be the projections onto the reigenspace and the s-eigenspace respectively.We have A = kJ + rE r + sE s , I = J n + E r + E s , and so { J n , E r , E s } is an orthogonal basis of idempotents of C[A X ].We now mention a bound by Delsarte (see equation (3.25) of [9]) on cliques in strongly regular graphs.We state the formulation of this result as given in [17,Corollary 3.7.2].Lemma 7.5.Let X be k-regular strongly regular graph with s as the least eigenvalue of its adjacency matrix.If C is a clique in X, then |C| 1 − k s .Moreover, if C is a clique that meets the bound with equality, then the characteristic vector v C is orthogonal to the s-eigenspace. Given a subset B of the vertex set of X, by v B , we denote the characteristic vector of B, and by 1, the all-one vector.Consider the C-linear span V max of characteristic vectors of maximum cliques in X.By the above lemma, we have |C| ≤ 1 − k s , for any clique C. Assume that there is a clique C of size 1 − k s , then by the above Lemma, V max is orthogonal to the s-eigenspace.We will now show that V max is in the image of J n + E r .Lemma 7.6.Let X be k-regular strongly regular graph on n vertices with {k, r, s} with r > s as set of distinct eigenvalues of its adjacency matrix.If X has a clique of size 1 − k s , then v C − |C| n 1 is an r-eigenvector.Proof.From Lemma 7.5, we have E s v S = 0.As 1 is a k-eigenvector, it is also orthogonal to the s-eigenspace.Since Jv C = |C|1, the vector v C − |C| n 1 is orthogonal to both the k-eigenspace and the s-eigenspace, and so must lie in the r-eigenspace. We are now ready to prove Theorem 7.2. Proof of Theorem 7.2: Let X be a Peisert-type graph of type (m, q) whose connection set is S = m−1 i=0 c i F × q (with c 0 = 1).By Lemmas 7.3, 7.5 and 7.6, we obtain the next result. Lemma 7.7.If C is a maximum clique in X, then |C| = q and v C − 1 q 1 is a (q − m)eigenvector. 
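Lemma 7.7 also follows from a direct calculation: substituting the eigenvalues of Lemma 7.3, namely $k = m(q-1)$ and least eigenvalue $s = -m$, into the clique bound of Lemma 7.5 gives

\[
|C| \;\le\; 1 - \frac{k}{s} \;=\; 1 - \frac{m(q-1)}{-m} \;=\; 1 + (q-1) \;=\; q ,
\]

and the canonical cliques $c_{i}F_{q} + x$ already have $q$ vertices, so the bound is attained.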
Thus, given x ∈ F q 2 and 0 ≤ i m − 1, the canonical clique c i F q + x is a maximum clique and v c i ,x := v c i F q +x − 1 q 1 is a (q − m)-eigenevector.By Lemma 7.3, the dimension Theorem 1 . 5 . Let G be a finite group, H < G, and Ω = [G : H].Let C = {χ ∈ Irr(G) : χ(v H ) = 0}, and S be the collection of maximum intersecting sets in G. Then G on Ω satisfies the EKR-module property if and only if χ(v S ) = 0 for any S ∈ S and any χ ∈ C. Lemma 3 . 1 . ([17, Theorem 2.4.2])Let M be a real symmetric matrix with constant row sum d, which is compatible with a graph X on n vertices.If the least eigenvalue of M is τ, then for any independent set S in X,|S | ≤ n(−τ) d − τ ,and if equality holds, then e 1 + {χ : χ∈Irr(G) and λ χ, f =τ} e χ .Now by application of Lemma 2.4, we obtain the following sufficient condition for EKR-module property.Theorem 3.4.Let G be a group acting on the set Ω of left cosets of a subgroup H. Assume that there is an intersecting set S and a (G, Ω)-compatible class function f : G → R such that |S | = |G|(−τ) d − τ , where d = g∈G f (g) and τ is the least eigenvalue of M f .Then (a) |S | is the size of a maximum intersecting set in G; and (b) the action of G on Ω satisfies the EKR-module property if Since the action of N on Ω 2 satisfies the EKR property, we have |H × H| ≥ |(S ∩ N)| and |H × H| ≥ |(S ∩ Nπ)π −1 |.Now since the action of G on Ω 2 satisfies the EKR property, M is a point stabilizer, and S is a maximum intersecting set in G, we have 2|H × H| = |M| = |S| = |(S ∩ Nπ)π −1 | + |(S ∩ N)|.Therefore, both S ∩ N and (S ∩ Nπ)π −1 are maximum intersecting sets in N. Using Lemma 5.2, we see that there are maximum intersecting sets W, Z, X, Y in T , such that (i) S ∩ N = W × Z; and (ii) (S ∩ Nπ) = (X × Y)π.As S ∩ N contains the identity of N, W and Z must contain the identity of T .We will now show that X −1 = Y. Z χ acts intransitively, we have | Ω| |Ω|, and thus |N| + | Ω| |N| + |Ω|.We now consider the transitive action of N on Ω.The elements of HZ χ fix the Z χ -orbit containing the element H ∈ Ω. Observing that |N|/|HZ χ | = |N|/|H||Z χ | = |Ω|/|Z χ | = | Ω|, we can conclude that HZ χ is a stabilizer for the action of N on Ω.As S is an intersecting set with respect to the action of N on Ω, the set SZ χ is an intersecting set with respect to the action of N on Ω.Since Z χ is a central semi-regular subgroup in N ≤ Sym(Ω) and S is an intersecting set, we can conclude that |SZ χ | = |S||Z χ |.As we mentioned above, transitive actions of nilpotent groups satisfy the EKR property, and thus since S is a maximum intersecting set with respect to the action of N on Ω, we have |S| = |H|, and therefore |SZ χ | = |HZ χ |. Theorem 7 . 2 . ([4, Theorem 1.3])The characteristic vector of a maximum clique in a Peisert-type graph is a linear combination of characteristic vectors of its canonical cliques. St −1 is an intersecting set containing the identity.By the definition of an intersecting set, we have St −1 ⊂ 24, Theorem 5.2], and the following Example 1.3; Example 1.3.Consider the action of A 4 on the set Ω of cosets of a subgroup H Z 2 .We observe that the Sylow 2-subgroup N of A 4 is an intersecting set.As 4 = |N| > |A 4 /|Ω| = 2, this action does not satisfy the EKR property.Consider an intersecting set S. Then given t ∈ S, the set
12,762.4
2022-07-13T00:00:00.000
[ "Mathematics" ]
Reliable relay assisted communications for IoT based fall detection Robust wireless communication using relaying system and Non-Orthogonal Multiple Access (NOMA) will be extensively used for future IoT applications. In this paper, we consider a fall detection IoT application in which elderly patients are equipped with wearable motion sensors. Patient motion data is sent to fog data servers via a NOMA-based relaying system, thereby improving the communication reliability. We analyze the average signal-to-interference-plus-noise (SINR) performance of the NOMA-based relaying system, where the source node transmits two different symbols to the relay and destination node by employing superposition coding over Rayleigh fading channels. In the amplify-and-forward (AF) based relaying, the relay re-transmits the received signal after amplification, whereas, in the decode-and-forward (DF) based relaying, the relay only re-transmits the symbol having lower NOMA power coefficient. We derive closed-form average SINR expressions for AF and DF relaying systems using NOMA. The average SINR expressions for AF and DF relaying systems are derived in terms of computationally efficient functions, namely Tricomi confluent hypergeometric and Meijer’s G functions. Through simulations, it is shown that the average SINR values computed using the derived analytical expressions are in excellent agreement with the simulation-based average SINR results. Non-orthogonal multiple access (NOMA) is considered as a promising technology that will enable future wireless networks achieve massive connectivity, enhanced spectrum efficiency, energy efficiency, and user-fairness 9 .By utilizing superposition coding and successive interference cancellation (SIC), NOMA can serve multiple users simultaneously with different power levels 10 .This approach results in significant spectral efficiency gains over the conventional orthogonal multiple access (OMA) system 11 .Moreover, unlike the conventional opportunistic user scheduling, NOMA can serve users with different channel conditions reliably 12 . Cooperative relaying can be beneficial to future IoT networks as it has the ability to provide uninterrupted connectivity to the users whose channel conditions are not favourable.Recently, NOMA-based cooperative relaying system has gained significant research attention due to its ability to further increase the number of supported users and spectral efficiency 13 .In such systems, maximum ratio combining (MRC) is used at the destination to increase the spatial diversity.In addition, cooperative relaying systems can also help in enhancing the spectral efficiency of the network as two data symbols can be obtained at the destination in two time slots.Two common techniques for cooperative relaying include Amplify-and-Forward (AF) and Decode-and-Forward (DF). 
In this paper, we derive closed-form expressions of average received Signal to Interference and Noise Ratio (SINR) for NOMA-based AF and DF relayed systems for fall detection application.The average SINR performance is of key importance for the analysis and design of any communication network 14 .The knowledge of the statistics of SINR is useful in determining other important performance metrics such as spectral efficiency, coverage probability and symbol error rates.Moreover, the accurate characterization of the average SINR is essential as it is used to solve various communication network problems, e.g.link budget, user association and power control.The SINR analysis also helps in reliability analysis of communication reliability of different IoT applications such as fall detection. The major contribution of the paper are summarized below: • We consider an IoT based fall detection system and propose cooperative communications to improve the connectivity performance between the IoT sensors which act as source and fog devices which are the destination nodes.• We present an average SINR analysis of the NOMA-based AF relaying system, where the destination imple- ments MRC twice to obtain maximum diversity order.The average SINR expressions are derived for both data symbols which are transmitted in two time slots using the AF relaying mode.• We also derive average SINR expressions for the data symbols of the NOMA-based DF relaying system, where the destination utilizes a single MRC-based receiver structure to jointly decode the two data symbols.• Using the derived average SINR expressions for AF and DF relaying systems, we also present the upper bounded ergodic sum rate which is based on the Jensen's inequality.• Finally, we present numerical results to validate the analysis carried out in this paper.It has been shown that the results with the analytical expressions match with the results of Monte-Carlo simulations. Related works In this section, we provide an overview of fall detection application, recent work done in the area of IoT based fall detection and recent work done in the area of NOMA based cooperative communications. Overview of IoT based fall detection application The Fall detection application relies on the use of different type of sensors to monitor the movement of patients 5 . In case of an abnormality in the movement, it may further be scrutinized for a fall.In Fig. 1, we present a framework for fall detection application.There are four different layers of the fall detection application describe in the following. IoT layer The IoT layer involves the wearable device worn by the patients which can have different type of installed sensors.Three kind of sensors are normally used for fall detection.These include the motion sensors, physiological sensors and environmental sensors 3 . In the motion sensors, the movement of the patient is observed regularly.The sensors in this category can be an accelerometer which measures the velocity of the patient with respect to time.Similarly, a gyroscope can also be used to measure the angular position of the patient.Magnetometers can also track the position of the patients based on magnetic fields.Additionally, GPS sensors and indoor Wi-Fi fingerprinting mechanisms can also be used for fall detection 6 . 
The physiological sensors is another category of sensors that can help detection of fall.They rely on change in body parameters of the patient in case of a fall.These include sensors such as electrocardiography to monitor the electrical signals of the heart, spirometers to find the inward and outward flow of air in lungs, and galvanic skin response to measure the conductivity of the skin based on electrical signals 5 . The wearable device of the patients contain a variety of these sensors and regularly generate patient data which is to be sent to the fog devices installed in the hospitals or houses. Cooperative communication layer The goal of cooperative communication layer is to provide reliability in communications between the IoT devices and the fog devices.This communication can be severly degraded due to multi-path fading of the sender-receiver channel 15 .Hence, small relay nodes can be installed to boost the signal reception at the destination (fog devices).In this regard, AF and DF relays are used.In the AF relay, the sender's signal is simply amplified and forward towards the destination.On the other hand, in the DF relays, the signal is first decoded, modulated and then retransmitted towards the destination. Fog layer The Fog layer includes fog nodes placed at several locations in the hospital or homes.The reason of placing these fog servers near the edge is to reduce the transmission latency of the tasks.The purpose of these fog devices is to collect data from different patients, apply machine learning algorithms to detect falls and notify the patients/ healthcare providers about the fall.Classification algorithms can be easily applied at this layer which can monitor the regular movements data of the patient and detect abnormalities.Furthermore, complex machine learning algorithms can also be used to develop a local fall detection model. Cloud layer The data from fog layer can be transmitted towards the central data servers.A federated learning approach may be utilized to share data between the fog layer and the cloud layer.Instead of sharing the raw data with the cloud layer, the fog layer can only share the local fall detection model parameters with the cloud layer.The cloud server upon receiving local fall detection model parameters from many fog devices can apply sophisticated machine learning algorithms to come up with a global fall detection model.The revised model parameters are sent back to the fog devices for update of the local model parameters.The cloud server also conducts fall detection profiling at the patient and cohort level.This corresponds to use of patients' demographic data such as age, weight etc. and its sensor obtained data to find different correlations for fall detection and prevention. Literature review of IoT based fall detection Table 1 presents a summary of recent work done related to IoT based fall detection.In 16 , authors utilize accelerometer to measure the position of the patients in real time.Based on the movement data, a threshold based algorithm is developed to detect falls.The threshold algorithm is executed on the cloud server.The communication between the sensors and the cloud is maintained by using WiFi technology and fall detection event notifications are disseminated using SMS. 
The work in [17] utilizes a fall detection dataset based on three different sensors, namely an accelerometer, a gyroscope and a magnetometer. Based on the datasets, the work utilizes two machine learning algorithms, namely a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Patients are equipped with two wearable devices, one at the waist and the other at the neck. Both of these algorithms are executed on the wearable device itself, and once a fall event is detected, the wearables communicate with each other using Bluetooth. The neck wearable device is an airbag helmet that saves the patient from injury in case of a fall event. Data from the wearable devices is also communicated to the cloud server for further analysis and to improve the learning models.

In [18], the authors use three sensors for fall detection: an accelerometer, a Doppler sensor and a sound sensor. Data from these sensors is fed to an Artificial Neural Network (ANN) executed on the fog devices and the cloud server. Communication between the devices is carried out using WiFi.

The work in [19] uses a camera to record video of the area in which the patients are present. The video data is given as input to a deep CNN, and fall events are detected at the cloud. The video data is shared with the cloud server over WiFi.

In [20], a 3D accelerometer is used to obtain movement data of the patients. Different machine learning algorithms are used to detect fall events, including decision trees, ensemble based techniques, logistic regression and DeepNets. The algorithms are executed at the cloud, and sensor-to-cloud communication is achieved using 6LoWPAN technology.

The work in [21] uses accelerometer datasets obtained from different online sources and applies CNN and clustering algorithms to detect falls at the cloud. Once a fall occurs, the ECG signal of the patient is observed and notifications are sent using SMS.

Compared to the existing frameworks, this paper focuses on a cooperative communication approach for sharing data between the sensors and the fog nodes. We assume that patients are equipped with an accelerometer sensor and that this data needs to be transmitted to the fog nodes so that classification algorithms can be applied.

Literature review of NOMA based cooperative communications
Many existing studies have investigated the performance gains of cooperative NOMA relay systems in terms of outage probability and ergodic achievable rate. The analysis of the achievable rate for a NOMA based relaying system is performed in [22] with Rayleigh fading channels and a decode-and-forward (DF) relay. The authors in [23] investigate the ergodic sum rate and outage performance of the NOMA based relaying system. For Rician fading channels, the average achievable rate expression is derived for the NOMA relaying system in [24] using a DF relay. By employing a relay in amplify-and-forward (AF) mode, the authors in [25] calculate the asymptotic outage probability and ergodic sum rate of the NOMA based relaying system. There are numerous other studies [26-31] that provide useful performance analyses of NOMA-based communication systems.
NOMA cooperative dual-hop relaying is investigated in [32], where the selection of the best relay from a set of multiple relays is based on a max-min signal-to-interference-plus-noise ratio (SINR) criterion. Ergodic sum-rate and outage probabilities are derived in [32] for the DF and AF protocols. The performance of a full duplex relay assisted cognitive radio network with NOMA is evaluated in [33]. The secondary and primary users are coupled using the NOMA transmission strategy, such that a common relay is used for cooperative transmission. Accurate closed-form expressions for the outage probability and average throughput are also presented. It has been shown in [33] that the full duplex relay assisted cognitive radio network gives superior performance compared to its half duplex counterpart. The impact of imperfect SIC on the performance of space-time block code (STBC) based cooperative NOMA is investigated in [34]. The closed-form expression of the ergodic capacity with imperfect SIC is derived. The SINR expressions of the two weak-user nodes are derived in [35] using the AF relay cooperative NOMA model. At high SNR, an asymptotically tight approximation of the symbol error rate (SER) is also derived using the moment-generating function (MGF). The performance of cooperative NOMA based internet of things (IoT) networks for the generalised non-homogeneous fading channel model is investigated in [36], where the Meijer G-function is used to derive closed-form analytical expressions of the outage probability for secondary NOMA users.

System model
We consider a communication scenario for the fall detection application based on Fig. 1. The considered system consists of a motion-sensor source, a relay acting as the cooperative communication node, and a fog-server destination node, represented by S, R and D, respectively, as shown in Fig. 2. For an IoT healthcare application, the source can be the sensor placed on the patient, and the destination can be the data server located in the home or hospital. The relay nodes can be wireless nodes placed at different locations to enhance the data connectivity. In the considered system, each node is equipped with a single antenna, and it is assumed that the relay operates in a half-duplex mode. The source node transmits two symbols s_1 and s_2 using the superposition coding $s = \sqrt{P a_1}\, s_1 + \sqrt{P a_2}\, s_2$ in the first time slot, where P is the total transmission power at the source. It may be noted that a_1 and a_2 are the NOMA power levels such that a_1 + a_2 = 1. It is further assumed that the direct S → D link experiences severe fading, so that the symbol s_1 is allocated more power, i.e., a_1 > a_2 [23]. The depiction of the system model is shown in Fig. 2, and the detection of the symbols in the AF and DF modes is discussed in the following subsections.

AF relaying
Here, we present the system model of the fall detection application that uses the NOMA-based AF relaying system. The system model follows the receiver design of [25]. In the n-th time slot, the signals received at the destination and at the relay can be written as

$y_d^{AF}[n] = h_{sd}\, s + w_d[n]$ and $y_r^{AF}[n] = h_{sr}\, s + w_r[n],$

respectively, where h_sd and h_sr denote the Rayleigh distributed flat fading channels of the source-destination S → D and source-relay S → R links, respectively. P denotes the total transmit power at the source and relay nodes. The additive white Gaussian noise (AWGN) values at the destination and relay for the n-th time slot are denoted by w_d[n] and w_r[n], respectively. For convenience, the noise variances at the relay and destination are assumed to be the same, represented by σ².
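Before moving to the relay processing, the first-hop signal model above can be illustrated numerically. The short sketch below generates one superposition-coded NOMA transmission and the corresponding received signals at the destination and relay over Rayleigh flat fading; the parameter values are arbitrary and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not taken from the paper's numerical section).
P = 1.0                 # total transmit power at the source
a1, a2 = 0.6, 0.4       # NOMA power levels, a1 + a2 = 1 and a1 > a2
sigma2 = 0.1            # noise variance at relay and destination

# BPSK data symbols for the two superposed streams.
s1 = rng.choice([-1.0, 1.0])
s2 = rng.choice([-1.0, 1.0])

# Superposition-coded transmit signal in the first time slot.
s = np.sqrt(P * a1) * s1 + np.sqrt(P * a2) * s2

def rayleigh_coeff():
    """One realization of a zero-mean complex Gaussian (Rayleigh-magnitude) channel."""
    return (rng.normal() + 1j * rng.normal()) / np.sqrt(2)

def awgn():
    return np.sqrt(sigma2 / 2) * (rng.normal() + 1j * rng.normal())

h_sd, h_sr = rayleigh_coeff(), rayleigh_coeff()

# Received signals at the destination and at the relay in time slot n.
y_d = h_sd * s + awgn()
y_r = h_sr * s + awgn()
print(y_d, y_r)
```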
The relay amplifies the received signal with a gain factor determined as $A = P/(P|h_{sr}|^2 + \sigma^2)$. It may be noted that the destination does not decode the received signal in the n-th time slot; rather, it waits for the amplified signal transmitted by the relay in the (n+1)-th time slot. Therefore, the signal received at the destination during the (n+1)-th time slot can be written as

$y_d^{AF}[n+1] = A\, h_{rd}\, y_r^{AF}[n] + w_d[n+1],$

where h_rd denotes the Rayleigh distributed flat fading channel of the relay-destination R → D link. In order to decode the symbol s_1, the signals received at the destination, y_d^AF[n] and y_d^AF[n+1], are combined using MRC with weighting factors q_{s_1}[n] and q_{s_1}[n+1]. Therefore, the combined signal at the destination becomes

$\tilde{y}_{s_1} = q_{s_1}[n]\, y_d^{AF}[n] + q_{s_1}[n+1]\, y_d^{AF}[n+1].$

The weighting factors are chosen according to the MRC rule, i.e., matched to the complex conjugates of the effective channels of the two observations, where (·)* denotes the conjugate transpose. Note that SIC is implemented separately on the received signals of the two time slots in order to remove the symbol s_1 interference at the destination. To decode the symbol s_2, the weighting factors q_{s_2}[n] and q_{s_2}[n+1] are chosen analogously.

From [25], we can express the combined SINR of the symbol s_1 as the sum of the SINR of the direct link, γ_daf−s1, and the SINR of the relayed link, γ_raf−s1, which is given by

$\gamma_{caf-s_1} = \gamma_{daf-s_1} + \gamma_{raf-s_1},$

where $\gamma_{ij} = P|h_{ij}|^2/(\sigma^2 D_{ij}^{\alpha})$, with $ij \in \{sd, sr, rd\}$, represents the exponentially distributed instantaneous SNR of the corresponding link. |h_ij|² is the instantaneous channel gain between the nodes i and j, D_ij is the link distance between the nodes i and j, and α denotes the path-loss exponent. Similarly, the combined SINR of the symbol s_2 can be expressed as the sum of the SINR of the direct link, γ_daf−s2, and the SINR of the relayed link, γ_raf−s2, such that

$\gamma_{caf-s_2} = \gamma_{daf-s_2} + \gamma_{raf-s_2}.$

The average SINR analysis of both the symbols s_1 and s_2 for the NOMA-based AF system is presented in Section "SINR analysis of AF relayed system".

DF relaying
We utilize the receiver design of [23] for the NOMA-based DF relaying system in the considered fall detection communications. In the case of the DF relay transmission, the signals received at the fog node destination and at the relay in the n-th time slot can be written as

$y_d^{DF}[n] = h_{sd}\, s + w_d[n]$ and $y_r^{DF}[n] = h_{sr}\, s + w_r[n],$

respectively. After the first transmission phase, the symbol s_1 is decoded at the relay, which is followed by the detection of the symbol s_2 using SIC. During the (n+1)-th time slot, the relay transmits the symbol s_2 to the destination. The received signal in the (n+1)-th time slot at the destination is given by

$y_d^{DF}[n+1] = \sqrt{P}\, h_{rd}\, s_2 + w_d[n+1].$

After the completion of the second transmission phase, the destination employs MRC on the received signals of the two time slots, where the MRC weights are matched to the corresponding channel coefficients. From the resultant signal, the symbol s_2 is decoded first at the destination, while treating the symbol s_1 as interference. Using SIC, the symbol s_1 is then obtained at the destination. Now, we can write the SINR of the symbol s_1, denoted by γ_cdf−s1, and the SINR of the symbol s_2, represented by γ_cdf−s2, at the destination as

$\gamma_{cdf-s_1} = \min\!\left(\frac{a_1 \gamma_{sr}}{a_2 \gamma_{sr} + 1},\; a_1 \gamma_{sd}\right)$ and $\gamma_{cdf-s_2} = \min\!\left(a_2 \gamma_{sr},\; \frac{a_2 \gamma_{sd}}{a_1 \gamma_{sd} + 1} + \gamma_{rd}\right),$

respectively. The average SINR analysis of both the symbols s_1 and s_2 for the NOMA-based DF system is presented in Section "SINR analysis of DF relayed system".
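The per-symbol combined SINRs used in the next section can be evaluated per channel realization. The sketch below does this for both relaying modes. The explicit formulas follow the random-variable relations stated in Lemmas 1-4 (with U, V and W the instantaneous S→D, S→R and R→D SNRs); the code itself and its parameter values are only an illustrative reading of those relations, not the paper's implementation.

```python
import numpy as np

def af_sinrs(U, V, W, a1, a2):
    """Per-symbol combined SINRs for the AF mode.

    U, V, W are the instantaneous SNRs of the S->D, S->R and R->D links.
    The forms follow the random-variable relations stated in Lemmas 1 and 2
    (direct plus relayed contributions combined by MRC).
    """
    g_s1 = a1 * U / (a2 * U + 1) + a1 * V * W / (a2 * V * W + V + W + 1)
    g_s2 = a2 * U + a2 * V * W / (V + W + 1)
    return g_s1, g_s2

def df_sinrs(U, V, W, a1, a2):
    """Per-symbol SINRs for the DF mode, following the relations in Lemmas 3 and 4."""
    g_s1 = np.minimum(a1 * V / (a2 * V + 1), a1 * U)
    g_s2 = np.minimum(a2 * V, a2 * U / (a1 * U + 1) + W)
    return g_s1, g_s2

# One illustrative channel realization with exponentially distributed link SNRs.
rng = np.random.default_rng(2)
mean_sd, mean_sr, mean_rd = 2.0, 6.0, 6.0      # assumed average link SNRs
U, V, W = (rng.exponential(m) for m in (mean_sd, mean_sr, mean_rd))
print("AF:", af_sinrs(U, V, W, 0.6, 0.4))
print("DF:", df_sinrs(U, V, W, 0.6, 0.4))
```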
SINR analysis of AF relayed system
In this section, we derive the average received SINRs for the symbols s_1 and s_2 in the NOMA-based AF relaying system for fall detection communications. For our analysis, we define $\bar{\gamma}_{sd}$, $\bar{\gamma}_{sr}$ and $\bar{\gamma}_{rd}$ as the average received signal-to-noise ratios (SNRs) of the S → D, S → R and R → D links, respectively. We define three random variables, denoted by U, V and W, representing γ_sd, γ_sr and γ_rd, respectively. The mean values of the exponentially distributed random variables U, V and W are given by $\bar{u}$, $\bar{v}$ and $\bar{w}$, respectively. For notational convenience, we use a = a_1 and b = a_2.

Average SINR for symbol s_1
The average received SINR at the destination for the symbol s_1 after applying MRC can be determined in (17) by taking the expectation of the instantaneous SINR (8), i.e., as the sum of the expectations of the direct and relayed link SINRs. It may be noted that γ_caf−s1, γ_daf−s1 and γ_raf−s1 denote the combined, direct and relayed link average SINRs for the symbol s_1, respectively. The expectations in (17) are solved in Lemma 1.

Lemma 1. Let the random variable X_1 be related to U through $X_1 = \frac{aU}{bU+1}$; then E{X_1} can be determined in closed form. Let another random variable Y_1 be related to V and W through $Y_1 = \frac{aVW}{bVW+V+W+1}$; then E{Y_1} can likewise be calculated in closed form, where Ψ(·, ·, ·) denotes the Tricomi confluent hypergeometric function (TCHF) [37, Eq. (9.211.4)] appearing in the result.

Average SINR for symbol s_2
The average received SINR at the destination for the symbol s_2 can be expressed in (20) as the expectation of the instantaneous SINR (9), where γ_caf−s2, γ_daf−s2 and γ_raf−s2 denote the combined, direct and relayed link average SINRs for the symbol s_2, respectively. The expectations in (20) are solved in Lemma 2.

Lemma 2. Let the random variable X_2 be related to U through $X_2 = bU$; then E{X_2} can be determined in closed form. Let another random variable Y_2 be related to V and W through $Y_2 = \frac{bVW}{V+W+1}$; then E{Y_2} can be calculated in closed form as well.

SINR analysis of DF relayed system
In this section, we present the derivation details of the average received SINRs for the symbols s_1 and s_2 in the NOMA relaying system with the DF transmission mode at the relay.

Average SINR for symbol s_1
The average received SINR at the destination for the symbol s_1, using (15), can be expressed in (23), following [23, Eq. (9)], as the expectation of the corresponding instantaneous SINR, where γ_cdf−s1 denotes the combined link average SINR for s_1. The expectation in (23) is worked out in Lemma 3.

Lemma 3. Let the random variable Z_1 be related to U and V through the relation $Z_1 = \min\!\left(\frac{aV}{bV+1},\, aU\right)$; then E{Z_1} can be calculated in closed form.

Average SINR for symbol s_2
The average received SINR at the destination for the symbol s_2, using (15), can be expressed in (25), following [23, Eq. (10)], as the expectation of the corresponding instantaneous SINR, where γ_cdf−s2 is the combined link average SINR for s_2. The expectation in (25) is worked out in Lemma 4.

Lemma 4. Let the random variable Z_2 be related to U, V and W through the relation $Z_2 = \min\!\left(bV,\, \frac{bU}{aU+1} + W\right)$; then E{Z_2} can be shown to be equal to the closed-form expression in (26), whose terms involve the summations Σ_1, Σ_2 and Σ_3. Although not shown here, it has been observed that these summations require only a few terms (at least two) to converge.

Ergodic sum rate analysis
The average received SINRs derived in the previous section can be used to find an upper bound on the sum rate of the system. For this purpose, Jensen's inequality is used, and the upper-bounded sum rate (27) is obtained by applying log_2(1 + ·) to the average SINR of each symbol and summing over the two symbols. Here m_j represents the mode of the relay, i.e., m_j ∈ {af, df}, such that af corresponds to the AF mode and df represents the DF mode.
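Since the closed-form lemma results are not reproduced above, a quick way to check or use the averages is Monte-Carlo evaluation of the same expectations. The sketch below estimates the average per-symbol SINRs for the AF mode and the corresponding Jensen upper bound on the sum rate; the 1/2 pre-log factor (reflecting the two-slot half-duplex transmission) and the chosen mean link SNRs are assumptions made here, not quantities quoted from eq. (27).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000                                  # channel realizations (the paper uses 1,000,000)
a1, a2 = 0.6, 0.4
mean_sd, mean_sr, mean_rd = 2.0, 6.0, 6.0    # assumed average link SNRs u, v, w

U = rng.exponential(mean_sd, N)
V = rng.exponential(mean_sr, N)
W = rng.exponential(mean_rd, N)

# AF combined SINRs (direct + relayed), per the lemma relations.
g_af_s1 = a1 * U / (a2 * U + 1) + a1 * V * W / (a2 * V * W + V + W + 1)
g_af_s2 = a2 * U + a2 * V * W / (V + W + 1)

avg_s1, avg_s2 = g_af_s1.mean(), g_af_s2.mean()
print("Monte-Carlo average SINRs (AF):", avg_s1, avg_s2)

# Jensen upper bound on the ergodic sum rate; the 1/2 reflects the two-slot
# half-duplex transmission and is an assumption about the form of eq. (27).
rate_ub = 0.5 * (np.log2(1 + avg_s1) + np.log2(1 + avg_s2))
print("Upper-bounded ergodic sum rate (AF):", rate_ub)
```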
Results and discussion
In this section, we present the average received SINR results for the fall detection communication application using the NOMA-based AF and DF relayed systems. The analytical results derived in Sections "SINR analysis of AF relayed system" and "SINR analysis of DF relayed system" are checked against the simulation results to verify their correctness. Here, the average link SNRs are set according to the link distances, where d = D_sr/D_sd is the normalized source-relay distance. Furthermore, the path loss exponent is set to α = 3. The simulation results are obtained by averaging over 1,000,000 independent channel realizations. It is emphasized here that the simulation model statistically approximates the analytical model by generating independent channel realizations. The degree of convergence between the analytical and simulation results depends on the number of generated channel realizations; absolute convergence between the two models can be obtained only when the size of the data set of channel realizations approaches infinity. The transmit SNR P/σ² is set to 10 dB, unless stated otherwise.

In Fig. 3, we plot the average SINR results (in dB) against various values of the transmit SNR shown on the x-axis. Here, we set d = 0.45 and a_1 = 0.6, which makes a_2 = 0.4. It can be observed that the derived analytical results fit well with the simulation results, which confirms the correctness of the closed-form expressions derived in this study for both the AF and DF systems. As the transmit SNR is increased, the SINR of the symbol s_2 increases in a linear fashion, while the SINR of s_1 increases only marginally. This trend is due to the fact that, while decoding s_1, the interference from the symbol s_2 is treated as noise, whereas SIC is utilized to cancel the interference from s_1 while decoding s_2.

Fig. 4 shows the average SINR performance obtained by varying the distance between the motion sensor source and the relay, i.e., D_sr. It can be seen that the derived analytical results match well with the simulation results. We also observe that DF based relaying yields a better SINR performance for the symbol s_2 compared to AF based relaying when the relay is placed closer to the source. However, as the relay is moved closer to the destination, the SINR with AF based relaying starts to outperform the SINR with DF based relaying for the symbol s_2. This is due to the fact that, as the relay is placed farther away from the source, it receives a more distorted version of the signal and consequently more errors occur during the decoding process. These errors are carried forward to the destination, and therefore the SINR performance degrades.

In Fig. 5, the average SINR versus a_1 performance is shown. As the parameter a_1 is increased, the SINR performance gap between the two symbols begins to reduce. For the AF case, the SINR performance of the two symbols matches when the value of a_1 is around 0.83, after which the SINR performance for the symbol s_1 becomes superior. Similarly, for the DF case, the crossover point of the SINR for the two symbols occurs when a_1 reaches 0.95.

Finally, in Fig. 6 we plot the ergodic rate performance against different values of the transmit SNR for both the AF and DF based relaying systems. Here, we plot the upper bounded (UB) rate (27) using the derived average SINR results and compare it with the simulation results. It can be seen from Fig. 6 that the UB rates for the symbol s_1 for both the AF and DF systems are tightly bounded, while the bound is loose for the symbol s_2 for both the AF and DF based relaying systems.
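The trend reported for Fig. 3 (the SINR of s_2 growing roughly linearly with transmit SNR while s_1 saturates) can be reproduced qualitatively with a small sweep. In the sketch below, the mapping from the normalized distance d and path-loss exponent α to the mean link SNRs assumes D_sd = 1 and a rho/distance^alpha scaling; that normalization is an assumption, since the paper's exact expression is not reproduced above.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
a1, a2 = 0.6, 0.4
d, alpha = 0.45, 3.0   # normalized S->R distance and path-loss exponent, as in the paper

def average_sinrs_af(rho):
    # Assumed normalization: D_sd = 1, so the mean link SNRs scale as rho / distance**alpha.
    u_bar, v_bar, w_bar = rho, rho / d**alpha, rho / (1 - d)**alpha
    U = rng.exponential(u_bar, N)
    V = rng.exponential(v_bar, N)
    W = rng.exponential(w_bar, N)
    g_s1 = a1 * U / (a2 * U + 1) + a1 * V * W / (a2 * V * W + V + W + 1)
    g_s2 = a2 * U + a2 * V * W / (V + W + 1)
    return g_s1.mean(), g_s2.mean()

for snr_db in (0, 5, 10, 15, 20):
    rho = 10 ** (snr_db / 10)
    s1_avg, s2_avg = average_sinrs_af(rho)
    print(f"P/sigma^2 = {snr_db:2d} dB -> avg SINR s1 = {10*np.log10(s1_avg):5.2f} dB, "
          f"s2 = {10*np.log10(s2_avg):5.2f} dB")
```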
It is important to note that the insights obtained from the SINR results presented in this study are the same as the insights acquired from the outage probability results presented in [23, 25] for NOMA-based DF and AF relaying systems, respectively.

Discussion
From the results, it can be verified that the NOMA-based relay system can enhance the communication reliability for the fall detection application. The addition of the cooperative communication layer, as proposed in the framework of this paper, ensures that the data from the motion sensor is accurately monitored at the fog node destination. This reliable communication is particularly vital in case of an actual fall, so that a timely response can be taken for the safety of elderly patients. The analytical model presented in this paper can be used to optimize the operation of fall detection communication. Important system parameters, such as the position of the relay nodes and the transmit power of the motion sensor nodes, can be optimized so that the reliability of the data received by the fog nodes is maximized and the battery life of the motion sensors is improved.

Conclusions
This study explores the utilization of a relaying system combined with NOMA-based communication for fall detection applications. In this research, we have mathematically derived closed-form expressions to calculate the average SINR for a NOMA-based relaying system employing both the AF and DF modes. Through extensive simulations, we have compared these SINR expressions against numerically computed average SINR results, demonstrating a consistent alignment between the derived analytical results and the simulation outcomes. Our findings suggest that the received SINR experiences significant degradation in DF relaying scenarios when the relay is placed closer to the destination, i.e., farther from the source. These performance trends can be effectively exploited to meet the requirements of the fall detection application through an appropriate choice of system parameters.

Figure 1. Framework for the fall detection application.
Figure 2. An illustration of the communication system model for fall detection with AF and DF relaying modes (here S = motion sensor node, R = relay node, D = fog server node).
Table 1. Fall detection systems in the literature.
6,603.8
2024-03-15T00:00:00.000
[ "Engineering", "Computer Science", "Medicine" ]
Nitrogen fixation in eukaryotes – New models for symbiosis Background Nitrogen, a component of many bio-molecules, is essential for growth and development of all organisms. Most nitrogen exists in the atmosphere, and utilisation of this source is important as a means of avoiding nitrogen starvation. However, the ability to fix atmospheric nitrogen via the nitrogenase enzyme complex is restricted to some bacteria. Eukaryotic organisms are only able to obtain fixed nitrogen through their symbiotic interactions with nitrogen-fixing prokaryotes. These symbioses involve a variety of host organisms, including animals, plants, fungi and protists. Results We have compared the morphological, physiological and molecular characteristics of nitrogen fixing symbiotic associations of bacteria and their diverse hosts. Special features of the interaction, e.g. vertical transmission of symbionts, grade of dependency of partners and physiological modifications have been considered in terms of extent of co-evolution and adaptation. Our findings are that, despite many adaptations enabling a beneficial partnership, most symbioses for molecular nitrogen fixation involve facultative interactions. However, some interactions, among them endosymbioses between cyanobacteria and diatoms, show characteristics that reveal a more obligate status of co-evolution. Conclusion Our review emphasises that molecular nitrogen fixation, a driving force for interactions and co-evolution of different species, is a widespread phenomenon involving many different organisms and ecosystems. The diverse grades of symbioses, ranging from loose associations to highly specific intracellular interactions, might themselves reflect the range of potential evolutionary fates for symbiotic partnerships. These include the extreme evolutionary modifications and adaptations that have accompanied the formation of organelles in eukaryotic cells: plastids and mitochondria. However, age and extensive adaptation of plastids and mitochondria complicate the investigation of processes involved in the transition of symbionts to organelles. Extant lineages of symbiotic associations for nitrogen fixation show diverse grades of adaptation and co-evolution, thereby representing different stages of symbiont-host interaction. In particular cyanobacterial associations with protists, like the Rhopalodia gibba-spheroid body symbiosis, could serve as important model systems for the investigation of the complex mechanisms underlying organelle evolution. Background Historically, the phenomenon of symbiosis has been defined as a close and prolonged interaction between two different species [1]. This includes parasitic, mutualistic and commensalistic interactions. However, more modern interpretations use the term "symbiosis" for interactions, which are more or less beneficial for both partners. Here, we use the term "mutualistic symbiosis" or "mutualism" for symbiotic interactions where a mutual benefit is confirmed. For interactions in general and where the exact nature of interaction is unknown or is not easily defined, we use the general term of "symbiosis". It is generally thought that all eukaryotic organisms are descendents of progenitors in which at least two partners have interacted symbiotically. Mitochondria have originated from an α-proteobacterial ancestor, which was dramatically reduced during evolution [2,3]. 
Plastids, the typical organelles of photoautotrophic eukaryotes, are thought to have been derived from the merger of a cyanobacterial-like progenitor and a phagotrophic eukaryote [4]. The driving force for the close interactions that have led to organelle formation appear to be the metabolic needs of at least one of the participants in the interaction. In the case of mitochondria, ATP synthesis carried out by the α-proteobacterial symbiont has been the principal driving force for the co-evolution of both partners. In the case of plastids, the need for photosynthetic products has presumably driven symbiosis. Both metabolic capacities are exclusively prokaryotic inventions and only symbiotic interaction has allowed them to be used by eukaryotes. Prokaryotic invention and eukaryotic utilisation through symbiosis also applies to molecular nitrogen fixation. Nitrogen is an essential compound of many molecules, including proteins, nucleic acids and vitamins. Associations of eukaryotic host organisms with nitrogen-fixing bacteria occur in many environments and have thus increased the bioavailability of nitrogen. These associations are numerous and diverse, ranging from loose interactions to highly regulated intracellular symbioses. Here we compare the morphological, physiological and molecular characteristics of symbiotic nitrogen fixing bacteria and their host organisms (animals, fungi, plants and protists). We classify the evolutionary state of some of these interactions, and discuss the potential of these for becoming model systems for investigating the molecular basis of the transition from endosymbiont to organelle [5,6]. Molecular nitrogen fixation and nitrogenase Most animals and fungi use nutrition to heterotrophically acquire nitrogen bound in biomolecules. However, other organisms including plants and many bacteria use inorganic nitrogen compounds like ammonium or nitrate bound to soil or present in water. The fixation of molecular nitrogen into bioavailable compounds for cellular anabolism is a process restricted to some bacteria. Such bacteria are termed diazotrophs, as they obtain all their nitrogen by fixing molecular nitrogen. During biological nitrogen fixation (BNF) molecular nitrogen is reduced ( Figure 1A) in multiple electron transfer reactions, resulting in the synthesis of ammonia and the release of hydrogen [7]. Ammonium is then used for the subsequent synthesis of biomolecules. This reduction of molecular nitrogen to ammonium is catalyzed in all nitrogen-fixing organisms via the nitrogenase enzyme complex in an ATP-dependent, highly energy consuming reaction ( Figure 1B). The nitrogenase complex is comprised of two main functional subunits, dinitrogenase reductase (azoferredoxin) and dinitrogenase (molybdoferredoxin) [8]. The structural components of these subunits are the Nif (nitrogen fixation) proteins NifH (γ 2 homodimeric azoferredoxin) and NifD/K (α 2 β 2 heterotetrameric molybdoferredoxin). Basically three types of nitrogenases are known based on the composition of their metal centres: iron and molybdenum (Fe/Mo), iron and vanadium (Fe/V) or iron only (Fe) [9]. The most common form is the Fe/Mo-type found in cyanobacteria and rhizobia. An important feature of the nitrogenase enzyme complex is its extreme sensitivity to even minor concentrations of oxygen. In aerobic environments and in photoautotrophic cyanobacteria, where oxygen is produced in the light reactions of photosynthesis [10], nitrogenase activity must be protected. 
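Written out, the overall reaction sketched in Figure 1A corresponds to the commonly cited stoichiometry of the Fe/Mo nitrogenase; the precise ATP cost varies with conditions, and the values below are the textbook figures rather than numbers taken from this review:

```latex
\[
\mathrm{N_2} \;+\; 8\,\mathrm{H^+} \;+\; 8\,e^- \;+\; 16\,\mathrm{ATP}
\;\longrightarrow\;
2\,\mathrm{NH_3} \;+\; \mathrm{H_2} \;+\; 16\,\mathrm{ADP} \;+\; 16\,\mathrm{P_i}
\]
```

The large ATP demand and the oxygen sensitivity noted above are precisely what the protective mechanisms described next have to accommodate.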
This protection is realised by different mechanisms in nitrogen fixing bacteria, depending on their cellular and physiologic constitutions. Aerobic bacteria like Azotobacter limit high intracellular oxygen concentrations by high rates of respiratory metabolism in combination with extracellular polysaccharides to reduce oxygen influx [11,12]. In some filamentous cyanobacteria, BNF is restricted to specialised cells, the heterocysts, which are separated from other cells, and show reduced photosynthetic activity without oxygen production [13,14]. Unicellular cyanobacteria combine photosynthesis and nitrogen fixation within the same cell and show a temporary separation of these two pathways where BNF is restricted to the dark period, when the oxygen-levels are low [15]. In addition to these protections, the concentration of oxygen can be decreased by biochemical pathways like the Mehler-reaction or by special oxygenscavenging molecules such as cyanoglobin and leghemoglobin, the latter playing a major role in rhizobia-plant interactions [16,17]. Diversity and specificity of symbioses between nitrogen fixing bacteria and eukaryotes The ability to fix molecular nitrogen is a widespread characteristic of prokaryotic cells, being established among various groups of bacteria including some archaea [18,19]. The distribution of BNF among archaea and eubacteria indicates that nitrogen fixation is an ancient innovation [15,20,21], which developed early in the evolution of microbial life on earth. Within the eubacteria, nitrogen fixation has been described for members of the proteobacteria, cyanobacteria, actinobacteria, spirochaetes, clostridiales, purple-sulfur (Chromatiales) and greensulfur (Chlorobiales) bacteria ( Figure 2). However, only some of these diazotrophic bacteria are known to interact with eukaryotes symbiotically (Figure 2, Table 1). A diversity of eukaryotic partner organisms (animals, fungi, plants and protists) from different environments is involved in symbioses with nitrogen fixing bacteria ( Table 1). The kind of these nitrogen fixing symbioses range from rather loose, temporary and non-specific contacts to stable and permanent interactions, the latter ones often characterised by morphological and/or physiological modifications of one or both partners and also the vertical transmission of symbionts to the next host generation. Symbionts can reside either extracellularly in more or less close association to their hosts or exist as endosymbionts intracellularly within host cells. Among these associations an interaction is considered as obligate for one partner if it is not able to survive outside the symbiotic association. In the case of symbiotic bacteria, an obligate status is often accompanied by deleterious genome evolution, e.g. the loss of genes whose products are no longer required for the new host-dependent lifestyle [22,23] whereas nonobligate (facultative) symbionts retain their autonomy and are indistinguishable from their free-living forms with respect to gene content. Numerous manifestations of symbiotic interactions between nitrogen-fixing bacteria and their hosts are known and they reflect considerable diversity and complexity. The following sections provide an overview of main types of associations and their characteristics. 
Figure 1. Reaction and molecular mechanism of biological nitrogen fixation. A. General reaction of molecular nitrogen fixation. B. Schematic structure and operation of the nitrogenase enzyme complex and subsequent metabolism of nitrogen. Electrons are transferred from reduced ferredoxin (or flavodoxin) via azoferredoxin to molybdoferredoxin. Each mol of fixed nitrogen requires 16 mol ATP hydrolyzed by the NifH protein. The NH3 produced is utilised in the synthesis of glutamine or glutamate, respectively, for N-metabolism. (NifJ: pyruvate flavodoxin/ferredoxin oxidoreductase; NifF: flavodoxin/ferredoxin.)

Symbioses of nitrogen fixing bacteria with sponges, corals and insects (invertebrates)
Marine sponges (Porifera) are evolutionarily primordial invertebrates, which can harbour a variety of extra- and intracellular bacteria or bacterial communities [24][25][26]. However, the symbiotic character of these associations is well defined only in a few cases [27]. Symbioses with sponges have been described for many different groups of cyanobacteria [28], where the symbionts seem to provide their hosts with organic carbon, nitrogen or secondary metabolites [27,29]. This might also be the case for the filamentous cyanobacterium Oscillatoria spongeliae, which is found to be host-specific in Dysidea spp. [30]. Cyanobacterial symbionts of Chondrilla australiensis are thought to be vertically transmitted [31,32], but an obligate status for these interactions has yet to be tested rigorously. Corals in general are partners of endosymbiotic dinoflagellates (zooxanthellae), which provide photosynthetically derived carbon to their animal hosts [33], but nitrogen fixation by cyanobacteria is also a well-known feature of coral reefs and coral communities [34][35][36]. The metazoan coral Montastraea cavernosa is an example of a host harbouring symbiotic cyanobacteria [37]. In the Montastraea endosymbiosis, two symbiotic organisms, the zooxanthellae and the cyanobacteria, share the same host compartment. Here, nitrogen fixation by the cyanobacteria might be facilitated by the host providing energy-rich compounds. If so, this would indicate a high degree of specificity in the association between all three partners [31]. Higher invertebrates also benefit from the metabolic capacities of nitrogen-fixing bacteria. The hindgut of wood-feeding termites is colonised by flagellate protozoa [38,39], which facilitate digestion of lignocellulose [40]. The carbon-rich but nitrogen-poor nature of the termite diet requires nitrogen from other sources [41]. This is thought to be provided by intracellular bacteria associated with termite gut flagellates, such as Trichonympha agilis in Reticulitermes santonensi [42]. These are examples of permanent endosymbionts placed phylogenetically in a new phylum, the endomicrobia [42]. Interestingly, although the endomicrobia are symbionts of the flagellate protists rather than of the termites, they might best be considered as animal endosymbiotic associations. More recently, free-living spirochetes of the termite hindgut have also been shown to fix molecular nitrogen and provide their host with nitrogen metabolites [43]. A further interaction has been identified in Tetraponera ants, which harbour a subset of different bacteria in a special organ ("bacterial pouch"), among them relatives of Rhizobium, Pseudomonas and Burkholderia [44].
However, although these symbionts are related to nitrogen fixing and/or root-nodule associated bacteria, it is only speculated that the insect host benefits from fixation of molecular nitrogen. More likely, nitrogenous waste secreted by the host is metabo-lised and recycled by the bacteria. This is also indicated by the high amount of Malphigian tubules in the pouch, which transport nitrogenous waste. Nevertheless, nitrogen fixing activity of the symbiotic bacteria of Tetraponera cannot be excluded as a possibility. The diverse symbiotic interactions between nitrogen fixing bacteria and insects described so far share some common characteristics. These symbionts often inhabit specialised organs or regions of the host. This localisation in turn provides an optimal environment for their activity, without symbionts needing to reside inside host cells. This is in contrast to other well-known bacterial interactions with insects, like the Buchnera symbiosis [45]. Here, the symbionts reside within specialised host cells and show a remarkable degree of adaptation leading to an obligate and permanent level of interaction. One prerequisite for such co-evolution of both partners is stable vertical transmission of Phylogenetic affinities of symbiotic and non-symbiotic nitrogen fixing bacteria symbionts which usually takes place maternally, via infection of eggs or larvae [45,46]. In contrast to endosymbionts, stable integration and transmission of gut and cavity symbionts seems to be challenging as they are more vulnerable for replacement by other mircobes. Ants and termites are colony organised insects and transmission of extracellular symbionts could take place horizontally via close contact of different individuals or via feeding of larvae by infected workers. However, reproduction of social insects is accomplished only by few individuals, thus vertical transmission from queens to the offspring is necessary for the foundation of new colonies. Phylogenetic analyses of the gut microbiota of termites indicate symbiont-host coevolution based on vertical transmission in combination with frequent horizontal exchange between congeneric species [47,48]. Consequently, the special social lifestyle of termites and ants might be one prerequisite for the establishment of stable vertical transmission and cospeciation of extracellular symbionts in these lineages. Symbioses of nitrogen fixing bacteria with fungi: cyanolichens and symbionts of arbuscular mycorrhizal fungi In lichen symbioses, a fungal partner (mycobiont) is associated with an extracellular photobiont. The latter are mostly different photosynthetic algae, but cyanobacteria also occur as photobionts in lichens, either alone (bipartite symbiosis) or in combination with algae (tripartite symbiosis) [49]. The benefit to the photobiontic partner is not fully understood, but it might include the provision of water, minerals, protection from predators and UV damage [50]. The advantage for the fungal partner is the provision of photosynthesis-derived carbon metabolites from the photobiont. Cyanobacteria (cyanobionts) provide, in addition to carbon, fixed nitrogen to their hosts. The importance of molecular nitrogen fixation is reflected in the physiological and morphological adaptations of lichen-associated cyanobacteria. These include an increased number of nitrogen-fixing heterocysts in symbiotic Nostoc sp. compared to free-living filaments. 
A further Overview of nitrogen fixing bacteria, including selected symbiotic interactions, possible host organisms and symbiont localisation. Details of the individual symbiotic associations are described in the text. n.d.: not detected adaptation is found in tripartite symbioses where the cyanobacteria are concentrated in special areas called cephalodias, where they fix nitrogen and are protected from high oxygen concentrations. In these tripartite symbioses, photosynthesis is restricted to the algal photobionts, and these supply the other partners with fixed carbon compounds [51]. The fact that most cyanobionts are not vertically transmitted and are also found as freeliving organisms indicates that they are not obligate symbionts, and thus not dependent on host metabolism. Nevertheless, the morphological characters of lichens suggest a high degree of coevolutionary adaptation of all participants. Although commonly considered a mutualistic interaction, some hypotheses propose that lichen symbioses are a form of parasitism [50]. Even so, the ecological and evolutionary success of lichens suggests mutual benefit is characteristic for the association. The arbuscular mycorrhizal (AM) symbiosis between fungi and plant roots is the most common of this type of interaction in the rizosphere [52]. The fungus supplies the plant with water and nutrients such as phosphate, while the plant provides the fungus with photosynthetically produced carbohydrates. The AM fungus Gigaspora margarita harbours intracellular bacteria from the genus Burkholderia [53,54], which supply the fungus with fixed nitrogen. However, the extent of physiological adaptation or reduction of these endosymbionts leading to an obligate status of interaction has yet to be determined. A further symbiosis, discovered in the Spessart-mountains (Germany), was identified by analysing the fungus Geosiphon pyriformis, related to AM fungi [55]. At the hyphal tips of this fungus, unicellular multinucleated "bladders" develop, which harbour Nostoc punctiforme. It has been shown that these bladders fix CO 2 , which may be the major contribution of the cyanobacterium to the symbiosis. The symbiont also forms heterocysts, suggesting that nitrogen is fixed as well [56]. However, as these heterocysts are somewhat similar to those of free-living relatives of this Nostoc strain, nitrogen fixation may only serve the needs of the symbiont itself. Symbioses of nitrogen fixing bacteria with plants Interactions of bacteria with various groups of plants are the most common symbiotic association for nitrogen assimilation. A multiplicity of bacteria with different physiological backgrounds are involved in these associations, including gram-negative proteobacteria like Rhizobia sp. and Burkholderia sp., gram-positive Frankia sp. [57] and filamentous or unicellular cyanobacteria [58]. The physiological and morphological characteristics of these symbioses range from extracellular communities to highly adapted interfaces within special organs or compartments. The mutualistic symbioses between various non-photosynthetic proteobacteria of the order Rhizobiales with plants of the orders Fabales, Fagales, Curcurbitales and Rosales are the most extensively studied interactions between bacteria and plants [59]. The rhizobia-legume symbiosis is characterised by typical root-nodule structures of the plant host, which are colonised by the endosymbiotic rhizobia, so-called bacteroids [60]. 
The nodulated plant roots supply the bacteria with energy-rich carbon compounds and obtain fixed nitrogen by the bacteroids in return. The nodule formation is a highly regulated and complex process driven by both partners. Freeliving rhizobia enter the plant root epidermis and induce nodule formation by reprogramming root cortical cells. Of special importance for the establishment of the symbiosis are flavonoids secreted by the plant partner [61] and the subsequent induction of bacterial nodulation (nod) genes [62]. The Nod-factors play a role in the formation of the nodule, a complex structure optimised for the requirements of both partners [63,64]. Analysis of root epidermal infection and the underlying signal transduction pathways [65][66][67] indicate that Nod-factors may have evolved following recruitment of pathways, which developed in a phylogenetically more ancient arbuscular mycorrhiza symbiosis [68,69]. In the nodule, bacteroids reside within parenchym cells, where they are localised in membrane bound vesicles (Figure 3a) [70]. Nitrogenase activity is ensured by the spatial separation of the bacteroids inside the nodule structure and special oxygen-scavenging leghemoglobin that is synthesised in the nodules [71]. An interesting feature of rhizobia is that nitrogen fixation is restricted to symbiotic bacteroids, whereas freeliving bacteria do not express nitrogenase [72]. Although the rhizobia-legume symbiosis is a highly adapted and regulated interaction it can not be termed permanent or obligate. Both partners can live and propagate autonomously, and each host generation has to be populated by a new strain of free-living rhizobia. Rhizobia-legume symbioses are not the only root-nodule forming interactions of bacteria and plants. Actinobacteria of the genus Frankia spp. are known to develop nodules for nitrogen fixation in various families and orders of angiosperms known as actinorhizal plants [73]. Free-living Frankia is characterised by a unique morphology, including three structural forms, hypha, sporangium and vesicle, the latter one being a compartment for nitrogen fixation. Although functionally analogous, Frankia nodules differ from those in rhizobia-legume interactions in development and morphology [74]. In contrast to rhizobia all Frankia strains are also capable of fixing molecular nitrogen as free-living bacteria [75]. The appearance of the Frankia-symbiosis as a nodulation dependent interaction emphasises the adaptation of both partners. Other plants, including important economic crops like Zea mays and Oryza sativa have established associations with different nitrogen-fixing bacteria, including Azospirillum [76] and Azoarcus [77]. However, such symbioses have never been found to result in nodule formation. In addition, nitrogen fixing cyanobacteria are also often found interacting with plant partners. For example, symbioses of filamentous heterocyst-forming Nostoc sp. have been reported for bryophytes, pteridophytes (Azolla), gymnosperms (cycads) and angiopsperms (Gunnera) [78][79][80][81]. In all plant hosts, with the exception of Gunnera, symbiotic Nostoc filaments are localised extracellularly in different locations depending on the host species. In bryophytes, like hornworts, the cyanobacteria are found within cavities of the gametophyte [79], whereas an Azolla sp. harbours the bacterial partners in cavities of the dorsal photosynthetic parts of the leaves [80]. 
In cycad-cyanobacterial associations the symbionts are limited to specialised coralloid roots where they reside in the cortical cyanobacterial zone [81]. More specialised is the mutualistic intracellular Gunnera-Nostoc symbiosis. Here the process begins with invasion of the petiole glands, followed by intracellular establishment within the meristematic cells of this tissue [60,78]. The symbioses of cyanobacteria with their plant partners differ remarkably from the rhizobia-legume interactions. First, cyanobacteria show a broad host range and thus differ from rhizobia or Frankia sp., which are limited to legumes or angiosperms, respectively. In addition, cyanobacteria do not induce the formation of highly specialised structures like root-nodules after colonisation of the host but reside in plant structures known as symbiotic cavities [82], which also exist without symbiosis. The lack of nodule-like organs can be explained by the fact that heterocyst forming cyanobacteria also fix nitrogen as freeliving cells and therefore do not need a special environment for N 2 -fixation in symbiosis. This makes them distinct from rhizobia, which only fix nitrogen in the protective environment of the nodule. Although symbiotic cavities do not display the close and highly regulated interface of a legume-nodule they are nevertheless regions that exhibit adaptations for symbiosis. A common specialisation in occupied symbiotic cavities of plant hosts is the elaboration of elongated cells to improve nutrient exchange [83] and the production of mucilage-exopolysaccharides for water storage or as nutrient reserve (e.g. [84,85]). The infection process is controlled via the production of hormogenium-inducing factors by the host plant, resulting in the development of vegetative cyanobacterial filaments (hormogonia), important for host colonisation [86,87]. The main adaptations to the symbiotic lifestyle found in the bacterial partners concern changes of morphology and physiology. These include a remarkable increase of heterocysts in symbiotic Nostoc, and higher rates of N 2 fixation compared to those of free-living cells. In addition, photosynthesis of symbiotic cyanobacteria is depressed in various associations to avoid competition between symbionts and host for CO 2 and light [86]. In conclusion, different adaptations are found in cyanobacterial-plant interactions but they are not as specific and highly regulated as the complex nodule-forming symbioses. A common feature of all bacteria plant symbioses is their non-obligate, non-permanent character, including a lack of vertical transmission of symbionts to the next host generation. An exception might be the Nostoc-Azolla symbiosis, where cyanobacterial homogenia are transmitted via megaspores [88]. Symbioses of nitrogen fixing bacteria with protists Symbioses of bacteria with unicellular eukaryotes are exceptional as they involve the whole host rather than specialised parts of the host organism. Also these intracellular symbionts require a high degree of regulation and adaptation to maintain the mutualistic relationship. This feature, in conjunction with vertical transmission, suggests that co-evolution and dependence of partners is sufficiently advanced to regard the relationship as unification of two single organisms. The mitochondria and plastids of recent eukaryotes are extreme examples of this kind of association [89,90]. 
Cyanobacteria have also been detected in intracellular association with an euglenoid flagellate [91], heterotrophic dinoflagellates [92][93][94], a filose amoeba [95], diatoms [96,97] and, extracellularly, with some protists, e.g. diatoms [98]. Only rarely has the nitrogen fixing activity of the prokaryotic partner been demonstrated in these symbioses (e.g. [99]). In the next paragraph the range of symbiotic associations between cyanobacteria and protists is described in a progression of interactions from temporary to permanent. As such, these symbioses provide an opportunity to investigate the cellular changes that may accompany the evolutionary transition from extracellular symbiont to intracellular endosymbiont and cell organelle. Petalomonas sphagnophila is an apoplastic euglenoid that harbours endosymbiotic Synechocystis species [91]. The cyanobacteria occur inside a perialgal vacuole and remain alive for several weeks, before they are metabolised, so that they must be regarded as temporary endosymbiotic cell inclusions. These intracellular cyanobacteria are thus reminiscent of kleptochloroplasts found in some heterotrophic dinoflagellates, marine snails, foraminifera and ciliates. These associations can be understood as a mechanism for the temporary separation of ingested and digested prey [92][93][94]100]. However, in all well-documented cases of kleptochloroplastic interactions, only the plastid or the plastid together with surrounding cell compartments (never the whole cell) is incorporated as a klep-tochloroplast by the host. In contrast, the cyanobacteria of P. sphagmophila are not disintegrated during their internalisation by the euglenoid [91]. Symbiont integrity is therefore likely to be a prerequisite for the functioning of the cyanobacterial nitrogen fixing machinery. The enslaved cyanobacteria may also provide energy-rich C-compounds or, as suggested for other symbiotic interactions, vitamin B12 production to it host [101]. These hypotheses are yet to be investigated thoroughly. Phaeosomes are symbionts found in some representatives of the order Dinophysiales. They exhibit morphological characteristics of Synechocystsis and Synechococcus cells and are located either extracellularly or intracellularly [94]. In the case of intracellular cells, the symbioses seem to be permanent and the benefit of the symbiosis to the host may be efficient nitrogen fixation. However, as in the case of P. sphagnophila, difficulties in cultivating these strains complicate molecular characterisation of the endosymbi-onts. At present this problem is limiting our understanding of the potential benefits of these prokaryote/eukaryote mergers. Some filamentous cyanobacteria are known to interact with diatoms. Extracellular epibionts, endosymbionts and also symbionts positioned in the periplasmic space between the cell wall and cell membrane of the diatom are known to occur [58,98]. Electron microscopy scanning of such interactions has demonstrated a dual symbiotic nature of some symbionts. E. g. Richelia intracellularis has been observed to interact either as an epibiont (with Chaetoceros spec.) or as endosymbiont (with Rhizosolenia clevei) [98]. In these examples, nitrogen fixation for the benefit of the host has been demonstrated by the cultivation of the symbiont-diatom association in the absence of an external fixed nitrogen source. Nitrogen fixation is also suggested from morphological features such as the presence of heterocysts. 
At least in tropical environments, the production of B12 vitamins may also be a further benefit for the host [101].

The cyanobacterial endosymbionts of the diatom Rhopalodia gibba
Some diatoms, including Climacodium frauenfeldianum and Rhopalodia gibba, are known to harbour permanent endosymbionts [96,97,102]. As indicated by EM investigations of R. gibba, these endosymbionts are intracellular and are transmitted vertically [102,103]. The endosymbionts, so-called spheroid bodies [96], are localised in the cytoplasm and separated from the cytosol by a perialgal vacuole. Each spheroid body is surrounded by a double membrane. As internal membranes are additionally visible, this morphotype is similar to that of cyanobacteria (Figure 3b). 16S rDNA sequences have been amplified from an environmental sample of C. frauenfeldianum [97] and from isolated spheroid bodies of R. gibba [102]. Phylogenetic analysis groups these sequences together with free-living cyanobacteria of the genus Cyanothece (Figure 2). This robust grouping is also evidenced by phylogenetic analysis of a nitrogenase subunit gene isolated from R. gibba's spheroid body [102]. In phylogenetic reconstructions of both genes, the branch lengths separating free-living cyanobacteria and the cell inclusions of C. frauenfeldianum and R. gibba are very short, indicating that the origins of the protist symbioses are relatively recent. This is unlike the situation for plastids and extant cyanobacteria, which have an ancient phylogenetic relationship. Cyanothece sp., the closest known free-living relatives of the spheroid bodies and of the endosymbiont of C. frauenfeldianum, are typical unicellular and diazotrophic cyanobacteria. To protect the nitrogenase from oxygen tension, Cyanothece show a strong physiological periodicity, restricting nitrogen fixation exclusively to the dark period of growth [104]. During this period, the energy demand for N2 fixation is sustained by large amounts of photosynthetically derived carbohydrates, which are stored as starch particles.

Nitrogen fixing activity of R. gibba was first indicated in the 1980s via acetylene reduction assays [99] and confirmed in later studies [102]. Intracellular localisation of the enzymatic activity has been undertaken by scanning for protein subunits of nitrogenase [102]. Immunogold experiments have shown that the nitrogenase is localised within the diatom spheroid bodies, thereby confirming that the endosymbiont is responsible for the fixation of nitrogen. Furthermore, corresponding genes for the nitrogenase activity have also been isolated from purified spheroid bodies [102]. Interestingly, spheroid body nitrogen fixation in R. gibba is a strictly light dependent process. This might be the result of several adaptations to the endosymbiotic lifestyle. Spheroid bodies lack the characteristic cyanobacterial fluorescence based on photosynthetic pigments, indicating that they have lost photosynthetic activity and that the energy for nitrogen fixation is supplied by the host cell. The protection of the nitrogenase enzyme complex is accomplished through the spatial separation of the two pathways, with N2 fixation in the spheroid bodies and photosynthesis in the host plastid. The loss of photosynthetic activity of the spheroid bodies is also expected to lead to a loss of autonomy, resulting in an obligate endosymbiosis. This hypothesis is consistent with the observation that R.
gibba cells are never observed without spheroid bodies and that cultivation of the endosymbionts outside the host cells has not been possible [102]. Definitive evidence is still required to determine the exact nature of symbiotic interaction and whether the spheroid body of R. gibba is an obligate endosymbiont, or perhaps even an unrecognised DNA-containing organelle. Conclusion The ability to fix molecular nitrogen is restricted to selected bacterial species that express the nitrogenase enzyme complex. Nevertheless, various eukaryotic organisms have utilised this capacity by establishing symbiotic interactions with nitrogen fixing bacteria. In these associations, fixed nitrogen is provided to the hosts, thereby enabling them to colonise environments where the supply of bound nitrogen is limited. In mutualistic symbioses, bacterial symbionts benefit from these associations, e.g. by protection against predators or by being provided with host metabolites. Symbioses for molecular nitrogen fixation can be found in many different habitats, with host organisms including all crown groups of eukaryotic life. Although all partnerships are based on the same enzymatic reaction, the diverse associations differ with respect to the physiological and morphological features that characterise the interconnection of partners. Such features include the development of special host organs for optimal performance of bacterial symbionts, adaptations in host and symbiont metabolism, and the intracellular establishment of bacteria within the host. Close associations involving multiple adaptations and coevolution between partners can result in permanent and obligate relationships, whereby the bacterial symbiont is stably integrated into the host system, and vertically transmitted across generations. These close interactions are mainly found in intracellular symbioses, where free-living bacteria reside within the cells of the host organism. These are similar to organelles of eukaryotes, such as mitochondria and plastids, which both derived from symbiotic interactions and where continuous adaptation and coevolution lead to a fusion of two distinct organisms [3,4]. In both cases, the metabolic capacity of the bacterial symbiont was the driving force for maintenance and evolutionary establishment, resulting in an inseparable merger of host and symbiont. The same basis of interaction applies for molecular nitrogen fixation, where eukaryotic hosts benefit from the unique metabolic capacity of special bacteria, leading to various symbiotic interactions with different specifications. In particular, bacteria interacting with protists, like the spheroid bodies of R. gibba, might serve in the future as important model systems for investigating the establishment of molecular nitrogen fixation in eukaryotic hosts. The detailed study of this interaction will thus provide a great opportunity to understand the complex mechanisms underlying the evolution of obligate endosymbionts and organelles. Phylogenetic analysis Tree construction for Figure 2: 16S rDNA gene tree built using PhyML [105] assuming the optimal substitution model determined by ModelTest [106]. For eubacterial sequences this was a K81 + I + G model, and for eubacteria and archaea a GTR + I + Γ model.
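For readers who want to reproduce a tree of this kind without the original PhyML/ModelTest pipeline, the sketch below builds a much simpler distance-based (neighbour-joining) tree from an aligned set of 16S rDNA sequences using Biopython. It is a stand-in only: the input file name is hypothetical, and identity distances with neighbour joining are not equivalent to the maximum-likelihood analysis the authors performed.

```python
# Illustrative sketch only: the authors used PhyML with a ModelTest-selected model;
# this shows a simpler distance-based (neighbour-joining) tree from a 16S alignment.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# `16s_alignment.fasta` is a hypothetical aligned FASTA of 16S rDNA sequences.
alignment = AlignIO.read("16s_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")      # simple identity-based distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)           # neighbour-joining tree

Phylo.draw_ascii(tree)
```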
7,306.6
2007-04-04T00:00:00.000
[ "Biology" ]
How directed is a directed network? The trophic levels of nodes in directed networks can reveal their functional properties. Moreover, the trophic coherence of a network, defined in terms of trophic levels, is related to properties such as cycle structure, stability and percolation. The standard definition of trophic levels, however, borrowed from ecology, suffers from drawbacks such as requiring basal nodes, which limit its applicability. Here we propose simple improved definitions of trophic levels and coherence that can be computed on any directed network. We demonstrate how the method can identify node function in examples including ecosystems, supply chain networks, gene expression and global language networks. We also explore how trophic levels and coherence relate to other topological properties, such as non-normality and cycle structure, and show that our method reveals the extent to which the edges in a directed network are aligned in a global direction. I believe that the manuscript should be reconsidered for publication after a major revision. There are a few points that I think are worth discussing and elaborating further on. Below I detail some of these. p.2 eq (2.1): these quantities are extenstios of the cocept of "strength", introduced by Barrat et al. in [PNAS (2004), 101(11),3747-3752]. I would recommend that the authors refer to this manuscript (and subsequent generalization to digraphs, if any) and that they use the term "in/out-strength" or similar, in order to emphasize the relationship with the index introduced in 2004. p.2, eqs (2.2)-(2.3): what is the rationale for calling these "weight" and "imbalance"? p.2. eq (2.4): this appears to be an extension to the weighted case of the a symmetrized graph Laplacian for digraphs (as can also be seen from equation (2.5)). This type of approach to the treatment of directed graphs is often criticised in the literature, as it completely changes the topology of the network (especially in the case of highly non-symmetric matrices W). Could the authors justify their approach and explain further why disregarding directionality of edges is the right thing to do in this context? In my opinion the justification is quite weak, as it stands. (This also links to the contents of page 9). p.3 eq (2.5): please specify who the vector u is. p.3 l.10: here h is characterized as being *the* solution to \Lambda h = v (please add specification of who v is). However, in l. 14 the authors state that \Lambda h = v does not have a unique solution. Please change l.10 to state that h is *a* solution and fully characterize span{h: \Lambda h = v}. p.3 l.14: Instead of considering the case of a disconnected network with several weakly connected components, it would be easier to just focus on the case of weakly connected networks; Then the matrix W+W^T is the (weighted) adjacency matrix of a connected undirected graph and therefore the vector of all ones spans the null space of W+W^T. The case of disconnected networks follows from, e.g., chapter 6.13.3 in "Networks: An Introduction" by M. Newman. (Let me clarify that I understand that the authors are doing this already, I am only suggesting what I consider a better way of presenting the result.) p.3 l. 15: instead of having the notion of weakly connected component as a footnote, please have it in the text. "Connected component" usually implies strongly, not weakly, therefore it is worth making clear what the authors are referring to in the text. 
p.3 l.25 (and consequently appendix B): it is not straightforward to see the reason behind this choice of F_0. Why this and not an expression with ((h_n-h_m)^2 -1) in place of (h_n-h_m-1)^2 in the numerator? p.3 eq (2.8): in the definition of trophic confusion it may be worth using x instead of h, to avoid confusion with equation (2.7). Sections 3 and 4: I believe that the manuscript would improve greatly if these two sections were swapped. p.5 l.56: What do the authors mean by "cyclic network"? p.7 l. 10: eigenvector centrality instead of eigenvalue centrality. p.7 l. 11: "Trophic analysis reveals that this network is strongly directional": what does directionality have to do with the value of F_0? The authors should try and keep their notation as consistent as possible throughout the manuscript. p.7 l.50: "et al." instead of "et al" p.10 ll. 37 ff: The purpose of these paragraphs is quite unclear: please either expand further on these (by adding formulas as well, when appropriate) or remove these entirely. Section 5: I understand what question the authors are trying to address, but it escapes me why this question should be of interest in the first place. p. 11 ll. 53-56: "The term "normal" came from people who spent their lives with self-adjoint operators and unitary operators, both of which are normal, but people working in stability of ordinary differential equations are fully cognizant that most matrices are not normal." Please remove or rephrase this sentence. p. 11 l 58: cut "imbalance vector" or rephrase as "implies that the imbalance vector is the zero vector: v=0". p. 12 l. 9: "When v = 0 we say that the network is balanced." p. 12 l.10: what is a normal network? p.12 l. 17-18: The authors state the following: "if W is normal and has all eigenvalues real then F_0 = 1". Having previously noted that F_0=1 for symmetric matrices, the result is trivial. Indeed, a matrix is normal iff it is unitarily diagonalizable. Moreover, a unitarily diagonalizable matrix with real spectrum is Hermitian. Since the authors are assuming that W is real, "normal with all real eigenvalues" is a complicated way of saying "symmetric". p.12 l.36: this statement (and the proof in the appendix) appears to be true only for networks without self-loops. p.12 l.57: is r>1? p.15 l.8: "A cycle in a directed network is a closed walk in it. In contrast to some of the literature, we allow repeated edges and repeated nodes" Instead of improperly calling it a cycle, the authors could refer to this object as a closed walk. Please also recall the definition of walk. Review form: Reviewer 2 Is the manuscript scientifically sound in its present form? Yes Recommendation? Accept as is Decision letter (RSOS-201138.R0) We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below. Dear Dr MacKay, On behalf of the Editors, I am pleased to inform you that your Manuscript RSOS-201138 entitled "How directed is a directed network?" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referee suggestions. Please find the referees' comments at the end of this email. The reviewers and handling editors have recommended publication, but also suggest some minor revisions to your manuscript. Therefore, I invite you to respond to the comments and revise your manuscript.
• Ethics statement If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork. • Data accessibility It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible of where to access other relevant research materials such as statistical tools, protocols, software etc can be accessed. If the data has been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that has been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list. If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-201138 • Competing interests Please declare any financial or non-financial competing interests, or state that you have no competing interests. • Authors' contributions All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of Authors should meet all of the following criteria; 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published. All contributors who do not meet all of these criteria should be included in the acknowledgements. We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication. • Acknowledgements Please acknowledge anyone who contributed to the study but did not meet the authorship criteria. • Funding statement Please list the source of funding for each author. Please ensure you have prepared your revision in accordance with the guidance at https://royalsociety.org/journals/authors/author-guidelines/ --please note that we cannot publish your manuscript without the end statements. We have included a screenshot example of the end statements for reference. If you feel that a given heading is not relevant to your paper, please nevertheless include the heading and explicitly state that it is not relevant to your work. Because the schedule for publication is very tight, it is a condition of publication that you submit the revised version of your manuscript before 01-Aug-2020. Please note that the revision deadline will expire at 00.00am on this date. 
If you do not think you will be able to meet this date please let me know immediately. To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions". Under "Actions," click on "Create a Revision." You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript and upload a new version through your Author Centre. When submitting your revised manuscript, you will be able to respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". You can use this to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the referees. We strongly recommend uploading two versions of your revised manuscript: 1) Identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. When uploading your revised files please make sure that you have: 1) A text file of the manuscript (tex, txt, rtf, docx or doc), references, tables (including captions) and figure captions. Do not upload a PDF as your "Main Document"; 2) A separate electronic file of each figure (EPS or print-quality PDF preferred (either format should be produced directly from original creation package), or original software format); 3) Included a 100 word media summary of your paper when requested at submission. Please ensure you have entered correct contact details (email, institution and telephone) in your user account; 4) Included the raw data to support the claims made in your paper. You can either include your data as electronic supplementary material or upload to a repository and include the relevant doi within your manuscript. Make sure it is clear in your data accessibility statement how the data can be accessed; 5) All supplementary materials accompanying an accepted article will be treated as in their final form. Note that the Royal Society will neither edit nor typeset supplementary material and it will be hosted as provided. Please ensure that the supplementary material includes the paper details where possible (authors, article title, journal name). Supplementary files will be published alongside the paper on the journal website and posted on the online figshare repository (https://rs.figshare.com/). The heading and legend provided for each supplementary file during the submission process will be used to create the figshare page, so please ensure these are accurate and informative so that your files can be found in searches. Files on figshare will be made available approximately one week before the accompanying article so that the supplementary material can be attributed a unique DOI. Please note that Royal Society Open Science charge article processing charges for all new submissions that are accepted for publication. Charges will also apply to papers transferred to Royal Society Open Science from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry). 
If your manuscript is newly submitted and subsequently accepted for publication, you will be asked to pay the article processing charge, unless you request a waiver and this is approved by Royal Society Publishing. You can find out more about the charges at https://royalsocietypublishing.org/rsos/charges. Should you have any queries, please contact<EMAIL_ADDRESS>Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch. Comments to the Author: When making your revisions, please been sure to carefully address all of the detailed points raised by both reviewers. In my opinion, authors should be largely free to choose their organisation and writing style, so I leave it up to you to decide if you want to take the advice of Referee 1 about these matters. Reviewer comments to Author: Reviewer: 1 Comments to the Author(s) The manuscript is concerned with the introduction of a new measure of network incoherence which is based on a symmetrized graph Laplacian for weighted directed networks. The measure is tested on different real world data. The manuscript is overall well written, but it is poorly organized and very wordy: it would benefit from reorganizing the material and incorporating in the text some of the results presented in the appendix. Moreover, many of the proofs could be carried out using formulas rather than words, and this simple switch would greatly improve readability. The comparison with other measures of incoherence should be carried out in a more thorough fashion, and it definitely deserve more space that it has been allocated in the manuscript. Disregarding directionality of edges is something that is usually best avoided, and I believe that the authors are not making a strong enough case for their decision to following this path in the manuscript. I believe that the manuscript should be reconsidered for publication after a major revision. There are a few points that I think are worth discussing and elaborating further on. Below I detail some of these. p.2 eq (2.1): these quantities are extenstios of the cocept of "strength", introduced by Barrat et al. in [PNAS (2004), 101(11),3747-3752]. I would recommend that the authors refer to this manuscript (and subsequent generalization to digraphs, if any) and that they use the term "in/out-strength" or similar, in order to emphasize the relationship with the index introduced in 2004. p.2, eqs (2.2)-(2.3) : what is the rationale for calling these "weight" and "imbalance"? p.2. eq (2.4): this appears to be an extension to the weighted case of the a symmetrized graph Laplacian for digraphs (as can also be seen from equation (2.5)). This type of approach to the treatment of directed graphs is often criticised in the literature, as it completely changes the topology of the network (especially in the case of highly non-symmetric matrices W). Could the authors justify their approach and explain further why disregarding directionality of edges is the right thing to do in this context? In my opinion the justification is quite weak, as it stands. (This also links to the contents of page 9). p.3 eq (2.5): please specify who the vector u is. p.3 l.10: here h is characterized as being *the* solution to \Lambda h = v (please add specification of who v is). However, in l. 14 the authors state that \Lambda h = v does not have a unique solution. 
Please change l.10 to state that h is *a* solution and fully characterize span{h: \Lambda h = v}. p.3 l.14: Instead of considering the case of a disconnected network with several weakly connected components, it would be easier to just focus on the case of weakly connected networks; Then the matrix W+W^T is the (weighted) adjacency matrix of a connected undirected graph and therefore the vector of all ones spans the null space of W+W^T. The case of disconnected networks follows from, e.g., chapter 6.13.3 in "Networks p.5 l.6: what does it mean for a network to be incoherent? Here the authors seem to back some known fact about IO networks with what they observe using F_0. However, shouldn't it be the other way around, with the values of F_0 leading the authors to derive that these networks are incoherent? p.5 l.56: What do he authors mean by "cyclic network"? p.7 l. 10: eigenvector centrality instead of eigenvalue centrality. p.7 l. 11: "Trophic analysis reveals that this network is strongly directional": what does directionality have to do with the value of F_0? The authors should try and keep their notation as consistent as possible throughout the manuscript. p.7 l.50: "et al." instead of "et al" p.10 ll. 37 ff: It is quite unclear the purpose of these paragraphs: please either expand further on these (by adding formulas as well, when appropriate) or remove these entirely. Section 5: I understand what question the authors are trying to address, but it escapes me why this question should be of interest in the first place. p. 11 ll. 53-56: "The term "normal" came from people who spent their lives with self-adjoint operators and unitary operators, both of which are normal, but people working in stability of ordinary differential equations are fully cognizant that most matrices are not normal." Please remove or rephrase this sentence. p. 11 l 58: cut "imbalance vector" or rephrase as "implies that the imbalance vector is the zero vector: v=0". p. 12 l. 9: "When v = 0 we say that the network is balanced." p. 12 l.10: what is a normal network? p.12 l. 17-18: The authors state the following: "if W is normal and has all eigenvalues real then F_0 = 1". Having previously noted that F_0=1 for symmetric matrices, the result is trivial. Indeed, a matrix is normal iff it is unitarly diagonalizable. Moreover, a unitarly diagonalizable matrix with real spectrum is Hermitian. Since the authors are assuming that W is real, "normal with all real eigenvalues" is a complicated way of saying "symmetric". p.12 l.36: this statement (and the proof in the appendix) appears to be true only for networks without self-loops. p.12 l.57: is r>1? p.15 l.8: "A cycle in a directed network is a closed walk in it. In contrast to some of the literature, we allow repeated edges and repeated nodes" Instead of improperly calling it cycle, the authors could refer to this object as a closed walk. Please also recall the definition of walk. Decision letter (RSOS-201138.R1) We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below. Dear Dr MacKay, It is a pleasure to accept your manuscript entitled "How directed is a directed network?" in its current form for publication in Royal Society Open Science. The comments of the reviewer(s) who reviewed your manuscript are included at the foot of this letter. 
Please ensure that you send to the editorial office an editable version of your accepted manuscript, and individual files for each figure and table included in your manuscript. You can send these in a zip folder if more convenient. Failure to provide these files may delay the processing of your proof. You may disregard this request if you have already provided these files to the editorial office. You can expect to receive a proof of your article in the near future. Please contact the editorial office<EMAIL_ADDRESS>and the production office<EMAIL_ADDRESS>to let us know if you are likely to be away from e-mail contact --if you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal. Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication. Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/. Thank you for your fine contribution. On behalf of the Editors of Royal Society Open Science, we look forward to your continued contributions to the Journal. This work looks at concepts and algorithms for identifying and quantifying structure that may be hidden in pairwise interaction networks. I found the submission to be well-written and novel, and I enjoyed reading it. The work is novel and elegant. It makes a clear contribution and is likely to have a wide impact. It combines ideas, analysis and well-chosen examples on real data sets. I like the organization of the manuscript: getting the main point across first and discussing related work later. I have just a couple of minor comments; these are not vital: • It is interesting (to me) that removing the −1 in (2.7) would reduce to the classical and widely used graph Laplacian/Fielder vector structure. Perhaps this could be mentioned somewhere. • The figures are generally quite compelling, however it is not always easy to see all the edges and to identify their direction. Figure 4 is the most extreme example. Is there any way of dealing with this? • The Discussion section undersells the material and finishes on a strange note. Given that many readers will go straight there, I would recommend a longer and more forceful description of the contributions. In my opinion, authors should be largely free to choose their organisation and writing style, so I leave it up to you to decide if you want to take the advice of Referee 1 about these matters. We are grateful for the reviewers' comments and for the freedom you are giving to us to decide about organisation of the paper and writing style. We chose the organisation deliberately: after an introduction which mentions the ways in which we've improved over previous methods, we present our method and give some illustrations; then we make a comparison with previous methods, followed by several significant connections to other network properties. We also chose to relegate most of the mathematics to appendices because we wanted to keep the paper accessible to less mathematically oriented readers, especially from social science, where we believe the paper can have big impact. We wish to keep to this organisation. On writing style, we feel the style we have adopted is appropriate; again, we chose it to attempt to keep on board readers of a less mathematically oriented background, so it is perforce more wordy than some papers. 
Response to Reviewer 1 Reviewer comments to Author: Reviewer: 1 Appendix B Comments to the Author(s) The manuscript is concerned with the introduction of a new measure of network incoherence which is based on a symmetrized graph Laplacian for weighted directed networks. The measure is tested on different real world data. The manuscript is overall well written, but it is poorly organized and very wordy: it would benefit from reorganizing the material and incorporating in the text some of the results presented in the appendix. We chose the organisation of the material deliberately, to present the method as early as possible, illustrate its use to attract the general reader's interest, and then discuss in detail comparisons with previous methods, followed by connecting to other network properties. We also chose deliberately to put most of the mathematical proofs into appendices, so that less mathematically inclined readers would not be put off, because we believe a major domain of impact for the method will be the social sciences. Furthermore, the other reviewer liked the organisation! Moreover, many of the proofs could be carried out using formulas rather than words, and this simple switch would greatly improve readability. Most of the proofs are done by formulae. The other proofs are written so that a less mathematically inclined reader can follow them. The comparison with other measures of incoherence should be carried out in a more thorough fashion, and it definitely deserve more space that it has been allocated in the manuscript. There is only one established other measure of incoherence of which we are aware and that is the one of [JDDM] which we cover thoroughly. We have expanded our comments on the notion used by [CHK]. We consider our comments on the notions in [T] and [LM] sufficient. We make the connection with `circularity' of [KIII]. Disregarding directionality of edges is something that is usually best avoided, and I believe that the authors are not making a strong enough case for their decision to following this path in the manuscript. It is a misunderstanding to say we have disregarded the directionality of edges. The whole paper is about directed networks. Although the graph-Laplacian is symmetric, the imbalance vector is antisymmetric and that is where the directionality is encoded. I believe that the manuscript should be reconsidered for publication after a major revision. There are a few points that I think are worth discussing and elaborating further on. Below I detail some of these.
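For readers who want to follow the exchange above numerically, here is a minimal NumPy sketch of the trophic-level and incoherence computation that the abstract and the referee's points about eqs (2.1)-(2.7) describe: in/out-strengths w_in and w_out, total weight u = w_in + w_out, imbalance v = w_in - w_out, the symmetrized operator Lambda = diag(u) - W - W^T, trophic levels h solving Lambda h = v, and incoherence F_0 as the weighted mean of (h_n - h_m - 1)^2 over edges. The toy network and variable names are illustrative choices of this sketch, not the authors' code.

import numpy as np

# Toy weighted adjacency matrix: W[m, n] is the weight of the edge m -> n.
# A 4-node chain plus one feed-forward shortcut (0 -> 2), chosen only for illustration.
W = np.array([
    [0.0, 1.0, 0.5, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
])

w_in = W.sum(axis=0)        # in-strengths  (eq 2.1)
w_out = W.sum(axis=1)       # out-strengths (eq 2.1)
u = w_in + w_out            # node weights  (eq 2.2)
v = w_in - w_out            # imbalances    (eq 2.3)

Lam = np.diag(u) - W - W.T  # symmetrized Laplacian-like operator (eq 2.4)

# Lam is singular (the all-ones vector lies in its null space for a weakly
# connected network), so take the least-squares solution of Lam h = v and
# shift it so the lowest level is zero.
h, *_ = np.linalg.lstsq(Lam, v, rcond=None)
h -= h.min()

# Trophic incoherence F_0: weighted mean squared deviation of edge level
# differences from 1 (eq 2.7); F_0 = 0 for a perfectly coherent network.
diffs = h[None, :] - h[:, None] - 1.0   # (h_n - h_m - 1) for every pair (m, n)
F0 = (W * diffs**2).sum() / W.sum()

print("trophic levels:", np.round(h, 3))
print("trophic incoherence F_0:", round(F0, 3))

With this toy input the levels come out near 0, 0.75, 1.5 and 2.5 and F_0 is about 0.07, i.e. a strongly directional network; replacing W by a symmetric matrix gives v = 0, constant levels and F_0 = 1, consistent with the balanced and symmetric cases discussed in the review.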
6,141.8
2020-01-15T00:00:00.000
[ "Environmental Science", "Computer Science", "Biology" ]
The Role of Institutions in African Development: Lessons from the Neo-Patrimonial Governance in Nigeria This paper attempts to look at and analyse the role of institutions within the context of three trajectories: institutions, African development, and patrimonial governance in Nigeria. The first refers to the various institutions put in place to sustain the state and the current democratic governance in Nigeria; the second refers to the functions of the state and the political elites in African development, drawing on Nigeria's experience; and the third refers to patrimonial governance in Nigeria across regimes and its effect on Nigeria's institutional development. Therefore, an analysis of the institutions and patrimonial governance in Nigeria's development is made; options to strengthen institutions are explored and opinions on the way forward are offered. Citation: Johnson AU (2018) The Role of Institutions in African Development: Lessons from the Neo-Patrimonial Governance in Nigeria. Arts Social Sci J 9: 326. doi: 10.4172/2151-6200.1000326 Introduction The peace of Westphalia marked a victory for the sovereign state as a form of political authority, a kind of political organization where a single locus of authority, a prince or later, a junta or a people ruling through a constitution is… supreme within a territory. The sovereign became virtually the only form of polity… to practice substantive or merely formal constitutional authority (World Politics) [1]. Since independence, the African states have yielded to the victory of the peace of Westphalia without looking at the social history of the African nation-states. As Mustapha notes, it is of vital importance that Africa's own experience of state formation plays a crucial role in our theorizing of the contemporary political predicament of the continent [2]. More often than not, Eurocentric models are implicitly or explicitly deployed without any effort being made at establishing and evaluating the relevance of a specifically African experience on the matter. The issue is that the African historical, geographic, cultural, and institutional contexts of state formation were not considered at the Berlin Conference in 1884/85. Consequently, African states were set up with three major missions: (1) to disorganize the existing African political economy, social systems, and their values; (2) to create an agency of international capitalism; and (3) to create an internal police agency for the European institutions and political elites [3]. Lumumba-Kasongo further notes that in its current forms, the African state cannot and will not be able to formulate the progressive policies and politics needed for the development of the continent [3]. In essence, the various institutions of government (cabinet, parliament, judiciary, civil service, local councils, police, and military) may undergo relatively little modification in formal structure [4]. Indeed, in Nigeria there is no gainsaying that these institutions were not modified, since they were tested alongside the Nigerian political class to which the British handed over power. The issue is that the British fused feudalism into capitalism, since both share common ground, namely the exploitation of the majority of the people by a tiny few. To the political class that emerged after independence, democracy is not an institutional process which should allow the people to have a say in electing their leaders.
Rather it is an institutional process of domination, using the concept to plant into power those who will continue to protect the interest of the international and national bourgeoisie. In this regard, African political elites, in their quest to remain afloat in power, accommodated the African political economy system and Western liberal capitalism through neo-patrimonialism, prebendalism, and clientelism, that is, political corruption. African states are run largely on patrimonial lines. That is, a state whose energies (coercive, extractive, productive, allocative and distributive) have been commandeered by an oligarchy, sometimes civilian but more often military, towards the fulfilment of its own objectives, with little or no regard for the common will [5,6]. Neo-patrimonialism is a form of governance which seems to be closely related to the Capstone state (extraction by force or coercion) based on personalized rule. It is organized through client networks of patronage, personal loyalty and coercion. In order for leaders of neo-patrimonial states to sustain themselves, they regularly extract resources from their followers in a largely coercive and predatory manner [7,8]. Neo-patrimonialism, as Medard put it, involves "any person with even a tiny parcel of authority who manages it as a private possession; and in which clientelism is but one aspect of a broader syndrome of privatization of politics that includes, besides clientelism, nepotism, tribalism and corruption" [9]. Neo-patrimonial systems tend to monopolise material resources, turning the political game into a zero-sum struggle for control of the state [10]. Neo-patrimonial institutions function in order to enrich political leaders and maintain their personal rule [11][12][13][14][15]. The neo-patrimonial system displays significant continuity over time and with different rulers…neo-patrimonialism maintains something more persistent than just temporal leaders, namely the political organisations headed by these leaders [10]. Institutional abuse by patrimonial leaders in Nigeria is not new. But its current manifestations and trends in the Fourth Republic debilitate democratic governance and hamper development in Nigeria. We argue that the institutional arrangements are not the problem of Nigeria's development, but patrimonial rulers. We therefore contend that, for any meaningful development to be achieved, the patrimonial political manipulation which erodes the effective functioning of public institutions should be discouraged in favour of good governance. Neo-Patrimonialism in Governance in Nigeria: An Overview The Nigerian state evolved from a predatory political class that was concerned with power struggle, consolidation, alignment and re-alignment in the context of hegemonic control [16,17]. Since independence, the Nigerian political class has constituted mainly an opportunistic office-seeking class, but the military elevated this to greater heights [18]. The dream of the nationalist leaders of the First Republic was never realized, owing to a series of avoidable circumstances. Thus, by poor leadership, subjugation of national interest to sectional interest, thievery and internal colonialism (patrimonialism), Nigeria became a colossus with feet of clay [19]. In forty-six (46) years of independence, governments in Nigeria have been overthrown by military coups six times, namely on 15th January 1966, on 29th July 1966, on 29th July 1975, on 31st December 1983, on 27th August 1985 and on 17th November 1993.
In five of these coups, the coup-makers claimed to seize power in order to save the nation and bring about major improvements in the lives of the people. The military coup of 15th January 1966, for instance, was hailed as a revolution by many radicals and socialists. In the euphoria of the overthrow of a very corrupt and decrepit regime, many failed to see that the underlying economic and social structures and processes, and the external control of the state, were not touched by the coup at all. They also failed to see the real nature of the Nigerian Army and its role in the structures which generated the corruption of the civilian regime it had overthrown [20]. Indeed, the underlying structures and processes which generated the corruption and institutional collapse that brought down the First Republic were not addressed. Painful as it is, we must begin by admitting one glaring fact. This fact is that the most fundamental factor which has prevented the emergence of a democratic political system in this country is the institutional crisis. This is what Bako contextualized as "garrison democracy". Garrison democracy is only democratic in form and appellation, but in essence and reality it actually trivializes and even repudiates the minimum conditions for democratic processes, laws, values and institutions, leading to the unprecedented contraction of the democratic space in Nigeria during the past eight years [21]. Another element and consequence of garrison democracy is the usurpation of the powers of the organs of state and institutions of democracy in Nigeria. In this view, Hodgkin observed that the central concept of "democracy" has normally been understood in its classic sense as meaning, essentially, the transfer of political and other forms of power from a small ruling European class to the mass of the African people … the African demos [4]. The democratic method is that institutional arrangement for arriving at political decisions in which individuals acquire the power to decide by means of a competitive struggle for the people's vote (Schumpeter). Macpherson also notes the essence of the liberal state as being the system of alternate or multiple parties whereby governments could be held responsible to different sections of the class or classes that had a political voice … The job of the competitive party system was to uphold the competitive market society by keeping the government responsive to the shifting majority interests of those who were running the market society [22,23]. Nigeria is a "rentier state" that runs on oil revenues from a foreign-dominated enclave. Those who hold political power command vast patronage resources from the oil. The overthrow of the Murtala government was engineered by foreign interests that were not comfortable with the radical policies of the regime, which might have denied them access to the vast oil resources. Obasanjo, as one of their own, was drafted into power in order to sustain the dominance of foreign interests and the domestic cronies of the West. Power relations in Nigeria, in this regard, became a relay race from one political class (military or civilian) to another, with the common programme of acquiring the state as private property for primitive accumulation. As Madunagu observed, "A class in power will not hand over power to another through elections", but through the imposition of patrimonial leaders for continuity [24].
It was in the transition to civil rule in the Second Republic that the Obasanjo-Yar'Adua junta saw in Shehu Shagari and his National Party of Nigeria (NPN) henchmen the ideal successor to their patrimonial governance. That is why the junta spared no effort, and even broke the very rules it had itself laid down, in its rabid desire to ensure that the NPN succeeded it. The departing military junta thus set the stage for the subversion of laid-down rules in the bitter intra-ruling-class struggle for the capture and/or retention of political power and control over government [20]. The problem with this, however, is that bad habits, once learned, are very difficult to discard. The Shagari administration throughout the country was to deploy similar improper, illegal and even unconstitutional measures not only to capture or retain control over governments but also to "punish", harass and intimidate political adversaries [20]. The fall of the Second Republic was further hastened by the incredible lust for personal comfort and private fortunes by the bulk of the politicians of the Second Republic. Seeing the occupation of public office not as a privilege to diligently and honourably serve the people who put them there, but rather as a golden opportunity to amass wealth, the politicians wasted no time, on assumption of office, in building private fortunes. In this vein, Lewis stated that the nebulous party system (in Nigeria) has little to do with a distinct ideology, programs, or sectional appeals [25]. The major parties are relatively diverse in their leadership and constituencies, but remain focused on elite contention and patronage... personalities and clientelist networks predominate; internal discipline is weak; internecine battles are common. Politics to them is "winner-take-all" because public office is still a high road to personal enrichment by dubious means. Bribery, manipulation, and even violence are common tools in the ceaseless struggle for spoils, and their frequent use makes plain the abject weakness of democratic norms. The military regimes in Nigeria were not left out of this political corruption conducted in a patrimonial manner. The military lacks a mass following; in place of this, patrons and clients were recruited from a small group of the rich and powerful: contractors, traditional rulers, top civil servants, top military and police officers, big foreign and local businessmen and their managers and bankers. Buhari, in his short rulership, regarded military intervention in politics as resting not purely on redemptive but also on catalytic grounds, while Babangida regarded the military, particularly in Nigeria, as a full-fledged actor in the struggle for power, as against its prescribed role as custodian of national defense under a democratic authority. As an actor, Babangida sees the military in Africa as legitimate contenders for power, and in Nigeria as merely Epicurean, if not hedonistic, the essence of whose activities is to have a bite at the national cake. In this manner the Federal and State level during April 2003 elections. The irony was that the police institution that is supposed to protect the Governor was used against him by a private citizen in the patrimonial state business. Haruna [27] described the actions of the police and Chief Ubah as a coup d'état and a grave threat to the survival of the nation's nascent democracy, which should be dealt with constitutionally; "As a student of political science I simply call it a coup. It cannot but be… arrest a governor?
Whatever anybody wants to think, it is what I think about it; the due process of removing a governor is there, in the constitution, impeachment, you cannot accomplish it in one day. It is beyond a party matter. It is a major national crisis… the development in Anambra State had shown that some people are still treated as sacred cows in the country… that unless the so-called sacred cows are demystified, there would be no safe place for anybody in the country" [27]. He added: "Some people feel they are above the law. Unless certain elements are demystified into believing that they are not special species, then there is plenty of problems in this country. Where you make a private citizen running about with more than 60 to 70 policemen remains a matter to be investigated" [27]. The presidential system of government under Obasanjo has revealed political corruption built around patrimonialism and patronage politics, whereby the constitution is put aside in crucial state issues to protect the interest of patrimonial leaders. The profound deficits of governance that trouble Nigeria's Fourth Republic stem from feeble, unsteady institutions; squabbling among political leaders and factions; and an elite that most Nigerians see as distant, selfish, and lacking in integrity [25]. Institutions and Patrimonial Abuses in Nigeria's Fourth Republic The patrimonial politics of the Fourth Republic in Nigeria cannot be fully understood without looking at the character of the man at the helm of affairs of state power. Obasanjo had a tinge of radicalism under the influence of the Gen. Murtala Mohammed regime. Immediately Obasanjo left office, with the euphoria of the Murtala-Obasanjo regime, he became an apostle of the one-party system, locating his love for it in the African traditional political system where kings do not face opposition, yet administered their various domains/kingdoms. In this manner Obasanjo stated: "In essence my present suggestion that we adopt a one-party system is very much in consonance with a possible and logical outcome of our political development. All I am saying is that we should give nature and history a gentle push in the right direction… The one-party system, like a knife, is a technique. I am sure we will all agree that a knife is a knife, whether in the hand of a butcher, carver or farmer. It is a technique for achieving a set goal. It is the use to which we put it that matters. Too much opposition that is pushed to the extremes will tear the political system apart" [28]. Ajayi, after observing Fourth Republic politics, noted that "… Nigerians should take it as a transition from the Nigerian cultural set-up to the new Nigerian political system. …We are familiar with the "power" bestowed on the traditional rulers in Africa, especially in Nigeria [29]. In Yorubaland we call the kings "Igbakeji' Orisa", second to lesser gods". Nigerians, before the advent of the modern state introduced by Europeans, believed in some deities, which we call by different names depending on where you come from. However, we still believe that he combined an abashing use of state resources and coercion, in what political observers called "settlement", "cooperation" and "force". For Babangida, Nigerian politics mainly revolves around the concept of politics as the authoritative allocation of values; with him at the helm, the surest way of legitimizing himself was to regulate, as best he could, the authority to determine who gets what and how much of the (material) values abundant in the country.
This "gate-keeping" power business in distributing of state resources was a significant feature of his legitimacy project [26]. Under General Babangida as much as Buhari regime, the military used power to continue building upon an existing authoritarian state established through years of colonial rule on behalf of the bureaucratic bourgeoisie. And authoritarianism by its very nature and logic is demarcated by the concentration, indeed monopolization of power in the Head of State through his kith and kin, friend and associates and, the concomitant access to resources by the same group through large scale corruption. All these combined leads to heightened competition for political power. The state as the vehicle for access to resources which enables the leader and his cohorts to claim to have the capacity to satisfy the needs and wants of the citizenry resulted to the neo-patrimonial state [9]. In order for the neo-patrimonial leaders to function in kleptocratic manner the institutions that sustain the state for the interest of all becomes the casualty. In Nigeria, under the military rule, the executive, the legislative and the judiciary functions are combined and handed to the commander-in-chief. The constitution which is the legal instrument that protects the interest of all is suspended and replaced by Decrees. In the case of Abacha's regime, the nation was at its lowest point. The military conquered every facet of our national life and control the affairs of the state directly or by proxy. Hence, General Jeremiah Useni, headed the Traditional Rulers Forum. Government appointed officials to oversee the affairs of the labour union. The apex arm of the Judiciary, the Supreme Court of Nigeria was crippled by the refusal of the military government to make up the shortfall in the statutory size of its membership by appointment of new Justices [19]. Abacha as Head of State personalized the state and matrimonially shared the state's resources to his acolytes, family members and clients, while the disloyal citizens were brutally oppressed, using the state institutions. He made history as the patrimonial leader who made the political class to surrender the contest of the presidency to him as sole candidate for the5 five political parties his regime formed. The Fourth Republic Politics in Nigeria The hang-over of military rule is also being demonstrated in the politics of the Fourth Republic in Nigeria. Political corruption played in a patrimonial manner dominated the electoral process and which affected the institutions of the state in the post-election governance. Elections were handed over to patrons at the state or regional levels to determine who will occupy any elective position. The condition for occupying any elective positions is loyalty to the patron and the powers that be, at the national level. So instead of elections we had selections of loyalists to the patrimonial leaders. And when their loyalty is questionable especially in making returns to patrons, the national patrons makes available the institutions of the state-police, the legislative arm and the judiciary to deal with disloyal clients. So our experience is that institutions of the state functions in a selective manner. The rules are used against disloyal clients while the law is abused to protect the loyal clients. 
The abduction of Governor Chris Ngige, and the anarchy that followed was as a result of massive electoral fraudulent practices committed against the people of the poor state by few individuals at there is a mighty God somewhere that lesser gods report to, and the kings are their servants on earth. In that way people tend not to go against the kings or traditional rulers because of the belief that they are second to lesser gods whom we have to obey. It also follows that we have some individuals or clans that have been designated as kingmakers, whose family lineage has been traditionally endowed in choosing the kings after a king passes on. The kingmakers are believed to possess some power from the lesser gods that people could not challenge. In the current day Nigeria, we could consider such belief to be crude Ajayi but that is what exist in the patrimonial politics of the Fourth Republic [29]. Here we can demonstrate that Obansanjo is the "Igbakeji' Orisa" while the patrons and governors are the kingmakers. But once the king has been selected, normally, the king has to go back and pay homage to the kingmakers, from that point on the kingmakers must publicly and traditionally respect the king [29]. In the case of the kingmaker(s) or patron(s) refusing to acknowledge the domineering position of the king, for the sustenance of the system, the kingmaker or patron must be sacrificed. Take for instance, what happened in Bayelsa, Oyo, Plateau, Ekiti and Anambra States. In all the impeachment saga only Oyo and Anambra states governors were not induced by the Economic and Financial Crimes Commission (EFCC) who indicted them. As the Centre for Democracy and Development (CDD) observed "while we continue to applaud the diligent work of the Economic and Financial Crimes Commission in exposing and prosecuting corruption in Nigeria, CDD is concerned about the new political role they have defined for themselves as an institution that is actively planning and implementing the removal of governors CDD, unconstitutionally [30]. In Bayelsa State, the EFCC induced the State House of Assembly to impeach Governor Alamieyeseigha, in Plateau State, the declaration of state emergency in the state was one of the illegality adopted by the patrimonial leaders to checkmate governors. In the same Plateau State, the Governor was impeached by six of twenty-four members of the House of Assembly despite the fact that the two-third quorum was not formed. In Anambra State Governor Peter Obi was impeached at about 5.30am by less than two-third of the House of Assembly members. The allegation against these governors was corruption, whereas, other corrupt governors are still in power untouched. Political analysts and commentators, however, observed that the offence of the impeached governors was that they offended the king (Obasanjo). In Ekiti State as CDD also stated that the state of emergency was declared by Mr. President has far-reaching consequences on the future of Nigeria's democracy. It described the action as a serious compromise on the spirit and operation of federalism and devolution of powers. The group accused the Federal Government of aiding and abetting the impeachment of a governor, allowing the installation of an acting governor and facilitating the declaration of the deputy governor as acting governor. Agbaje a constitutional lawyer also argued that Mr. President complied with section 305 of the constitution in declaring emergency rule in Ekiti State [31]. 
He, however, expressed fears about the concentration of both executive and judicial powers in the hands of one man. The implication, as he further noted, is that the rule of law will collapse. In the case of Oyo State, it was not an issue of constitutional matters; it was an issue of respecting the king. As Adedibu stated, "he (Ladoja) deserves what he got. The President sent for the two of us (Adedibu and Ladoja) and when I got there, having waited for hours until it was 3 pm, President Obasanjo called him on the phone and he told the President he was at a function and that he could not come" (Adedibu). Ladoja's answer to Obasanjo was a sacrilege that a kingmaker should not accept from a client before the king. Adedibu, in order to be relevant, demanded that there must be respect for the patron and the king, and since Governor Ladoja lacked this decorum, he had to leave office. In this direction Adedibu stated that he … deserves some level of respect from Ladoja and he has refused to give it. The issue is that despite the common front the king, patrons and clients may have, there are always political casualties to sustain the system, and that was what happened to Governor Ladoja. Beyond Obasanjo's Patrimonial Governance: The Yar'Adua, Jonathan and Buhari Administrations The struggle by civil society to enthrone democracy in Nigeria under the military regimes was on the assumption that it would bring good governance. Therefore, it is good governance that sustains democracy, which strengthens democratic institutions. But as we reflected under Obasanjo's administration, what we got was patrimonial governance, because the institutions that sustain good governance in a democracy were, and are still, weak. Political parties in Nigeria, as very important democratic institutions, have diminished in the meaning and purpose they are meant to serve. The practice in Nigeria is that political barons and godfathers take decisions on behalf of party members, who have no say in the running of party affairs. It is actually an aberration to talk of party members in Nigeria. Membership cards are given to barons and godfathers who keep them until the need to use them arises, usually for a party convention. At that point the godfathers would bus their "members" or "clients" to the venue and give them the cards with instructions (under oath) on whom to vote for, and payments for their services. It is therefore a straightforward patron-client relationship in which the patron pays for the services of his clients. This is the picture of political parties in Nigeria since the Fourth Republic began in 1999. Obasanjo's victory in the People's Democratic Party (PDP) primary election and the general election was made possible by the political barons and the retired but not tired military generals in Nigeria. In this regard, their political investment had to yield dividends by turning the state, through the leader they brought to power, into patrimonial governance. The Nigerian elite know that both wealth and power come from access to the state. In our political system there is no autonomy between the hegemonic classes and the state apparatus. Controlling the state is therefore serious business that pushes the elite to all sorts of extremist tactics to secure access to power. In advanced capitalist societies there is a major difference between the politics of the bourgeoisie and that of the political elites in Nigeria.
The interests of the bourgeoisie are the maintenance of law and order, and the dispositions which regulate economic life and ensure the reproduction of the exploitation relationship vis-à-vis the productive class. On the other hand, the interests of the political elites are to preserve their privileged positions at the summit of the organization against rival elites [32]. Indeed, political elites in Nigeria and the so-called lumpen bourgeoisie are made by the state and still rely on the state for patronage. This makes contesting for and keeping power in the Nigerian state a do-or-die affair. Patrimonial arrangements become part of gaining access to power and also of keeping power away from the increasing number of political elites who seek it. This accounted for Obasanjo transferring power, through patrimonial connections, to President Yar'Adua and, by default, to Goodluck Jonathan. Why patrimonialism? Many nation states in Africa adopted the presidential system of government after military rule because the power of an Executive President is equivalent to the power of a junta and a king. In this regard, the leaders and many of the citizens still maintain the mindset of kingship and feudalism (a ruler should be in the position for life) [33]. This could explain why, when his third-term bid failed, Obasanjo anointed Musa Yar'Adua, the younger brother of his family friend, the late General Shehu Yar'Adua, a member of the military political barons in Nigerian politics, together with a vice president who would be loyal to the political machine. To them, political success is defined as the capacity to explore and exploit every available option to access the state: ethnic, home-town, family and clan connections, military gangsterism, trade unionism, professional associations, and personal connections are all used to leapfrog their way to access [32]. The Musa Yar'Adua administration was not eventful enough to measure the level of patrimonial governance because it was short-lived due to his death. But First Ladyism played out when there was a power vacuum due to the President's ill health. His wife, Hajiya Turai Yar'Adua, the First Lady, whose office was listed as third in the order of protocol on the official website of the State House, was so powerful. It was common knowledge that the First Lady was fully in charge of many of the decisions in the presidency. She was the President's closest adviser and did not hide it. She played a key role in the emergence of key federal government appointees. Even state governors desirous of a closer relationship with the President courted the office of the First Lady [34]. These advantages of power made her the de facto President. The First Lady, with her patrimonial appointees, almost executed a civilian coup to take over power when her husband died, owing to constitutional lapses. It was the intervention of the legislature that saved the situation and led to the Vice President assuming the position of President. The new President was himself a child of patrimonial governance: he had been made Vice President by the patrimonial leaders on the credential of having been a loyal deputy governor and on the expectation that he would likewise be a loyal Vice President to the late President Yar'Adua. Therefore, President Goodluck Jonathan could not have done otherwise, since he was a product of the patrimonial leaders. The state was therefore turned to oiling the wheels of governance to sustain this power bloc, through corruption.
Evidence from the high-profile political appointees under the Goodluck Jonathan administration arrested by the EFCC, and the money recovered, says it all. Today, after ten months in power, we are inundated on a daily basis by numerous revelations about mega corruption, and what is clear is that corruption under the Goodluck Jonathan Administration was carried out with sheer recklessness. A few hundred persons were stealing billions of Naira and making governance impossible. More seriously, the massive allocations for arming our troops were simply diverted to private pockets, thereby strengthening the Boko Haram insurgency. This happened because government is run based on family, friends, patrons, sons and daughters of political barons, and loyal party members. In this regard there is no boundary between state resources and private use, so long as you are part of the patrimony. Buhari's concept of power, though, is cleansing the political arena of the corrupt elite and self-serving persons who tend to dominate, and replacing their dirty politics with a return to providing for the public good. Nigerians voted for Buhari precisely because that was the change they wanted, because they saw the zeal in him when he came into power as a military General on 1st January 1984. His charisma is known nationally, but politics, and a press that accused him of religious bigotry, reduced his appeal to the northern, Muslim-dominated geo-political zone. He then needed a bridge to the south to have access to state power. This was made available for his victory by the southern patrons, who also funded his election. To this end he has to serve two masters: the northern Muslim-dominated constituency that mobilized and gave him votes, and the southern patrons who funded his election and mobilized votes for him in the south. The corrupt politicians who never gave him a chance in previous elections, even as a former military leader, all of a sudden worked for his election victory. In this situation, despite his good intentions to change patrimonial governance in Nigeria, it has been difficult for him. He is the only man standing in the change party (All Progressives Congress), while all others in his party and cabinet are for business as usual. As a politician, he has to please his geo-political zone, in the Nigerian tradition, and the patrons that funded his election victory. Ibrahim [32] observed: "The Buhari Administration is making appointments that are skewed towards the North in general and towards Muslims in particular. One of the most talked about is the leadership of security agencies, in which only three out of seventeen positions are filled with people from the south. The other is the board of NNPC, which is said to be skewed against the presumptive owners of petroleum, the Niger Delta." There was no denial of the observation made above; rather, government officials justified it on the ground that the Buhari administration has been allocating more top jobs to the North, just as the Goodluck Jonathan Administration gave more to the South-South and South-East of Nigeria. On the other hand, the clients of the patrons that funded his election were given the juicy ministerial positions like Works, Power and Housing, Finance, Communication, Transport and Information, in order to offset the funds provided by their patrons in the 2015 elections and to be in a position to fund the next election in 2019. The understanding of the political elites in Nigeria is that access to state power is to serve private interests as against the public good.
In this regard, whoever manipulates the election through religion, ethnicity, family connections, patron-client ties, and geopolitics to gain power deploys that power to serve these primordial interests.
Conclusion
Many African states, including Nigeria, are headed by patrimonial regimes that have a vested interest in resisting popular participation. African rulers have proven crafty and innovative within a mode of state governance centred on elite domination. Many governments, for instance, implement democracy within a context of ongoing violence, intimidation, corruption, and a general lack of transparency and accountability. In other words, corruption is maintained behind the façade of democratization. Such a context allows for the continued plundering of natural resources, the misuse of state institutions, and the use of private armies. This has led certain commentators to conclude that such "features of public life in Africa suggest that the state itself is becoming a vehicle for organised criminal activity" [35]. The system does not represent significant institutional pressure aimed at holding the governing elite accountable to the people and is not a serious threat to their monopoly on power. Essentially, the process of democratic opening that represents progress is being manipulated and undermined through political corruption built on patronage politics, so as to ensure regime survival and avoid the peaceful handing over of power to non-patrimonial leaders. The experience of Nigerians with state governance shows that the erosion of public institutions, as a result of corruption, autocratic rule, and the political manipulation of ethnicity and religion under patrimonialism, has not abated. Without a fundamental, indeed revolutionary, transformation of governance in Africa (Nigeria), in both the private and public sectors and at local, provincial, and national levels, the woes of the continent will deepen. The way forward is to lay emphasis on "quality democracy", an approach that will serve to strengthen democracy and popular belief in the democratic system of governance. This is a process that seeks to develop an appropriate relationship between African states and their citizens, one in which the state ceases to function as a vehicle for personal enrichment [6, 36-39]. That is, African states must actively seek to deepen democracy by reconstructing the relationship between state and society. All groups, sectors, and individuals should be incorporated as citizens, not subjects, within the state [40-43]. The institutions that sustain democracy, as outlined in the constitutions of African states, should be allowed to function. African leaders must learn that the first step towards a self-reliant future and the restoration, material and non-material, of the continent's situation is the establishment of governmental and institutional legitimacy and accountability [36]. Entrenched political corruption has become one element of a broader phenomenon that can be called catastrophic governance, a set of endemic practices that steadily undermine Nigeria's capacity to increase the supply of public goods and development [44]. The crux of the matter boils down to the absence of appropriate formal institutions, or their systemic perversion by the forces of neo-patrimonialism, who engage the state in kleptocracy in the name of governance in Nigeria. In conclusion, our argument is that if the government corrupts the institutions of governance, where will the development come from? Where is democracy?
8,418.2
2017-01-01T00:00:00.000
[ "Economics", "Political Science" ]
A detailed time-resolved and energy-resolved spectro-polarimetric study of bright GRBs detected by AstroSat CZTI in its first year of operation The radiation mechanism underlying the prompt emission remains unresolved and can be resolved using a systematic and uniform time-resolved spectro-polarimetric study. In this paper, we investigated the spectral, temporal, and polarimetric characteristics of five bright GRBs using archival data from AstroSat CZTI, Swift BAT, and Fermi GBM. These bright GRBs were detected by CZTI in its first year of operation, and their average polarization characteristics have been published in Chattopadhyay et al. (2022). In the present work, we examined the time-resolved (in 100-600 keV) and energy-resolved polarization measurements of these GRBs with an improved polarimetric technique such as increasing the effective area and bandwidth (by using data from low-gain pixels), using an improved event selection logic to reduce noise in the double events and extend the spectral bandwidth. In addition, we also separately carried out detailed time-resolved spectral analyses of these GRBs using empirical and physical synchrotron models. By these improved time-resolved and energy-resolved spectral and polarimetric studies (not fully coupled spectro-polarimetric fitting), we could pin down the elusive prompt emission mechanism of these GRBs. Our spectro-polarimetric analysis reveals that GRB 160623A, GRB 160703A, and GRB 160821A have Poynting flux-dominated jets. On the other hand, GRB 160325A and GRB 160802A have baryonic-dominated jets with mild magnetization. Furthermore, we observe a rapid change in polarization angle by $\sim$ 90 degrees within the main pulse of very bright GRB 160821A, consistent with our previous results. Our study suggests that the jet composition of GRBs may exhibit a wide range of magnetization, which can be revealed by utilizing spectro-polarimetric investigations of the bright GRBs. INTRODUCTION Gamma-ray bursts (GRBs) are among the most energetic and enigmatic phenomena in the Universe.They emit an immense amount of energy in the form of highenergy photons occurring during cataclysmic events such as the collapse of massive stars or the merging of compact objects (Piran 2004;Kumar & Zhang 2015).The exact radiation mechanism driving the prompt emission remains elusive (Baring & Braby 2004;Zhang 2011;Bošnjak et al. 2022).Synchrotron emission, typically associated with the radiation emitted as relativistic electrons accelerated in magnetic fields, is commonly believed to underlie the spectral shape of the prompt emission (Uhm & Zhang 2014;Oganesyan et al. 2019;Tavani 1996;Zhang 2020).The low energy spectral slope (α pt ) acts as an indicator tool for understanding the potential radiation physics of GRBs.In scenarios involving fast cooling synchrotron emission, where relativistic electrons rapidly emit all their energy upon acceleration, the theoretically predicted value of α pt is -3/2 (Granot et al. 2000).However, upon examining the distribution of α pt for numerous GRBs observed with various telescopes such as CGRO/BATSE and Fermi/GBM, it becomes clear that a substantial number of bursts do not align with the expected characteristics of synchrotron emission (Preece et al. 
1998). This inconsistency suggests the involvement of alternative mechanisms in generating some or all of the emission. For instance, physical models of photospheric emission have been observed to directly fit the observational data (Pe'Er & Ryde 2017; Beloborodov & Mészáros 2017; Acuner et al. 2020; Fan et al. 2012). Moreover, thermal photospheric spectra need not strictly adhere to a Blackbody distribution; if dissipation takes place just beneath the photosphere, this process could widen the spectrum compared to the standard Blackbody spectrum (Beloborodov 2017; Ahlgren et al. 2019; Rees & Mészáros 2005; Ryde et al. 2011). Additionally, non-dissipative broadening of the photospheric emission can occur due to high-latitude emission, often referred to as the multi-color Blackbody effect (Lundman et al. 2013; Pe'er 2015; Acuner et al. 2019). This effect arises because different parts of the photosphere can have different temperatures, leading to a spectrum that is broader than a single Blackbody. In recent years, significant strides have been made in the study of radiation physics through broadband spectroscopy of the prompt emission. Oganesyan et al. (2017, 2018) performed a joint spectral analysis on a sample of 34 bright bursts observed concurrently by the Swift Burst Alert Telescope (BAT) and the X-ray Telescope (XRT), focusing on the prompt gamma-ray emission. This analysis identified a distinct low-energy break in addition to the typical peak-energy break. Notably, the values for α 1 (photon index below the low-energy break) and α 2 (photon index above the low-energy break) aligned with synchrotron theory predictions. This spectral behavior was similarly noted in bright long GRBs observed by Fermi, as discussed by Ravasio et al. (2018, 2019), though it was not present in bright short GRBs from Fermi. We also noted comparable spectral characteristics in one of the brightest long-duration GRBs detected by Fermi (GRB 190530A, Gupta et al. 2022a; Gupta 2023). Furthermore, Oganesyan et al. (2019) expanded the analysis to include the optical band and concluded that the synchrotron spectral shape fits well across the spectrum from gamma-ray to optical bands, using a physical synchrotron model. However, it is important to note that the resulting parameters of the spectral fits, such as the bulk Lorentz factor, the number density of electrons, and the magnetic field strength, showed inconsistencies compared to other analyses of GRB prompt emission. These discrepancies highlight the complexities involved in modeling the emission region of the jet and suggest the need for further investigation to reconcile these differences. These findings further highlight the potential of simultaneous multi-band observations of the prompt emission, from optical to GeV energies, to deepen our understanding of emission mechanisms. However, capturing such simultaneous observations remains a significant challenge due to the extremely short and variable nature of the prompt emission, which often concludes before there is time to redirect optical/X-ray instruments to the burst location (Gupta 2023). Currently, a major challenge in the spectral analysis of GRBs is the degeneracy among various spectral models. Often, the same dataset can be effectively fitted with different spectral models, all yielding comparably good statistical results (Iyyani et al.
2016). The spectroscopic study of the prompt gamma-ray emission of GRBs provides valuable information, yet it alone is inadequate to fully discriminate between the various emission models. Consequently, there is a critical need for more constraining observables, such as polarization (Iyyani 2022; Toma 2013; Gill et al. 2020). Polarization measurements offer a reliable means of distinguishing between various potential radiation models of GRBs, because the different prompt emission models predict distinct polarization fractions depending on the geometry of the jet. Typically, any asymmetry in the emitting region or viewing geometry results in linearly polarized emission. Synchrotron radiation originating from structured magnetic fields and observed along the jet axis is expected to exhibit a high degree of polarization. Conversely, inverse Compton and photospheric emission typically yield low polarization fractions, except when the jet is observed off-axis (Toma et al. 2009). Therefore, by conducting polarization measurements for numerous bursts, we can gain tangible insights into the emission mechanisms of GRBs, and combining polarization measurements with spectroscopy can effectively resolve the degeneracy among different spectral models. Additionally, variations in polarization are crucial as they constrain the underlying emission mechanism (Gill et al. 2021; McConnell 2017). The temporal evolution of polarization also serves as a vital tool for comprehending the dynamic nature of the jet. Thus, time-resolved spectro-polarimetric measurements offer valuable information for distinguishing between different GRB models and understanding the radiation mechanisms involved. Polarization measurements of the prompt emission present significant challenges and have yet to be extensively conducted (Gill et al. 2021). As of now, such measurements have been attempted for only a limited number of bursts, approximately 40, utilizing instruments such as the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Coburn & Boggs 2003; Rutledge & Fox 2004), the BATSE Albedo Polarimetry System (BAPS; Willis et al. 2005), the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL; Kalemci et al. 2007; McGlynn et al. 2007), the GAmma-ray burst Polarimeter (GAP; Yonetoku et al. 2011a,b, 2012), the Cadmium Zinc Telluride Imager onboard AstroSat, and POLAR (Kole et al. 2020; Burgess et al. 2019). However, most analyses have focused only on time- and energy-integrated polarization measurements (Chattopadhyay 2021). Recently, we in Chattopadhyay et al.
(2022) reported the first catalog of prompt emission polarization measurements, focusing on twenty bright GRBs observed by Cadmium Zinc Telluride Imager (CZTI) during its first five years of operation.These bursts were selected for their brightness to maximize the number of Compton events available for polarization analysis.The analysis revealed time-integrated polarization measurements in the energy range of 100-600 keV.Based on the timeintegrated polarization analysis, we found that most of these bursts (∼ 75 %) exhibited a low or zero polarization in the full burst interval (time-resolved and energyresolved polarization measurement is required to exam-ine if they are intrinsically unpolarized or the polarization angle within the burst is changing over the time) and only about 25 % of the sample show indications of high linear polarization, including some as high as 71.43% ± 26.84% (GRB 180103A).Such high polarization implies that the mechanism for prompt emission could either be synchrotron radiation within a timeindependent ordered magnetic field or Compton drag. On the other hand, the POLAR instrument was also designed to perform linear polarization measurements of GRBs within an energy range of approximately 50-500 keV.Kole et al. (2020) analyzed a sample of GRBs detected by POLAR and reported that the time-integrated analysis of the GRBs in their selection is compatible with a low or zero polarization Chattopadhyay et al. (2022) compared the GRB polarization measurements made by POLAR and AstroSat.POLAR, with an energy range of 50-500 keV, is sensitive to lower energies and samples with longer burst durations.In contrast, AstroSat, sensitive to energies above 100 keV, samples shorter burst durations.GRB emissions are typically highly structured, and several GRBs, such as GRB 160821A (Sharma et al. 2019), GRB 170114A (Burgess et al. 2019), and GRB 100826A (Yonetoku et al. 2011a), have shown polarization angle changes during bursts.Thus, POLAR's longer sampling duration makes it more likely to detect emissions with varying polarization angles, resulting in lower polarization observations compared to AstroSat's higher energy, shorter duration sampling.In addition, the discrepancies could also arise from instrument systematics or differences in the GRBs observed by each instrument. In this paper, we performed the time-resolved and energy-resolved polarization measurements of five bright bursts observed by CZTI in its first year of operation to verify whether the polarization properties are changing for these bursts.Additionally, we also performed a comprehensive time-resolved spectral analysis of those bursts observed by the Fermi mission to constrain their radiation physics.The paper's layout is as follows: In § 2, we have given the details about our sample for the present study.In § 3, we have given the methods of time-averaged, time-resolved, and energy-resolved spectro-polarimetric data analysis.The results and discussion of this work are given in § 4 and in § 5, respectively.Finally, we have given a summary & conclusion of this work in § 6. SAMPLE SELECTION AND PREVIOUS POLARIZATION MEASUREMENTS For the present work, we selected five GRBs (GRB 160325A, GRB 160623A, GRB 160703A, GRB 160802A, Table 1.List of bright gamma-ray bursts and their properties under investigation in our sample.The reported values of time-integrated polarization fractions (PF) obtained from Chattopadhyay et al. 
(2022) are listed in the last column.AstroSat orbit IDs cited here correspond to those in which the necessary data were telemetered to the ground station.The data include those of Target of Opportunity (ToO), Announcement of Opportunity (AO), and Guaranteed Time (GT) observations.The redshift measurement/host search for the sample was attempted utilizing larger telescopes, such as the 10.4m GTC and the 3.6m DOT (4K × 4K IMAGER and TANSPEC) to study such transients (Pandey 2016).and GRB 160821A) to investigate in-depth the timeresolved and energy-resolved spectral and polarimetric characteristics.These bright bursts were observed by AstroSat in its first year of operation.These GRBs are selected based on their brightness (fluence values greater than 10 −5 erg cm −2 ) and their detection in CZTI within certain angles (0-60 and 120-180), where CZTI has good sensitivity for polarization measurements (see section 2 of Chattopadhyay et al. 2022 for more information about sample selection).The selected sample of bright bursts (see Figure 1) for this study and their time-integrated polarization have been tabulated in Table 1.Below, we provide brief observations of individual bursts and their previous polarization measurements. GRB 160325A GRB 160325A was triggered by Fermi GBM (Meegan et al. 2009) and LAT (Atwood et al. 2009) simultaneously at 06:59:21.51UT on March 25, 2016 (Roberts 2016;Axelsson et al. 2016).The GBM light curve of GRB 160325A has two separate emission episodes with a total T 90 duration of 43 sec in 50 -300 keV.The gammaray/hard X-ray instruments like Swift BAT (Barthelmy et al. 2005;Sonbas et al. 2016), Konus-Wind (Tsvetkova et al. 2016), and AstroSat (Chattopadhyay et al. 2019) also detected GRB 160325A.Previously, we studied the spectro-polarimetric properties of individual episodes of GRB 160325A and noted that both episodes have different spectral and polarimetric properties.The first episode of GRB 160325A is best fitted using Cutoff power-law + Blackbody function and has a low polarization fraction (< 37 %, an upper limit in 100-380 keV), suggesting sub-photospheric model as a dominant radiation model for this episode.On the other hand, the second episode of GRB 160325A is best fitted using Cutoff power-law function and has high polarization fraction (> 43 %, a lower limit in 100-380 keV), suggesting thin shell synchrotron radiation model (Sharma et al. 2020).Our joint spectro-polarimetric analysis in-dicates a change in the spectral and polarimetric properties of two episodes of GRB 160325A. Utilizing the precise localization of the optical afterglow of GRB 160623A, Malesani et al. (2016) reported the spectroscopic redshift of the burst (z = 0.367).We also conducted observations of the optical afterglow of GRB 160623A using the 10.4m Gran Telescopio Canarias (GTC) as a part of a larger collaboration.Spectra were gathered at various epochs: on June 25 (1.9 days post-burst) and July 3/4, 2016.We utilized both the R1000B and R2500I grisms, covering the wavelength range of 3800-10000 Å. Analysis of the reddest spectrum (2 x 1200 sec with R2500I) at the afterglow position revealed emission lines of H-alpha and [SII], enabling us to determine a redshift of z = 0.367 (see Figure A1 of the appendix), which corroborates the value proposed by Malesani et al. 
(2016).Additionally, the bluest range spectrum (1200 sec) indicated a marginal detection of H-beta, considering the high foreground Galactic extinction along the line of sight.The faint continuum observed in the spectrum from the first epoch extended down to 3800 Å, with no discernible absorption lines present.Based on these observations, we confirmed that this redshift corresponds to the host galaxy of GRB 160623A (Castro-Tirado et al. 2016). The X-ray and optical counterpart of GRB 160703A was detected by Swift XRT and UVOT instruments (D'Elia et al. 2016;Hagen & Cenko 2016).The UVOT detected the afterglow of GRB 160703A in all its seven filters, based on this Hagen & Cenko (2016) constrained the redshift of the burst (z < 1.5).Later follow-up observations using the Giant Metrewave Radio Telescope (GMRT) telescope detected a faint potential radio counterpart of GRB 160703A (Nayana et al. 2016). We in Chand et al. (2018) studied the spectropolarimetric study of GRB 160802A using joint Fermi and AstroSat observations.We performed spectral analysis using empirical functions and XSPEC software.We noted that the evolution of low-energy photon indices of the Band function is harder than those theoretically expected from thin shell synchrotron slow and fast cooling model, indicating photospheric origin.Additionally, we calculated the time-averaged PF = 85 ± 29 % using previous polarization tools in 100-300 keV (Chand et al. 2018).A high value of the time-averaged PF (< 51.89 %, an upper limit) was also measured 100-600 keV in Chattopadhyay et al. (2022) for GRB 160802A using improved polarimetric techniques.Such a high value of PF indicates a synchrotron model if the source was observed on-axis.On the other hand, the photospheric model can also produce such high PF if the source is viewed along the edge.Based on our joint Fermi and As-troSat spectro-polarimetric observations, we suggested that GRB 160802A might have originated due to subphotospheric dissipation viewed along the edge (Chand et al. 2018). GRB 160821A GRB 160821A was detected by Swift BAT and Fermi GBM at 20:34:30 UT on 21 August 2016 (Siegel et al. 2016;Stanbro & Meegan 2016).The prompt emission of the burst was also discovered independently using Fermi LAT (McEnery et al. 2016), Konus-Wind (Kozlova et al. 2016b), CALET (Marrocchesi et al. 2016), and As-troSat (Bhalerao et al. 2016c).The burst is extremely bright which provides a unique opportunity for detailed spectro-polarimetric analysis using Fermi-AstroSat observations.We performed the spectro-polarimetric analysis of GRB 160821A and noted a high PF (66 +26 −27 %) in the time-averaged polarization measurements (in 100-300 keV).Additionally, the time-resolved polarization measurements give evidence of a change in polarization angle by twice during the entire emission phase of GRB 160821A (Sharma et al. 2019).Recently, for this burst, we reported the time-averaged PF (< 33.87 %, an upper limit) in 100-600 keV utilizing the improved polarization measurement tools (Chattopadhyay et al. 2022). 
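The spectroscopic redshift of GRB 160623A quoted above follows directly from the identification of the emission lines in the GTC spectra. As a minimal illustration (not the actual GTC reduction), the snippet below assumes the standard H-alpha rest wavelength and an illustrative observed wavelength chosen to reproduce z = 0.367; the function name and the numerical values are ours.

# Minimal sketch: spectroscopic redshift from a single identified emission line.
# The rest wavelength is the standard laboratory value; the observed wavelength
# below is illustrative, chosen to reproduce the z ~ 0.367 quoted in the text.

HALPHA_REST = 6562.8   # Angstrom

def redshift_from_line(lambda_obs, lambda_rest):
    """z = lambda_obs / lambda_rest - 1 for one identified line."""
    return lambda_obs / lambda_rest - 1.0

# H-alpha at z = 0.367 falls near 8971 A, inside the R2500I grism
# coverage (3800-10000 A) mentioned above.
lambda_obs = 8971.4                      # Angstrom, illustrative value
z = redshift_from_line(lambda_obs, HALPHA_REST)
print(f"z = {z:.3f}")                    # ~0.367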
DATA ANALYSIS We utilized AstroSat CZTI data for the polarization measurements of the GRBs in our sample, while Fermi and Swift observations were employed for the spectral analysis of the bursts (see details below).It is crucial to clarify that our analysis does not involve a fully coupled spectro-polarimetric fitting.Despite the absence of a fully coupled spectro-polarimetric analysis, our study provides significant insights into the polarization characteristics and emission mechanisms of the GRBs under investigation.The AstroSat CZTI mainly serves as a hard X-ray imaging/spectroscopy detector with a wide field of view.Notably, its ground calibration has revealed polarization measurement capabilities for on-axis sources.Recent experimentation by Vaishnava et al. (2022) has further validated CZTI's ability to measure off-axis hard X-ray polarization for bright sources such as GRBs.Above 100 keV, CZTI exhibits a notable probability of Compton scattering.Leveraging the pixilated nature of CZT detectors, it functions as a Compton Polarimeter.Given its distinctive hard X-ray polarization measurement capabilities, the CZTI team has reported polarization measurements of both persistent (such as the Crab pulsar and nebula) and transient (including GRBs) X-ray sources (Rao et al. 2016;Vadawale et al. 2018;Chattopadhyay et al. 2019).Despite moderate brightness, energetic transient sources like GRBs are the potential hard X-ray for polarization measurements due to the simultaneous availability of pre and post-burst backgrounds with higher signal-to-noise ratios.For a comprehensive understanding of the prompt emission polarization analysis of GRBs using CZTI data, we in Chattopadhyay et al. ( 2022) present detailed techniques.In this study, we present a concise overview of the steps and recent enhancements in the polarization analysis tool for CZTI data. Technique of polarization analysis and improvements • Selection of Compton events: To conduct polarization analysis using CZTI data, we initially chose double events detected within a 20 µs temporal window.Subsequently, we applied Compton criteria, assessing the ratio of energies received on neighboring pixels, to filter out double events resulting from chance coincidence. • Creation of Background-Subtracted Azimuthal Angle Distribution: Compton events were selected within both the GRB emission region and the preand post-burst background regions.To define the latter, we excluded instances of spacecraft crossing the South Atlantic Anomaly.Following this, we subtracted the raw azimuthal angle distribution of the GRB emission region from that of the background, resulting in the final backgroundsubtracted azimuthal angle distribution for the GRB. • Correction for geometric effects: Systematic errors stemming from geometric effects and offaxis detection of GRBs impact the backgroundsubtracted azimuthal angle distribution.To address this, we employed the Geant4 toolkit and the AstroSat mass model to simulate an unpo-larized azimuthal angle distribution.This simulation considered the distribution of photons observed from GRB spectra at the same orientation as the AstroSat spacecraft.Subsequently, we normalized the observed background-subtracted azimuthal angle distribution of the GRB using the simulated unpolarized azimuthal angle distribution. 
• Calculation of Modulation Amplitude and Polarization Angle: We employed a sinusoidal function to fit the observed background-subtracted and geometry-corrected azimuthal angle distribution of the GRB.This fitting process enabled us to determine the modulation factor (µ) and polarization angle within the AstroSat CZTI plane. For the sinusoidal function fitting, we utilized the Markov chain Monte Carlo (MCMC) method. • Calculation of Polarization Fraction: To ascertain the polarization fraction, normalization of the modulation factor (µ) with the simulated modulation amplitude for 100% polarized radiation µ 100 is required.This value is obtained through Geant4 toolkit simulations using the AstroSat mass model for the same direction and observed spectral parameters.Subsequently, the PF is calculated by normalizing µ with µ 100 for those bursts exhibiting a Bayes factor greater than 2. In instances where the Bayes factor is below 2, we establish a constraint on the polarization fraction by setting a 2σ upper limit (refer to Chattopadhyay et al. 2022 for further details). Furthermore, we have implemented the following enhancements in the polarization data analysis of the AstroSat CZTI for this study.This upgraded CZTI pipeline is being utilized for the first time for executing time and energy-resolved polarization measurements of bursts detected by the AstroSat CZTI. Low gain pixels and energy bandwidth Since the launch of the AstroSat mission, around 20 % of the CZTI pixels were observed to have electronic gains lower (2−4 times) than the laboratory-tested gain values.In the previous studies (e.g., Chand et al. 2018;Chattopadhyay et al. 2019;Chand et al. 2019;Sharma et al. 2019Sharma et al. , 2020;;Gupta et al. 2022a), the sensitive spectroscopic and polarimetric information in 100 -300 keV were extracted using the normal-gain pixels only.However, the electronic gain for the low-gain pixels has been constant since the first day of working of CZTI in space; therefore, considering the low-gain pixels after rigorous calibration can extend the energy channels of Compton energy spectra and polarization up to 600 keV.This new characteristic also makes wider the spectral coverage using single-pixel up to the sub-MeV capacity (∼1 MeV), earlier it was restricted to 150 keV (Chattopadhyay et al. 2021).Recently, we applied this method for the time-averaged polarization measurement of twenty GRBs detected by the AstroSat CZTI in its five years of operation (Chattopadhyay et al. 2022).We are now implementing these improvements for the first time in time-resolved and energy-resolved polarimetric measurements of bright bursts.This new methodology significantly enhances outcomes and extends the energy coverage for prompt emission spectro-polarimetric analysis. New event selection logic Hard X-ray detectors are typically sensitive to background noise, such as cosmic rays, owing to their nonfocusing nature at these wavelengths.As a result, it is crucial to identify and eliminate such noise events, selecting only those unaffected by background interference for scientific analysis.In the case of CZTI, the previous analysis pipeline for time-resolved spectro-polarimetric studies has incorporated techniques for event selection, but these techniques have some constraints.For example, these algorithms were mainly developed to analyze data from regular X-ray sources where the object flux is significantly lower than the background and thus is not well equipped for transient events like GRBs.Ratheesh et al. 
(2021) re-investigated the features of noise events in CZTI and gave a generalized event choice technique that provides analysis for all types of sources, including GRBs.This algorithm significantly reduces noise levels without considering the source flux dependence.In our current study, focusing on time-resolved and energyresolved polarimetric analysis using CZTI data, we have used this algorithm, leveraging its improved capability for noise reduction across various source types. Technique of temporal and spectral analysis The temporal profiles of GRBs exhibit distinct characteristics attributed to the erratic behavior of the central engine.To extract temporal information from Fermi GBM data, we employed the Fermi GBM Data Tools (Goldstein et al. 2022).Furthermore, to extract spectra from Fermi GBM data, we employed the gtburst tool1 .For BAT data, both temporal and spectral analyses were carried out using HEASOFT, utilizing the most recent BAT calibration files.For detailed insights into the technique employed for BAT data analysis, refer to Gupta et al. (2021).It is important to note that 3ML plugin for the simultaneous fitting of BAT data with data from other instruments, such as Konus-Wind, As-troSat, etc, is not currently available.Consequently, for GRBs observed with BAT, we have relied exclusively on the spectral parameters derived from the Konus-Wind instrument, as reported in Chattopadhyay et al. (2022).Below, we have provided details of our spectral analysis (empirical and physical synchrotron model).However, we did not explore the physical photospheric models due to the lack of a publicly available robust and validated photospheric model (compatible with 3ML). Empirical spectral modeling For the prompt emission spectral modeling of GRBs, we employed the Multi-Mission Maximum Likelihood framework (Vianello et al. 2015, 3ML).Typically, the GRB spectrum can be adequately described by an empirical Band function.Therefore, we initially fitted the spectrum of GRBs of our sample using Band function.Subsequently, we explored additional empirical functions such as power-law (PL), Cutoff power-law (CPL), and bkn2pow, considering model parameters and statistical measures/residuals from spectral fitting with 3ML.The selection of the best-fit model was determined based on the difference in deviance information criterion (DIC) values obtained from various models.A comprehensive method for empirical spectral modeling is provided in Caballero-García et al. (2023).Burgess et al. (2020) showed that empirical function could be fallacious, and we should use physical spectral modeling to constrain the radiation physics of prompt emission.Burgess et al. (2020) further showed that even if the low-energy index of Band function exceeds the line of death of the synchrotron model, the spectrum still could be fitted using physical thin shell synchrotron model.Additionally, due to the spectral curvature of empirical functions, the empirical spectral models may lead to incorrect interpretations of the radiation physics of GRBs.So, we have utilized the physical thin shell synchrotron model to accurately interpret the emission mechanism.For the present work, we have applied publicly available pynchrotron2 physical model for the time-integrated and time-resolved spectral fitting of GBM data in 3ML (Burgess et al. 
2020). The pynchrotron model implements synchrotron emission from a cooling population of electrons in the thin-shell case. In this model, the relativistic electrons follow a power-law distribution N(γ) ∝ γ −p with γ inj ≤ γ ≤ γ max, where p is the power-law index of the energy distribution of the injected electrons, γ inj is the lower limit, and γ max is the upper limit of the relativistic electron spectrum. The pynchrotron model has six parameters: (1) the power-law index of the energy distribution of the injected electrons (p); (2) the magnetic field strength (B); (3) γ max; (4) γ inj; (5) the bulk Lorentz factor (γ bulk) of the relativistic jet; and (6) the Lorentz factor corresponding to the electron cooling time scale (γ cool). During the physical spectral modeling of the GRBs, we fixed γ inj = 10 5 (because of the degeneracy between B and γ inj) and γ max = 10 8 (the slow-cooling synchrotron model better fits the prompt spectrum). Additionally, we also fixed γ bulk for each GRB using the prompt emission correlation between γ bulk and the isotropic gamma-ray energy. Search for potential host galaxies using DOT The expected polarization fraction from different radiation models depends on the jet viewing geometry, and this can be further verified by investigating the Γθ j condition, where Γ represents the bulk Lorentz factor and θ j denotes the jet opening angle (see section 5.1 for more information). Γ and θ j can be calculated using the Liang relation (the correlation between the isotropic gamma-ray energy and Γ, Liang et al. 2010) and the jet breaks observed in the afterglow light curve, respectively. However, both of these parameters depend on the redshift. Therefore, the redshift is a very important parameter for verifying the Γθ j condition and predicting the possible radiation mechanism based on the observed value of the polarization fraction. We observed that only two GRBs (GRB 160623A and GRB 160703A) in our sample have redshift constraints; no redshift measurements were found in the literature for the remaining GRBs. To determine their photometric redshifts, we attempted to locate the associated host galaxies of the bursts with sub-arcsecond localization in our sample (GRB 160703A and GRB 160821A) using the 3.6m Devasthal Optical Telescope (DOT, Gupta et al. 2023). We observed GRB 160703A using TANSPEC (in the i filter, Sharma et al. 2022) on 2022-11-11, with a total exposure time of 5700 seconds. Similarly, observations of GRB 160821A were carried out using the 4K × 4K IMAGER (in the R filter, Pandey et al. 2018, 2023) on 2022-12-20, with a total exposure time of 5100 seconds (see Figure A1 of the appendix). The methods for the optical data reduction of the host images taken with TANSPEC and the IMAGER are presented in Gupta et al. (2022b) and Gupta (2023). However, despite our efforts, we were unable to detect any associated host galaxies of these bursts within the best available error circles. Our observations yielded limiting magnitudes of ∼ 23 mag for GRB 160703A and 23.6 mag for GRB 160821A, respectively. This suggests that the host galaxies of these GRBs may be intrinsically faint or highly obscured, reflecting the diverse nature of GRB host environments. RESULTS Utilizing the comprehensive analysis outlined above, we present the detailed spectro-polarimetric results of all five bright GRBs in the subsequent sections.
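For reference, the electron population assumed in the pynchrotron fits described above, N(γ) ∝ γ −p between γ inj = 10 5 and γ max = 10 8, can be written down in a few lines. The grid, the normalization choice, and the helper function below are illustrative and are not part of the pynchrotron API.

import numpy as np

def electron_spectrum(gamma, p, gamma_inj=1e5, gamma_max=1e8):
    """Un-normalized power-law electron distribution N(gamma) ~ gamma**-p,
    non-zero only between gamma_inj and gamma_max (values held fixed above)."""
    return np.where((gamma >= gamma_inj) & (gamma <= gamma_max),
                    gamma ** (-p), 0.0)

# Illustrative grid and index; p is a free parameter of the spectral fit.
gamma = np.logspace(4, 9, 500)
n_gamma = electron_spectrum(gamma, p=3.5)
# Normalize to unit total number of electrons (an illustrative choice only).
n_gamma /= np.trapz(n_gamma, gamma)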
Prompt uniform light curves and time-integrated spectra The prompt light curve profiles of Fermi detected (GRB 160325A, GRB 160623A, GRB 160802A, and GRB 160821A) and Swift detected (GRB 160703A) GRBs in our sample are presented in Figure A2 of the appendix.The light curves of GRB 160325A (depicted in red) and GRB 160802A (in green) exhibit similar temporal profiles, characterized by two distinct episodes: a prominent pulse followed by a softer pulse, with a quiescent temporal gap in between.In contrast, GRB 160623A (highlighted in blue) showcases a primary pulse succeeded by weaker emission.Notably, Fermi could not detect the main emission of GRB 160623A due to Earth occultation during the burst's main emission, with the Fermi trigger occurring approximately 50 seconds post-burst (Mailyan et al. 2016).The light curve of GRB 160821A (depicted in pink) illustrates a faint initial emission followed by a very brighter emission.Meanwhile, GRB 160703A presents multiple overlapping profiles (in grey). We employed the Bayesian block method on the CZTI Compton light curves to determine the time intervals for the time-integrated spectral analysis of GRBs in our sample.These selected time segments were also utilized for time-integrated polarization measurements, as detailed in section 2.2 of Chattopadhyay et al. 2022.The time-integrated Fermi spectra of GRB 160325A and GRB 160623A were optimally fitted using the Bkn2pow function.Conversely, the time-integrated Fermi spectra of GRB 160802A and GRB 160821A exhibited the best fits with the Band + Blackbody function.For the Swift BAT-detected GRB 160703A, the time-integrated spectrum was most effectively described by the Cutoff power-law function, considering the limitation of energy coverage of BAT.Detailed information regarding the best fit time-integrated spectral parameters for all five GRBs in our sample can be found in Table B1 of the appendix. Comparison with Fermi GRBs We analyzed the spectral (obtained using timeintegrated analysis) and temporal parameters of GRBs in our sample and compared them with a larger sample of Fermi GBM detected GRBs (see Figure 1).Such comparison provides valuable insights into the spectral properties and diversity of these cosmic sources.The distribution of the low energy photon index is useful for characterizing the power-law behavior of the photon spectrum at lower energies and identifying the emission mechanisms.The distribution of α pt reveals that a significant number of bursts deviates from the synchrotron emission mechanism.The distribution of E p value is crucial and indicates the energy at which the GRB spectrum reaches its maximum intensity.We noted all the bursts in our sample have a harder peak energy than the mean peak value obtained for Fermi GBM detected GRBs.The distribution of high-energy photon indices signifies the steepness of the spectral slope in the highenergy regime.The high energy spectral index (β pt ) values (calculated using the time-integrated spectral measurement) for GRB 160325A, GRB 160623A, and GRB 160802A are steeper than the mean value obtained for Fermi GBM detected GRBs.On the other hand, GRB 160821A has a shallower β pt value.Further, we studied the distribution of T 90 duration using Fermi GBM data, and the distribution indicates that all the bursts in our sample belong to the long GRBs class. 
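The Bayesian-block binning used above to define the time intervals can be illustrated with the event-mode implementation available in astropy, used here only as a stand-in for the actual CZTI pipeline; the simulated event times and the p0 value are placeholders.

import numpy as np
from astropy.stats import bayesian_blocks

# Suppose `t` holds the arrival times (s) of Compton events around the burst.
# A toy pulse on a flat background is simulated here purely for illustration.
rng = np.random.default_rng(0)
background = rng.uniform(-20.0, 60.0, 2000)
pulse = rng.normal(loc=10.0, scale=2.0, size=500)
t = np.sort(np.concatenate([background, pulse]))

# Event-mode Bayesian blocks; p0 is the false-alarm probability per block edge.
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print("block edges (s):", np.round(edges, 2))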
Energy-fluence distribution We compared the energy fluence values of the GRBs in our sample with those of Fermi GBM and Swift BAT detected GRBs. Our analysis indicates that the GRBs in our sample are significantly brighter than the mean of the observed fluence values (see Figure 1). We also represented this result using the distribution of T 90 as a function of energy fluence for the bursts observed by Fermi GBM (see the inset plot in Figure 1). High-fluence bursts are useful for polarization measurements. Spectral-Hardness plot The classification of GRBs primarily relies on prompt emission properties, such as the T 90 duration and the hardness ratio. We studied the spectral hardness distribution for the GRBs in our sample. The peak energy of a GRB's spectrum is related to its duration: studies have shown that GRBs with longer durations tend to have lower peak energies (soft), while shorter-duration GRBs tend to have higher peak energies (hard). We compiled the T 90 durations and E p values of all the GRBs detected by the Fermi GBM instrument from the GBM burst catalog. We noted that all the GRBs in our sample are consistent with the typical characteristics of long GRBs (see Figure 1). Amati and Yonetoku correlation Several global correlations can be observed in the prompt properties of GRBs, and these correlations play a crucial role in characterizing GRBs (Minaev & Pozanenko 2020). We studied the Amati correlation for the GRBs in our sample (Amati 2006). It is a well-known empirical relationship that relates the isotropic equivalent energy (E γ,iso) and the rest-frame spectral peak energy of the GRB prompt emission spectrum. The Amati correlation has important implications for the physics of the prompt emission process, the emission mechanism, and the properties of the GRB progenitor systems. In this analysis, we included all five GRBs in the Amati and Yonetoku relations; for the GRBs without measured redshifts, we assumed a redshift value of 2, the mean of the redshift distribution for long GRBs (Gupta et al. 2022b). For GRB 160623A, we obtained the E γ,iso and peak energy values from Konus-Wind observations (Tsvetkova et al. 2017), as the main emission was not detected with Fermi GBM. We noted that all the GRBs in our sample are consistent with the Amati correlation of long GRBs (see Figure 2). The physical explanation for the Amati correlation remains a subject of debate in the literature and lacks consensus. Nevertheless, certain studies suggest that the Amati correlation may be attributed to the viewing-angle effect within the context of synchrotron emission (Yamazaki et al. 2004; Eichler & Levinson 2004; Levinson & Eichler 2005). We also studied the Yonetoku correlation for our sample (Yonetoku et al.
2010). The Yonetoku correlation relates two observables of GRBs: E γ,iso and the peak luminosity (L γ,iso) of the prompt gamma-ray emission. It indicates that GRBs with higher isotropic equivalent energies tend to have higher peak luminosities, and it provides constraints and insights into the nature of GRB progenitors, emission processes, and the energy-release mechanisms associated with these powerful cosmic explosions. This correlation could potentially be explained by the photospheric dissipation model, taking into account that sub-photospheric dissipation occurs at a considerable distance from the central engine (Rees & Mészáros 2005). We noted that all the GRBs in our sample are consistent with the Yonetoku correlation (see Figure 2). Furthermore, the photospheric model has been useful in explaining both the Amati and Yonetoku relations. Recent studies have shown that these correlations can be naturally accounted for by considering the effects of the viewing angle relative to the jet axis: when the photospheric emission is viewed at different angles, the observed spectral properties and the inferred energetics can vary significantly, which can lead to the observed Amati and Yonetoku relations (Ito et al. 2019; Parsotan & Ito 2022; Ito et al. 2024; Parsotan & Lazzati 2022). Time-resolved spectral measurements The GRB spectrum evolves strongly within the burst; therefore, the derived time-integrated spectral parameters may not reflect the intrinsic spectral behavior and can be artifacts of this strong spectral evolution. Thus, time-resolved spectral measurements are needed to verify the underlying radiation mechanisms of GRBs. We carried out time-resolved spectral analyses of those bursts (GRB 160325A, GRB 160802A, and GRB 160821A) for which Fermi GBM observations were available; the wide spectral coverage of Fermi GBM is crucial for detailed spectral analysis. We selected the temporal bins for the time-resolved spectral analysis using the Bayesian block method. After selecting the bins, we calculated the significance of the individual bins and retained only those with a significance greater than 10. We then fitted all these bins with the Band and CPL models and calculated the difference in DIC values to identify the best-fit model for each bin. The comparison between the DIC values of the Band and CPL models for all three GRBs is plotted with red squares in Figure A3 of the appendix. The DIC comparison indicates that the individual bins of GRB 160325A, GRB 160802A, and GRB 160821A prefer the Band function over the CPL model (no bins have ∆ DIC ≤ -10). For some of the bins, the CPL model has a ∆ DIC value between zero and -10, indicating that the Band and CPL models provide equivalent fits. After selecting the best-fit function between the Band and CPL models, we added an additional Blackbody (BB) component and again selected the best-fit model between Band or CPL and Band+BB or CPL+BB using the difference in the DIC values obtained for the two models. A detailed selection method for the different empirical functions is presented in Caballero-García et al. (2023). Furthermore, we also compared the fits between the best-fit empirical and physical models. However, there are some time bins for which the physical synchrotron parameters are not well constrained (because of their low significance).
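One compact reading of the DIC-based selection rule described above is sketched below, under the assumption that ∆DIC is defined as the DIC of the alternative (more complex) model minus that of the reference model; the threshold of -10 follows the text, while the helper functions and example numbers are ours.

def delta_dic(dic_alt, dic_ref):
    """Difference in deviance information criterion, alternative minus reference."""
    return dic_alt - dic_ref

def prefer_alternative(dic_alt, dic_ref, threshold=-10.0):
    """Return True if the alternative model is strongly preferred, i.e. its DIC
    is lower than the reference DIC by more than |threshold|; differences
    between 0 and threshold are treated as statistically equivalent fits."""
    return delta_dic(dic_alt, dic_ref) <= threshold

# Illustrative numbers only:
print(prefer_alternative(dic_alt=1512.0, dic_ref=1520.0))  # -8  -> equivalent, keep reference (e.g. Band)
print(prefer_alternative(dic_alt=1498.0, dic_ref=1520.0))  # -22 -> alternative (e.g. CPL or Band+BB) preferred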
We used the derived spectral parameters using timeresolved spectral analysis to study their evolution and correlation among them.The spectral evolution of empirical parameters E p , low and high energy photon indices for GRB 160325A, GRB 160802A, and GRB 160821A is presented in Figures 3, 4, and 5, respectively.The evolution of physical parameters (electron spectral index and magnetic field strength) obtained using synchrotron modeling is also shown in these figures.We observed that E p evolution of all three GRBs has an intensity tracking behavior.Additionally, α pt evolution for GRB 160802A and GRB 160821A have the same tracking behavior, supporting a double-tracking nature.The correlation among different empirical and physical spectral parameters of time-resolved spectral parameters was also studied.The correlation results between different model parameters are listed in the appendix in Table B11.Our correlation analysis indicates that the peak energy of the burst (obtained using empirical fitting) is strongly correlated with flux evolution for all three GRBs.We also observed that α pt is strongly correlated with flux evolution for GRB 160802A and GRB 160821A.However, it is anti-correlated for GRB 1603025A (correlation analysis for GRB 160325A is not statistically significant due to less number of available bins).The physical parameters B and p calculated using synchrotron modeling are found to be correlated with each other for GRB 160802A and GRB 160821A.More-over, the physical parameters B and p strongly correlate with empirical parameters E p , α pt , and flux for GRB 160802A and GRB 160821A. Time-resolved polarization measurements Previous studies on the polarization of a few GRBs, such as GRB 100826A, GRB 160821A, and GRB 170114A, have suggested that their polarization properties could exhibit temporal evolution (Yonetoku et al. 2011a;Sharma et al. 2019;Burgess et al. 2019).However, it's important to note that these GRBs were observed using different instruments and analyzed through distinct pipelines.The observed hints regarding the evolution in polarization properties of GRB 100826A, GRB 160821A, and GRB 170114A were obtained using the GAP, AstroSat/CZTI, and POLAR instruments, respectively.These findings imply that the polarization properties of GRBs may undergo intrinsic changes over time, potentially resulting in null or low polarization fractions in time-integrated polarization measurements. In our recent five-year catalog paper (Chattopadhyay et al. 
2022), we highlighted a notable observation: approximately 75% of GRBs exhibit low or null polarization fractions in our time-integrated polarization analysis.However, to ascertain whether these bursts are intrinsically unpolarized or if their polarization properties undergo changes within the bursts, leading to null or low polarization, a detailed time-resolved polarization analysis is imperative.In the present work, we studied a detailed time-resolved polarization analysis of five GRBs detected in the first year of operation of As-troSat.We applied two distinct binning techniques to the GRB light curves and subsequently conducted polarization measurements.In case the GRB light curve has more than one pulse (for example, GRB 160325A and GRB 160802A), we selected individual pulses for timeresolved polarization measurement.Conversely, for GRBs exhibiting a single pulse, namely GRB 160623A, GRB 160703A, and GRB 160821A, we selected the peak duration of the burst.The results of our time-resolved polarization measurement are tabulated in Table 2. Additionally, we present an illustrative example of the posterior probability distribution obtained through polarization analysis of GRB 160623A (during the peak duration) in Figure 6. Further, we also selected the time bins using the sliding mode temporal binning method (since the GRB light curves exhibit rapid or irregular variations) with a bin width of 10 sec (for GRB 160325A, GRB 160623A, GRB 160703A, and GRB 160821A) or 5 sec (for GRB 160802A) for the time-resolved polarization measurements.We initially divided the light curve into smaller Figure 6.An example of the posterior probability distribution (polarization angle in the top left and polarization degree in the bottom right) obtained using polarization analysis (using MCMC) of GRB 160623A (during the peak duration).In the top right panel, the modulation curve and the sinusoidal fit are illustrated by a solid blue line, accompanied by 100 random Markov Chain Monte Carlo (MCMC) iterations.In the bottom left panel, the confidence area for the polarization angle and degree is represented by red, blue, and green contours, corresponding to confidence levels of 68%, 95%, and 99%, respectively.time intervals of the bin width from 0-10 sec or 0-5 sec and slid these average intervals across the entire duration of the burst with increasing order of 1 sec (GRB 160325A, GRB 160623A, GRB 160703A, and GRB 160802A) or 2 sec (GRB 160821A).Using the sliding mode binning, we calculated the average values of polarization parameters within each bin.The polarization results obtained using the temporal sliding binning along with pulsed/peak-wise binning algorithms are displayed in Figures 3, 4, 5, and 7, respectively.The timeresolved (pulsed-wise) analysis of the first pulses of GRB 160325A and GRB 160802A constrains the higher PF values (see Table 2), although sliding mode analysis of the same pulses indicates lower PF values.We noted that the polarization angles of GRB 160325A, GRB 160623A, GRB 160703A, and GRB 160802A obtained for different burst intervals remain within their respective error bars.This suggests that there is no substantial change in the polarization properties as these bursts evolve.However, we noted that the polarization angles of GRB 160821A changed twice within the burst, consistent with our previous results reported in 100-300 keV with the previous polarization pipeline (Sharma et al. 
2019).Our time-resolved polarization analysis gives a hint that the polarization properties of GRB 160821A depend on the temporal window of the burst. Energy-resolved polarization measurements In addition to conducting time-resolved polarization measurements, we also performed an energy-resolved polarization analysis.A comparison of the polarization fraction obtained using the AstroSat CZTI and POLAR missions catalog revealed that AstroSat CZTI detected approximately 20 % higher polarization compared to POLAR measurements (Chattopadhyay et al. 2022).We suggested that the discrepancy between the observed time-integrated and energy-integrated polarization fractions of prompt emission by AstroSat CZTI and POLAR missions could be attributed to the fact that both instruments report the polarization fractions values in different energy channels (CZTI values in 100-600 keV, and POLAR values in 50-500 keV). In this work, we carried out energy-resolved polarization measurements to investigate the energy-dependent behavior of polarized radiation (polarization degree and angle) during the prompt phase of GRBs.We have employed two methods for selecting energy bins of individual bursts.Initially, we selected the bins based on observed peak energy calculated from the time-averaged spectral analysis.We created two bins: one ranging from 100-E p keV and the other from E p -600 keV.In cases where the observed peak energy exceeded 600 keV (the maximum allowed energy range for the polarization measurements using CZTI), we selected the following bins: 100-300 keV and 300-600 keV, considering that the mean value of peak energy for long GRBs is approximately 200-300 keV (refer to Figure 1).The calculated values of the energy-resolved polarization fraction of all five bursts are listed in Table 3. Further, we also selected the energy bins using the sliding mode spectral binning method (since the GRB spectra exhibit rapid variations) with a bin width of 50 keV for the energy-resolved polarization of all five GRBs in our sample.We initially divided the spectrum into smaller energy intervals of the bin width from 100-300 keV and slid these average energy intervals across the total energy range of the CZTI with an increasing order of 50 keV.Using the sliding mode binning, we calculated the average values of polarization parameters within each spectral bin.The polarization results obtained using the energy sliding binning algorithm are shown in Figure 8.We noted that the polarization angles of GRB 160325A, GRB 160703A, and GRB 160802A, GRB 160821A obtained for different energy segments remain mostly consistent (no substantial change in the polarization angles); however, we noted that the polarization angles of GRB 160623A changed with energy.Additionally, we noted that the polarization fraction values have increasing trends with energy, although the analysis might be limited due to fewer Compton counts in later energy bins.The energy-resolved polarization analysis gives a hint that polarization measurements depend on the energy channels of the detectors. DISCUSSION Based on the above data analysis and results, we present the key discussion on the spectro-polarimetric properties of individual GRBs in this section. 
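For reference, the modulation-curve fitting that underlies the polarization fractions discussed in this section (and illustrated in Figure 6) can be sketched as follows. The CZTI pipeline fits the sinusoid with MCMC; here a simple least-squares fit stands in, and the eight-bin azimuthal histogram, the cos 2(φ − φ0) form of the modulation, and the µ 100 value are illustrative assumptions rather than pipeline outputs.

import numpy as np
from scipy.optimize import curve_fit

def modulation_curve(phi, amplitude, mu, phi0):
    """Compton-polarimetry modulation: counts versus azimuthal scattering angle.
    phi and phi0 are in radians; mu is the modulation factor."""
    return amplitude * (1.0 + mu * np.cos(2.0 * (phi - phi0)))

# Bin centres of the background-subtracted, geometry-corrected azimuthal
# distribution; the counts and errors below are illustrative placeholders.
phi_centres = np.deg2rad(np.arange(22.5, 360.0, 45.0))      # 8 azimuthal bins
counts = np.array([120., 95., 80., 102., 118., 97., 82., 100.])
errors = np.sqrt(counts)

popt, pcov = curve_fit(modulation_curve, phi_centres, counts,
                       p0=[counts.mean(), 0.2, 0.0], sigma=errors)
mu_fit = abs(popt[1])

# mu_100: modulation expected for 100% polarized photons, obtained in practice
# from the mass-model simulations described above (placeholder value here).
mu_100 = 0.4
pf = mu_fit / mu_100
print(f"modulation = {mu_fit:.2f}, polarization fraction = {100 * pf:.0f} %")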
Jet composition and emission mechanisms of the sample The main objective of this study is to investigate the possible jet composition and emission mechanisms of GRBs through time-resolved and energy-resolved spectro-polarimetric analysis.Different radiation models in GRBs are associated with different polarization fraction values.However, it is important to note that the observed polarization fraction values also depend on the viewing geometry of the bursts.To assess the viewing geometry of individual bursts, we employed the Γθ j condition.By applying this condition, we sought to gain insights into the viewing perspective of the bursts and their implications on their polarization properties.When viewing the jet from an on-axis perspective, the value of Γθ j is significantly greater than 1.Conversely, for off-axis observations, Γθ j is expected to be much smaller than 1.In the case of a narrow jetted view, the Γθ j value is expected to be approximately 1.The value of Γ of the fireball can be derived from prompt emission as well as afterglow features of GRBs (Liang et al. 2010;Ghirlanda et al. 2018).In this work, we constrain the value of bulk Lorentz factor using well-studied Liang correlation3 , the strong correlation between bulk Lorentz factor and isotropic gamma-ray energy of the fireball (Liang et al. 2010).The derived values of bulk Lorentz factor are tabulated in Table 4.For GRB 160623A, we obtained E γ,iso value using Konus-Wind observations (Tsvetkova et al. 2017) as the main emission was not detected using Fermi GBM.Additionally, we derive the jet opening angle (lower limits) using the X-ray afterglow light curves observed using Swift XRT and equation 4 of Frail et al. (2001).The θ j value depends on microphysical afterglow parameters (medium number density (n 0 ) and electrons thermal energy fraction (ϵ e )).We assume typical values of n 0 = 1 and ϵ e = 0.2 to constrain θ j values (Gupta et al. 2022c).However, detailed afterglow modeling and a good data set will be needed to constrain these parameters better (Gupta et al. 2022a).For GRB 160623A, we obtained the θ j value from Chen et al. (2020).However, in the case of GRB 160802A and GRB 160821A, no Swift XRT observations are available, so we used θ j = 2.1 degree, which is the mean value of jet opening angle for typical Fermi-detected long GRBs (Sharma et al. 2021).After calculating the bulk Lorentz factor and jet opening angle values for individual bursts, we determine the viewing geometry (Γθ j ).The calculated values of Γθ j are tabulated in Table 4.We noted that the calculated for all five bursts in our sample have Γθ j >> 1, suggesting that the jet from these GRBs is observed from an on-axis perspective.Further, we utilized Γθ j condition and our spectro-polarimetric results for each of the five GRBs in our sample to investigate GRBs' possible jet composition and emission mechanisms. 
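The viewing-geometry check described above can be summarized in a short sketch. The Liang et al. (2010) normalization used below (Γ ≈ 182 (E γ,iso /10 52 erg) 1/4, as commonly quoted) and the numerical cuts separating the on-axis, edge, and off-axis cases are assumptions made for illustration; the measured values are those reported in Table 4.

import math

def bulk_lorentz_factor(e_iso_erg):
    """Approximate bulk Lorentz factor from the Liang et al. (2010) correlation,
    Gamma ~ 182 * (E_iso / 1e52 erg)**0.25 (normalization assumed here)."""
    return 182.0 * (e_iso_erg / 1e52) ** 0.25

def viewing_geometry(gamma, theta_j_deg):
    """Classify the viewing geometry via Gamma * theta_j:
    >> 1 on-axis, << 1 off-axis, ~ 1 near the jet edge.
    The numerical cuts below are illustrative choices, not values from the text."""
    g_theta = gamma * math.radians(theta_j_deg)
    if g_theta > 10.0:
        return g_theta, "on-axis"
    if g_theta < 0.1:
        return g_theta, "off-axis"
    return g_theta, "near jet edge"

# Illustrative inputs (not the measured numbers from Table 4):
gamma = bulk_lorentz_factor(1e53)                 # ~324
print(viewing_geometry(gamma, theta_j_deg=2.1))   # Gamma*theta_j ~ 12 -> "on-axis"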
GRB 160325A

We studied the spectro-polarimetric properties (analysis of spectral properties using Fermi as well as the polarization of the emitted radiation using AstroSat) of GRB 160325A for both pulses (the light curve of this burst exhibits two separate emission episodes with a quiescent period in between). The α_pt values seem harder during the first episode, and we observed a low polarization fraction (using time-resolved polarization measurements) during this episode. Conversely, the α_pt value becomes softer during the second emission episode, and we observed a hint of a high polarization fraction (an upper limit of 98%). The observed spectro-polarimetric properties during the first episode suggest that the emission during this episode originated from a thick-shell photosphere with localized dissipation occurring below it. In contrast, the emission during the second episode is dominated by thin-shell synchrotron emission. Furthermore, our time-resolved polarization measurements of GRB 160325A indicate a transition from a baryonic-dominated jet composition during the first episode to a subdominant Poynting flux jet composition during the second episode. Our results (with an updated polarization analysis pipeline) are consistent with our previous spectro-polarimetric analysis of the burst reported in 100-300 keV (Sharma et al. 2020).

GRB 160623A

The prompt light curve of GRB 160623A obtained using Konus-Wind exhibits a broad emission episode (main), followed by a weaker emission episode (Frederiks et al. 2016a). However, the main emission episode of GRB 160623A was occluded for the Fermi mission. Therefore, we utilized the time-integrated spectral analysis results that we previously reported using Konus-Wind observations to constrain the radiation mechanism of GRB 160623A (Chattopadhyay et al. 2022). We noted that the observed value of α_pt from the time-integrated Konus-Wind spectrum lies within the synchrotron slow- and fast-cooling predictions. Additionally, the time-integrated and time-resolved polarization analysis using CZTI data gives a hint of a high degree of polarization (however, it is important to note that within a 2-sigma confidence interval, these polarization measurements are consistent with low polarization), supporting synchrotron emission in an ordered magnetic field (see Figure 7). The possibility of no significant polarization cannot be entirely ruled out based on the current measurements. Our spectro-polarimetric analysis of GRB 160623A suggests a Poynting flux jet composition throughout the burst's emission.

GRB 160703A

The light curve of GRB 160703A, as observed by Konus-Wind, displays multiple overlapping emission pulses (Frederiks et al. 2016b), consistent with the Swift BAT light curve (see Figure A2 of the appendix). However, since this GRB was not detected by the Fermi mission, we were unable to perform a detailed time-resolved spectral analysis of this burst. To investigate the radiation mechanism of GRB 160703A, we relied on the time-integrated spectral analysis results previously reported by us using Konus-Wind observations (Chattopadhyay et al.
2022). The low-energy photon index obtained from the time-integrated Konus-Wind spectrum is consistent with the synchrotron emission model. Furthermore, our α_pt value calculated using the time-integrated Swift BAT spectral analysis is also consistent with the synchrotron emission model (see Table B1 of the appendix). Similar to the previous case, both the time-integrated and time-resolved polarization analyses using AstroSat CZTI data provide a hint of a high degree of polarization (however, it is important to note that the observed polarization is also consistent with low polarization within a 2-sigma confidence interval), supporting the presence of synchrotron emission in an ordered magnetic field (see Figure 7). Our spectro-polarimetric analysis of GRB 160703A suggests a Poynting flux jet composition throughout the emission of the burst.

GRB 160802A

The light curve of GRB 160802A displays two distinct emission episodes separated by a quiescent period (see Figure A2 of the appendix). A detailed spectro-polarimetric analysis was conducted for both episodes, revealing a notable similarity in spectral behavior to GRB 160325A. The spectral analysis of GRB 160802A indicates that the low-energy photon index remains hard (above the synchrotron emission "line of death") for most of the temporal bins in the first episode (see Figure 4). Time-resolved polarization measurements (sliding-mode analysis) during this episode constrain the polarization fraction to low values. Since the jet of this burst was observed on-axis (see Section 5.1), our spectro-polarimetric analysis of the first episode is consistent with the photospheric emission model. Such hard values of α_pt and a low polarization fraction can be explained using a baryonic-dominated jet with subphotospheric dissipation. In contrast, the α_pt value becomes softer (than in the first episode) during the second emission episode. Although we obtained a hint of a higher degree of polarization (relative to the time-resolved measurements during the first episode), we were unable to obtain a more precise measurement due to the low number of Compton counts during this episode. The observed spectro-polarimetric properties during the second episode suggest that it is dominated by thin-shell synchrotron emission. Furthermore, our time-resolved polarization measurements of GRB 160802A suggest a possible transition from a baryonic-dominated jet composition during the first episode to a subdominant Poynting flux jet composition during the second episode. However, the limited number of Compton events during the second episode of GRB 160802A prevents us from making a definitive claim for such a transition.
GRB 160821A

The light curve of GRB 160821A observed by Fermi GBM reveals an initial fainter emission followed by a highly intense emission. However, the initial weaker emission was not detected by AstroSat CZTI. Therefore, this study focuses solely on the spectro-polarimetric analysis of the main emission episode of GRB 160821A. The exceptional brightness of the main emission episode of GRB 160821A allows us to perform a detailed time-resolved spectro-polarimetric analysis of the burst. The observed evolution of α_pt lies within the predicted range of the thin-shell synchrotron emission model. The high flux suggests that the burst is observed on-axis, as discussed in Section 5.1. During the rising and peak phases of the main pulse, we observed a swing in the polarization angle by approximately 90 degrees. Subsequently, from the peak to the decay phase of the pulse, the polarization angle swings back. Our time-resolved polarization analysis indicates that the lower value of the time-integrated polarization fraction reported in Chattopadhyay et al. (2022) may be attributed to the variation in the polarization angle. The spectro-polarimetric analysis of GRB 160821A provides further support for synchrotron emission occurring within an ordered magnetic field. The results also suggest that the jet composition throughout the burst's emission is dominated by Poynting flux. These results align with our previous spectro-polarimetric analysis of the burst reported in the 100-300 keV energy range (Sharma et al. 2019).

SUMMARY AND CONCLUSION

The spectro-polarimetric properties of GRBs have been investigated for only a limited number of bursts, and most of the studies explored only time-integrated polarization measurements, owing to the transient behavior of GRBs in particular and the challenge of X-ray polarization measurement in general (Gill et al. 2020; Kole et al. 2020; Chattopadhyay et al. 2022). In our recent study, we suggested that the majority of bursts in the sample exhibit minimal or no polarization in our time-integrated measurements within the 100-600 keV energy range, as observed with AstroSat CZTI (Chattopadhyay et al. 2022). However, a detailed time-resolved and energy-resolved polarization analysis was needed to identify whether the observed low polarization fraction is intrinsic or due to variation in the polarization fraction and polarization angle with time and energy within the burst. In this paper, we investigated the prompt emission temporal, spectral, and polarization properties of five bright bursts observed using the CZTI onboard AstroSat in its first year of operation. Our study focuses on the application of time-resolved and energy-resolved spectro-polarimetry techniques to obtain detailed polarization information and characterize the emission properties of these GRBs. The primary objective of our study is to delve into the jet compositions of these bright GRBs and constrain the different radiation models of prompt emission. This issue has been a subject of long-standing debate, and prompt emission spectroscopy on its own has been insufficient to resolve these questions independently.
By exploiting the high-angular-resolution CZTI data, we have derived time-resolved polarization profiles for a sample of GRBs. We studied the Γθ_j condition to constrain the jet geometry of these bursts, as the observed polarization also depends on the jet geometry. We utilized the 10.4m GTC and 3.6m DOT telescopes to constrain the redshift/host search of the bursts, which further helps to verify the Γθ_j condition. Our analysis suggests that the jet emissions from these GRBs were observed on-axis. Furthermore, our comprehensive spectro-polarimetric analysis suggests that GRB 160623A, GRB 160703A, and GRB 160821A have a Poynting flux-dominated jet, and their emission could be explained using a thin-shell synchrotron emission model in an ordered magnetic field. On the other hand, GRB 160325A and GRB 160802A have a first pulse with a thermal signature followed by non-thermal emission during the second pulse. Our analysis indicates that GRB 160325A and GRB 160802A have a baryonic-dominated jet with mild magnetization. We do not observe any rapid evolution in the polarization angles of GRB 160325A, GRB 160623A, GRB 160703A, and GRB 160802A. However, we observe a rapid change in polarization angle by ∼ 90 degrees within the main pulse of the very bright GRB 160821A, consistent with our previous results reported in 100-300 keV (Sharma et al. 2019). The profile of GRB 160821A (time-resolved polarization analysis) reveals temporal variations in the angle of polarization, shedding light on the radiation mechanisms and geometry involved in this extreme event. We note that some authors have performed theoretical simulations and reproduced such large temporal variations in the polarization angle under the photospheric emission model; they also discussed the physics and implications of observing such changes (Ito et al. 2024; Parsotan et al. 2020). However, our analysis reveals a hint of a high degree of polarization for GRB 160821A, which contrasts with the predictions of the photospheric emission model. Additionally, we have studied the polarization properties as a function of energy, suggesting a hint of variations in the polarization degree and angle across different energy bands. We noted that the polarization angles of GRB 160325A, GRB 160703A, GRB 160802A, and GRB 160821A obtained for different energy segments remain mostly consistent; however, the polarization angle of GRB 160623A changed with energy (though with a large associated error due to the limited number of Compton events). Further, we noted that the polarization fraction values show an increasing trend with energy, although the analysis might be limited due to fewer Compton counts in the higher energy bins. The energy-resolved polarization analysis gives a hint that the polarization properties depend on the energy channels of the detectors.
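As a small illustration of the energy-sliding binning summarized above, the snippet below generates overlapping energy windows stepped in 50 keV increments across the 100-600 keV CZTI range. The 100 keV window width is an assumption made for illustration; the description above fixes only the 50 keV step, so both numbers should be treated as placeholders for the values actually used in the pipeline.

```python
def sliding_energy_bins(e_min=100.0, e_max=600.0, width=100.0, step=50.0):
    """Overlapping (lo, hi) energy windows of fixed `width`, slid in `step`-keV
    increments across [e_min, e_max]; width and step are illustrative."""
    bins, lo = [], e_min
    while lo + width <= e_max:
        bins.append((lo, lo + width))
        lo += step
    return bins

print(sliding_energy_bins())
# [(100.0, 200.0), (150.0, 250.0), ..., (500.0, 600.0)]
```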
Our results demonstrate the capability of AstroSat CZTI for detailed time-resolved and energy-resolved spectro-polarimetry of GRBs. The combination of high-angular-resolution imaging, broad energy coverage, and polarization sensitivity provides a unique opportunity to unravel the complex physics governing these explosive phenomena. By studying the polarization of these GRBs, we obtain important insights into the geometry and magnetic field structures associated with these bursts. Our findings suggest that prompt emission polarization analysis, when combined with spectral and temporal data, possesses a distinct capacity to resolve the long-standing debate surrounding the emission mechanisms of GRBs. A comprehensive analysis that delves into both time-resolved and energy-resolved spectro-polarimetry offers greater insight into the emission mechanisms of GRBs compared to a time-averaged spectro-polarimetric analysis (Gupta 2023). Our time-resolved and energy-resolved analysis may be somewhat limited due to the relatively low number of Compton events in the finer time/energy bins. We need more observations (extremely bright GRBs with more Compton counts) or more sensitive GRB polarimeters with larger effective areas, along with refined theoretical models, to improve our understanding of the physical processes that drive these energetic and enigmatic events. Additionally, examination of the correlation between spectral parameters and measured polarization parameters for more bright GRBs will provide further constraints on the radiation physics of GRBs. The findings presented in this study pave the way for future investigations and highlight the potential of AstroSat CZTI for advancing our understanding of GRBs and their role in the Universe. Further, the insights gained from this study have profound implications for our understanding of high-energy astrophysics and the physical processes associated with GRBs. The scientific community is actively engaged in preparing for the next generation of gamma-ray missions, including COSI, eAstroGAM, AMEGO, AMEGO-X, and POLAR 2. Our research contributes valuable insights for these forthcoming missions, particularly through our time-resolved polarization measurements. This information is instrumental for the development and optimization of upcoming GRB polarimeters such as LEAP, POLAR 2 (Hulsman 2020), COSI, and other missions.

We thank the anonymous referee for providing positive and encouraging comments on our manuscript. RG and SBP are very grateful to Prof. A. R. Rao for the excellent suggestions and discussions on the project. RG is also thankful to Dr.
Tyler Parsotan for reading the manuscript and fruitful discussion. RG, SBP, DB, and VB acknowledge the financial support of ISRO under the AstroSat archival data utilization program (DS 2B-13013(2)/1/2021-Sec.2). This publication uses data from the AstroSat mission of the Indian Space Research Organisation (ISRO), archived at the Indian Space Science Data Centre (ISSDC). CZT-Imager is built by a consortium of institutes across India, including the Tata Institute of Fundamental Research (TIFR), Mumbai; the Vikram Sarabhai Space Centre, Thiruvananthapuram; ISRO Satellite Centre (ISAC), Bengaluru; Inter University Centre for Astronomy and Astrophysics, Pune; Physical Research Laboratory, Ahmedabad; and Space Application Centre, Ahmedabad. This research has also used data obtained through the HEASARC Online Service, provided by the NASA-GSFC, in support of NASA High Energy Astrophysics Programs. RG was sponsored by the National Aeronautics and Space Administration (NASA) through a contract with ORAU. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Aeronautics and Space Administration (NASA) or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. AA acknowledges funds and assistance provided by the Council of Scientific & Industrial Research (CSIR), India, under file no. 09/948(0003)/2020-EMR-I. AA also acknowledges the Yushan Young Fellow Program by the Ministry of Education, Taiwan, for financial support. This research is based on observations obtained at the 3.6m Devasthal Optical Telescope (DOT), which is a National Facility run and managed by Aryabhatta Research Institute of Observational Sciences (ARIES), an autonomous institute under the Department of Science and Technology, Government of India.

Table B3. The time-resolved spectral analysis of GRB 160325A was conducted using empirical models, namely the Band and Band + Blackbody. The flux values (in erg cm^-2 s^-1) reported in this study were calculated within the energy range of 8 keV to 40 MeV.

Figure 1. Prompt emission characteristics of the GRBs: the distributions of basic spectral (α_pt, top-left; E_p, top-right; β_pt, middle-left) and temporal (T90, middle-right) properties of GBM-detected GRBs. The solid black lines correspond to the theoretically predicted values of the low-energy photon index from thin-shell synchrotron emission models. The vertical colored lines denote the positions of the GRBs studied in this paper. The kernel density estimations (KDE) for all the distributions are shown using grey curves. Bottom-left: histogram of Fermi GBM (light blue) and Swift BAT (orange) energy fluence values. The mean fluence values for the BAT and GBM samples are marked by vertical solid orange and blue lines, respectively. The positions of all five bursts in our sample are marked using vertical colored lines. The inset plot illustrates the relationship between energy fluence and duration for Fermi GRBs. Bottom-right: E_p-T90 (hardness-duration) plot for Fermi GBM GRBs. The locations of the five GRBs in our sample are shown using colored squares. The vertical red line represents the threshold for classifying bursts. The figure displays the long and short bursts obtained from the GBM catalog. The probability of long GRBs is represented on the right side of the Y-scale.

Figure 2.
Prompt emission correlations of GRBs. Top: the location of the five bright GRBs in the Amati correlation. The well-studied long and short bursts extracted from Minaev & Pozanenko (2020) are represented by blue and orange circles, respectively, with solid blue and orange lines depicting the linear fits for these groups. The parallel shaded areas illustrate the 3σ variation. Bottom: the location of the five GRBs in the Yonetoku correlation. The well-studied long and short bursts, as studied in Nava et al. (2012), are shown with blue and orange circles. The parallel shaded areas indicate the 3σ scatter. The colored squares illustrate the location of the GRBs of our sample. In our analysis, we included all five GRBs in the Amati and Yonetoku relations. However, for the GRBs without measured redshifts, we assumed a redshift value of 2, the mean of the redshift distribution for long GRBs (Gupta et al. 2022b).

Figure 3. Time-resolved spectro-polarimetric characteristics of GRB 160325A. Top-left: temporal evolution of the peak energy or cutoff energy obtained using empirical spectral fitting. Top-right: temporal evolution of the high-energy photon index. Middle-left: temporal evolution of the low-energy photon index. The black solid lines correspond to the theoretically predicted values of the low-energy photon index from thin-shell synchrotron emission models. The pulse-wise time-resolved polarization fraction is shown using blue squares. The right-side y-scale (light blue) represents the evolution of the polarization fraction over time obtained using time-resolved polarization analysis (sliding mode). Middle-right: temporal evolution of the polarization angle over time obtained using time-resolved polarization analysis (sliding mode). Bottom-left: temporal evolution of the power-law index of the energy distribution of the injected electrons obtained using physical spectral fitting. Bottom-right: temporal evolution of the strength of the magnetic field. Squares show the results of the pulse-wise time-resolved spectro-polarimetric analysis.

Figure 7. Time-resolved polarization analysis of GRB 160623A (left) and GRB 160703A (right). Top panels: 1-sec bin size Compton light curves obtained using CZTI data. Middle panels: the evolution of PF over time (time-sliding mode). The PF obtained during the peak or averaged analysis is shown using blue (GRB 160623A) and grey (GRB 160703A) squares, respectively. The right-side Y-scales (red) show the values of α of GRB 160623A and GRB 160703A, respectively. The black solid lines correspond to the theoretically predicted values of the low-energy photon index from thin-shell synchrotron emission models. Bottom panels: the evolution of PA over time (time-sliding mode).

Figure 8. Energy-resolved polarization measurements. Top: Compton light curves of the GRBs with 1-sec bin size obtained using CZTI data. Middle and bottom: the evolution of the polarization fraction and polarization angle with energy. The energy binning has been carried out based on the sliding-mode algorithm.

Table 2. The calculated values of the time-resolved polarization fraction (pulse- or peak-wise time bins) of all five bursts in 100-600 keV.

Table 3. The calculated values of the energy-resolved polarization fraction (100 keV to E_p or 300 keV, and E_p or 300 keV to 600 keV) of all five bursts in our sample.

Table 4. The calculated values of the Lorentz factor and jet opening angle of all five bursts in our sample. Γθ_j ≫ 1 suggests that the jets of all the GRBs in our sample are observed from an on-axis view.

Table B1.
Empirical and physical spectral fitting of the time-averaged spectra of the GRBs in our sample. The time-integrated flux has been calculated over the 10 keV to 40 MeV energy range. For the Swift-detected GRB 160703A, the time-integrated flux has been calculated over the 15 keV to 150 keV energy range.

Table B2. The time-resolved spectral analysis of GRB 160325A was conducted using empirical models, namely the Cutoff power-law and CPL + Blackbody. The flux values (in erg cm^-2 s^-1) reported in this study were calculated within the energy range of 8 keV to 40 MeV.

Table B4. The time-resolved spectral analysis of GRB 160325A was conducted using a physical model, namely the synchrotron model. The flux values (in erg cm^-2 s^-1) reported in this study were calculated within the energy range of 8 keV to 40 MeV.

Table B5. Similar to B2 but for GRB 160802A.

Table B6. Similar to B3 but for GRB 160802A.

Table B7. Similar to B4 but for GRB 160802A.

Table B8. Similar to B2 but for GRB 160821A.

Table B10. Similar to B4 but for GRB 160821A.
Joint Estimation of the Non-parametric Transitivity and Preferential Attachment Functions in Scientific Co-authorship Networks

We propose a statistical method to estimate simultaneously the non-parametric transitivity and preferential attachment functions in a growing network, in contrast to conventional methods that either estimate each function in isolation or assume some functional form for them. Our model is shown to be a good fit to two real-world co-authorship networks and to be able to bring to light intriguing details of the preferential attachment and transitivity phenomena that would be unavailable under traditional methods. We also introduce a method to quantify the amount of contributions of those phenomena in the growth process of a network based on the probabilistic dynamic process induced by the model formula. Applying this method, we found that transitivity dominated PA in both co-authorship networks. This suggests the importance of indirect relations in scientific creative processes. The proposed methods are implemented in the R package FoFaF.

Introduction

Science has never been more collaborative. In this era that has been witnessing an unprecedented explosion of multi-author scholarly articles (Larivière, Gingras, Sugimoto, & Tsou, 2015), collaboration has become more and more important in the path to scientific success (Jones, Wuchty, & Uzzi, 2008; Bornmann, 2017). Promising ideas from numerous analytic fields, including complex network theory, statistics, and informetrics, have been woven together to understand this collaborative nature of science (Zeng et al., 2017; Fortunato et al., 2018). One of the early attempts to analyze the formative process of scientific collaborations came from physics when Newman proposed a non-parametric method to estimate the preferential attachment (PA) and transitivity functions from scientific collaboration networks (Newman, 2001a). PA (Price, 1965; Merton, 1968; Price, 1976; Albert & Barabási, 1999) and transitivity (Heider, 1946; Holland & Leinhardt, 1970, 1971, 1976) are two of the most fundamental mechanisms of network growth. On the one hand, PA is a phenomenon concerning the first-order structure of a network. In PA, the higher the number of co-authors a scientist already has, the more new collaborators they will acquire. On the other hand, transitivity concerns the second-order structure: co-authors of co-authors are likely to collaborate. Newman's method is non-parametric in the sense that it does not assume any forms for either the PA or the transitivity function. The method, however, considers each phenomenon in isolation and thus completely ignores any entanglements of the two phenomena, which are entirely plausible in real-world networks. Apart from this non-parametric-in-isolation approach, a joint-estimation approach, in which the two phenomena are considered simultaneously, has been attempted recently (Kronegger, Mali, Ferligoj, & Doreian, 2012; Ferligoj, Kronegger, Mali, Snijders, & Doreian, 2015; Zinilli, 2016), all under the framework of stochastic actor-based models (Snijders, 2001). This approach is, however, inherently parametric: it assumes the forms of the PA and transitivity functions a priori, and thus risks losing fine details of the two phenomena, details that are difficult to capture by any parametric functional forms. We argue that the ideal method, whenever possible, should combine the best of both worlds: it should consider both phenomena simultaneously, and it should not assume any functional forms for them.
Our main contributions are three-fold. In our first contribution, we propose a network growth model that combines non-parametric PA and transitivity functions and derive an efficient Minorize-Maximization (MM) algorithm (Hunter & Lange, 2000) to estimate them simultaneously. This iterative algorithm is guaranteed to increase the model's log-likelihood at each iteration. We demonstrate through simulated examples that our approach is capable of capturing complex details of PA and transitivity, while the conventional approaches cannot (cf. Fig. 1). We also perform a systematic simulation to confirm the performance of our algorithm. In our second contribution, we suggest a method to quantify the amount of contributions of PA and transitivity in the growth process of a network. Our quantification exploits the probabilistic dynamic process induced by the network growth formula and can be easily extended to other network growth mechanisms. In our third contribution, we apply the proposed methods to two real-world co-authorship networks and uncover some interesting properties that would be unavailable under conventional approaches. In particular, in contrast to the typical power-law functional form assumption, the transitivity effect seems to be highly non-power-law. We also found that transitivity dominated PA in the growth processes of both networks. This suggests the importance of indirect relations in scientific creative processes: it does matter who your collaborators collaborate with. All the proposed methods are implemented in the R package FoFaF. The rest of the paper is organized as follows. The proposed method is discussed in detail in Section 2. In Section 3, we discuss how to exploit the probabilistic dynamic process imposed by the model formula to sensibly quantify the amount of contributions of PA and transitivity. We apply the proposed method to two real-world collaboration networks and discuss the results in Section 4. Concluding remarks are given in Section 5.

Proposed Method

We first review briefly the history of PA and transitivity modelling and then describe our network growth model that incorporates non-parametric PA and transitivity functions. We also explain its relation to some conventional network models. We then discuss maximum partial likelihood estimation for the model and provide two simulated examples to demonstrate how our method works. We conclude the section with a systematic simulation to investigate the performance of the proposed method.

PA and transitivity modelling

The notion of a rich-get-richer phenomenon has its roots in the theoretical works of Yule (Yule, 1925) and Simon (Simon, 1955). Its status as a fundamental process in informetrics was cemented by the revolutionary works of Merton (Merton, 1968) and Price (Price, 1965, 1976). The term "preferential attachment" was coined by Barabási and Albert when they re-discovered the mechanism in the context of complex networks (Albert & Barabási, 1999). In PA, the probability that a node with degree k receives a new edge is proportional to its PA function A_k. When A_k is an increasing function on average, the PA effect exists: a node with a high degree k is more likely to receive new connections. To estimate the PA phenomenon in a network is to estimate the function A_k given that network's growth data. Various non-parametric approaches (Newman, 2001a; Pham, Sheridan, & Shimodaira, 2015) and parametric ones (Massen & Jonathan, 2007; Gómez, Kappen, & Kaltenbrunner, 2011) have been proposed.
In parametric methods, power-law functional forms, e.g., A_k = (k + 1)^α, are often employed (Krapivsky, Rodgers, & Redner, 2001). Transitivity started out as a concept in psychology (Heider, 1946) and was developed theoretically in the framework of social network analysis by Holland and Leinhardt in the 1970s (Holland & Leinhardt, 1970, 1971, 1976). It was introduced to the informetrician's modelling toolbox in 2001, when Newman provided a heuristic method to estimate the transitivity function in real-world co-authorship networks (Newman, 2001a) and Snijders introduced his now-famous stochastic actor-based models that include transitivity as a network formation mechanism (Snijders, 2001). In transitivity, the probability that a pair of nodes with b common neighbors forms a new edge is proportional to the transitivity function B_b. When B_b is an increasing function on average, the transitivity effect is at play: the more common neighbors a pair of nodes share, the easier it is for them to connect. Similar to the case of PA, non-parametric approaches (Newman, 2001a) and parametric approaches (Kronegger et al., 2012; Ferligoj et al., 2015; Zinilli, 2016) have been proposed to estimate B_b from observed network data. We re-emphasize that all existing methods either consider PA or transitivity in isolation, or are of a parametric nature.

Proposed network model

Our model can be viewed as a discrete Markov model, which is a popular framework in social network modeling (Holland & Leinhardt, 1977). Let G_t denote the network at time t. Starting from a seed network G_0, at each time-step t = 1, ..., T, v(t) new nodes and m(t) new edges are added to G_{t-1} to form G_t. In particular, at the onset of time-step t, let k_i(t) denote the degree of node i and b_ij(t) the number of common neighbors between nodes i and j in G_{t-1}. Our model dictates that the probability that a new edge emerges between node i and node j at time-step t is independent of the other new edges at that time and is given by

P_ij(t) ∝ A_{k_i(t)} A_{k_j(t)} B_{b_ij(t)},   (1)

where A_k is the PA function of the degree k and B_b is the transitivity function of the number of common neighbors b. In other words, the un-ordered pair of the two ends (i, j) of a new edge follows a categorical distribution over all un-ordered pairs of nodes existing at time t. Each pair's weight is proportional to the product of the PA and transitivity values of that pair at t. Thus this formulation can capture the PA and transitivity effects simultaneously. Suppose that the joint distribution of v(t), m(t), and G_0 is governed by some parameter vector θ. We make a standard assumption, which is employed in virtually all network models, that θ is independent of A_k and B_b. As we shall see later, this independence assumption enables a partial likelihood approach in which one can ignore θ in estimating A_k and B_b. Next we discuss the relation between the model in Eq. (1) and models in the literature.

Related models

As explained earlier, while there are models that either include a non-parametric A_k function (Pham et al., 2015) or a non-parametric B_b function (Newman, 2001a), Eq. (1) is the first to combine both non-parametric functions. It includes as special cases some well-known complex network models, such as the Barabási-Albert model (Albert & Barabási, 1999) or the Erdös-Rényi-with-growth model (Callaway, Hopcroft, Kleinberg, Newman, & Strogatz, 2001).
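As a concrete, hedged illustration of the categorical distribution in Eq. (1), the sketch below computes the normalized edge probabilities over all un-ordered node pairs from a degree vector, a common-neighbour matrix, and tabulated A_k and B_b values. It is written in Python/NumPy rather than in the FoFaF package itself, and the function name, the toy graph, and the numerical values of A and B are purely hypothetical.

```python
import numpy as np

def edge_probabilities(deg, common, A, B):
    """Probability of each un-ordered node pair (i, j) receiving the next new
    edge under Eq. (1): P_ij proportional to A[deg[i]] * A[deg[j]] * B[common[i, j]].

    deg    : length-n integer array of current degrees k_i(t)
    common : n x n integer array of common-neighbour counts b_ij(t)
    A, B   : non-parametric PA and transitivity functions, indexed by value
    """
    n = len(deg)
    iu, ju = np.triu_indices(n, k=1)          # all un-ordered pairs i < j
    w = A[deg[iu]] * A[deg[ju]] * B[common[iu, ju]]
    return iu, ju, w / w.sum()                # normalized categorical weights

# toy illustration with hypothetical functions and a 4-node graph
A = np.array([1.0, 1.5, 2.2, 3.0, 4.1])       # A_k for k = 0..4
B = np.array([1.0, 60.0, 65.0, 70.0])         # B_b for b = 0..3
deg = np.array([2, 1, 3, 2])
common = np.zeros((4, 4), dtype=int)
common[0, 2] = common[2, 0] = 1
i, j, p = edge_probabilities(deg, common, A, B)
print(list(zip(i, j, np.round(p, 3))))
```

In the actual estimation problem A and B are unknown; the sketch only illustrates how, once they are given (or estimated), the model assigns a weight to every candidate pair.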
The well-known stochastic actor-based model (Snijders, 2001, 2017; Ripley, Snijders, Boda, Vörös, & Preciado, 2018) has been employed in studies of scientific co-authorship networks (Kronegger et al., 2012; Ferligoj et al., 2015; Zinilli, 2016). It is, however, not clear how to convert the PA and transitivity functions in our probabilistic setting to those in the setting of the stochastic actor-based model, since the two models are defined differently. We note that the PA and transitivity phenomena are, in effect, modelled in a parametric manner in the stochastic actor-based model. One key assumption of the model in Eq. (1) is that A_k and B_b do not depend on t, i.e., they stay unchanged throughout the growth process. While this time-invariability assumption is standard and employed in all the network models mentioned previously, there is a growing body of models departing from it. A time-varying A_k has been discussed in the context of citation networks (Csárdi, Strandburg, Zalányi, Tobochnik, & Érdi, 2007; Wang, Yu, & Yu, 2008; Medo, Cimini, & Gualdi, 2011), while different parametric forms for such an A_k are studied by Medo (Medo, 2014). More recently, the R package tergm (Krivitsky & Handcock, 2019) allows the estimation of time-varying parametric PA and transitivity functions. There is, however, no existing work that employs time-varying and non-parametric modelling simultaneously, presumably for the reason that a huge amount of data is probably needed in such a model. It is likely that in practical situations one always has to choose between non-parametric modelling and time-varying modelling. We demonstrate in Section 4.4 that the time-invariability assumption does indeed hold in all real-world networks analyzed in this paper.

Maximum Partial Likelihood Estimation

Here we describe how to estimate the parameters of the model in Eq. (1). Denote by D = {G_0, G_1, ..., G_T} the observed data, and let A = [A_0, A_1, ..., A_{k_max}] with A_k > 0 be the PA function and B = [B_0, B_1, ..., B_{b_max}] with B_b > 0 be the transitivity function. Here k_max is the maximum degree and b_max is the maximum number of common neighbors between a pair of nodes. Given D, our goal is to estimate A and B without assuming any specific functional forms, an approach we call "non-parametric". With the independence assumption stated in the previous section, the part of the log-likelihood that contains A and B and the part of the log-likelihood that contains θ are separable, i.e., L(A, B, θ|D) = L(A, B|D) + L(θ|D) holds, where L denotes the log-likelihood function. This allows us to ignore θ in estimating A_k and B_b. Starting from Eq. (1), with some calculations we arrive at

L(A, B|D) = Σ_{t=1}^{T} Σ_{k1 ≤ k2} Σ_{b} m_{k1,k2,b}(t) log(A_{k1} A_{k2} B_b) − Σ_{t=1}^{T} m(t) log( Σ_{k1 ≤ k2} Σ_{b} n_{k1,k2,b}(t) A_{k1} A_{k2} B_b ),   (2)

where n_{k1,k2,b}(t) is the number of node pairs (i, j) that satisfy (k_i(t), k_j(t), b_ij(t)) = (k_1, k_2, b) with k_1 ≤ k_2 at time-step t, and m_{k1,k2,b}(t) is the number of new edges between such node pairs. The number of new edges at time-step t is then expressed as m(t) = Σ_{k1=0}^{k_max} Σ_{k2=k1}^{k_max} Σ_{b=0}^{b_max} m_{k1,k2,b}(t). Although analytically maximizing L(A, B|D) is intractable, we can derive an efficient MM algorithm that iteratively updates A and B. See Appendix A for its derivation. We denote the final results of the algorithm by Â and B̂, the estimates of A and B.

Illustrated examples

We demonstrate the effectiveness of our method in two examples. In the first example, we simulate a network using Eq. (1) with A_k = 3 log(max(k, 1))^α + 1 and B_b = 3 log(max(b, 1))^α + 1.
This functional form, which deviates substantially from the power-law form, has been used to demonstrate the working of non-parametric PA estimation methods (Pham et al. 2015). The network has a total of N = 1000 nodes. At each time-step, one new node is added to the network with m(t) = 5 new edges. In the second example, we first estimate A_k and B_b by applying our proposed method to a real-world co-authorship network between authors in statistics journals (cf. Section 4), and then use these parameter values for simulating a network based on Eq. (1). In the process, we kept the initial condition and the number of new nodes and new edges at each time-step exactly as observed in the real-world network. We apply three estimation methods to each simulated network. The first is our proposed method, which jointly estimates the non-parametric functions A_k and B_b. The second is a joint parametric method, which jointly estimates PA and transitivity using the simplistic functional forms A_k = (k + 1)^α and B_b = (b + 1)^α. This parametric form is used widely in various PA and transitivity estimation methods (Massen & Jonathan, 2007; Gómez et al., 2011). The third method ignores the joint existence of PA and transitivity: it consists of two sub-methods; the first one is a non-parametric method for estimating PA in isolation (Pham et al., 2015), and the second one is a maximum likelihood version of a non-parametric method for estimating transitivity in isolation (Newman, 2001a). The results are shown in Fig. 1. In both examples, while the joint parametric method somewhat succeeded in obtaining the general trends of A_k and B_b, it failed to capture the deviations from the power-law form in the two functions. The non-parametric-in-isolation method grossly over-estimated both the PA and transitivity mechanisms, due to its complete disregard of their joint existence. The proposed method worked reasonably well, succeeding in capturing the PA and transitivity functions in fine detail.

Simulation study

We perform a systematic simulation to evaluate how well the proposed method can estimate A_k and B_b. We choose A_k = (k + 1)^α and B_b = (b + 1)^β as the true functions. This power-law functional form has been used in previous simulation studies of PA estimation methods (Pham et al., 2015; Pham, Sheridan, & Shimodaira, to appear). We consider five values (0, 0.5, 1, 1.5, and 2) for the exponent α and seven values (0, 0.5, 1, 1.5, 2, 2.5, and 3) for the exponent β. These are the ranges of α and β observed in Section 4.2. For each combination of α and β, we simulate 10 networks. In each network, the total number of nodes is 1000 and there are five new edges at each time-step. For each simulated network, we first estimate A_k and B_b as described in the previous section and then fit (k + 1)^α and (b + 1)^β to the estimation results to find the estimates of α and β. In other words, we indirectly measure how well A_k and B_b are estimated by looking at how well α and β are estimated: if the estimates of α and β are good, the estimations of A_k and B_b are likely successful, too. Figure 2 shows the true and estimated values of α and β. The proposed method successfully recovers α and β in all combinations. This implies that the estimation of A_k and B_b went well.
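For readers who want to reproduce this kind of experiment, a minimal growth simulator under Eq. (1) is sketched below. It is only an illustration, not the FoFaF implementation: the seed graph, the clipping of degree and common-neighbour indices to the length of the supplied tables, the assumed value α = 1, and the choice of distinct pairs within a step are all simplifying assumptions.

```python
import networkx as nx
import numpy as np

def simulate_growth(A, B, n_steps=200, m_per_step=5, seed=0):
    """Grow a network under Eq. (1): at each time-step one new node is added,
    then m_per_step new edges; each edge picks an un-ordered, non-adjacent pair
    (i, j) with probability proportional to A[k_i] * A[k_j] * B[b_ij], with
    degrees and common-neighbour counts taken at the start of the step."""
    rng = np.random.default_rng(seed)
    G = nx.complete_graph(4)                         # small seed graph (assumption)
    for _ in range(n_steps):
        G.add_node(G.number_of_nodes())              # one new node per step
        deg = dict(G.degree())
        nodes = list(G.nodes())
        pairs = [(i, j) for a, i in enumerate(nodes) for j in nodes[a + 1:]
                 if not G.has_edge(i, j)]
        k = lambda v: min(deg[v], len(A) - 1)        # clip indices to the tables
        b = lambda i, j: min(len(list(nx.common_neighbors(G, i, j))), len(B) - 1)
        w = np.array([A[k(i)] * A[k(j)] * B[b(i, j)] for i, j in pairs])
        chosen = rng.choice(len(pairs), size=min(m_per_step, len(pairs)),
                            replace=False, p=w / w.sum())
        G.add_edges_from(pairs[c] for c in chosen)
    return G

# the non-power-law functions of the first illustrated example (alpha = 1 assumed)
alpha = 1.0
ks = np.arange(300)
A = 3.0 * np.log(np.maximum(ks, 1)) ** alpha + 1.0
B = 3.0 * np.log(np.maximum(ks, 1)) ** alpha + 1.0
G = simulate_growth(A, B, n_steps=100)
print(G.number_of_nodes(), G.number_of_edges())
```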
Quantifying the amount of contributions of PA and transitivity

Our model leads to a simple answer to a previously-unraised yet fascinating question: how can one compare the amount of contributions of PA and transitivity in the growth process of a network? To the best of our knowledge, no one has attempted to quantify the amount of contributions of different network growth mechanisms. To answer this question, one must find a meaningful way to define the amounts of contribution so that they are computable and comparable. We achieve this by considering the dynamic process expressed in Eq. (1). This probabilistic dynamic process suggests that the variability of the PA/transitivity values in the set of node pairs is a sensible measure for the amount of contribution of PA/transitivity. Let us define the amount of contributions of PA and transitivity at time-step t. Denote them as s_PA(t) and s_trans(t), respectively. Taking the logarithm of both sides of Eq. (1), one gets

log_2 P_ij(t) = log_2 [A_{k_i(t)} A_{k_j(t)}] + log_2 B_{b_ij(t)} − C(t),   (3)

with C(t) the logarithm of the normalizing constant at time-step t, which is independent of i and j. Equation (3) implies that, looking locally at a node pair (i, j), PA and transitivity contribute to log_2 P_ij(t) by the amounts of log_2 [A_{k_i(t)} A_{k_j(t)}] and log_2 B_{b_ij(t)}, respectively; the amount of contribution is measured in log_2 fold-changes. What is important globally is, however, the relative sizes of all the log_2 [A_{k_i(t)} A_{k_j(t)}] and log_2 B_{b_ij(t)} at that time-step t. For example, consider the case when A_k = 1, ∀k. In this case, the value of log_2 [A_{k_i(t)} A_{k_j(t)}] will be the same for all node pairs, and thus PA would have no role in determining which pair would get a new edge. By considering the case when B_b = 1, ∀b, one can see that the same reasoning also applies to log_2 B_{b_ij(t)}.

Figure 2: The exponents are estimated by a two-step procedure: first A_k and B_b are estimated jointly by the proposed method, then (k + 1)^α and (b + 1)^β are fitted to the estimated results by least squares. Each estimated point is the mean of the results of 10 simulations, with error bars displaying the standard errors of the mean.

This observation prompts us to define s_PA(t) and s_trans(t) as the standard deviations of log_2 [A_{k_i(t)} A_{k_j(t)}] and log_2 B_{b_ij(t)}, respectively, when (i, j) is sampled based on Eq. (1). Let U(t) be the set formed by all node pairs (i, j) that exist at time-step t. The probability P_ij(t) in Eq. (1) can be written explicitly as

P_ij(t) = A_{k_i(t)} A_{k_j(t)} B_{b_ij(t)} / Σ_{(u,v) ∈ U(t)} A_{k_u(t)} A_{k_v(t)} B_{b_uv(t)}.

The aforementioned standard deviations can then be calculated as

s_PA(t) = ( Σ_{(i,j) ∈ U(t)} P_ij(t) { log_2 [A_{k_i(t)} A_{k_j(t)}] − μ_PA(t) }^2 )^{1/2},   (4)
s_trans(t) = ( Σ_{(i,j) ∈ U(t)} P_ij(t) { log_2 B_{b_ij(t)} − μ_trans(t) }^2 )^{1/2},   (5)

where μ_PA(t) and μ_trans(t) denote the corresponding P_ij(t)-weighted means. Note that the standard deviations of log_2 [A_{k_i(t)} A_{k_j(t)}] and log_2 B_{b_ij(t)} are invariant to constant factors in A_k and B_b, and thus s_PA(t) and s_trans(t) are well-defined. The use of base-2 logarithms allows us to interpret s_PA(t) and s_trans(t) as log_2 fold-changes; a contribution value of s indicates a change of the probability by a factor of 2^s in Eq. (1). We also note that, although A_k and B_b are assumed to be time-invariant, k_i(t), b_ij(t), and P_ij(t) change over time, thus leading to the dynamic nature of s_PA(t) and s_trans(t). In real-world situations, what is available to us is not the true values A and B, but only their estimates Â and B̂. We plug these estimates into Eqs. (4) and (5) to obtain ŝ_PA(t) and ŝ_trans(t), estimates of s_PA(t) and s_trans(t). The requirement that (i, j) is sampled from Eq. (1) is needed to faithfully reflect the probabilistic dynamic process and leads to the following interpretation of s_PA(t) and s_trans(t).
Assume that at some time-step t we observed m(t) ≥ 2 new edges whose end points are (i_1, j_1), ..., (i_m(t), j_m(t)). Consider the sample standard deviation of log_2 (B_{b_{i_l j_l}(t)}) for l = 1, ..., m(t),

ĥ_trans(t) = ( (1/(m(t) − 1)) Σ_{l=1}^{m(t)} { log_2 B_{b_{i_l j_l}(t)} − mean }^2 )^{1/2},

and define ĥ_PA(t) analogously from log_2 [A_{k_{i_l}(t)} A_{k_{j_l}(t)}]. Plugging in the estimates Â and B̂, we can view ŝ_PA(t) and ŝ_trans(t) as estimates of the expectations of the sample standard deviations in PA and transitivity values observed at the end points of new edges at time-step t. As we shall see in Section 4.3, this interpretation also gives us a means to visualize how well the model fits an observed network. Finally, we note that this quantification approach is not limited to PA and transitivity. Given a growth formula in which all growth mechanisms are combined in a multiplicative way, for example, as in Eq. (1), the standard deviation of the logarithmic value of each growth mechanism can be used as a measure of the contribution of that mechanism.

Two co-authorship networks

Table 1 shows the summary statistics for the two networks. The ratios ∆|V|/|V| and ∆|E|/|E| are both close to one, which indicates that each network grew out from a very small initial network. Since the number of new edges ∆|E| loosely corresponds to the amount of available data in our statistical model, STA has the biggest amount of data. The clustering coefficients in both networks are rather high, but nevertheless fall in the normal range observed in real-world networks (Newman, 2001b).

Table 1: Summary statistics for two scientific co-authorship networks. |V| and |E| are the total numbers of nodes and edges in the final snapshot, respectively. T is the number of time-steps. ∆|V| and ∆|E| are the increments of nodes and edges after the initial snapshot, respectively. C is the clustering coefficient of the final snapshot. k_max is the maximum degree and b_max is the maximum number of common neighbors.

It is instructive to look at more fine-grained statistics. Figures 3A and B show the distributions of the number of collaborators k in the final snapshot of SMJ and STA, respectively. Since the degree distributions in the two datasets exhibit signs of heavy tails, we fitted one of the most representative classes of heavy-tail distributions, the power-law distribution k^{−γ_deg}, to these degree distributions by Clauset's procedure (Clauset, Shalizi, & Newman, 2009). This procedure first chooses the minimum degree k_min from which the power law holds, and then uses a maximum likelihood approach to estimate the power-law exponent γ_deg (a minimal illustration of such a fit is sketched below). The estimated power-law exponents for the degree distributions in SMJ and STA are 2.97 and 3.35, respectively. These values fall in the range 2 < γ_deg < 4, which is a commonly observed range for γ_deg in real-world networks (Newman, 2005; Clauset et al., 2009). The situation with the distributions of b_ij is, however, less clear. Figures 3C and D show the distributions of the number of node pairs with b common neighbors in the final snapshot of SMJ and STA, respectively. We also fitted the power-law distribution b^{−γ_cn} to the distributions of b by Clauset's procedure and found that γ_cn in SMJ and STA are 2.99 and 3.22, respectively. The power-law form, however, does not seem to be a very good fit for these distributions. The ranges of b in the two distributions seem to be too narrow to say anything definite about the tails. To the best of our knowledge, no previous work has studied the distributions of b_ij, either in co-authorship networks or in any other network types.
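The Clauset-style fits referenced above (for both the degree and the common-neighbour distributions) can be reproduced with the Python powerlaw package (Alstott et al.); a minimal, hedged sketch on hypothetical data follows. The generated counts and the specific distribution comparison are placeholders, not the SMJ or STA data.

```python
import numpy as np
import powerlaw  # implementation of Clauset, Shalizi & Newman's procedure

# hypothetical heavy-tailed counts standing in for degrees (or common-neighbour
# counts); in practice these would be read off the final network snapshot
rng = np.random.default_rng(1)
degrees = np.round(rng.pareto(2.0, size=2000) + 1).astype(int)

fit = powerlaw.Fit(degrees, discrete=True)   # chooses x_min, then MLE for the exponent
print(f"gamma = {fit.power_law.alpha:.2f}, k_min = {fit.power_law.xmin}")

# compare against an alternative heavy-tailed form, as is common practice
R, p = fit.distribution_compare('power_law', 'lognormal')
print(f"log-likelihood ratio vs lognormal: {R:.2f} (p = {p:.2f})")
```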
Since figuring out the distributional form of b_ij is not our main goal, we leave this task as future work.

PA and transitivity effects are highly non-power-law

Applying the proposed method to the two data-sets, we found that the estimated PA and transitivity functions display non-power-law and complex trends (Figure 4). In both networks, the value of A_k increases on average in k, which implies the existence of the PA phenomenon: the more collaborators an author gets, the more likely they are to get a new one. This is consistent with previous results in the literature, in which the phenomenon has been found in collaboration networks in diverse fields (Newman, 2001a; Milojević, 2010; Kronegger et al., 2012; Ferligoj et al., 2015). The situation with the transitivity functions is more complex. In both SMJ and STA, there is a huge jump when b goes from 0 to 1: B_1/B_0 is about 60 in SMJ and almost 100 in STA. These jumps in B_b values have been previously observed in co-authorship networks (Newman, 2001a; Milojević, 2010). After this initial jump, B_b, however, stays relatively horizontal in both SMJ and STA, before slightly increasing again in SMJ. This complex departure from the power-law form renders any statement about a universal transitivity effect moot. The value of B_b at every b > 0 is, however, at least one order of magnitude higher than B_0, which suggests that co-authors of co-authors seem to be at least ten times more likely to become new co-authors, compared with the case when there is no mutual co-author. It is informative to supplement the non-parametric analysis with a parametric one, since the theoretical literature offers many insights in this context. Here, we follow the standard practice and fit the power-law functional forms A_k = (k + 1)^α and B_b = (b + 1)^β (Krapivsky et al., 2001; Jeong, Néda, & Barabási, 2003; Pham et al., 2015). To find the PA attachment exponent α and the transitivity attachment exponent β, we substitute these forms into Eq. (1) and numerically maximize the resulting log-likelihood function with respect to α and β. Table 2 shows the estimated values of α and β. The PA attachment exponents α in both networks are in the sub-linear region, i.e., 0 < α < 1, which is a frequently observed range in real-world networks (Newman, 2001a; Pham et al., 2015; Ronda-Pupo & Pham, 2018). While this region has been shown to give rise to a heavy-tail degree distribution when there is only PA at play (Krapivsky et al., 2001), there is no such theoretical result when PA jointly exists with transitivity. It is, however, not entirely unreasonable to expect that the sub-linear value of α is responsible for the observed heavy-tail degree distributions in Figs. 3A and B.

Figure 4 (caption, panels A and B): A: ... This implies the existence of the PA phenomenon: a highly-connected author is likely to get more new collaborations than a lowly-connected one. B: The transitivity effect is highly non-power-law in both networks. While B_b greatly increases when b changes from 0 to 1 in both networks, after this initial huge jump, B_b stays relatively horizontal in SMJ and only slightly increases in STA. The huge jump at b = 1 implies that co-authors of co-authors are at least ten times more likely to become new co-authors, compared with the case when there is no mutual co-author.

The transitivity attachment exponents β are both greater than 1, indicating an exponentially faster growth rate of the transitivity function compared to the PA function.
This is evident in, for example, STA: while A_10 is less than 10, B_10 is already larger than 100. To the best of our knowledge, there is no theoretical result on the effect of β on the structure of a growing network, even for the supposedly simpler case when there is only transitivity. Overall, the results in this section indicate the joint existence of the PA and transitivity phenomena in both networks. Our non-parametric approach revealed that a conventional power-law functional form in a parametric approach may not be the best way to describe the two phenomena. For A_k, the power-law form fits the low-degree part reasonably well, but cannot capture the deviations from the power-law form in the high-degree part. For B_b, the power-law form is even less suitable. We hope our non-parametric findings could offer hints on more suitable parametric forms for A_k and B_b.

Transitivity dominated PA in both networks

After obtaining the estimates Â and B̂, we can compute the amount of contributions of PA and transitivity in the growth process of each network by plugging these estimates into Eqs. (4) and (5). The estimated amounts of contribution ŝ_PA(t) and ŝ_trans(t) are shown in Fig. 5 as solid lines. In each network, ŝ_trans(t) is greater than ŝ_PA(t) for all t. One might ask whether these tendencies hold for the true values s_PA(t) and s_trans(t) as well, or whether they are just artifacts arising when we plug Â and B̂ into Eqs. (4) and (5). We demonstrate by simulations that, if the true A and B are close to the estimates Â and B̂, then s_PA(t) and s_trans(t) are similar to ŝ_PA(t) and ŝ_trans(t). For each real network, we simulated 100 networks based on Eq. (1) using the estimates Â and B̂ as true functions. We kept all the aspects of the growth process that are not governed by Eq. (1) the same as observed in the real network. This includes using the observed initial graph and the observed numbers of new nodes and new edges at each time-step in the simulation. Since Â and B̂ are the true PA and transitivity functions for each simulated network, we were able to calculate the true contributions of PA and transitivity in each simulated network using Eqs. (4) and (5). The behaviours of the simulated contributions are very similar to the estimated contributions ŝ_PA(t) and ŝ_trans(t), which indicates that the latter are likely to be reliable. As explained in Section 3, one can interpret the contributions ŝ_PA(t) and ŝ_trans(t) as estimates of the expectations of ĥ_PA(t) and ĥ_trans(t), the sample standard deviations of the PA and transitivity values at the end points of the actually-observed new edges at time-step t. This is expressed as E[ĥ_PA(t)] ≲ ŝ_PA(t) and E[ĥ_trans(t)] ≲ ŝ_trans(t), where the estimates ŝ_PA(t) and ŝ_trans(t) slightly overestimate the expectations, because E[ĥ_trans(t)] ≤ (E[ĥ_trans(t)^2])^{1/2} ≈ ŝ_trans(t). Overall, the data indicate the governing role of transitivity in the growth processes of both networks: it is mostly the differences in the transitivity values that decide where new collaborations are formed. This intuitive result is consistent with previous results which found that common neighbors are more effective than PA at link prediction in co-authorship networks (Liben-Nowell & Kleinberg, 2007). If PA were what dominated, a scientist would only need to indiscriminately acquire as many collaborators as possible in order to boost their number of collaborators in the future. In light of the current result, however, they might need to be more selective, since a collaborator who has collaborated with a lot of people might offer more advantages.
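A compact, hedged sketch of how ŝ_PA(t) and ŝ_trans(t) of Eqs. (4) and (5) can be evaluated at one time-step is given below. The inputs are the degree vector, the common-neighbour matrix, and the estimated tables Â and B̂; the function name and the NumPy formulation are illustrative rather than the FoFaF implementation.

```python
import numpy as np

def contribution_sds(deg, common, A_hat, B_hat):
    """Return (s_PA(t), s_trans(t)) per Eqs. (4)-(5): the standard deviations of
    log2[A(k_i) A(k_j)] and log2[B(b_ij)] when the un-ordered pair (i, j) is
    drawn from the edge-probability distribution of Eq. (1)."""
    n = len(deg)
    iu, ju = np.triu_indices(n, k=1)              # all un-ordered pairs i < j
    a_term = A_hat[deg[iu]] * A_hat[deg[ju]]
    b_term = B_hat[common[iu, ju]]
    p = a_term * b_term
    p = p / p.sum()                               # P_ij(t)
    log_a, log_b = np.log2(a_term), np.log2(b_term)

    def weighted_sd(x):
        mu = np.sum(p * x)
        return float(np.sqrt(np.sum(p * (x - mu) ** 2)))

    return weighted_sd(log_a), weighted_sd(log_b)
```

Repeating this over t, with k_i(t) and b_ij(t) taken at the onset of each time-step, traces out the contribution curves of the kind compared in Fig. 5.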
Diagnosis: time-invariance and goodness-of-fit

Finally, we consider two questions that are critical to our real-world data analysis. The first concerns the validity of the time-invariance assumption of A_k and B_b in the two networks: in each network, do A_k and B_b stay relatively unchanged throughout the growth process? The second is whether Eq. (1) is a reasonably good model for the networks. Although Fig. 6 already hinted at an affirmative answer to both questions, we examine each question in finer detail.

Time invariance of the PA and transitivity functions

One way to answer the first question is to compare the A_k and B_b in Fig. 4 with the A_k and B_b estimated using only some portion of the growth process, for many different portions. If they are similar, one can conclude that A_k and B_b indeed stay unchanged throughout the growth process, and thus the time-invariance assumption is valid. To this end, from each original network, we create three new networks. The first new network ("First Half") contains only the first half of the growth process, thus allowing estimation of A_k and B_b in this portion. In the second new network ("Initial 0.5"), we set the initial time at the middle of the time-line, effectively enabling estimation of A_k and B_b over the second half of the growth process. In the third new network ("Initial 0.75"), we set the initial time at the 3/4 point of the time-line. This network lets us estimate A_k and B_b in the last quarter of the growth process. The estimated A_k and B_b in these three new networks are then compared with the A_k and B_b obtained from the full growth process (Figure 7). Visual inspection of Fig. 7 suggests that both the PA and transitivity functions stay relatively unchanged in the growth process of each network. This validates the time-invariance assumption.

Goodness-of-fit

We use a simulation-based approach to investigate the goodness-of-fit of the model. For each real-world network, we re-use the simulation data used in Fig. 5, which consist of 100 simulated networks generated using the estimated A_k and B_b of that network as true functions. We compare some statistics of the simulated networks with the corresponding statistics of the real network. If Eq. (1) is a good fit, then the observed statistics and the simulated statistics must be close. Similar simulation-based approaches have been proposed for inspecting the goodness-of-fit of exponential random-graph models (Hunter, Goodreau, & Handcock, 2008) and stochastic actor-based models (Conaldi, Lomi, & Tonellato, 2012; J. Lospinoso, 2012). For an overview, we look at how well the model can replicate the observed degree curves. In Fig. 8, for each real-world network we choose uniformly at random ten nodes from the top 1% of all nodes in terms of the number of new edges accumulated during the growth process. For each node, we then plot the evolution line of the observed degree value and the simulated degree value. The closer this line is to the line of equality, the better the model captures the observed degree growth of that node. Although for some nodes the simulated degree sometimes tends to be lower than the observed degree, the lines are overall reasonably close to the identity line, which implies the model captured the degree growth well. For a closer inspection, we then look at how well the model replicates the probability distribution of new edges during the growth process. In particular, consider sampling uniformly at random an edge e from the set of all new edges in the growth process.
Suppose that e is between a node pair with degrees K 1 and K 2 (K 1 ≤ K 2 ) and that the number of their common neighbors is X. The relative frequency, or observed probability, that K 1 = k 1 , K 2 = k 2 , and X = b is

$$p_{k_1,k_2,b} = \frac{\sum_t m_{k_1,k_2,b}(t)}{\sum_t \sum_{k_1 \le k_2} \sum_b m_{k_1,k_2,b}(t)},$$

in which m k1,k2,b (t) is the number of new edges that emerged at time t between a node pair whose degrees are k 1 and k 2 and whose number of common neighbors is b. The probability p k1,k2,b thus summarizes information about the associations of k 1 , k 2 , and b at the end points of new edges throughout the growth process.

Figure 7: While "First Half" contains only the first half of the growth process, the initial time is set at the middle and at the 3/4 point of the time-line in "Initial 0.5" and "Initial 0.75", respectively. In each data-set, all four PA/transitivity functions agree well with each other, which suggests that the PA and transitivity functions stay relatively unchanged throughout the growth process.

Our joint estimation of PA and transitivity is compared with two conventional approaches in which PA (Pham et al., 2015) and transitivity (Newman, 2001a) are estimated in isolation. For each of these two approaches, we first estimate the PA/transitivity function in isolation and then use the estimated function to generate 100 networks in order to inspect how well each existing method replicates p k1,k2,b . In order to visualize this probability distribution, which is multi-dimensional, we slice it into many one-dimensional ones by conditioning. Firstly, we look at

$$p_{k \mid X \in B} = \frac{\sum_{k_1 + k_2 = k} \sum_{b \in B} p_{k_1,k_2,b}}{\sum_{k_1 \le k_2} \sum_{b \in B} p_{k_1,k_2,b}},$$

with the convention that p k1,k2,b = 0 whenever k 1 > k 2 or k 2 > k max . This is the probability distribution of K 1 + K 2 conditioning on the event X ∈ B. Since we know from Fig. 3 that the number of node pairs with b = 0 or b = 1 is vastly greater than the rest, we consider two probability distributions, p k|b≤1 and p k|b≥2 , and show their cumulative probability distributions in Fig. 9. In all cases, the joint estimation approach best replicated the observed distributions. It is surprising to observe that the B b -in-isolation approach, which does not explicitly leverage any information about k, has more or less the same replication performance as the A k -in-isolation approach, which explicitly does. This suggests that the dimension of b preserves a fair amount of the information about k. Secondly, we look at

$$p_{b \mid (K_1,K_2) \in \mathcal{K}} = \frac{\sum_{(k_1,k_2) \in \mathcal{K}} p_{k_1,k_2,b}}{\sum_{b} \sum_{(k_1,k_2) \in \mathcal{K}} p_{k_1,k_2,b}},$$

where $\mathcal{K}$ is a non-empty set of un-ordered pairs. This is the probability distribution of X conditioning on the event (K 1 , K 2 ) ∈ $\mathcal{K}$. Given a pair of nodes whose degrees are k 1 and k 2 and whose number of common neighbours is b, there is a natural condition imposed on b: b cannot be greater than either k 1 or k 2 . So if one chooses $\mathcal{K}$ such that k 1 or k 2 could be too small, the range of b would be severely limited. For this reason, we consider two probability distributions, p b| max(k1,k2)≤9 and p b| max(k1,k2)≥10 , both of which allow a large range for b. Their cumulative distributions are shown in Fig. 10. Once again, the joint estimation approach best replicated the observed cumulative probability distributions in all cases. While the B b -in-isolation approach replicated the observed distributions fairly well in most cases, the A k -in-isolation approach completely failed to do so in all cases. This implies that, while the dimension of b seems to preserve a fair amount of the information about k 1 and k 2 , the dimensions of k 1 and k 2 maintain little information about b. Overall, the joint estimation approach performed comparatively well.
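As an illustration of how these observed probabilities can be tabulated in practice, the short sketch below computes p k1,k2,b and the two kinds of conditional cumulative distributions from a list of new-edge events. It is a minimal sketch under the assumption that each new edge is available as a tuple (k1, k2, b); the function names are ours, not taken from the paper's code.

```python
from collections import Counter

def observed_edge_distribution(edge_events):
    """Tabulate the relative frequencies p_{k1,k2,b} from new-edge events.

    edge_events: iterable of (k1, k2, b) tuples, one per new edge, where
    k1 and k2 are the endpoint degrees and b is their number of common
    neighbours at the moment the edge appears.
    """
    counts = Counter((min(k1, k2), max(k1, k2), b) for k1, k2, b in edge_events)
    total = sum(counts.values())
    return {key: n / total for key, n in counts.items()}

def conditional_cdf_of_degree_sum(p, b_predicate):
    """Cumulative distribution of k = k1 + k2, conditional on b satisfying b_predicate."""
    mass = Counter()
    for (k1, k2, b), prob in p.items():
        if b_predicate(b):
            mass[k1 + k2] += prob
    norm = sum(mass.values())
    cdf, acc = {}, 0.0
    for k in sorted(mass):
        acc += mass[k] / norm
        cdf[k] = acc
    return cdf

# Toy events; in practice the events would come from the observed network and
# from each simulated network, and the resulting CDFs would be compared.
events = [(1, 2, 0), (3, 5, 1), (2, 4, 2), (6, 9, 3), (2, 2, 0)]
p = observed_edge_distribution(events)
print(conditional_cdf_of_degree_sum(p, lambda b: b <= 1))
print(conditional_cdf_of_degree_sum(p, lambda b: b >= 2))
```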
The surprisingly good performance of the B b -in-isolation approach is, in fact, in agreement with the dominating role of B b in the growth process of both networks. Combining the results in Fig. 8 with those in Figs. 9 and 10, we conclude that the joint estimation approach captured reasonably well both first-order and second-order information of the networks. This good fit is consistent with the fact that the key assumption of time-invariability of A k and B b is satisfied in both networks.

Figure 9: Observed and simulated cumulative probability distributions p k|b≤1 and p k|b≥2 of k = k 1 + k 2 in two networks. For each estimation method, we generate 100 networks from the estimation result and report the average values over 100 simulations. A and B: the cumulative probability distribution p k|b≤1 in SMJ and STA, respectively. C and D: the cumulative probability distribution p k|b≥2 in SMJ and STA, respectively. In all cases, our joint estimation approach replicated the observed distributions comparatively well.

Figure 10: Observed and simulated cumulative probability distributions p b| max(k1,k2)≤9 and p b| max(k1,k2)≥10 in two networks. For each estimation method, we generate 100 networks from the estimation result and report the average values over 100 simulations. A and B: the cumulative probability distribution p b| max(k1,k2)≤9 in SMJ and STA, respectively. C and D: the cumulative probability distribution p b| max(k1,k2)≥10 in SMJ and STA, respectively. In all cases, our joint estimation approach replicated the observed distributions comparatively well.

Conclusion

We proposed a statistical network model that incorporates non-parametric PA and transitivity functions and derived an efficient MM algorithm for estimating its parameters. We also presented a method that is able to quantify the contributions of not only PA and transitivity but also of many other network growth mechanisms, by exploiting the probabilistic dynamic process induced by the model formula. We showed that the proposed network model is a reasonably good fit to two real-world co-authorship networks and revealed intriguing properties of the PA and transitivity functions in those networks. The PA function is increasing on average in both networks, which implies that the PA effect is at play. Excluding the high-degree part, it follows the conventional power-law form reasonably well. The transitivity function is, however, highly non-power-law in both networks: it jumps greatly after b = 0, but stays relatively horizontal or increases only slightly afterwards. This non-conventional form implies that co-authors of co-authors seem to be at least ten times more likely to become new co-authors, compared with the case when there is no mutual co-author. We also found transitivity dominating PA in both networks, which suggests the importance of indirect relations in scientific creative processes. There are some fascinating directions for further developing the statistical methodology. Firstly, although the proposed model and most other network models in the literature assume that new edges at each timestep are independent, such edges are hardly so in real-world collaboration networks. Efficiently relaxing this assumption might lead to better models for this network type. Secondly, it is curious to see whether one could take the time-invariability test developed for stochastic actor-based models (J. A. Lospinoso, Schweinberger, Snijders, & Ripley, 2011) and adapt it to our model.
On the application front, this work lays out a potentially fruitful approach for analyzing complex networks, while raising more questions than it answers. Does transitivity always dominate PA in co-authorship networks? Which parametric forms are capable of capturing the fine details seen in Fig. 4? What are the properties of PA and transitivity in co-authorship networks at the level of institutions or countries? We hope this paper has convinced informetricians to include non-parametric modelling of PA and transitivity in their toolbox.

Acknowledgements

This work was supported in part by JSPS KAKENHI Grant Numbers JP19K20231 to TP and JP16H02789 to HS. The funding source had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.

A. An MM algorithm for estimating the non-parametric PA and transitivity functions

To maximize the partial log-likelihood function l(A, B) in Eq. (2), we derive an instance of the Minorize-Maximization (MM) algorithms (Hunter & Lange, 2000). Denote by $A_k^{(q)}$ the value of $A_k$ at iteration q (q ≥ 0), and let $A^{(q)} = [A_0^{(q)}, \dots, A_K^{(q)}]$; define $B_b^{(q)}$ and $B^{(q)}$ in a similar way. Starting from some initial values $(A^{(0)}, B^{(0)})$ at iteration q = 0, we want to compute $(A^{(q+1)}, B^{(q+1)})$ from $(A^{(q)}, B^{(q)})$. In MM algorithms, one derives such update formulas by first finding a surrogate function Q(A, B) that satisfies l(A, B) ≥ Q(A, B) for all A, B and l(A^{(q)}, B^{(q)}) = Q(A^{(q)}, B^{(q)}), and then maximizing the surrogate function. One can prove that, if $(A^{(q+1)}, B^{(q+1)})$ maximizes Q(A, B), then l(A^{(q+1)}, B^{(q+1)}) ≥ l(A^{(q)}, B^{(q)}), i.e., the objective function increases monotonically per iteration. Since there can be many surrogate functions that satisfy the conditions, the main indicator for evaluating a particular Q(A, B) is how easily we can maximize it. Based on previous works (Pham et al., 2015; Pham, Sheridan, & Shimodaira, 2016), the following function is a surrogate function of l (Eq. (A.1)), where K := k_max and B := b_max. The product $A_i A_j B_l$ in the numerator of the third term on the r.h.s. of Eq. (A.1) prevents parallel updating of A and B. One way to deal with this product is to apply the AM-GM inequality (Hunter & Lange, 2004), where $m_{i,k,\cdot}(t) := \sum_{l=0}^{B} m_{i,k,l}(t)$ and $m_{\cdot,\cdot,b}(t) := \sum_{i=0}^{K} \sum_{j=i}^{K} m_{i,j,b}(t)$. Based on these formulas, at each iteration $A^{(q+1)}$ and $B^{(q+1)}$ can be computed in parallel without solving any additional optimization problems. This enables the method to work with large data-sets. The objective function value l(A^{(q+1)}, B^{(q+1)}), as explained earlier, is guaranteed to be non-decreasing in q. The standard deviation of $\hat{h}_{\mathrm{trans}}(t)$ can then be calculated by plugging $\hat{A}$ and $\hat{B}$ into the above formula. The standard deviation of $\hat{h}_{\mathrm{PA}}(t)$ follows the same derivation.
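The closed-form updates themselves depend on the statistics m_{i,j,b}(t) and on Eq. (A.1), which are not fully reproduced in this excerpt. The skeleton below therefore only illustrates, under stated assumptions, the overall MM iteration and its monotonicity guarantee, with the actual update formulas supplied as callables; all names here are our own.

```python
import numpy as np

def mm_iterate(update_A, update_B, log_lik, A0, B0, tol=1e-8, max_iter=500):
    """Generic MM loop: alternately apply the closed-form parallel updates for
    A and B and verify that the partial log-likelihood never decreases.

    update_A, update_B: callables implementing the closed-form updates derived
    from the surrogate (Eq. (A.1) plus the AM-GM bound); they are passed in
    rather than hard-coded because their exact form depends on m_{i,j,b}(t).
    log_lik: callable returning the partial log-likelihood l(A, B).
    """
    A, B = np.asarray(A0, float), np.asarray(B0, float)
    ll_old = log_lik(A, B)
    for q in range(max_iter):
        A_new = update_A(A, B)   # both updates use the current (A, B), so they
        B_new = update_B(A, B)   # can be computed in parallel
        ll_new = log_lik(A_new, B_new)
        assert ll_new >= ll_old - 1e-10, "MM guarantee violated: check the surrogate"
        if ll_new - ll_old < tol:
            return A_new, B_new, q + 1
        A, B, ll_old = A_new, B_new, ll_new
    return A, B, max_iter
```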
Application of fractional differential equation in economic growth model: A systematic review approach: In this paper we review the applications of fractional differential equations in economic growth models. This includes the theories about linear and nonlinear fractional differential equations, including the Fractional Riccati Differential Equation (FRDE) and its applications in economic growth models with memory effect. The method used in this study is to compare related literatures and evaluate them comprehensively. The results of this study are the chronological order of the applications of the Fractional Differential Equation (FDE) in economic growth models and the development of theories on FDE solutions, including the FRDE forms of economic growth models. This study also provides a comparative analysis of solutions of linear and nonlinear FDEs, and of approximate solutions of economic growth models involving memory effects using various methods. The main contribution of this research is the chronological development of the theory to find necessary and sufficient conditions that guarantee the existence and uniqueness of the solution of the FDE in economic growth models, together with the methods to obtain the solution. Some remarks on how further research can be done are also presented as a general conclusion.

Introduction

The development of mathematics is very fast at present, especially on the topic of derivatives and integrals, which initially were oriented only to natural-number order and have now been extended to fractional order, encompassing rational and real numbers. Although derivatives and integrals of fractional order are believed to have been introduced as early as 1695, significant developments have only occurred in the early 21st century. This can be seen from the large number of scientific papers that examine problems related to derivatives and integrals of fractional order, and their applications in various fields of science, including engineering and economics. To find the solution of a Fractional Differential Equation (FDE), one first needs to show that the FDE indeed has a solution. Thus, it is necessary to analyze the FDE specifically to establish the conditions needed to guarantee the existence and uniqueness of its solution given an initial value for the FDE. An FDE is often used to model growth incorporating a memory effect. Since many economic processes have a memory effect in their nature, the FDE is a suitable concept to model the growth of many economic processes. The memory effect is also inherent in the definition of fractional derivatives. Hence, fractional calculus serves as a backbone for describing the effect of memory in economic growth models. Fractional calculus is a generalization of classical calculus; in general, calculus here refers to derivatives, integrals, and differential equations. Based on [1], derivatives or integrals of integer order have local properties (the next state is determined by the current state alone, not by previous states), while fractional derivatives have non-local properties (the next state depends on the current state and all previous states). Thus, an FDE has a memory effect, because fractional derivatives, and hence FDEs, have non-local properties. This is a major advantage of fractional derivatives over classical (integer-order) derivatives, where the effect is generally ignored. In addition, this memory effect does not only apply to time variables but can also apply to other variables, such as price.
Financial variables such as asset prices or product prices require more long-term memory to estimate price fluctuations in future periods based on fluctuations in previous periods [2]. Before discussing the theory of the effect of memory on the economic growth model, we first discuss the notion of economic growth. Economic growth is the process of changing a country's economic condition towards a better condition over a certain period. Economic growth can also be interpreted as the process of increasing the production capacity of an economy, which is manifested in the form of an increase in national income. The existence of economic growth is an indication of the success of economic development in a country. Economic growth shows the increase in the production of goods and services in a region in a certain time interval. In general, the higher the level of economic growth, the faster the process of increasing the output of the region, which means that the prospects for regional development are getting better. Hence, technically, economic development is defined as an increase in output per capita in the long term [3]. When economic growth is modeled by an FDE, it is assumed that there is a memory effect in the economic process being modeled. In this paper we review the existing literature on the FDE, both from the mathematical side, such as the techniques developed by the authors to find the solution of the FDE, and from the applications side. Special emphasis is also placed on the methods used to prove the existence and uniqueness of the solution. In general, the worth of a review paper lies in compiling, summarizing, criticizing, and synthesizing the available information on the topic being considered. It is expected that the current paper can clarify the state of knowledge and identify needed research in the area of the FDE and its applications [4]. The method by which the review is done is presented in the following section, followed by the results and their discussion, shaping future directions of the potential research that could be done.

Materials and methods

The materials used to conduct the research are composed of two different types: the first is the conceptual material and the second is the media in which the concepts are presented, such as conference papers, books, and primary publications in scientific journals.

Materials

The first type of materials that we consider in this study includes conference papers, books, book chapters, and primary publications in scientific journals, while the second type of material includes mathematical concepts, such as differential equations, and related attributes. What we mean by related attributes are things like fractional order (a mathematical concept) as well as general concepts like its applications in economics and other areas. The attributes that we consider also include the keywords of this paper, e.g., FRDE, economic growth model, memory effect, etc. For the economic model we place emphasis on the following Harrod-Domar model. We especially consider the economic growth model of Harrod-Domar [5], i.e.,

$$I(t) = v\,\frac{dY(t)}{dt}, \qquad (1)$$

where v is a positive constant, called the investment ratio, which describes the rate of acceleration, 1/v is the marginal productivity of capital (acceleration rate), I(t) is the net investment function, and dY(t)/dt is the first-order derivative with respect to time t of the function Y(t).
Based on [5], it is assumed that the net investment value is a fixed part of the profit, proportional to the difference between income, P Y(t), and cost, C(t), that is

$$I(t) = m\,[\,P\,Y(t) - C(t)\,], \qquad (2)$$

where m is the net investment rate (0 < m < 1), that is, the share of the profit used for the net investment. The cost C(t) = aY(t) + b is a linear function, where a is the marginal cost, that is, the part of the cost that depends on the value of the output, while b is an independent cost, that is, the part of the cost that does not depend on the value of the output. Eq (2), together with the FDE concept, is central to the review in this paper. The FDE depends critically on the type of fractional derivative used in defining the derivatives. For further discussion, we will use the following definitions.

Definition 2.1 The fractional derivative according to Caputo is defined, or usually written, in the form

$$({}^{C}D^{\alpha} f)(x) = \frac{1}{\Gamma(n-\alpha)} \int_{a}^{x} \frac{f^{(n)}(t)}{(x-t)^{\alpha-n+1}}\, dt,$$

where n − 1 < α < n, n ∈ ℕ, with ℕ the set of natural numbers. Thus α is not limited to numbers between 0 and 1, but can be a rational number or even a real number.

Definition 2.2 The Riemann-Liouville fractional integral of order α is defined as

$$(I^{\alpha} f)(x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x-t)^{\alpha-1} f(t)\, dt,$$

with α of any order, a the lower bound, a < x, x > 0, and f(x) an analytical function that is continuous in the interval [a, x]. I^α is a fractional integral of order α with initial condition a = 0; thus the fractional integral of order α with initial condition a = 0 of f(x) can be written as I^α f(x) = D^{−α} f(x), where D^α is a fractional derivative operator of order α with initial condition a = 0. Meanwhile, D^{−α} is an integral operator which is the inverse of the D^α operator.
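To make Definition 2.1 concrete, the short sketch below evaluates the Caputo derivative numerically with the standard L1 finite-difference scheme for 0 < α < 1. This scheme is a common textbook choice and is not taken from the reviewed papers; the function names are our own.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    """L1 finite-difference approximation of the Caputo derivative of order
    0 < alpha < 1 on a uniform grid.  f_vals[j] = f(j*dt); the value at t = 0
    is set to 0 by convention here."""
    f_vals = np.asarray(f_vals, dtype=float)
    n = len(f_vals)
    out = np.zeros(n)
    coef = dt ** (-alpha) / gamma(2.0 - alpha)
    for i in range(1, n):
        k = np.arange(i)                                  # k = 0 .. i-1
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)     # L1 weights
        incr = f_vals[i - k] - f_vals[i - k - 1]          # backward increments
        out[i] = coef * np.sum(b * incr)
    return out

# Sanity check: for f(t) = t the Caputo derivative is t^(1-alpha)/Gamma(2-alpha).
alpha, dt = 0.7, 0.01
t = np.arange(0.0, 1.0 + dt, dt)
approx = caputo_l1(t, dt, alpha)
exact = t ** (1 - alpha) / gamma(2 - alpha)
print(np.max(np.abs(approx[1:] - exact[1:])))  # should be near machine precision
```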
Methods

The research method consists of tracing primary literature on fractional differential equations, with emphasis on their application to economic growth models. We carefully study the development of the theories that have been worked out by previous researchers, presented mainly in chronological order, to uncover relationships among the developments and results therein, so that gaps in important issues can be identified which may point to new theories that need to be developed and applied in economic growth models involving memory effects. This research is conducted in three stages, namely: 1) review the results that have been achieved in previous research, encompassing both the applications of the FDE in economic growth models and the mathematical methods used to solve the models. This is necessary to determine and set targets to be achieved in the next research. The study will be conducted through journals, textbooks, or other media. Besides the study of the chronological development of the literature, we also carried out a bibliometric analysis. This study is performed to uncover or identify the research position in this area using the Publish or Perish application program. The following section summarizes the results of our review.

Review on the techniques to solve an FDE

The existence and uniqueness of the solution of some nonlinear FDEs with known initial value can be found in [6]. In this reference, the authors obtained a unique solution of a nonlinear FDE by the use of a contraction map T and the contraction principle. They proved that a map T has at least one fixed point in C([0,1], ℝ) using Schauder's fixed-point theorem. The existence and uniqueness of the linear FDE initial value problem can also be found in [7]. In their paper, the existence of the linear FDE solution is proven with the help of the Laplace transform of the fractional derivative sequence and the Riemann-Liouville derivative definition, while the uniqueness of the linear FDE solution is proven by the use of the linear properties of the fractional derivative and the Laplace transform. Gambo et al. [8] studied the fractional Cauchy problem by generalizing the left Caputo fractional derivative in continuous and differentiable function spaces and proving the existence as well as the uniqueness of a nonlinear FDE solution if it meets Lipschitz's requirements. Further, the existence and uniqueness of solutions of nonlinear fractional differential equations using various methods, such as fixed-point theory, the basic theory of inequalities, local Peano existence, and extreme solutions, can be found in [9]. Much of the research on the applications of the FDE to economic growth models uses linear FDEs, while most economic phenomena are nonlinear. So the theory of the FDE needs to be developed into nonlinear FDE forms, including the application of the Fractional Riccati Differential Equation (FRDE) to the economic growth model. One of the most common models is the FRDE with the Caputo fractional derivative. The methodology to find the FRDE solution using the concept of the Caputo fractional derivative can be found in [10], and some results for the FRDE with incomplete meromorphic functions can be found in [11]. Busawon and Johnson (2005) presented a closed-form analytical solution of the FRDE in relation to the homogeneous linear Ordinary Differential Equation (ODE) [12], which was then used to obtain analytical solutions of second-order homogeneous linear ODEs with the FRDE. Several numerical methods have been developed to solve the FRDE, such as the variational iteration method, the Laplace-Adomian-Pade method, the Adomian decomposition method, Homotopy Perturbation methods, the Chebyshev finite difference method, the asymptotic decomposition method, and the optimal homotopy asymptotic method [13][14][15][16][17][18]. In addition, analytical and exact solutions have attracted the interest of researchers to solve fractional Riccati differential equations [19][20][21]. Harko et al. [19] present ten new exact solutions for fractional Riccati differential equations by assuming certain relationships between coefficients in the form of some integral or differential operator. Jaber and Al-Tarawneh (2016) present an exact solution of the fractional Riccati differential equation. The exact solution is obtained using the following stages: 1) reducing it to a second-order linear ODE, 2) converting it into a Bernoulli equation, 3) obtaining the solution by considering an integral condition R(x), where R(x) is the constant part of the fractional Riccati differential equation [20]. Further, Khaniyev and Merdan (2016) studied an analytical solution of fractional Riccati differential equations with conformable fractional derivatives [21]. In general, the fractional Riccati differential equation is a nonlinear differential equation with a complex form, so that it is difficult, or maybe even impossible, to solve with some analytic techniques [22]. To find solutions of nonlinear fractional differential equations, several iteration methods can generally be used, such as the Adomian Decomposition Method (ADM), the Variational Iteration Method (VIM), Laplace Adomian Decomposition, Differential Transformation, Homotopy Perturbation, and others.
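Before turning to the detailed literature, a small numerical sketch may help fix ideas: the explicit fractional Euler scheme below is a standard discretization of the equivalent Volterra integral form of a Caputo initial-value problem and can be applied to an FRDE. It is not one of the specific methods cited above; the test equation and all names are our own choices.

```python
import numpy as np
from math import gamma

def fractional_euler(f, y0, alpha, t_end, n_steps):
    """Explicit fractional Euler method for the Caputo initial-value problem
    D^alpha y = f(t, y), y(0) = y0, 0 < alpha <= 1, based on the equivalent
    Volterra integral equation (first-order accurate; predictor-corrector
    variants are more accurate)."""
    h = t_end / n_steps
    t = np.linspace(0.0, t_end, n_steps + 1)
    y = np.zeros(n_steps + 1)
    y[0] = y0
    c = h ** alpha / gamma(alpha + 1.0)
    for n in range(1, n_steps + 1):
        j = np.arange(n)
        w = (n - j) ** alpha - (n - j - 1) ** alpha   # quadrature weights
        y[n] = y0 + c * np.sum(w * f(t[j], y[j]))
    return t, y

# Fractional Riccati test problem D^alpha y = 1 - y^2, y(0) = 0; for alpha = 1
# the exact solution is tanh(t), which the scheme should approach.
f = lambda t, y: 1.0 - y ** 2
t, y = fractional_euler(f, 0.0, 0.9, 2.0, 400)
print(y[-1])
```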
Literatures related to the approximate solutions of the fractional differential equation using ADM and alike methods are discussed in the following studies. Initially, Adomian (1988) introduced the decomposition method to find an approximate solution of a Differential Equation [23]. Then Genga et al. [24] studied a piecewise variational iteration method or a modified variational iteration method to solve the fractional Riccati differential equations. Followed by Jafari and Tajadodi (2010) who presented the solution of fractional Riccati differential equations using the He variational iteration method [25]. Further Faraz et al. [26] utilized variational iteration method to solve partial fractional Riccati differential equations. In the following year, Bhakelar and Listdar-Gejji (2012) completed a logistic PDF using the new iterative method, the Adomian decomposition method, and the homotopy perturbation method [27]. Some researchers review the methods, such as Duan et al. [28] who emphasized on the Adomian decomposition method and its modifications, as well as its application to differential equations. Further development done by Jafari et al. [29] who introduced a modified variational iteration method with Adomian polynomials nonlinear forms to solve the fractional Riccati differential equations. Also a modification in the form of time delay is presented by Mohammedali et al. [30] who give an approximate solution of the Riccati differential equation in the form of a non-homogeneous matrix with a time delay using the variational iteration method. Finally, a combination of the Adomian decomposition method and the Sumudu integral transformation is developed to obtain an approximate solution of the fractional Riccati differential equations in [31]. Review on the applications of FDE in economics One of the important topics of the applications of fractional derivatives and fractional integral is the description of economic phenomena involving the concept of memory effects. The application of fractional derivatives to generalize ideas in the framework of solving economic dynamics problems by involving memory effects can be found in [32]. The authors in this reference suggest that determining the price elasticity of demand for a product and their applications in economics can use fractional derivatives to express the memory effect of the process. This approach has advanced the theory of economic growth. The theory of economic growth is the theory that explains the phenomenon of socio-economic change. This field now includes more advanced theory, such as the generalization of economic growth models that involve memory effects. In the early stage, the theory used linear fractional differential equation [33] to model the national economic growth, with case study of gross domestic product in Spain. This kind of work is done using derivative and integral models with fractional order [34]. Other case studies who looked at the applications of the fractional derivative are done by several authors, e.g., those who used the fractional derivative of Caputo to simulate gross domestic product growth in China, United States and Italy [35]. The economic growth model was developed by experts regarding the idea of improving socio-economic conditions. In the early development the model was developed by ignoring the effect of memory. However, since in realty there are many socio-economic process that depend on the effect of memory. For example, an acceleration of economic growth usually involves memory effect. 
Among the authors that considered this effect in their model are [36] in which they developed a model using a discrete-time approach. Other authors [37] discussed the economic processes involving long and short-term memory effects. They modeled the effect by fractional differential equation, where the fractional derivative used is the Grunwald-Letnikov. They found the exact solution using the Fourier transform [37]. The theory of economic growth is grouped into two parts, namely the classical economic growth and the neo-classical economic growth. Adam Smith's classical economic growth theory states that a country's economic growth is determined by two main factors, namely population growth and output growth. Meanwhile, the neo-classical economic growth theory put forward by senior economists named Robert Solow and T. W. Swan focuses on three main factors that affect economic growth, namely capital, labor, and technological development. Based on this, the neo-classical theory believes that an increase in the number of workers can increase per capita income. However, without the development of modern technology, this increase will not be able to provide positive results on national economic growth. As for some solutions of economic model involving memory effects are investigated by some of the research. Machado et al. [38] discussed the economic growth model using multidimensional scaling methods and state-space portrait analysis. Paper [39] proved that the local fractional derivative of the differential function is an integer order or zero-order derivative and show the local fractional derivative is the limit of the Caputo fractional derivative. Paper [40] formulated interpretations of the economy by using the concept of T-indicators and derivatives of fractional Caputo which allows describing the economic processes with memory effects. Paper [41] proposed a generalized model of economic growth from the logistic FDE and the Volterra integral equations involving memory effect and crisis effect. The memory effect means that the economic factors and parameters at a given time depend not only on the values at that time but also on the values of the previous time. The mathematical description of the memory effect uses the fractional-order derivative theory, while the crisis effect is a sudden price change in the form of a price explosion which can be represented by a Gaussian function with zero mean and small variance. Paper [42] constructed a mathematical model of the economic process with various types of memory. Luo et al. [43] implemented the calculus of fractional to analyze the economic growth model to simulate the gross domestic product in Spain. Pakhira et al. [44] studied several models Economic Order Quantity (EOQ) depends on the memory effect that has an important role to handle business policy on the inventory system. Paper [45] formulated two dynamic intersection principles with memory effects, namely the principles of changes in the rate of technological growth and changes in dominance. Pakhira et al. [46] presented an inventory model with a memory effect and show the results of both long-term and short-term memory effects on the minimum average total cost and the optimal ordering interval. Paper [47] constructed a mathematical model at economic growth with the memory effect and the delay time distribution that is continuous. This model can be considered a generalization of the standard macroeconomic model. 
Paper [48] formulated three types of general inventory models that fit the classic inventory model with nonlinear demand levels and then analyzed the behaviors of those models. Tejado et al. [49] presented economic models for the countries that are members of the G7 in 1973-2016. Acay et al. [50] investigated several economic problems with the help of non-local fractional operators, which include Caputo, Caputo-Fabrizio, Atangana-Baleanu-Caputo (ABC), and developments of the Mittag-Leffler kernel. Some literature related to fractional order in economic models can be seen in [51][52][53][54].

Review on the existence and uniqueness of the FDE solution

Although the FDE has been applied to various fields of science, including economics, the existence and uniqueness of nonlinear FDEs in economic growth models has not been widely researched. FDEs involving the Riemann-Liouville fractional differential operator of order 0 < α < 1 are often used in modeling some phenomena of economic growth. This shows that it is necessary to find the existence and uniqueness conditions for the solution of FDEs of this type. In this article, we review the basic theory of existence and uniqueness of the nonlinear FDE solution. The existence and uniqueness of nonlinear FDE solutions are often obtained using results from fixed-point theory, the theory of inequalities, local Peano existence, and extreme solutions [9]. The following shows the theorems related to the theory of existence and uniqueness of the FDE solution.

Theorem 3.1. Initial Value Problem [9] Consider the initial value problem (IVP) for the fractional differential equation given by

$$D^{q} x(t) = f(t, x(t)), \qquad x(0) = x_{0}, \qquad (3)$$

where f ∈ C([0, T] × ℝ, ℝ), D^q is the fractional derivative of x, and q is such that 0 < q < 1. Since f is assumed continuous, the IVP (3) is equivalent to the following Volterra fractional integral equation

$$x(t) = x_{0} + \frac{1}{\Gamma(q)} \int_{0}^{t} (t - s)^{q - 1} f(s, x(s))\, ds, \qquad (4)$$

that is, every solution of (4) is also a solution of (3) and vice versa. Here and elsewhere Γ denotes the Gamma function.

Theorem 3.2. [9] Assume that two functions satisfy the corresponding fractional differential inequalities, one of the foregoing inequalities being strict. Suppose further that f(t, u) is nondecreasing in u for each t; then the strict comparison between the two functions is obtained.

Theorem 3.3. Nonstrict [9] Assume that the conditions of Theorem 3.2 hold with nonstrict inequality (5) and Eq (6). Suppose further that the stated one-sided condition holds whenever the first argument dominates the second and t > 0. Then a strict inequality between the initial values, together with the stated condition on the constant, implies the nonstrict comparison.

Theorem 3.4. Local Peano Existence [9] Consider the fractional differential equation (3); if f is continuous, the IVP has at least one local solution.

Theorem 3.6. Uniqueness [55] Assume that the hypotheses hold with some K* > 0 and some h* > 0. Furthermore, let the function f : G → ℝ be bounded on G and fulfill a Lipschitz condition with respect to the second variable, i.e., |f(t, x_1) − f(t, x_2)| ≤ L |x_1 − x_2|, with some constant L > 0 independent of t, x_1, and x_2. Then, denoting h as in Theorem 1, there exists at most one function x : [0, h] → ℝ solving the IVP for the FDE.

The theorems on the existence and uniqueness of the FDE solution are very similar to the classical theorems in the case of ordinary differential equations [45]. The solution is equivalent to a nonlinear Volterra integral equation of the second kind. Based on the Ascoli-Arzela theorem and Schauder's fixed-point theorem, the theorem on the existence and uniqueness of the FDE solution is obtained using a unique fixed point as an aid.

Fractional Riccati Differential Equation (FRDE)

The economic growth model involving memory effects is given by the differential equation with fractional order α > 0 of Eq (1), which describes the relationship between net investment and the value of marginal output.
By substituting Eq (2) into Eq (1), we obtain

$$v\,(D^{\alpha} Y)(t) = m\,[\,P\,Y(t) - C(t)\,]. \qquad (3)$$

Furthermore, if the production cost is linear, i.e., C(t) = aY(t) + b, then Eq (3) becomes

$$(D^{\alpha} Y)(t) = \frac{m}{v}\,[\,(P - a)\,Y(t) - b\,]. \qquad (4)$$

So we have a linear FDE of order α > 0 representing the economic growth model that involves a memory effect. We have seen in the review results that this kind of equation has received much attention in the literature. Meanwhile, if the production cost is quadratic, then the equation will end up as a Riccati-type differential equation. The first-order Riccati differential equation is a special form of nonlinear differential equation, that is,

$$\frac{dy(x)}{dx} = P(x) + Q(x)\,y(x) + R(x)\,y^{2}(x),$$

where P(x), Q(x), and R(x) are functions of x and the coefficient of the quadratic term is not identically zero. Meanwhile, the Fractional Riccati Differential Equation is given by

$$(D_{*}^{\alpha} y)(t) = P(t) + Q(t)\,y(t) + R(t)\,y^{2}(t), \qquad 0 < \alpha \le 1,\; t > 0,$$

with the initial conditions $y^{(k)}(0) = c_k$, k = 0, 1, …, n − 1, where α is the order of the fractional derivative, n is an integer, P(t), Q(t), and R(t) are real functions, and c_k is a constant. The economic growth model can be developed by assuming the cost function C(t) to be of quadratic, cubic, or even root form. Furthermore, for the quadratic form of the cost function, if the output value is greater, the cost will be quadratically bigger. If the cost function is quadratic, of the form C(t) = aY²(t) + kY(t) + b, where a and k are marginal costs, that is, the parts of the cost that depend on the output value, and b is an independent cost, that is, the part of the cost that does not depend on the output value, then Eq (4) becomes

$$(D_{0+}^{\alpha} Y)(t) = \frac{m}{v}\,[\,(P - k)\,Y(t) - a\,Y^{2}(t) - b\,]. \qquad (16)$$

The meaning of the variables in Eq (16) is as follows: $(D_{0+}^{\alpha} Y)(t)$ is the fractional derivative of order α of Y(t) with respect to t; $D_{0+}^{\alpha}$ is the fractional derivative operator of order α with respect to t, with t > 0; t is time; Y(t) is the output value, i.e., the number of products produced during the production process up to time t. Unlike the linear fractional model of economic growth, which has received much attention in the literature, the literature on economic growth models taking the form of an FRDE is still rare. This is a gap in the literature on the FRDE used in economic growth models. Figures 1-3 show this gap visually, reflecting which related concepts are most (and least) heavily discussed in economic growth models that use fractional derivatives in the modeling process and analysis. The figures are generated by the VOSviewer application program as a result of the bibliometric analysis carried out with the Publish or Perish application program.

Conclusions

In this paper we have reviewed the literature on fractional derivatives and on linear and nonlinear FDEs, together with their applications to natural economic growth with memory effects in the models. Furthermore, we have presented the review results on the solution of economic growth models in the form of the FRDE. The results show that, unlike the linear fractional model of economic growth, which has received much attention in the literature, the literature on economic growth models taking the form of an FRDE is still rare. This is a gap in the literature on the FRDE used in economic growth models, since many economic processes have a memory effect in their nature. This finding can be used as a basis to formulate future research in the area of applications of fractional differential equations in economic growth modeling.
A Novel Approach for Permittivity Estimation of Lunar Regolith Using the Lunar Penetrating Radar Onboard Chang’E-4 Rover : Accurate relative permittivity is essential to the further analysis of lunar regolith. The traditional hyperbola fitting method for the relative permittivity estimation using the lunar penetrating radar generally ignored the effect of the position and geometry of antennas. This paper proposed a new approach considering the antenna mounting height and spacing in more detail. The proposed method is verified by numerical simulations of the regolith models. Hence the relative permittivity of the lunar regolith is calculated using the latest high-frequency radar image obtained by the Yutu-2 rover within the first 24 lunar days. The simulation results show that the relative permittivity is underestimated when derived by the traditional method, especially at the shallow depth. The proposed method has improved the accuracy of the estimated lunar regolith relative permittivity at a depth of 0–3 m, 3–6 m, and 6–10 m by 35%, 14%, and 9%, respectively. The thickness of the lunar regolith at the Chang’E 4 landing site is reappraised to be 11.1 m, which improved by ~8% compared with previous studies. Introduction The airless Moon is the closest extraterrestrial object to the Earth. It is also the most frequently explored planetary body with 129 exploration missions completed so far [1]. On 3 January 2019, China's Chang'E-4 spacecraft successfully landed in the Von Karmen crater within the South Pole-Aitken basin (SPA) on the far side of the Moon [2]. It is the first in situ exploration for human beings on the far side of the Moon, which is conducive to unveiling the mystery of the lunar subsurface structures [3]. The lunar regolith is the transitional zone between the solid Moon and the free space, which contains essential information about the geological evolution of the Moon [4]. The study of lunar regolith is critical to better understand the origin and evolution of the lunar surface activities. Relative permittivity describes the ability of dielectric materials to store and release energy. It is an important electromagnetic property, which is closely related to electromagnetic (EM) wave velocity, bulk density, and loss tangent of lunar materials [4]. Therefore, the accurate measurement and estimation of the relative permittivity are very important for lunar radar observations [5]. There are three methods to estimate the relative permittivity of the lunar materials: laboratory sample measurement, remote sensing, and in situ detection [6][7][8][9][10][11]. As to sample measurement, it is the most accurate and direct method compared with the other two methods. Laboratory measurements of Remote Sens. 2021, 13, 3679 2 of 14 Apollo samples show that the relative permittivity of lunar regolith has a broad range from 2.3 to 6.5 [4], with a typical value of 2.7 [12,13]. However, only the dielectric properties of the sampling position around the Apollo landing sites can be tested, and the number of sample collection sites is limited. The laboratory environment is quite different from that of the Moon. The moisture and temperature in the laboratory environment probably affect the accuracy of the relative permittivity test [6]. For radar remote sensing measurement of the dielectric properties of the lunar surface [7,14], the relative permittivity can be obtained by combining the time delay and layer thickness measured by other methods like laser altimetry [15][16][17]. 
The advantage of orbiting and ground-based radar can be used to detect the dielectric properties over large areas. However, the spatial resolution of this method is not as good as that of in situ detection. For example, the ALSE (Apollo lunar sounder experiment) onboard Apollo 17 had a resolution of 300 m in free space, while the Chang'E-3 (CE-3) radar has a range resolution of meters level [18,19]. The lunar penetrating radar (LPR) onboard the CE-3 mission made the first in situ radar survey of the subsurface structures of the Moon [19]. The Chang'E 4 (CE-4) spacecraft is a backup of the CE-3, so that the scientific instruments are almost the same [20]. The LPR consists of two channels, namely low-and high-frequency channels [21]. The low-frequency channel operates at 60 MHz, with a bandwidth of 40 MHz to 80 MHz, and a range resolution of meters level. The high-frequency channel has a center frequency of 500 MHz, and its bandwidth ranges from 250 MHz to 750 MHz [21]. The range resolution of the high-frequency channel is better than 0.3 m [19]. The relative permittivity can be derived by several different methods using the LPR, e.g., the method of the surface reflection [11], the dual-antenna inversion [22], and the hyperbola fitting [10,[23][24][25][26]. The surface reflection method is affected by the direct coupling wave, ground reflection, the lunar surface roughness, which can only estimate the relative permittivity of the shallowest layer of the lunar regolith. The surface relative permittivity is estimated to be~2.9 and~2.91 at CE-3 and Chang'4 landing sites, respectively [11,27]. Ding et al. [28] assumed a dichotomy boundary presence at a rocky hill region detected by the Yutu-1 rover. The relative permittivity is derived by comparing the subsurface radar reflectors with the actual interpolation depth, and the result is~9 [28]. Zhang et al. [22] used the LPR dual high-frequency data to inverse the relative permittivity of lunar regolith at the CE-3 landing site. Dong et al. [29] estimated the wave velocity and the relative permittivity of the CE-4 landing site by the 3D velocity spectrum method. The hyperbola fitting method has been widely applied to relative permittivity estimation both in Ground Penetrating Radar (GPR) and the LPR field. The average relative permittivity of the lunar regolith at the CE-3 landing site is estimated to be~3.2 based on the high-frequency LPR radar image [10]. Three layers at the CE-3 landing site can be recognized by the relative permittivity distribution [23,24], and the error analysis between different hyperbolic shape recognitions is discussed [26]. The relative permittivity of lunar materials is deduced to be 3.5 on average within the depth range of~12 m at the CE-4 landing site [2]. However, in previous works on LPR [2,10,[22][23][24][25], the traditional hyperbola fitting method assumes that antennas are close to the surface. It is not the case for the LPR antenna system, which has an antenna height of 0.3 m, and spacing of 0.16 m [21]. The uncertainty of relative permittivity estimation caused by the antenna position and geometry has not been assessed and evaluated by the previous works. In this paper, a new EM wave spreading model and the relative permittivity calculation algorithm are proposed, considering the influence of the antenna height and spacing. 
To verify the performance of the new approach, we used the finite-difference time-domain (FDTD) method to simulate the EM response of various models, including homogeneous models and stochastic models. We analyzed the effects of antenna height and spacing. Meanwhile, we calculated and compared the relative permittivity of the lunar regolith by both the traditional and the new methods.

Geological Context of the CE-4 Landing Site

The Chinese CE-4 spacecraft has successfully landed on the floor of the Von Karman Crater in the South Pole-Aitken basin (SPA; Figure 1a). The SPA is the largest basin discovered in the solar system, and its age is estimated to be 4.2 Ga [30]. Materials from the upper layer of the lunar mantle down to a depth of ~100 km might have been excavated to the surface in the SPA, including olivine and low-calcium pyroxene [1,31]. The Von Karman crater was formed in the SPA, and its age is dated to ~3.6 Ga [32]. The CE-4 landing site is located on an ejecta ray of the Finsen crater, which contributes an ejecta deposit of ~7 m as calculated by the model proposed by Pike et al. [33]. The thickness of the lunar regolith is estimated to be ~12 m by the LPR, which is thicker than that at the CE-3 landing site [2,34-36]. The loss tangent of the lunar materials is estimated to be ~0.005, which is close to that of typical lunar regolith [2,35]. This shows that the EM attenuation rate at CE-4 is lower than that of the regolith at the CE-3 landing site [37]. The regolith there developed on ejecta materials sourced from nearby craters (e.g., Finsen, Alder, Von Karman L, and L') [2].

LPR Data Collections and Processing

The Yutu-2 rover walked 589.6 m during the first 24 lunar days (Figure 1b). The LPR obtained 266,073 and 48,282 traces of high-frequency data (antenna 2B) and low-frequency data, respectively. In LPR data collection, 1-bit quantization and non-uniform sampling methods were adopted [21]. A variable gain method is used to compress the strong signals and amplify the weak signals. Besides, to improve the signal-to-noise ratio, each trace of LPR data results from the accumulation of multiple measurements. Thus, it is necessary to pre-process the raw data to recover the actual LPR signal [19]. The flow of data processing of the LPR starts from the level 2B data. The processing steps in this study mainly include trace editing, band-pass filtering, background removal, time delay adjustment, and spherical and exponential compensation (SEC) gain setting [2].
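As a rough illustration of two of these processing steps, the sketch below applies background removal and an SEC-style gain to a radargram stored as a 2-D array. The gain parameters and the sampling interval used in the toy example are placeholders, not the values used for the CE-4 data.

```python
import numpy as np

def background_removal(radargram):
    """Subtract the average trace to suppress horizontal banding
    (antenna ringing / direct coupling).  radargram has shape
    (n_samples, n_traces)."""
    return radargram - radargram.mean(axis=1, keepdims=True)

def sec_gain(radargram, dt_ns, spreading=1.0, attenuation_db_per_ns=0.3):
    """Spherical and exponential compensation (SEC) gain: each sample is
    multiplied by a power-of-time term (geometric spreading) and an
    exponential term (intrinsic attenuation).  The exponents here are
    placeholder values."""
    n_samples = radargram.shape[0]
    t = np.arange(n_samples) * dt_ns
    g = (t ** spreading) * 10 ** (attenuation_db_per_ns * t / 20.0)
    g[0] = g[1] if n_samples > 1 else 1.0   # avoid zeroing the first sample
    return radargram * g[:, None]

# Toy usage on random data standing in for level-2B traces.
data = np.random.randn(2048, 500)
processed = sec_gain(background_removal(data), dt_ns=0.3125)
```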
Traditional Hyperbolic Fitting Method

The hyperbola fitting method has been widely used in the GPR field [38]. It is based on the assumption that the depth of the reflector is much greater than the height and spacing of the GPR antennas, so that the geometry and position of the antennas can be ignored. A buried reflector within the lunar regolith forms a hyperbola curve in the radar image [38]. The relative permittivity of the lunar regolith above the reflector is closely related to the hyperbolic curve pattern and can be obtained by the hyperbolic fitting method. The velocity of the LPR radar pulse in the lunar regolith can be described as

$$v = \frac{c}{\sqrt{\mu_r \varepsilon_r}}, \qquad (1)$$

where $\mu_r$ is the relative permeability of the medium, $\varepsilon_r$ is the relative permittivity of the lunar materials, and c is the speed of the EM wave in free space (here we suppose c = 0.3 m/ns). Generally, the relative permeability of the lunar regolith is approximately equal to 1 [6], so that Equation (1) can be rewritten as

$$v = \frac{c}{\sqrt{\varepsilon_r}}. \qquad (2)$$

The time delay of the reflected echo can be described as

$$t = \frac{2\sqrt{H^{2} + (x - x_0)^{2}}}{v}, \qquad (3)$$

where (x, 0) and (x_0, −H) are the positions of the Yutu-2 rover and the reflector, respectively (as shown in Figures 2 and 3). The variables t and x can be obtained from the hyperbolic curve in the radar image, and H, v, and x_0 can be derived by the hyperbola fitting. Once v is derived, the relative permittivity of the lunar material can be simply calculated by Equation (2).
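A minimal sketch of the traditional approach is given below: picked (x, t) points on a hyperbola are fitted to Equation (3) by least squares, and the fitted velocity is converted to relative permittivity through Equation (2). The initial guesses and function names are our own.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 0.3  # speed of the EM wave in free space, m/ns

def hyperbola(x, v, H, x0):
    """Two-way travel time (ns) of a point reflector at (x0, -H) for an antenna
    at (x, 0), ignoring antenna height and spacing (Equation (3))."""
    return 2.0 * np.sqrt(H ** 2 + (x - x0) ** 2) / v

def fit_permittivity(x_picks, t_picks):
    """Fit v, H, x0 to picked hyperbola points and convert v to relative
    permittivity via Equation (2).  Initial guesses are rough placeholders."""
    x_picks = np.asarray(x_picks, float)
    t_picks = np.asarray(t_picks, float)
    p0 = [0.15, 0.075 * t_picks.min(), x_picks[np.argmin(t_picks)]]
    (v, H, x0), _ = curve_fit(hyperbola, x_picks, t_picks, p0=p0)
    return (C / v) ** 2, H, x0

# Synthetic check: a reflector 2 m deep in a medium with eps_r = 3.
x = np.linspace(-2.0, 2.0, 41)
t = hyperbola(x, C / np.sqrt(3.0), 2.0, 0.0)
eps, depth, apex = fit_permittivity(x, t)
print(eps, depth, apex)   # should recover ~3.0, ~2.0 m, ~0.0 m
```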
New Method

The hyperbola fitting method has good performance in the GPR field. However, this is not the case for the LPR, as shown in Figures 2 and 3. The layout of the LPR antennas is quite special. The transmitting and receiving antennas are allocated at the bottom of the Yutu-2 rover, which has to be lifted to avoid obstacles while the LPR travels on the lunar surface. The antenna height and spacing are ~0.3 m and 0.16 m, respectively [21]. A new method considering antenna height and spacing is therefore required. According to Snell's law, the following equations can be derived, where $\theta_{u1}$, $\theta_{u2}$, $\theta_{d1}$, and $\theta_{d2}$ are the incident angle and the refraction angle of the up-going wave and of the downward-going wave, respectively. According to the geometric relationship, as shown in Figure 3a,c, further equations can be derived, where h and L are the antenna height and antenna spacing, respectively. The positions of the echo reflector and of the transmitting and receiving antennas are (x_0, −H), (x_0 − L/2, h), and (x_0 + L/2, h), respectively. The positions of the incident and emission points are (x_1, 0) and (x_2, 0), respectively. Assuming the propagation distances in free space and in the lunar material are l_1 and l_2, respectively, the two-way travel time t can be re-described as $t = (l_1 + \sqrt{\varepsilon}\, l_2)/c$, where l_1 and l_2 can be expressed from the geometry. Suppose the reflector is horizontally in the middle between the transmitting antenna and the receiving antenna, as shown in Figure 3b,d. The incident and refraction angles are $\theta_1$ and $\theta_2$, respectively. The propagation distances of the radar pulse in free space and in the lunar material are $l_{01}$ and $l_{02}$, respectively, and $t_0$ is the time delay of the reflected echo. According to the geometric relationship, the following equations can be derived:

$$l_{01} = \frac{2h}{\cos\theta_1}, \qquad (14)$$

$$l_{02} = \frac{2H}{\cos\theta_2}. \qquad (15)$$

Supposing θ = θ_2, Equations (12)-(15) can be combined as Equations (16)-(17). Combining Equations (4)-(8) and (16)-(17), we obtain a group of equations, Equation (18). In Equation (18), x_1, x_2, ε, H, θ, t_0, and x_0 are the unknown variables, and x, t, h, L, and c are the known variables. The number of equations is smaller than the number of unknown variables. To solve these equations, we have to obtain at least two unknown variables by other methods to constrain the solution of the above equations. The hyperbolic echo pattern of a buried object is symmetrical about its peak. Thus, x_0 can be obtained by the hyperbolic fitting method, and t_0 can be obtained by finding the peak of the hyperbola in the radar image. Therefore, the remaining unknown variables are x_1, x_2, ε, H, and θ. Equation (18) can then be used to calculate these five unknown variables, so that the relative permittivity can be obtained. Specifically, for a certain point (x, t) on the hyperbola curve, once the peak of the corresponding hyperbola curve (x_0, t_0) is determined, one relative permittivity ε can be obtained by combining (x, t), (x_0, t_0), and Equation (18). In other words, one relative permittivity can be derived from one point on the hyperbola curve. For a hyperbola curve in the radar image, the relative permittivity of the new method is the average of all calculated relative permittivities.
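One possible numerical realisation of the same idea is sketched below: each ray path is modelled explicitly (an air leg plus a subsurface leg, with the refraction point found by Fermat's principle, which is equivalent to Snell's law), and ε and H are then inverted by least squares with x_0 fixed from the radargram. This is an illustrative implementation of the geometry described above, not the paper's closed-form system (18); the antenna constants below are the values quoted in the text, and all names are our own.

```python
import numpy as np
from scipy.optimize import minimize_scalar, least_squares

C = 0.3                     # speed of the EM wave in free space, m/ns
H_ANT, L_ANT = 0.3, 0.16    # antenna height and spacing quoted in the text, m

def leg_time(ax, az, rx0, rz0, eps):
    """One-way travel time from an antenna at (ax, az), above the surface, to a
    reflector at (rx0, rz0) below it.  The refraction point on the surface is
    found by Fermat's principle (minimum travel time), equivalent to Snell's law."""
    def travel(xs):
        air = np.hypot(ax - xs, az) / C
        ground = np.hypot(xs - rx0, rz0) * np.sqrt(eps) / C
        return air + ground
    lo, hi = min(ax, rx0) - 1.0, max(ax, rx0) + 1.0
    return minimize_scalar(travel, bounds=(lo, hi), method="bounded").fun

def two_way_time(x, x0, H, eps):
    """Predicted two-way travel time (ns) for the rover centred at x and a point
    reflector at (x0, -H), taking antenna height and spacing into account."""
    tx, rx = x - L_ANT / 2.0, x + L_ANT / 2.0
    return leg_time(tx, H_ANT, x0, -H, eps) + leg_time(rx, H_ANT, x0, -H, eps)

def invert_eps(x_picks, t_picks, x0):
    """Least-squares inversion of (eps, H) from picked hyperbola points,
    with the apex abscissa x0 fixed from the radargram."""
    t_picks = np.asarray(t_picks, float)
    def residuals(p):
        eps, depth = p
        return np.array([two_way_time(xi, x0, depth, eps) for xi in x_picks]) - t_picks
    sol = least_squares(residuals, x0=[3.0, 1.0], bounds=([1.0, 0.1], [10.0, 20.0]))
    return sol.x   # (eps, H)

# Synthetic check: a reflector 2 m deep in a medium with eps_r = 3.
xs = np.linspace(-1.5, 1.5, 15)
ts = [two_way_time(xi, 0.0, 2.0, 3.0) for xi in xs]
print(invert_eps(xs, ts, 0.0))   # should recover roughly (3.0, 2.0)
```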
Simplified Modeling

Homogeneous models are the most idealized models, in which the relative permittivity is set to be the same in both the horizontal and the vertical direction (as shown in Figure 4a). Besides, previous studies of the CE-4 landing site and the Apollo sample measurements have indicated that the lunar regolith is not vertically homogeneous and that the relative permittivity increases with depth. Therefore, we also established a regolith model with the relative permittivity increasing from 2 to 4 (as shown in Figure 4b). Besides the relative permittivity, the rock depth is another important parameter for geological interpretation; it can be calculated by the two methods and can be used to compare them. Therefore, we show the estimated rock depth along with the dielectric constant.

Stochastic Modeling

The interior structure of the lunar regolith is rather complicated and includes numerous buried rock fragments of different sizes [4,39,40]. The relative permittivity distribution of the lunar regolith is not homogeneous and in general exhibits stochastic disturbances [4]. In some places the disturbance is uniform both vertically and horizontally, while in other places obvious horizontal disturbances can be seen [9]. Thus, we established two different stochastic models with fewer and with more horizontal disturbances, respectively, as shown in Figure 4c,d. Here, we applied the stochastic modeling method to simulate the lunar regolith, which is closer to the real situation according to the statistics [22,41]. Previous works have applied this method in both GPR and LPR numerical simulations [41-43]. In addition to the simulation method, the effectiveness of the new method can also be verified by laboratory experiments.

FDTD Simulation

Once the regolith models are established, the numerical simulation of the LPR is performed using the two-dimensional gprMax software to obtain the simulated radar images [44,45]. The established regolith models include the homogeneous model, the model with linearly increasing relative permittivity, and the stochastic models. The input source is a Ricker wavelet, and the center frequency is set to 500 MHz, which is the same as that of the high-frequency LPR [21]. The size of the models is set to 12 × 10 m, the grid size is set to 0.01 m, and the time window is 140 ns. The rock is set horizontally in the middle of the model, with its depth ranging from 1 m to 10 m.

Results

As shown in Figure 4a-d, to verify the accuracy of the proposed method, we established four types of simulation models, including the homogeneous model, the vertically increasing model, the stochastic model with fewer horizontal disturbances, and the stochastic model with more horizontal disturbances. We calculated the relative permittivity by the traditional hyperbola fitting method and by the new method, and compared the estimated values with the true values set in the simulation models.

Simulations for Different Models

The estimation results for the simplified models and the stochastic models are shown in Figure 4e-l. Figure 4e-h and Figure 4i-l show the relative error of the estimated relative permittivity and of the estimated depth, respectively.
Results
As shown in Figure 4a–d, to verify the accuracy of the proposed method, we established four types of simulation models: the homogeneous model, the vertically increasing model, the stochastic model with fewer horizontal disturbances, and the stochastic model with more horizontal disturbances. We calculated the relative permittivity with both the traditional hyperbola fitting method and the new method, and compared the estimates with the true values set in the simulation models.

Simulations for Different Models
The estimation results for the simplified and stochastic models are shown in Figure 4e–l; Figure 4e–h and Figure 4i–l show the relative errors of the estimated relative permittivity and of the estimated depth, respectively. The results show that the accuracy of the traditional method depends on the depth of the reflector (blue dots). The result of the new method (red dots) has less dependency on depth and shows good robustness compared with the hyperbola fitting method. Compared with the results of the simplified models, the estimated relative permittivity for the stochastic models exhibits some disturbance.

The Influence of Antenna Height and Spacing on Both Methods
The traditional hyperbola fitting method ignores the antenna height and spacing, which causes errors in dielectric constant estimation, especially at shallow depth. As shown in Figure 4, the estimation error of the hyperbola fitting method increases as the depth decreases; in particular, the error rises sharply at shallow depth. To further study the influence of antenna height, we established several homogeneous models with different antenna heights and spacings. The influence of antenna height on the accuracy of both methods is illustrated in Figure 5 (Figure 5e–h). For the new method, most of the relative errors are within ±5%, and all are within ±10%. Although a subtle discrepancy occurs at depths of less than 2 m, the results of the new method are barely influenced by the antenna height (red results in Figure 5). Furthermore, we also considered the influence of antenna spacing on both methods. The antenna spacing is set to 0.1 m, 0.16 m, 0.22 m, and 0.28 m, respectively, and the rock depths are set to 1 m, 2 m, 3 m, 5 m, 7 m, and 10 m. The calculated results are shown in Figure 6a–h, where both the relative error of the estimated relative permittivity and that of the estimated depth are analyzed. The results show that antenna spacing has less influence on the accuracy of both methods than antenna height, with only a subtle discrepancy observed for the hyperbola fitting method at shallow depth. In conclusion, the proposed method is much more robust for calculating the relative permittivity than the traditional method.
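For reference, the zero-height, zero-spacing assumption that the traditional approach relies on reduces to fitting a simple point-reflector hyperbola. A minimal sketch follows (our own variable names and SciPy-based fitting, not the authors' code).

```python
import numpy as np
from scipy.optimize import curve_fit

C = 0.299792458  # m/ns

def hyperbola(x, x0, t0, v):
    """Point-reflector response with both antennas collapsed onto the surface."""
    depth = v * t0 / 2.0
    return 2.0 * np.sqrt(depth**2 + (x - x0)**2) / v

def fit_traditional(xs, ts):
    """Estimate one equivalent permittivity per picked hyperbola, ignoring h and L."""
    guess = [xs[np.argmin(ts)], float(ts.min()), 0.15]   # apex position, apex time, velocity
    (x0, t0, v), _ = curve_fit(hyperbola, xs, ts, p0=guess)
    return (C / v) ** 2, v * t0 / 2.0                    # (permittivity, apex depth)
```

Comparing its output with the height-aware estimate sketched earlier reproduces, qualitatively, the shallow-depth bias discussed in this section.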
The High-Frequency LPR Radar Image within the First 24 Lunar Days
Eighty-three hyperbolic echo patterns are recognized in the high-frequency LPR radar image within the first 24 lunar days (red curves in Figure 7). The radar survey distance within the first 24 lunar days is ~589.6 m, and its route is shown in Figure 1b. As shown in Figure 7, only obvious hyperbola-like curves are selected in the radar image, and some detailed hyperbola curves are plotted in Figure 8. We applied both the hyperbola fitting method and our proposed method to calculate the relative permittivity. The estimated results are plotted in Figure 9a. The difference between the relative permittivities calculated by the two methods increases as the depth decreases (Figure 9a), which is consistent with the simulation results (Figure 4). The traditional method underestimates the relative permittivity, especially at shallow depth. Based on the relative permittivity calculated by the proposed method, we obtained an empirical relationship between the relative permittivity and the two-way travel time (red line in Figure 9a), expressed as Equation (19).

Figure 7. The processed high-frequency LPR radar image within the first 24 lunar days. The radar image is obtained from the level 2B data after bandpass filtering (with filter parameters of 100, 250, 750, and 900 MHz), de-wow, background removal, and SEC gain. The red curves indicate the hyperbolas picked in the radar image. The data used for imaging are available at http://moon.bao.ac.cn/.

Figure 9. The calculated relative permittivities of the lunar regolith and the converted depth. (a) The relative permittivity calculated by the two methods from the LPR data. The red and blue dots indicate the relative permittivity calculated by the hyperbola fitting method and the proposed method, respectively; the green and maple lines represent the corresponding fitted results. (b) The depth derived from the different relative permittivities as a function of time delay. The label "Traditional" represents the relative permittivity calculated by Li et al. [2], where a constant relative permittivity is used for the depth transform. The label "Proposed" indicates the result of the proposed method.

Based on the fitting result, we transformed the two-way travel time into depth for an accurate estimation of the lunar regolith thickness at the CE-4 landing site. As shown in Figure 9b, the blue and red lines represent the depths derived from the relative permittivities calculated by the traditional hyperbola fitting method and by the proposed method, respectively. The calculated regolith depths are 12 m and 11.1 m, respectively. Therefore, the estimated thickness of the lunar regolith at the CE-4 landing site is improved by ~8% compared with that calculated by Li et al. [2].
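The empirical fit of Equation (19) is not reproduced in this text, but once some ε(t) relation is in hand, the time-to-depth conversion amounts to integrating the local velocity. A small sketch follows; the linear trend in the comment is a placeholder of ours, not the paper's fit.

```python
import numpy as np

C = 0.299792458  # m/ns

def time_to_depth(t_ns, eps_of_t, n_steps=2000):
    """Convert two-way travel time to depth when the relative permittivity varies
    with two-way time; eps_of_t is any callable, e.g. the empirical fit of
    Equation (19)."""
    t = np.atleast_1d(np.asarray(t_ns, dtype=float))
    grid = np.linspace(0.0, t.max(), n_steps)
    rate = C / (2.0 * np.sqrt(eps_of_t(grid)))      # dz/dt for two-way time
    depth_grid = np.concatenate(
        ([0.0], np.cumsum(np.diff(grid) * 0.5 * (rate[1:] + rate[:-1]))))
    return np.interp(t, grid, depth_grid)

# e.g. depths = time_to_depth([50, 100, 140], lambda t: 2.5 + 0.01 * t)  # placeholder trend
```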
The Comparison of the Traditional and Proposed Methods
Both methods use the hyperbola curves to calculate the relative permittivity. The hyperbola fitting method assumes that the antenna height and spacing are zero, and only one equivalent relative permittivity can be obtained from all selected points of a hyperbola curve, reflecting the equivalent dielectric distribution above the reflector. In other words, it can only obtain one relative permittivity per hyperbola curve. In the proposed method, the relative permittivity is not derived from the hyperbola curve pattern or from curve fitting. Instead, once the position of the peak point of the hyperbola is determined, each point on the hyperbola curve yields one relative permittivity according to the geometric relationship. In general, a relative permittivity can be obtained at each picked point on a specific hyperbola curve, and the result of the new method is the average of all calculated relative permittivities.

The Influence of Antenna Height and Antenna Spacing
The hyperbola curve fitting method is based on a simplified model that ignores the antenna height and spacing. The estimation error caused by the simplified model is more obvious at shallow depth than at deep depth. This is mainly because the antenna height and spacing are comparable to the reflector depth when the reflector is shallow, in which case they should not be ignored. At deep depth, however, the reflector depth is much larger than the antenna height and spacing, and their influence on the simplified model of the hyperbola fitting method becomes subtle. As for the proposed method, although a subtle discrepancy can be observed at shallow depth, the estimation accuracy has little dependence on antenna height and spacing.

Conclusions
The hyperbola fitting method is based on a simplified model that ignores the antenna height and spacing. Simulation results show that the influence of antenna height on the traditional method increases as depth decreases. Previous works using the hyperbola fitting method have therefore underestimated the relative permittivity of the lunar regolith, especially at shallow depth. The proposed method performs well at both shallow and deep depths, and its accuracy is less dependent on the depth of the reflectors. We selected eighty-three obvious hyperbola curves in the LPR radar image and calculated the relative permittivity with both the traditional and the proposed methods. The results show that the proposed method improved the calculated relative permittivity at depths of 0–3 m, 3–6 m, and 6–10 m by 35%, 14%, and 9%, respectively. Finally, the estimated thickness of the lunar regolith at the CE-4 landing site is ~11.1 m by the proposed method, an improvement of ~8% compared with the traditional method.
7,193.2
2021-09-15T00:00:00.000
[ "Physics" ]
Terahertz bandwidth RF spectrum analysis of femtosecond pulses using a chalcogenide chip We report the first demonstration of the use of an RF spectrum analyser with multi-terahertz bandwidth to measure the properties of femtosecond optical pulses. A low distortion and broad measurement bandwidth of 2.78 THz (nearly two orders of magnitude greater than conventional opto-electronic analyzers) was achieved by using a 6 cm long As2S3 chalcogenide waveguide designed for high Kerr nonlinearity and near zero dispersion. Measurements of pulses as short as 260 fs produced from a soliton-effect compressor reveal features not evident from the pulse’s optical spectrum. We also applied an inverse Fourier transform numerically to the captured data to re-construct a time-domain waveform that resembled pulse measurement obtained from intensity autocorrelation. ©2009 Optical Society of America OCIS codes: (070.4340) Nonlinear optical signal processing; (070.4790) Spectrum analysis (190.4360) Nonlinear optics, devices; (190.7110) Ultrafast nonlinear optics. References and links 1. J. P. Curtis, and J. E. Carroll, “Autocorrelation systems for the measurement of picosecond pulses from injection lasers,” Int. J. Electron. 60(1), 87–111 (1986). 2. R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, M. A. Krumbügel, B. A. Richman, and D. J. Kane, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68(9), 3277–3295 (1997). 3. J. Li, M. Westlund, H. Sunnerud, B.-E. Olsson, M. Karlsson, and P. A. Andrekson, “0.5-Tb/s eye-diagram measurement by optical sampling using XPM-induced wavelength shifting in highly nonlinear fiber,” IEEE Photon. Technol. Lett. 16(2), 566–568 (2004). 4. M. T. Kauffman, W. C. Banyai, A. A. Godil, and D. M. Bloom, “Time-to-frequency converter for measuring picosecond optical pulses,” Appl. Phys. Lett. 64(3), 270–272 (1994). 5. M. A. Foster, R. Salem, D. F. Geraghty, A. C. Turner-Foster, M. Lipson, and A. L. Gaeta, “Silicon-chip-based ultrafast optical oscilloscope,” Nature 456(7218), 81–84 (2008). 6. C. Dorrer, and D. N. Maywar, “RF Spectrum analysis of optical signals using nonlinear optics,” J. Lightwave Technol. 22(1), 266–274 (2004). 7. J. L. Blows, P. Hu, and B. J. Eggleton, “Differential group delay monitoring using an all-optical signal spectrumanalyser,” Opt. Commun. 260(1), 288–291 (2006). 8. T. Luo, C. Yu, Z. Pan, Y. Wang, J. E. McGeehan, M. Adler, and A. E. Willner, “All-optical chromatic dispersion monitoring of a 40-Gb/s RZ signal by measuring the XPM-generated optical tone power in a highly nonlinear fiber,” IEEE Photon. Technol. Lett. 18(2), 430–432 (2006). 9. G. P. Agrawal, Nonlinear Fiber Optics (Academic Press, San Diego, California, 3rd edition, 2001). 10. M. Pelusi, F. Luan, T. D. Vo, M. R. E. Lamont, S. J. Madden, D. A. Bulla, D.-Y. Choi, B. Luther-Davies, and B. J. Eggleton, “Photonic-chip-based radio-frequency spectrum analyser with terahertz bandwidth,” Nat. Photonics 3(3), 139–143 (2009). 11. M. Takahashi, R. Sugizaki, J. Hiroishi, M. Tadakuma, Y. Taniguchi, and T. Yagi, “Low-loss and low-dispersionslope highly nonlinear fibers,” J. Lightwave Technol. 23(11), 3615–3624 (2005). 12. E. Tangdiongga, Y. Liu, H. de Waardt, G. D. Khoe, A. M. J. Koonen, H. J. S. Dorren, X. Shu, and I. Bennion, “All-optical demultiplexing of 640 to 40 Gbits/s using filtered chirp of a semiconductor optical amplifier,” Opt. Lett. 32(7), 835–837 (2007). 13. R. Salem, M. A. Foster, A. C. Turner, D. F. Geraghty, M. 
Lipson, and A. L. Gaeta, “Signal regeneration using low-power four-wave mixing on silicon chip,” Nat. Photonics 2(1), 35–38 (2007). 14. L. W. Couch II, Digital and Analog Communication Systems (Prentice Hall Inc. New Jersey, 1997). 15. S. J. Madden, D.-Y. Choi, D. A. Bulla, A. V. Rode, B. Luther-Davies, V. G. Ta'eed, M. D. Pelusi, and B. J. Eggleton, “Long, low loss etched As2S3 chalcogenide waveguides for all-optical signal regeneration,” Opt. Express 15(22), 14414–14421 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-22-14414. 16. M. R. Lamont, C. M. de Sterke, and B. J. Eggleton, “Dispersion engineering of highly nonlinear As2S3 waveguides for parametric gain and wavelength conversion,” Opt. Express 15(15), 9458–9463 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-15-9458. 17. S. V. Chernikov, E. M. Dianov, D. J. Richardson, and D. N. Payne, “Soliton pulse compression in dispersion-decreasing fiber,” Opt. Lett. 18(7), 476–478 (1993). 18. M. Scaffardi, F. Fresi, G. Meloni, A. Bogoni, L. Potí, N. Calabretta, and M. Guglielmucci, “Ultra-fast 160:10 Gbit/s time demultiplexing by four wave mixing in 1 m-long Bi2O3-based fiber,” Opt. Commun. 268(1), 38–41 (2006). 19. V. G. Ta'eed, L. Fu, M. Pelusi, M. Rochette, I. C. Littler, D. J. Moss, and B. J. Eggleton, “Error free all optical wavelength conversion in highly nonlinear As-Se chalcogenide glass fiber,” Opt. Express 14(22), 10371–10376 (2006), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-22-10371. 20. T. Shoji, T. Tsuchizawa, T. Watanabe, K. Yamada, and H. Morita, “Low loss mode size converter from 0.3 μm square Si wire waveguides to singlemode fibres,” Electron. Lett. 38(25), 1669–1670 (2002). 21. M. D. Pelusi, F. Luan, E. Magi, M. R. Lamont, D. J. Moss, B. J. Eggleton, J. S. Sanghera, L. B. Shaw, and I. D. Aggarwal, “High bit rate all-optical signal processing in a fiber photonic wire,” Opt. Express 16(15), 11506–11512 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-15-11506. 22. A. Prasad, C.-J. Zha, R.-P. Wang, A. Smith, S. Madden, and B. Luther-Davies, “Properties of GexAsySe1-x-y glasses for all-optical signal processing,” Opt. Express 16(4), 2804–2815 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-4-2804.
Introduction Progress in high-speed optical communications, and applications of ultra-fast phenomena, demand advanced diagnostic tools that can monitor short pulses of broad spectral width.This is generally achieved using ultra-fast nonlinear optics, most commonly for temporal waveform measurement by the various techniques of intensity autocorrelation [1], FROG [2] or optical sampling [3].High resolution measurements of short pulses can also be performed by a time lens scheme [4], which has recently demonstrated the performance advantage of using nonlinear optics in more compact, chip-based devices [5].Beyond temporal measurements, another useful diagnostic is the radio-frequency (RF) spectrum of a pulse, given by the power spectrum of its temporal intensity waveform.This is routinely used in telecommunications and microwave photonics for characterizing distortions in amplitude or phase, and it is typically measured using an electrical spectrum analyzer with an expensive high-speed photo-detector.The measurement bandwidth of this system is, however, limited by the electronics to several tens of gigahertz, which is inadequate for emerging higher bandwidth applications. An alternative approach for RF spectrum measurement has been demonstrated using ultrafast nonlinear effects during propagation of the signal with a cw probe in hundreds of meter lengths of silica-based highly nonlinear fiber (HNF).This scheme, shown schematically in Fig. 1(a), enables the signal RF spectrum to be captured on an optical spectrum analyzer (OSA) with measurement bandwidths of ≈800 GHz [6].Its effectiveness for performance monitoring of 40 Gb/s signals [7], [8] has been demonstrated.However, capturing the broader RF spectrum of shorter pulses, also relies on avoiding chromatic dispersion in the waveguide, which can distort the signal under test, and weaken its nonlinear interaction with the copropagating probe due to their group-velocity mismatch [9], i.e. "walk-off". To remedy this, we recently reported a photonic-chip based RF spectrum analyzer (PC-RFSA) [10], employing a dispersion-shifted chalcogenide (ChG) waveguide with a nonlinearity coefficient (γ) of several hundreds times larger than silica-based HNF [11].This allowed use of a shorter (centimeter scale), and low dispersion waveguide to enhance the measurement bandwidth and reduce signal distortion.In contrast to other nonlinear chip-scale devices such as semiconductor optical amplifiers [12], and silicon waveguides [5], [13], ChG avoids free-carrier effects which can complicate pulse propagation dynamics.Our PC-RFSA used a 16 cm long waveguide, to demonstrate a multi-terahertz measurement bandwidth (for a 20 nm wavelength signal-probe separation) and the measurement of 320 Gb/s signals [10]. 
In this paper we take advantage of the broadband and low distortion capability of the PC-RFSA to achieve multi-THz span RF spectrum analysis of much shorter, sub-picosecond pulses.This demands a much broader wavelength separation between the signal and probe, to accommodate the broader pulse spectrum under test.To counter the corresponding increase in walk-off, we used a shorter 6 cm long dispersion-shifted As 2 S 3 waveguide, with higher γ.This led to a broader measurement bandwidth of 2.78 THz for a signal-probe wavelength separation of 50 nm.The PC-RFSA was used to measure pulses as short as 260 fs produced from a soliton-effect compressor, and revealed features such as temporal width and amplitude distortions (associated with non-optimum compression), that are not easily inferred from its optical spectrum.We also investigated numerical processing of the captured traces by an inverse Fourier transform, to reconstruct a time-domain waveform that resembled the pulses measured by autocorrelation [1].This emphasizes the broadband capability of the PC-RFSA for capturing the RF spectrum of such short pulses spanning multi-terahertz bandwidths. Operating principle and waveguide properties Figure 1(a) shows the schematic of the all-optical RF spectrum analyzer [6].Measuring the RF spectrum of an optical signal requires capturing the power spectrum of its intensity 2 , where F denotes the Fourier transform [9], [14] from time (t) to frequency (f) domains according to F[I(t)] = ∫I(t)⋅exp(j⋅2π⋅f⋅t)dt, where the integration is over t from −∞ to + ∞.This function is distinct from the power spectrum of the signal electric field (E), given by S(f) = |F[E(t)]| 2 , which is measured directly by an OSA. Figure 1(b) compares the calculated G(f) and S(f) for a hypersecant function {I(t) = sech 2 (t)} broadened by different amounts of dispersion.The curves highlight how G(f) narrows inversely with pulse width while S(f) remains unchanged, which is the well known basis for dispersion monitoring [7], [8], [10].Measuring G(f) without the bandwidth constraints of a photo-detector connected to an electrical spectrum analyser, was demonstrated using the optical Kerr-effect of a waveguide during co-propagation of the signal at center frequency f s , with a weaker cw probe at frequency f p [6].By this all-optical approach, the waveguide refractive index (n) varies temporally with I(t) according to n(I) = n 0 + n 2 ⋅I, where n 0 and n 2 are the linear and nonlinear refractive indices respectively.For an optical signal power, P (related to intensity by I = P/A eff where A eff is the effective mode area [9]), the probe undergoes cross-phase modulation (XPM) in proportion to ∆φ = P⋅ γ⋅L (ignoring propagation losses) [9], where L is the waveguide length, and γ the nonlinearity coefficient given by γ = (2π/λ s )•(n 2 /A eff ) for a signal wavelength λ s corresponding to f s .This generates frequency modulation sidebands around the monochromatic probe whose output electric field (of normalized amplitude) at f p is given by where k is a constant.For ∆φ «1 (such that the exponential series is exp (jk∆φ) ≈ {1 + jk∆φ}), its output optical spectrum becomes S p (f) ∝ |γ L| 2 •G(f-f p ), which enables G(f) to be measured on an OSA [6].The ultra-fast response of the Kerr-effect, originating from the χ (3) susceptibility of the waveguide material [9] allows this to be performed with an enormous measurement bandwidth spanning 10 THz in principle. 
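The distinction between G(f) and S(f) is easy to reproduce numerically. The short NumPy sketch below uses illustrative pulse and grid parameters of our own (not values taken from the experiment): dispersing the field leaves S(f) unchanged while narrowing G(f), mirroring the behaviour plotted in Fig. 1(b).

```python
import numpy as np

t = np.linspace(-20.0, 20.0, 2**14)               # time grid [ps]
dt = t[1] - t[0]
f = np.fft.fftshift(np.fft.fftfreq(t.size, dt))    # frequency grid [THz]

E0 = 1.0 / np.cosh(t / 0.15)                       # sech field, ~0.26 ps intensity FWHM

def spectra(E):
    I = np.abs(E) ** 2
    S = np.abs(np.fft.fftshift(np.fft.fft(E))) ** 2   # optical (field) power spectrum
    G = np.abs(np.fft.fftshift(np.fft.fft(I))) ** 2   # RF spectrum of the intensity
    return S, G

def disperse(E, gdd):
    """Apply a purely quadratic spectral phase (group-delay dispersion, in ps^2)."""
    Ef = np.fft.fft(E)
    w = 2.0 * np.pi * np.fft.fftfreq(t.size, dt)
    return np.fft.ifft(Ef * np.exp(0.5j * gdd * w**2))

S0, G0 = spectra(E0)
S1, G1 = spectra(disperse(E0, gdd=0.05))
# S1 equals S0 (dispersion only adds spectral phase), while G1 is narrower than G0.
```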
In practice, however, the measurement bandwidth is limited by the waveguide dispersion parameter, D, which can both distort the signal under test and weaken the XPM efficiency due to walk-off between the co-propagating signal and probe [10]. The scaling of efficiency with wavelength separation therefore poses a challenge for measuring broader pulse spectra. The PC-RFSA, on the other hand, takes advantage of As2S3 glass's high n2 ≈ 3 × 10−18 m2/W (≈100 times that of silica) and the ability to tailor Aeff to small dimensions. This allows D to be shifted closer to zero at 1550 nm, and produces higher γ in shorter waveguides, thereby providing the performance advantages of a distortion-free broad measurement bandwidth.

Chip fabrication [15] involved depositing a 0.85 µm thick film of As2S3 (n0 ≈ 2.4) on an oxidized silicon wafer (n0 = 1.44) by thermal evaporation. From this, 6 cm long straight ribs, 2 µm wide, were formed by photolithography and dry-etching. The chip was then over-clad with a polymer layer (n0 = 1.51) [15], hand-cleaved and coupled to lensed fibers with a 2.5 µm spot diameter, giving a total insertion loss of ≈14.4 dB (between its fiber connectors) for the fundamental transverse magnetic (TM) mode. This was constituted by coupling and propagation losses of approximately 10.5 dB and 0.65 dB/cm, respectively. Mode solving by the finite element method predicted the fundamental TM mode with Aeff ≈ 1.2 µm2 at 1550 nm, corresponding to γ ≈ 9900 W−1 km−1 at 1550 nm. This is notably higher than for our previous waveguide [10], which helps compensate for its shorter length. The schematic of the 6 cm chip is shown in Fig. 2. As with our original PC-RFSA [10], fabrication of such a small-dimension rib waveguide induces significant waveguiding dispersion with an opposite sign to the large (and normal) material dispersion of As2S3 glass (−357 ps/nm·km at 1550 nm wavelength [15]). This enabled D to be tuned to small or even anomalous values [16]. Mode solving shows this is achieved when the field penetrates the top of the rib, and indicated that D was shifted to ≈28 ps/nm·km (i.e. anomalous). To take advantage of this, both signal and probe were coupled to the TM mode using fiber polarization controllers (PC's) positioned before the waveguide. This simultaneously ensured their polarization states were aligned for maximum XPM.

Experimental results
An advantage of using a short, low dispersion waveguide in the PC-RFSA is the broadened measurement bandwidth. This was experimentally characterized using a sinewave optical signal formed by the interference of two equal-power cw lasers with tunable wavelength separation centred at λs = 1550 nm [6]. The polarization states of both lasers were aligned by PC's to maximize the sinewave modulation depth. The beat signal was then combined with a cw probe at wavelength λp = 1600 nm, and launched into the waveguide with total average powers at its input connector of 48 mW and 32 mW, respectively. The power of the XPM tone generated around λp was then measured while tuning the sinewave beat frequency to obtain the curve in Fig. 2(b). This indicated that the 3 dB single-sided bandwidth was 2.78 THz, which covers the frequency range of interest for the pulses generated from our source. It is important to note that this measurement used a 2.5 times wider signal-probe wavelength separation than our original PC-RFSA [10], which highlights its performance advantage in terms of walk-off, as described in more detail in Section 4.
In other words, a narrower bandwidth would be expected if the longer 16 cm waveguides were used. The impact of the waveguide's dispersion on short pulse propagation was investigated from autocorrelation measurements.The source shown in Fig. 3(a) was used to generate pulses with ≈11 nm spectral width (estimated from the OSA trace in Fig. 3(b)), which equates to a Fourier transform-limited pulse width of ≈230 fs.These were launched into the waveguide at an average power of 16 mW at the input connector.The output was then measured on an intensity autocorrelator (of second-harmonic generation (SHG) type [1]) via an erbium-doped fiber amplifier (EDFA).This revealed high quality pulses with full width at half maximum (FWHM) duration of 260 fs, as shown in Fig. 3(b).The measurement was then repeated with the waveguide substituted by a variable optical attenuator (VOA) of comparable length (from connector to connector), and its attenuation matched to the waveguide insertion loss.This produced nearly identical output pulses of 250 fs FWHM as shown, highlighting the low distortion achieved, as expected from the dispersion length calculations described in Section 4. The RF spectrum measurement was then performed using the PC-RFSA set-up shown in Fig. 3(a).The pulse source consisted of an active mode-locked fiber laser (MLFL) emitting ~2 ps pulses at 10 GHz repetition rate and 1540 nm center wavelength.These were boosted in an EDFA and launched into a dispersion-decreasing fiber (DDF), with energy corresponding to approximately a fundamental soliton.The DDF was designed with a dispersion parameter continuously decreasing along its 340 m length, to induce adiabatic soliton compression [17].For an optimum average launch power of 126 mW (i.e. 6 W peak power and 13 pJ energy), the DDF emitted high quality fundamental soliton pulses of 260 fs FWHM, as shown from the intensity autocorrelation measurement in Fig. 4a(i).Although the temporal waveform closely fitted a sech 2 pulse shape, the optical spectrum measured on an OSA appeared distorted, as shown in Fig. 4a(ii).This is indicative of non-ideal compression associated with too rapid a perturbation of the soliton energy along the DDF (with respect to the pulse dispersion length, L d [9]).The result is the formation of a broad lower intensity pedestal in the time domain that spectrally interferes with the soliton.This complicates reading the pulse bandwidth from its conventional FWHM, which in turn complicates inference of the Fourier transform-limited pulse width from the expected time-bandwidth product of 0.315 for a hypersecant pulse.At a higher input power to the DDF of 251 mW, the pulse spectrum was further broadened and distorted, as shown in Fig. 4b(ii).Intensity autocorrelation measurements also indicated temporal broadening to 390 fs (Fig. 
4b(i)). The RF spectrum was then measured by launching the pulses at λs = 1540 nm into the PC-RFSA with a 32 mW cw probe at λp = 1600 nm. The total average launch power at the waveguide input connector was ≈100 mW. Such a wide wavelength separation between the signal and probe was essential to accommodate the very broad pulse bandwidths generated by the source without spectrally interfering with the XPM broadened probe. This emphasizes the advantage of using the short, dispersion-shifted ChG waveguide (as discussed in Section 4). A few meters of dispersion compensating fiber (DCF) was also inserted before the PC-RFSA to compensate the dispersion of the fiber path leading to the waveguide.

The RF spectra of the soliton-compressed pulses corresponding to both optimum and higher DDF input power are shown in Fig. 4(iii). The measurement trace with ≈35 dB dynamic range represents a single side-band of the XPM broadened probe captured by the OSA, using a spectral resolution of 0.01 nm (≈1.2 GHz) with the wavelength axis converted to frequency. Unlike the optical spectra in Fig. 4(ii), the envelopes of the RF spectra more closely follow hypersecant functions. The rippling in the RF spectrum envelope observed for higher input power to the DDF is associated with the temporal waveform distortion corresponding to the pedestal wings in the autocorrelation trace of Fig. 4b(i).

To test how truly representative the RF spectrum measurement is of the pulse, we investigated numerical processing of the data to reconstruct a temporal waveform that could be compared to SHG autocorrelation measurements. This was performed by numerically deleting the probe component, taking half of the remaining power spectrum (X(f)), i.e. the longer wavelength sideband, and combining it with a mirror image of itself (in frequency) to form G(f). The operation described in Sect. 2 was then reversed to reconstruct I(t) by taking an inverse Fourier transform of [G(f)]^1/2. To compare this with measurements from a SHG type autocorrelator, the autocorrelation of I(t) was generated by applying the operation A(τ) = ∫I(t)I(t−τ) dt (where ∫ denotes integration from −∞ to +∞). These displayed good agreement with the SHG measurement traces as shown in Fig. 4(i). Their FWHM for both the optimum and higher input power into the DDF were 300 fs and 380 fs respectively.

The autocorrelation waveform has an important relation to G(f) through the Wiener-Khintchine theorem [14], which states that the RF spectrum equals the Fourier transform of the autocorrelation function, i.e. G(f) = F[A(τ)]. This allows the autocorrelation waveforms plotted in Fig. 4(i) to be reconstructed more directly by simply taking an inverse Fourier transform of G(f).

While the Wiener-Khintchine theorem provides an exact, direct relation between the RF spectrum and autocorrelation, it is well known that the autocorrelation cannot always unambiguously retrieve the shape of more complex pulses [1], and there exists the possibility of different pulses possessing the same autocorrelation. Considering an asymmetric pulse shape for example, the temporal intensity would Fourier transform into a complex function, whose phase information would be lost in the power spectrum, leading to a symmetric autocorrelation. These limitations would equally apply to the reconstructed I(t). Nevertheless, its ability to resolve temporal features such as a pedestal is effectively shown.
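A compact sketch of that reconstruction chain is given below; the array names and the symmetrisation step are ours, and the real data additionally require deleting the residual probe line and calibrating the wavelength-to-frequency axis.

```python
import numpy as np

def reconstruct_intensity(sideband):
    """Build G(f) from one measured XPM sideband by mirroring it in frequency,
    then recover I(t) as the inverse transform of G(f)**0.5 (zero spectral phase
    is implicitly assumed, as in the text)."""
    G = np.concatenate([sideband[::-1], sideband])      # enforce G(-f) = G(f)
    I_t = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(np.sqrt(G))))
    return np.real(I_t)

def autocorrelation(I_t):
    """Intensity autocorrelation A(tau), for comparison with an SHG autocorrelator.
    By the Wiener-Khintchine theorem, the same trace follows from an inverse
    Fourier transform of G(f) itself."""
    A = np.correlate(I_t, I_t, mode="full")
    return A / A.max()
```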
The temporal resolution of the reconstructed waveform is determined by the sampling point time step, which from discrete Fourier transform theory [14] equals 1/(N·df), where N and df are the number of samples and the sampling frequency for G(f), respectively. In our example, X(f) contained 2960 samples with df corresponding to a wavelength step of 0.01 nm. Achieving a sampling time under 100 fs for the waveform trace required increasing N from 5920 to 12060 points by appending an expanded noise floor to G(f) out to higher frequencies.

Discussion
The spectral bandwidth of a pulse provides a useful means to infer the temporal FWHM (T) of a known waveform via its time-bandwidth product. For a hypersecant pulse, this is given by T × Fosa = 0.315, where Fosa is the spectral FWHM (in frequency) of S(f) measured on an OSA. However, distortion of the optical spectrum as in Fig. 4(b) often makes estimation of Fosa unreliable. In such cases, the RF spectrum can be more effective. However, this requires a modified constant for the time-bandwidth product. This can be determined for a soliton by considering its intensity in normalized units given by I(t) = sech2(t), whereby T = 1.763. The corresponding optical spectrum can be shown to be S(f) = sech2(1.7632·f/0.315) with Fosa = 0.315/1.763. The RF spectrum found by numerical fit is G(f) = sech4(4·f). By noting that the 3 dB (FWHM) bandwidth of [G(f)]^1/2 = sech2(4·f), equaling 1.763/4, is equivalent to the 6 dB bandwidth of G(f), denoted Frf, we obtain the ratio Fosa/Frf = 0.405. So the modified time-bandwidth product in terms of the 6 dB single-sided bandwidth of G(f), denoted ∆F6dB, becomes T × ∆F6dB = 0.388. This was applied to the RF spectrum traces in Fig. 4c using the ∆F6dB readings of 1.41 THz and 1.05 THz for the optimized and higher input DDF power examples respectively. This translates to T of 275 fs and 370 fs respectively, which are within 5% and 8% of the FWHM obtained from the measured autocorrelation and numerically processed RF spectra of Fig. 4(i) respectively.

The PC-RFSA's performance advantages, in terms of its broadened measurement bandwidth and improved accuracy, stem from the waveguide's broadband low dispersion, which minimizes both distortion of the signal and group velocity mismatch with the probe [9]. For a wavelength separation between signal and probe of ∆λ = 50 nm, the walk-off length, whereby the delay due to group-velocity mismatch equals half a period of a sinewave signal of frequency fmax, can be estimated by the equation Lw = 1/|fmax·∆λ·(2D − S∆λ)|, where S is the dispersion slope, i.e. the variation of D with wavelength. This implies that walk-off would be negligible for fmax « 6 THz (in order to satisfy Lw » 6 cm assuming S = 0), which is consistent with the measurement bandwidth from Fig. 2. Its inverse scaling with L, as plotted in Fig. 5, means fmax reduces to « 2.2 THz for a 16 cm length waveguide (as was used in our original PC-RFSA [10]), assuming the same ∆λ and D values. It also highlights how the walk-off effect diminishes sharply for L < 6 cm, and varies significantly for ∆λ values between 20 and 60 nm. Note that, given ∆λ is so large, a more accurate calculation of fmax would account for nonzero S and higher-order dispersion terms.
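Plugging the quoted numbers into that walk-off expression is straightforward; the sketch below (with S set to zero, as in the text) reproduces the ≈6 THz and ≈2.2 THz figures for the 6 cm and 16 cm waveguides.

```python
D = 28.0 / 1000.0   # 28 ps/(nm km) expressed as ps of delay per nm per metre
S = 0.0             # dispersion slope neglected, as in the text
dlam = 50.0         # signal-probe separation [nm]

def f_max(length_m):
    """Sinewave frequency [THz] at which the signal-probe walk-off over the given
    length reaches half a period (i.e. L_w equals the waveguide length)."""
    delay_per_m = abs(dlam * (2.0 * D - S * dlam))   # ps of walk-off per metre
    return 1.0 / (length_m * delay_per_m)            # 1/ps = THz

print(f_max(0.06), f_max(0.16))   # ~6 THz for the 6 cm chip, ~2.2 THz for 16 cm
```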
In terms of measurement accuracy, the dispersion length Ld [9], whereby the width of a pulse of initial FWHM T broadens significantly, is given by Ld = 2πc·T2/(3.11·|D|·λs2), where a hyper-secant pulse shape is assumed. This implies that pulse broadening would be negligible in the waveguide for T » 81 fs (in order to satisfy Ld » 6 cm), which is consistent with the negligible dispersion observed for the 250 fs pulse in Fig. 3. These calculations were repeated using the parameters of various HNF, assuming a γ·L product equal to that of our chip, 0.59 W−1, for comparison. Considering a dispersion-flattened silica HNF (i.e. S = 0), with typical values of D = 0.5 ps/nm·km and γ = 20 W−1 km−1 at 1550 nm, the equivalent longer L of 29.7 m would translate to fmax « 673 GHz and T » 241 fs. Although alternate HNF based on either Bi2O3 [18] or ChG [19] have reported higher γ values of 1250 and 1200 W−1 km−1 respectively, translating to equivalent shorter L of 47 and 50 cm respectively, both fibers also exhibit large normal dispersion parameters of D = −310 and −560 ps/nm·km at 1550 nm respectively. Consequently, their respective performance parameters (assuming S = 0) are degraded to fmax « 68 GHz and 36 GHz, and T » 0.73 ps and 1.05 ps. These calculations highlight the potential performance advantage of dispersion-shifted and highly nonlinear chip-scale devices, particularly when considering very broadband pulses.

There are several routes to improving the performance of the PC-RFSA, including lowering the insertion loss and increasing the nonlinear response. The large coupling loss, arising from the overlap mismatch between the modes of the lensed fiber and rib waveguide, could be significantly improved by incorporating on-chip tapers, as demonstrated for silicon waveguides [13]. Losses below 1 dB per facet for coupling from nano-scale rib waveguides into standard single mode fiber have been reported by various tapering techniques [20]. A lower propagation loss (of around half) is also expected from optimizing the rib etching process to reduce its surface roughness, which for lower rib heights (i.e. thinner ChG films) leads to an increased scattering loss as more of the mode field penetrates it. Both improvements would permit an insertion loss comparable to the 4.5 dB value demonstrated for a tapered As2S3 fiber of similar length and Aeff [21]. Finally, use of alternative chalcogenide glasses, such as Ge11As22Se67 [22], would offer about a factor of four increased nonlinear response compared with As2S3. Such advances would permit a higher dynamic range for the measurement or operation with even lower optical launch powers.
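Before closing, the walk-off and dispersion-length comparisons made earlier in this section can be checked numerically. The sketch below equalizes the γ·L product at 0.59 W−1 across the devices, as in the text; the printed values land close to (within rounding of) the quoted 673 GHz/241 fs, 68 GHz/0.73 ps and 36 GHz/1.05 ps figures.

```python
import numpy as np

c = 2.99792458e8          # m/s
lam = 1550e-9             # m
dlam = 50.0               # signal-probe separation [nm]

def walk_off_fmax(length_m, D_ps_nm_km, S=0.0):
    """Maximum sinewave frequency [THz] before half-period walk-off over length_m."""
    delay_per_m = abs(dlam * (2.0 * D_ps_nm_km - S * dlam)) * 1e-3   # ps/m
    return 1.0 / (length_m * delay_per_m)

def min_pulse_width(length_m, D_ps_nm_km):
    """Pulse FWHM [fs] for which the dispersion length equals the device length."""
    D = abs(D_ps_nm_km) * 1e-6                                       # s/m^2
    T2 = length_m * 3.11 * D * lam**2 / (2.0 * np.pi * c)
    return np.sqrt(T2) * 1e15

devices = {  # (gamma [1/(W km)], D [ps/(nm km)], length [m] giving gamma*L = 0.59 W^-1)
    "As2S3 chip":      (9900.0,   28.0, 0.06),
    "silica HNF":      (  20.0,    0.5, 0.59 / 20.0 * 1e3),
    "Bi2O3 fiber":     (1250.0, -310.0, 0.59 / 1250.0 * 1e3),
    "As-Se ChG fiber": (1200.0, -560.0, 0.59 / 1200.0 * 1e3),
}
for name, (gamma, D, L) in devices.items():
    print(f"{name:16s} L = {L:6.2f} m, f_max ~ {walk_off_fmax(L, D):7.3f} THz, "
          f"T_min ~ {min_pulse_width(L, D):6.0f} fs")
```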
Conclusions
Analysis of femtosecond pulses from their RF spectrum spanning multi-THz bandwidths has been demonstrated for the first time through the use of a 6 cm long dispersion-shifted chalcogenide waveguide. The combined high nonlinearity and broadband low dispersion enabled the RF spectrum of pulses as short as 260 fs to be measured with negligible pulse broadening within the waveguide, achieving a broad measurement bandwidth of 2.78 THz. Characterization of pulses from a soliton-effect compressor revealed temporal features that could not be observed by conventional optical spectrum measurements alone. Furthermore, numerical processing of the RF spectrum was investigated as a means for reconstructing the temporal waveform, which showed good agreement with pulses measured by intensity autocorrelation in terms of pulse shape, width and pedestal. Also, the use of the Wiener-Khintchine theorem enabled the autocorrelation to be directly generated by an inverse Fourier transform of the RF spectrum. The results highlight the effectiveness of this scheme as a useful diagnostic tool for the characterization of ultra-short pulses.

Fig. 2. (a) (Upper) Micrograph image of a typical As2S3 planar rib waveguide cross-section, and (lower) schematic of the 6 cm chip coupled to lensed fibers. (b) The measurement bandwidth of the PC-RFSA determined from the power of the side-band tone generated around the probe by XPM, as a function of the input signal sine-wave frequency, when signal and probe are separated by 50 nm.

Fig. 3. (a) Experimental setup of the PC-RFSA for measuring the RF spectrum of femtosecond pulses generated from a DDF. (b) Evaluation of pulse broadening in the As2S3 chip by measuring the intensity autocorrelation of the optical field reaching the OSA, either with the (blue solid curve) chip in place, or (red dots) substituted for a VOA; the pulse FWHM are 260 and 250 fs respectively. (Inset) Optical spectrum of the DDF output measured on an OSA.

Fig. 4. Measurement traces of soliton-compressed pulses emitted from a DDF, for different input launch powers of (a) 126 mW and (b) 251 mW, captured as (i) temporal waveforms from (solid curve) the SHG intensity autocorrelator and (dots) reconstruction by numerical processing of PC-RFSA output RF spectra, plotted in linear and (inset) log scales, (ii) optical spectra from an OSA, and (iii) (solid curve) RF spectra from the PC-RFSA, compared to (dotted curve) a numerically calculated sech4 fit.
6,650
2009-05-25T00:00:00.000
[ "Engineering", "Physics" ]
Implementation of Flutter-based Learning Management System (LMS) at Universitas Andi Djemma Palopo Learning Management System (LMS) is a tool that is essential to redound an interaction between instructor and the learners and considering the technology headway nowadays (gadget). Therefore, research has been made to develop an LMS application for mobile devices in Universitas Andi Djemma Palopo, especially in the informatics engineering department using the Flutter framework. The method used for this research is R&D which stands for Research and Development with ADDIE Development Model, Analysis, Design, Development, Implementation, and Evaluation. The result of this research is known: 1) This application (LMS) made using the Flutter framework and Firebase database simplifies the development process; 2) This application was made for the informatics engineering department, Universitas Andi Djemma Palopo. This mobile app has several useful features, including making a class, class discussion, making subjects, assignments, and attendance; 3) The LMS implementation uses a questionnaire based on usability and has been obtained with an eligibility percentage of 83.03%. The app has been declared as very feasible based on the eligibility percentage. INTRODUCTION Since 1945, the national education curriculum has changed in history, namely in 1947, 1952, 1964, 1968, 1975, 1984, 1994, 2004, and the 2006 curriculum. All national curricula are designed based on the same foundation, namely Pancasila and the Constitution. 1945, the difference is in the main emphasis of the goals of education and the approach to realizing it. The curriculum changes are, of course, accompanied by different educational purposes because, in each of these changes, there is a certain goal to be achieved to advance our national education (Wirianto, 2014). In line with these developments, information technology has also developed at a very high speed, changing society's paradigm in seeking and obtaining information, which is no longer limited to newspaper, audiovisual, and electronic information but also information technology. Other sources of information, one of which is through the Internet (Elyas, 2018), especially smartphone devices, and especially in education, are used as a learning medium. Learning Media, in general, are teaching and learning process tools (Mutia et al., 2019). Education is a process of communication from educators to students that contain educational information, which has elements of educators as sources of information, media as a means of presenting ideas and educational materials, and students themselves. Some aspects of This approach get a touch of information technology media, thus sparking the birth of the concept of e-learning (Sukmaindrayana and Wildan, 2017). E-learning is a teaching and learning tool that uses electronic circuits (LAN, WAN, or internet) to deliver learning content, interaction, or guidance (Elyas, 2018), e-learning is also a distance learning method that is deep in the process and utilizes computer technology (Gani, 2018). The term "e" or the abbreviation of electronics in e-learning is used for all technologies used to support teaching efforts through internet electronic technology. (Kristiani, 2016). E-learning itself is a method in general, and of course, it has many implementations in different devices or conditions, one of which is the Learning Management System or LMS. These devices are very useful considering that humans are currently in the Coronavirus Disease pandemic. 
The Ministry of Education and Culture (Kemendikbud) has asked all universities to provide learning facilities during the Covid-19 emergency at the university level. Helping the government and the community to learn from home, work from home, and carry out social restrictions to break the chain of the spread of Covid-19 (Wijaya et al., 2020). Based on regulations issued by the local government, Universitas Andi Djemma Palopo itself has also implemented social restrictions in the teaching and learning process, so lecturers and students must use electronic learning media available on the internet. However, learning platforms tend not to have full features following the needs of lecturers in general at Universitas Andi Djemma Palopo. They had an internal LMS that has added value in increasing university credibility. Based on that, e-learning management (LMS) is needed to meet the needs of teaching staff and students at Universitas Andi Djemma Palopo. Research Procedure The method used in this research uses the ADDIE development model popularized by Robert Maribe Branch, a development research model consisting of five stages: Analysis, Design, Development, Implementation, and Evaluating (Cahyadi, 2019). The description of each step is (Sugiyono, 2013): In analyzing, researchers need two things: problems and needs. Problems obtained from the data collection results will then produce requirements, and these needs will later become the basis for making the system. Data collection will be carried out through literature studies, interviews, and observations to obtain the necessary information. Of course, the information is used as a reference used in subsequent stages. b. Design After getting the information needs, the next stage is making a prototype or the application design process, which will be displayed in the form of black lines and writing on a white background and will later resemble the appearance of a smartphone application. c. Development Development is the core stage of making the system or application itself. It can be called the realization process of the design results that have been made or is commonly referred to as coding. Coding is done to produce a program or application in the form of a solution to existing problems and the requested needs. The IDE used is Visual Studio Code, Flutter as a development framework, and Firebase Services as a DBMS. d. Implementation Implementation is the stage where the application that has been built will be used directly by the user according to the original function or purpose, but only for limited trials. e. Evaluating The evaluation stage is the process of correcting the shortcomings of the application made after being tested. Because the trials are limited, the improvements made are also minor. Data Analysis Data analysis conducted results from a questionnaire where each question has its weight. Of course, the questions refer to the usability context of the e-learning application or in the sense that the test is carried out with a usability format questionnaire. Testing the questionnaire with usability format using descriptive analysis data techniques with the following calculations: After getting the score data from the test results, the percentage is calculated using the formula. After that, the results rate is converted into a statement according to the following interval percentage table (Sudaryono, 2015). 
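The percentage formula itself and the interval table are not reproduced in this text; a minimal sketch of the usual usability-score calculation is shown below. The function names and the interval boundaries are our own illustrative choices, not necessarily those used by the authors.

```python
def eligibility_percentage(scores, max_score_per_item=5):
    """Total obtained score divided by the maximum possible score, as a percentage."""
    obtained = sum(scores)
    maximum = len(scores) * max_score_per_item
    return 100.0 * obtained / maximum

def interpret(pct):
    """Map a percentage onto a qualitative statement (illustrative intervals only)."""
    if pct > 80:
        return "very feasible"
    if pct > 60:
        return "feasible"
    if pct > 40:
        return "moderately feasible"
    return "not feasible"

# A result around 83.03% would fall in the "very feasible" band reported in this paper.
```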
Analysis
After interviewing several sources, the conclusions drawn from these activities are: 1) The learning management system in the Informatics Engineering Department at Universitas Andi Djemma Palopo uses several online platforms available on the internet, which means the department does not yet have its own learning platform. 2) The platforms used also tend not to have some of the desired features; for example, attendance is taken in a different application, or a platform is only asynchronous, so yet another platform must be used. 3) The device chosen for online learning is the smartphone because it is easier to use and is mobile. Therefore, creating a learning platform that can combine several features in one application is necessary. The observation phase was carried out at the research site to observe how the conventional (face-to-face) learning process and online learning take place. During this pandemic, education tends to be conducted online to reduce the risk of spreading the virus, so conventional learning processes are rarely carried out anymore; however, learning procedures must comply with health protocols. In online learning, the platforms most often used are Google Classroom, Zoom, Zoho (attendance), Microsoft Teams, and other media that can be obtained on the internet for free or for a fee.

Design
From this stage, a system overview in the form of a use case diagram is made, which will be developed later (Figure 1). The activity diagram is described per user, namely lecturers and students (Figure 2 and Figure 3). At the endpoint of the activity diagram of lecturers and students there is a "melakukan kegiatan pembelajaran" (carrying out learning activities) activity, and the intended actions are illustrated in the generalization of the use case (Figure 1). After finding problems from the interviews and observations and describing solutions using the use case and activity diagrams, the initial design of the system display is made, which can be seen in the following figure.

Development
The system is developed using Flutter, an open-source SDK or framework developed by Google to create applications that can run on the Android and iOS operating systems (Dian, 2018). With Flutter it is fairly easy to create user interfaces because Flutter uses the concept of widgets to create text, forms, and buttons; a widget is needed for each component. The following are the results of realizing the design phase using Flutter. The database system used as data storage for the application is Firebase. Firebase is a platform for realtime applications: when the data change, the application connected to Firebase (website or mobile app) updates them directly (Sanad et al., 2018). Firebase has a complete library for most web and mobile platforms and can be combined with other frameworks such as Node, Java, JavaScript, and others (Susanti et al., 2016). The services used in Firebase are: a. Authentication. Authentication is used to store credential data for users who log into the application. This service is equipped with complete methods to simplify the development process, especially in the back-end. The methods used in this application are login, register, and forgot password. b. Cloud Firestore. Cloud Firestore is a flexible and scalable database for mobile, web, and server development (Firebase, 2018). Firestore is used to store data in the form of text or information that is later displayed in the application in real time.
The user does not need to refresh/reload the application to get the latest data, which is one of the advantages of Firebase Firestore. Storage is used to store files that have been uploaded into the application in the form of snapshots of attendance signatures, lecturer material files, and student assignments. The data is stored in a bucket (directory) in storage named according to the class's class code created by the lecturer. Implementation The implementation/testing was carried out on a limited basis due to the COVID-19 pandemic to minimize virus spreads. In this testing, one user was selected as a lecturer to log in according to his role, and the rest became students who would join the class created by the lecturer. Students then carry out learning activities using the features provided in the LMS application. Evaluation The evaluation stage uses the formula described before. The results of the tabulation scores from the questionnaire data are in the following table: CONCLUSION Based on the results and discussions that have been carried out. The LMS development process using the flutter framework, which is implemented in the Informatics Engineering Study Program, Universitas Andi Djemma Palopo, it can be concluded that: 1) The Learning Management System built has three core users, namely lecturers, students, and admins. Each user has features capable of conducting online learning, including creating/viewing materials, assignments, attendance, and discussing in a class forum; 2) The combination of the Flutter Framework and Firebase Database is beneficial in developing this application because the services provided by both Flutter and Firebase are complete and according to needs; 3) The data obtained from the questionnaires have been analyzed and resulted in a eligibility percentage level reaching 83.03%, which means that this e-learning application is suitable for use on a particular scale and has stable performance.
2,668.4
2022-03-05T00:00:00.000
[ "Computer Science", "Education", "Engineering" ]
Theoretical investigation of a plasmonic substrate with multi-resonance for surface enhanced hyper-Raman scattering
Because of its unique selection rule, hyper-Raman scattering (HRS) can provide spectral information that linear Raman and infrared spectroscopy cannot obtain. However, the weak signal is the key bottleneck that restricts the application of the HRS technique in studies of molecular structure and surface or interface behavior. Here, we theoretically design and investigate a kind of plasmonic substrate consisting of Ag nanorices for enhancing the HRS signal based on the electromagnetic enhancement mechanism. The Ag nanorice can excite multiple resonances at optical and near-infrared frequencies. By properly designing the structure parameters of the Ag nanorice, multi-plasmon resonances with large electromagnetic field enhancements can be excited, where the "hot spots" are located at the same spatial positions and the resonance wavelengths match the pump and the second-order Stokes beams, respectively. Assisted by the field enhancements resulting from the first- and second-longitudinal plasmon resonances of the Ag nanorice, the enhancement factor of surface enhanced hyper-Raman scattering can reach as high as 5.08 × 10^9, meaning 9 orders of magnitude enhancement over conventional HRS without the plasmonic substrate.

Unlike normal Raman scattering (RS), which results from the scattering of a single photon, hyper-Raman scattering (HRS) is an inelastic sum-frequency scattering of two photons. The two photons with frequency ν are inelastically scattered from a ground state to a virtual state with energy equal to 2ν − νvib or 2ν + νvib, corresponding to Stokes and anti-Stokes scattering, respectively, where νvib is the vibrational frequency1,2. Depending on the symmetry of the molecule, HRS may probe Raman and infrared active modes as well as the so-called silent modes that are seen neither in the Raman nor in the infrared spectrum. The silent modes can offer complementary vibrational information that cannot be revealed by either RS or infrared absorption3. Because of this capability to provide better insight into the structure and interaction of molecules, HRS is considered a spectroscopic technique more sensitive than RS with respect to surface environmental changes4. However, due to the two-photon process and the small cross-sections, about 10^6 times weaker than those of RS, the experimental detection of HRS is very difficult5. For a long time, HRS was not considered an applied spectroscopic technique until the surface enhanced Raman scattering (SERS) phenomenon was discovered. Inspired by SERS, the surface enhanced hyper-Raman scattering (SEHRS) effect has been realized by enhancing the weak hyper-Raman scattering through the strong local electric field generated by exciting localized surface plasmon resonances (LSPRs) in metal nanostructures (Au, Ag, Cu, etc.) [6–10]. With the development of experimental technology and theoretical research, SEHRS spectroscopy has attracted great attention and been used in many fields [11–14], such as single molecule detection11, cell pH sensors15, and spectral imaging like other nonlinear optical imaging16,17. The electromagnetic field enhancement factor (EF) in SEHRS can be estimated as follows18:

EF = |g(ν)|^4 · |g(νs)|^2  (1)

where ν and νs (νs = 2ν − νvib) are the frequencies of the incident light and the scattered light, respectively, and |g| is the local electric field enhancement at the location of the probe molecule.
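Equation (1) makes the fourth- and second-power dependence explicit. As a small numerical illustration (the field-enhancement values below are hypothetical, not the simulated ones), the enhancement factor can be evaluated as follows; the SERS expression is included only for comparison.

```python
def sehrs_enhancement(g_pump, g_stokes):
    """EF = |g(nu)|**4 * |g(nu_s)|**2, as in Equation (1)."""
    return abs(g_pump) ** 4 * abs(g_stokes) ** 2

def sers_enhancement(g_pump, g_stokes):
    """Conventional SERS scales as |g(nu)|**2 * |g(nu_s)|**2, for comparison."""
    return abs(g_pump) ** 2 * abs(g_stokes) ** 2

# Hypothetical enhancements of ~40 at the pump and ~45 at the second-order Stokes
# wavelength already give EF ~ 5e9, the order of magnitude reported in this work.
print(sehrs_enhancement(40, 45), sers_enhancement(40, 45))
```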
Different from SERS, SEHRS has a higher-order dependence on the incident light intensity and can thus offer a higher enhancement factor than SERS. Figure 1 shows the enhancement mechanism of SEHRS and its comparison with SERS. From equation (1), an optimum plasmonic substrate for SEHRS applications requires that (i) significant electric-field enhancements occur simultaneously in two different spectral regions, around ν and ν_s; and (ii) the electric hot spots generated in the two spectral regions overlap spatially, i.e., occur at the same spatial locations. Both conditions are indispensable. However, for a regular plasmonic substrate, the electromagnetic enhancements |g(ν)|⁴ and |g(ν_s)|² are not easy to obtain at the same time, because different surface plasmon resonances usually have different "hot spot" locations. For SERS, a single plasmon resonance is broad enough to enhance both the incident and the Raman-scattered fields. For SEHRS, however, the plasmonic substrate should support double resonances with the same "hot spot". Therefore, in contrast to its rapidly developing linear counterpart, SEHRS has only been sparsely studied despite its great potential for detecting low-energy molecular vibrations and vibrational modes inactive in both RS and infrared absorption [19][20][21][22]. In order to promote the application and development of SEHRS technology in different fields, it is particularly important to develop SEHRS substrates that can enhance the incident light and the scattered light at the same spatial locations. In fact, for surface enhanced nonlinear optical processes in general, a plasmonic substrate with multiple resonances at the same spatial locations is indispensable 23,24. Precise control of the size and morphology of metallic nanostructures is critical for tuning the LSPR energy and intensity as well as for improving the efficiency of light manipulation 25. Silver or gold nanorice structures support both transverse and longitudinal resonances, with the latter tunable from the visible to the near-infrared spectral range by variation of the aspect ratio (length/diameter) 26,27. With increasing aspect ratio of the nanorice, higher-order multipolar LSPR modes can be excited 28,29, which facilitates applications in ultrasensitive sensing 28, surface-enhanced Raman spectroscopy 29, and catalysis 30. These studies of silver or gold nanorices, however, have not addressed the multi-resonance phenomenon with spatially overlapping electric hot spots needed for surface enhanced hyper-Raman scattering. In this article, we numerically study a plasmonic substrate for SEHRS applications. The substrate consists of Ag nanorices supporting multiple resonances. It is found that two of the excited plasmon modes not only have large field enhancements at the same "hot spots" but also spectrally match the excitation light and the second-order Stokes scattering. Therefore, the Ag nanorice is expected to be useful for enhancing the SEHRS process. By investigating the far-field scattering spectra and the near-field hot-spot distribution of nanorices with different lengths or diameters, we discuss the influence of the geometric parameters on the enhancement of the HRS signal. The theoretical maximum EF for SEHRS can reach 5.08 × 10⁹. Our study is expected to provide theoretical support for fabricating superior SEHRS substrates for single-molecule detection and unknown-molecule recognition.
Multipolar Plasmonic-Resonant Structure The schematic of the nanorice is shown in Fig. 2, where both the geometry parameters and the incident-light polarization configuration are labeled. A plane wave is incident at an angle θ to the normal direction 31. All calculations for the nanorice, including the scattering spectra and the electric field distributions, were carried out with COMSOL Multiphysics 5.2, a commercial three-dimensional numerical simulation package based on the finite-element method (FEM). Perfectly matched layers (PML) were employed at the surrounding boundaries to avoid spurious reflections. For simplification, the isolated Ag nanorice was assumed to be in air with refractive index n = 1. Introducing a dielectric substrate in practice to hold the probe molecules does not qualitatively modify the optical properties of the nanorice, but only shifts the resonances to longer wavelengths together with a slight increase in linewidth 32,33. The permittivity of the Ag nanorice was taken from the experimental data of Johnson and Christy 34. Scattering cross sections were computed according to the scattering formulation of ref. 35. During the calculation, good convergence was obtained by using an adaptive meshing technique to handle the structure. Results and Discussions Spectral tunability of the multipolar resonances. The localized field enhancement caused by plasmon resonance contributes greatly to light-matter interactions at the nanoscale, especially when the probed molecules are located at the "hot spots". An Ag nanorice with length L = 460 nm and diameter d = 100 nm was used to excite multiple LSPRs. The simulated scattering spectra of the nanorice under normal and oblique incidence are shown in Fig. 3. The scattering spectra indicate that, for an incident angle of θ = 0°, the nanorice has two distinct resonances at wavelengths of 1210 nm and 510 nm, corresponding to the first- (I) and third-longitudinal (III) resonance modes of the nanorice, respectively. For θ = 45°, there are three resonances, appearing at wavelengths of 1210 nm, 670 nm and 510 nm and corresponding to the first three longitudinal resonance modes, respectively. According to the electromagnetic enhancement mechanism and Equation (1), to achieve a significant enhancement of the HRS signal, the spectral positions of the multi-resonance modes should match those of the exciting and scattered light. We note that the wavelengths (1210 nm and 670 nm) of the first- and second-longitudinal resonances match those of the exciting light and the second-order Stokes light of SEHRS 36. Changing the length and diameter of the nanorice shifts the spectral positions of the multiple resonances 27. Figure 4 shows the scattering spectra of the nanorice with different geometrical sizes under oblique incidence (θ = 45°). In Fig. 4(a), when the length L is increased from 360 to 480 nm, the first two scattering peaks both shift toward longer wavelengths, with the redshift being more pronounced for the dipole resonance mode. Conversely, the two resonances shift toward shorter wavelengths when the diameter d increases from 60 to 140 nm, as shown in Fig. 4(b). In addition, the second- and third-longitudinal resonance peaks at 670 nm and 510 nm become more pronounced with increasing d; that is, the high-order surface plasmon resonance modes become stronger as the diameter of the nanorice increases.
However, the first-longitudinal resonance mode shows little change in its peak value but an obvious increase in linewidth. Figure 2. Schematic of the Ag nanorice structure. The Ag nanorice lies in the x-y plane, with its normal direction n along the z axis. Linearly polarized light illuminates the structure at normal incidence (θ = 0°) or oblique incidence (θ > 0°). In equation (2), Δν is the hyper-Raman frequency shift, expressed in inverse centimeters. According to equation (2), when the exciting light is assumed to be at a wavelength of 1210 nm, the corresponding second-order Stokes wavelength should be around 670 nm. By optimizing the geometrical parameters of the Ag nanorice, the wavelengths of the dipole resonance and the second-order resonance can be adjusted to match those of the exciting light and the second-order Stokes scattered light (see Fig. 5). This is advantageous for the enhancement of SEHRS, because around the resonance wavelengths the local electric fields reach their maximum enhancement. Coherent oscillation of two distinct resonances at the same spatial location is highly desirable for SEHRS substrates 38. However, this is generally impossible for a simple plasmonic structure. In Fig. 6(a), for an incident angle of θ = 0°, a significant enhancement of the electric field, with "hot spots" at the ends of the long axis of the nanorice, is observed for the first-order resonance mode at 1210 nm. However, the enhancement for the second-longitudinal resonance at 670 nm is weak, which is disadvantageous for SEHRS. The situation is different for oblique incidence, because oblique incidence breaks the symmetry of the configuration and allows additional multipolar resonance modes to be excited 39. For θ = 45°, significant "hot spots" at the ends of the nanorice can also be seen clearly for the second-order mode (670 nm), and they overlap with those of the first-order resonance (1210 nm), as shown in Fig. 6(b). Because the electric field enhancement occurs at the same "hot spots" but at two different wavelengths, the signal intensity of SEHRS is enhanced, as discussed above. Next, we investigate the dependence of the scattering spectra and the EF of SEHRS on the incident angle of the plane wave for the Ag nanorice substrate. Figure 7(a) displays the scattering spectra of the nanorice structure as the excitation angle changes from 0° to 90°. It can be seen clearly that as the incident angle increases, the relative intensity of the first-longitudinal resonance (~1210 nm) weakens, while the second-longitudinal resonance (~670 nm) is first strengthened and then weakened. At 0° and 90°, the second-longitudinal resonance is completely suppressed. From the scattering spectrum, the resonance frequency, linewidth and quality factor of the surface plasmon resonance can be determined, which directly indicate the far-field performance of the resonance mode and also give some insight into the near-field behavior 37. However, an accurate estimate of the EF of SEHRS should be based on the enhancement of the localized electric field. According to equation (1), the value of the EF, i.e., the SEHRS enhancement factor, can be obtained by calculating the electric-field enhancements of the first- and second-longitudinal resonances at 1 nm from the end of the nanorice.
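The wavelength bookkeeping behind the 1210 nm to ~670 nm matching discussed above can be checked with the small sketch below, which converts a pump wavelength and a vibrational shift Δν (in cm⁻¹) into the second-order Stokes wavelength; the shift of 1600 cm⁻¹ is an assumed, typical value, chosen only because it reproduces the matching quoted in the text, which does not state Δν explicitly here.

```python
def hyper_raman_stokes_wavelength_nm(lam_pump_nm: float, dnu_cm1: float) -> float:
    """Second-order Stokes wavelength: its wavenumber is 2/lam_pump - dnu."""
    pump_wavenumber_cm1 = 1e7 / lam_pump_nm              # nm -> cm^-1
    stokes_wavenumber_cm1 = 2.0 * pump_wavenumber_cm1 - dnu_cm1
    return 1e7 / stokes_wavenumber_cm1

# Assumed vibrational shift of 1600 cm^-1 (illustrative only):
print(hyper_raman_stokes_wavelength_nm(1210.0, 1600.0))   # ~670 nm
```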
Figure 7(b,c) shows the calculated excitation enhancement (|g(ν)|⁴) and scattering enhancement (|g(ν_s)|²), contributed by the resonances at 1210 nm and 670 nm, respectively. The calculated values of the maximum EF at different incident angles are shown in Fig. 7(d). The EF first increases, reaching its maximum at about 30°, and then diminishes significantly with increasing incident angle. The small EF at θ = 0° or 90° is mainly because the second-longitudinal resonance cannot be excited effectively, as indicated in Fig. 6. From the above analysis, the SEHRS EF reaches a maximum of 5.08 × 10⁹ at an incident angle of 30°. This indicates that the SEHRS signal is ~9 orders of magnitude larger than standard HRS, reaching the sensitivity required for single-molecule detection 40,41. By contrast, for normal incidence (θ = 0°), when the scattering enhancement is small and the field optimization is not achieved, the calculated EF of SEHRS is only 4.51 × 10⁷ (|g(ν)|⁴|g(ν_s)|² = 48.9⁴ × 2.87² ≈ 4.51 × 10⁷). Therefore, the EF of SEHRS at θ = 30° (5.08 × 10⁹) is ~113 times larger than that at θ = 0° (4.51 × 10⁷), which demonstrates that a plasmonic substrate with significant field enhancement at both the excitation and the scattering wavelength is very important for SEHRS applications. Conclusions We have theoretically investigated a SEHRS substrate consisting of Ag nanorice with multiple resonances. By exciting the first- and second-longitudinal plasmon resonances, we demonstrate that this plasmonic structure can generate electric field "hot spots" at the same spatial locations when the different resonance modes are excited. The resonance frequencies of the two modes match those of the excitation light and the second-order Stokes scattering. The electric field "hot spots" of the nanorice excited at specific spectral positions can be tuned actively by changing the excitation orientation of the plane wave. Owing to the improvement of both the excitation and the emission enhancement in the SEHRS process, the theoretically predicted EF for SEHRS reaches 5.08 × 10⁹ at an incident angle of 30°, approaching the sensitivity required for single-molecule detection. The multi-resonance plasmonic substrate developed here also holds promise for other applications, such as other nonlinear spectroscopies, stimulated Raman scattering and multiphoton imaging.
3,539.2
2018-08-08T00:00:00.000
[ "Materials Science", "Physics" ]
Quantitative analysis of crustal deformation, seismic strain, and stress estimation in Iran via earthquake mechanisms This study investigates the variations in stress, strain, and deformation of the Earth's crust in Iran arising from tectonic movements and seismic activities. We employed the Kostrov and Molnar methods to quantify these parameters, focusing on the influence of different zoning techniques on the estimations. Introduction The strain rate of the Earth's crust can be estimated by a variety of methods, including seismic techniques that utilize earthquake mechanisms and GPS methods that track the displacement and velocity of the Earth's crust. One significant advancement in this field was made in 1974 by Kostrov (Kostrov, 1974), who introduced an equation for estimating both the magnitude and directions of seismic strain. This methodology has been widely adopted by researchers globally, including those in Iran, to assess seismic stress and strain. An example of its application can be seen in the work of Tesauro (Tesauro et al., 2006), who used this approach to estimate seismic and geodetic strains across Central Europe in blocks of 0.5° × 0.5°. In Iran, researchers, including (Masson et al., 2005; Zolfaghari, 2009; Ansari and Zamani, 2014; Zarifi et al., 2014), have estimated the size and direction of the principal strains. Masson et al. (2005) utilized two seismic catalogs, that of Jackson et al. (1995) and the Harvard University Global Centroid Moment Tensor (GCMT) catalog, to estimate seismic stress and strain. Zolfaghari (Zolfaghari, 2009) estimated the seismic and geodetic strain rates in Alborz, considering historical earthquakes with magnitudes greater than 1.6. The data used by Zarifi et al. (2014) for these estimations extended up to 2013. Rashidi et al. (2019) employed the inversion of focal mechanism and geodetic data to obtain the strain rate and stress fields (Rashidi and Derakhshani, 2022). In Iran, given the unique characteristics of fault activity (Nemati and Derakhshani, 2021; Rashidi et al., 2023a; Mohammadi Nia et al., 2023) and earthquakes in each region, estimating the magnitude and direction of stress, strain, and displacement speed of the Earth's crust is critically important (Derakhshani and Eslami, 2011). In our research, we calculated the directions of maximum pressure and tension, as well as the seismic strain rate, using both the Kostrov and Molnar methods. Additionally, we measured the horizontal and vertical velocity of the Earth's surface, analyzing 637 earthquakes with moment magnitudes greater than 5.5, spanning from 1909 to 2016. Our new estimates are distinct and have been compared with those of other researchers who used different data sets and zoning techniques. In the Kostrov method, we are required to use large earthquakes (magnitude > 5.5), leading to the omission of smaller earthquakes. Although the seismic energy released by microearthquakes is not comparable to that of large earthquakes, it cannot be neglected. Consequently, the Molnar method was employed to account for microearthquakes, which contribute significantly to the seismic energy released in every seismic area. Another crucial aspect is the spatial distribution of seismic energy in tectonically active areas. Microearthquakes can affect a wider area, distributing seismic energy more broadly. A significant limitation of both methods is the unavailability of complete and reliable earthquake catalogs, which are essential for estimating and describing the total strain history of the area.
Materials and methods The Kostrov method (Kostrov, 1974) is the primary method employed in this research to estimate the average seismic strain rate ε_ij (in nanostrains per year) for a set of N earthquakes (Eq. 1): ε_ij = 1/(2μVT) Σ_{n=1}^{N} M_ij^(n). In this equation, μ represents the average shear modulus (3.3 × 10¹⁰ N/m²) in the continental crust, as detailed by Stein and Wysession (Stein and Wysession, 2009). The variable V denotes the spatial volume of the crust affected by the considered earthquakes, while T refers to the time interval covered by the data. M is the seismic moment tensor, where M_0n is the scalar seismic moment of the n-th earthquake and M_ijn are the components of its moment tensor. The volume V is calculated by multiplying the area of the selected block by the thickness of the seismogenic layer in each range. To estimate the magnitude of the seismic strain rate, only the scalar part of the tensor is required. According to researchers such as Jackson et al. (1995), it is generally accepted that the selection of regular blocks in zoning methodologies should be viewed as a means of simplifying and averaging the application of Kostrov's equation. This implies that the dimensions of the selected blocks ought to encompass the length of most faults within the area under study. Moreover, the approach to tectonic zoning should, as far as possible, align with the tectonic features of the area, as illustrated in Figure 1A. However, it is important to note that triangular zoning, while potentially useful, may not align as closely with the area's tectonics and fault lines. Additionally, data processing in triangular zones can be more challenging than in square zoning. In the context of Iran, the longest coseismic faults observed have a maximum length of 125 km, as noted by Berberian et al. (1999). This measurement is associated with the 1997 Zirkuh Qaen earthquake, which had a moment magnitude (M_W) of 7.2. Therefore, for this study, Iran and its surrounding areas were segmented into 336 blocks, each measuring 1° × 1°, as illustrated in the map shown in Figure 1B. However, in line with the perspectives of other researchers such as (Jenny et al., 2004), it is also suggested that the dimensions of the selected blocks should encompass a uniform range, particularly from a geological standpoint. This contrasts with the approach of Ansari and Zamani (Ansari and Zamani, 2014), whose tectonic blocks are considerably larger and do not closely follow fault lines. Zarifi et al. (2014) have also explored this area of study. In our research, we have implemented and compared both of these approaches to understand their respective impacts and outcomes. In this study, we focus on all significant earthquakes with a magnitude greater than 5.5 (M > 5.5) and shallow earthquakes with a depth of less than 25 km to estimate the strain rate using the Kostrov method. To analyze the mechanisms of these earthquakes, we utilized two catalogs: one from Jackson et al. (1995), which includes 86 earthquakes, and the Harvard University Global Centroid Moment Tensor (GCMT) catalog, comprising 551 earthquakes. The latter encompasses earthquakes from two distinct periods: from 1909 to 1975, with moment magnitudes ranging from 6.0 to 7.4, and from 1976 to 2016, with moment magnitudes between 4.3 and 7.7.
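A minimal sketch of how Kostrov's summation can be applied per block is given below, assuming the shear modulus and the 20 km seismogenic thickness quoted in the text; the moment tensors, block area and time span are placeholders, not catalogue values, and the exact bookkeeping in the paper may differ.

```python
import numpy as np

# Hedged sketch of the per-block Kostrov (1974) sum of Eq. 1:
# eps_ij = 1 / (2 * mu * V * T) * sum_n M_ij^(n),
# with mu the shear modulus, V the seismogenic volume of the block and T the
# catalogue duration.
MU = 3.3e10          # shear modulus, N/m^2 (value quoted in the text)
THICKNESS = 20e3     # seismogenic thickness, m (20 km, as assumed in the study)

def kostrov_strain_rate(moment_tensors_Nm, block_area_m2, years):
    """Average strain-rate tensor of one block, in strain per year."""
    volume = block_area_m2 * THICKNESS
    m_sum = np.sum(np.asarray(moment_tensors_Nm), axis=0)   # sum of 3x3 tensors
    return m_sum / (2.0 * MU * volume * years)

# Two hypothetical shallow earthquakes in a ~1 deg x 1 deg block over 100 years.
block_area = (111e3) ** 2                      # rough area of a 1-degree cell, m^2
m1 = np.diag([1.0e19, -1.0e19, 0.0])           # placeholder moment tensor, N*m
m2 = np.diag([5.0e18, -5.0e18, 0.0])
print(kostrov_strain_rate([m1, m2], block_area, 100.0) * 1e9, "nanostrain/yr")
```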
Due to the small number of earthquakes with a depth of more than 30 km in Iran, our methodology has been specifically tailored to the shallow crust, limited to depths of less than 25 km. Consequently, certain significant seismic events, such as the 2013 Saravan Sistan earthquake (M_W 7.7; depth ranging from 70 to 95 km, as per GCMT data), were not included in our data processing. This earthquake, characterized by a normal faulting mechanism, was attributed to the extensional forces on the Makran plate, which is known for its increasing subduction angle beneath the crust (Nemati, 2019; Derakhshani et al., 2023). In this study, the depth of the seismogenic layer in Iran was assumed to have a maximum of 20 km. This assumption was based on data from local seismic networks and waveform modeling, with specific references to Nemati et al. (2011) for the Alborz region, Hatzfeld et al. (2003) for the Zagros area, and Berberian et al. (1999) for eastern Iran. Figure 1. (A) Map illustrating the mechanisms of earthquakes in Iran, as studied in this research. The map delineates 14 tectonic zones, identified based on seismic states, fault lines, and their respective mechanisms. The zones are labeled as follows: WA, Western Alborz; EA, Eastern Alborz; AZ, Azarbaijan; KP, Kopeh Dagh; TB, Tabas; NZ, North Zagros; WZ, Western Zagros; CZ, Central Zagros; EZ, Eastern Zagros; SI, Sistan; KB, Kuhbanan-Bam; DB, Dasht-e-Bayaz; DR, Doroune; and MA, Makran. (B) Map showing seismic strain estimations based on 1° × 1° blocks. In this map, the color coding represents different strain rate ranges: brown for 0.01 to 0.1, light blue for 0.1 to 1, dark blue for 1 to 10, light green for 10 to 100, and dark green for 100 to 1,000 nstrain/yr. Areas without significant earthquake activity or with strain rates less than 0.01 nstrain/yr are marked in red. The error margins for these estimations are sourced from (Hessami et al., 2003). The earthquake data for the 336 small blocks were individually extracted using a Fortran program in a Linux environment. Subsequently, data processing for each block was conducted separately. The editing and processing stages utilized Excel and Origin software, respectively. Furthermore, all maps pertinent to this research were created using GMT software, as illustrated in Figure 1. In this research, to estimate seismic strain rates, we employed two different zoning approaches: 1) For a general investigation based on tectonic zones, we considered the earthquakes that have occurred, information about the tectonics of the study area, the mechanisms of the faults, the focal mechanisms of the earthquakes, seismic states, fault trends, and geological evidence presented in various studies, e.g., Zagros (Navabpour and Barrier, 2012), Kopeh Dagh (Hollingsworth et al., 2006), Alborz (Rashidi, 2021), and Sistan (Ezati et al., 2022b, 2023; Rashidi et al., 2022), resulting in fourteen defined tectonic zones. 2) For a detailed investigation, we segmented the study area into 336 blocks of 1° × 1°. These tectonic zones encompass several 1° × 1° blocks; the blocks within different tectonic zones were examined, and then the zones were compared. In other research, e.g., Raeesi et al. (2017) and Masson et al. (2005), larger blocks (2° × 2°) have been used for seismicity analysis; however, as more detailed information has become available in recent years, we have opted for 1° × 1° blocks to enable a more detailed analysis of seismic stress and strain.
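A minimal sketch of the block bookkeeping is given below: each epicentre is assigned to a 1° × 1° cell before the per-block processing. The geographic bounds and events are placeholders, and the paper's own extraction was done with a Fortran program; this Python version is only illustrative.

```python
import numpy as np

# Assign each event to a 1 deg x 1 deg cell so the Kostrov sum can be formed
# block by block. Bounds below roughly span Iran and are assumptions.
LON_MIN, LON_MAX = 44.0, 64.0
LAT_MIN, LAT_MAX = 24.0, 40.0

def block_index(lon, lat):
    """Return (column, row) of the 1-degree block containing an epicentre."""
    return int(np.floor(lon - LON_MIN)), int(np.floor(lat - LAT_MIN))

events = [(57.3, 33.9), (45.8, 38.4), (53.1, 28.2)]   # placeholder (lon, lat) pairs
blocks = {}
for lon, lat in events:
    blocks.setdefault(block_index(lon, lat), []).append((lon, lat))
print(blocks)
```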
Masson et al. (2005) noted more significant variations in the Alborz region compared with the Zagros region. Upon examining the map in Figure 1B, which covers Zagros, Alborz, and eastern Iran, it is evident that the area of Dasht-e-Beyaz and Abiz in the east of Iran (experiencing a maximum strain rate of 1,000 nstrain/yr) undergoes more deformation than both the Alborz and Zagros regions. In Alborz, the level of deformation is observed to be higher than in Zagros. The deformations in Zagros are primarily confined to the crustal volume that generated the 1972 Qir-Karzin earthquake (M_S 6.9), as documented by (Dewey and Grantz, 1973), and the 1977 Khorgo Bandar Abbas earthquake (M_S 7.0), as described by (Berberian and Papastamatiou, 1978). This is indicated by the light green coloration in Figure 1B. To ensure that smaller earthquakes (M < 5.5) are not overlooked, this study employs the Molnar method (Molnar, 1979) as a secondary approach. This method, which also incorporates smaller earthquakes, is used to estimate seismic strain using Eqs 2-4. In these equations, the coefficients a and b are those of the Gutenberg-Richter relation (Gutenberg and Richter, 1956) (Eq. 3), while c and d are the coefficients of the Hanks and Kanamori (Hanks and Kanamori, 1979) relation between M_W and M_0 (Eq. 4), with values of 1.5 and 16.05, respectively. In Eq. 2, M_0max refers to the largest seismic moment among the earthquakes in a selected block. For the application of the Molnar method, a comprehensive catalog that includes microearthquakes is necessary. To this end, the seismic catalog of the International Institute of Seismology and Earthquake Engineering of Iran (IIEES), which contains data on 23,331 earthquakes, was chosen. To standardize and harmonize the magnitudes of the earthquakes in this catalog, which primarily comprises m_b, M_S, and M_L magnitudes, the relationships proposed by Nemati and Tatar (Nemati and Tatar, 2015) (Eq. 5) were utilized in conjunction with Eq. 4. The details are illustrated in Figure 2. To accurately estimate strain using the Molnar method, it is necessary to first determine the seismic parameters. Table 1 presents these parameters for the 14 tectonic areas of Iran, as delineated in Figure 1A. A key piece of information in this table is the magnitude of completeness (M_C) for the data in each area. The M_C can be readily identified from the Gutenberg-Richter (Gutenberg and Richter, 1956) diagram, where it corresponds to the first bend in the graph (Woessner and Wiemer, 2005). As illustrated in Figures 2A-C, the areas shaded in light and dark blue represent regions where the strain estimations of the two methods are similar. Notably, Molnar's method yields a higher strain rate value for the Dasht-e-Beyaz area in eastern Iran. Conversely, in the Northern Zagros region, this method estimates a lower strain rate, possibly due to the prevalence of microearthquakes as opposed to larger seismic events in this area.
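The catalogue preparation described above (converting moment magnitudes to scalar moments with the Hanks and Kanamori relation and reading the Gutenberg-Richter parameters above the completeness magnitude off the frequency-magnitude curve) can be sketched as follows; the synthetic magnitudes stand in for the IIEES catalogue, and the simple least-squares fit is only one of several ways to estimate b.

```python
import numpy as np

# Convert Mw to scalar moment with log10(M0) = 1.5*Mw + 16.05 (dyne*cm), and
# fit the Gutenberg-Richter relation log10 N(>=M) = a - b*M above an assumed
# completeness magnitude Mc.
def moment_from_mw(mw):
    return 10.0 ** (1.5 * np.asarray(mw) + 16.05)      # dyne*cm

def gutenberg_richter_fit(mags, mc):
    """Least-squares estimate of (a, b) from cumulative counts above Mc."""
    mags = np.asarray(mags)
    bins = np.arange(mc, mags.max(), 0.1)
    counts = np.array([(mags >= m).sum() for m in bins])
    slope, intercept = np.polyfit(bins[counts > 0], np.log10(counts[counts > 0]), 1)
    return intercept, -slope

rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(1.0 / np.log(10), size=2000)   # synthetic, b close to 1
a, b = gutenberg_richter_fit(mags, mc=4.0)
print(f"a = {a:.2f}, b = {b:.2f}, M0(Mw 6.0) = {moment_from_mw(6.0):.2e} dyne*cm")
```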
An important limitation of both methods is magnitude transformation, because the magnitudes in the utilized catalogs must be unified. Owing to fundamental differences between magnitude scales, magnitude transformation can affect the precision of the results, although this is not inevitable. Regarding the differences between the Kostrov and Molnar methods, it is crucial to note that the Kostrov method, which uses large earthquakes, accounts for the majority of the seismic energy released in a seismic area. It is significant that the seismic energy from a single large earthquake can equate to the total energy of a complete sequence of aftershocks from that earthquake in a seismic cycle (Nemati, 2014). Thus, the Kostrov method is more comprehensive in capturing the majority of the localized seismic energy, while the Molnar method is useful for considering the distributed seismic energy in a tectonic area. A notable advantage of the Kostrov method is that it uses the seismic moments of large earthquakes obtained from mechanisms calculated mainly by waveform modeling. Waveform modeling uses teleseismic waves, which do not accurately capture the anisotropy of the crust, unlike microearthquakes. Therefore, the Kostrov method could better reflect the energy of seismic sources. A comparison of the two methods regarding input data shows that the Kostrov method is preferable, as it can use more reliable and sufficiently long earthquake catalogs compared with the input data required for the Molnar method. Recording microearthquakes requires a sufficiently dense seismic network, which was unavailable in the pre-instrumental era. For that period, we have more or less reliable data for large earthquakes but not for microearthquakes. Assuming coaxiality between stress and strain, the direction of the compressional and tensional stresses in the Earth's crust within a seismic area can be determined based on the orientation of the pressure (P) and tension (T) vectors of earthquakes in that region. Figure 2. (A) Estimation of the seismic strain rate using the Kostrov method; (B) representation of both small and large earthquakes employed in the Molnar method; and (C) seismic strain rate estimation across Iran's tectonic blocks using the Molnar method, based on independently calculated components of the moment tensors of earthquakes. In panel (A), the strain rate ranges are depicted with different colors: light blue for 0 to 1, dark blue for 1 to 10, and green for 10 to 100 nstrain/yr. Similarly, in panel (C), the color coding is as follows: brown for 0.01 to 0.1, light blue for 0.1 to 1, dark blue for 1 to 10, light green for 10 to 100, and dark green for 100 to 1,000 nstrain/yr, representing both contractional and extensional strains.
TABLE 1. Seismic parameters of the 14 tectonic areas in Iran for strain estimation using the Molnar method. This table includes several key columns: the name of each tectonic block, the total number of earthquakes recorded in that block, the absolute magnitude derived from the Gutenberg-Richter diagram (Gutenberg and Richter, 1956), the specific seismic parameters associated with each block, the standard deviation of the Gutenberg-Richter equation as applied to each block, the standard error of the regression (fitting) for the equation, the maximum recorded earthquake magnitude in each block, and the maximum scalar moment of the largest earthquake in each block. To determine these stress directions, two sets of calculations are necessary: 1) the resultant compressive and tensile stresses of the earthquakes in each square block; and 2) the directions of these stresses, estimated and compared for each earthquake individually, as delineated in Eqs 6, 7 (Stein and Wysession, 2009). In these equations, S represents the strike direction, D denotes the dip, and R signifies the slip direction or rake of the seismic fault; the tension (T) and pressure (P) vectors are defined in terms of these parameters by Eqs 6 and 7. In the Zagros and eastern Iran regions, the orientation of the pressure vectors aligns with the direction of convergence between the Arabian and Eurasian plates (Ghanbarian et al., 2021). According to the research by Vernant et al. (2004), this convergence occurs at a rate of approximately 21 mm/year in southern Iran, predominantly in a north-northeast direction (as shown in Figures 3A, B). Our study has conducted a comparative analysis of the pressure and tension vectors across Iran (illustrated in Figure 3A), which are mapped using a 1° × 1° block arrangement. These vectors are juxtaposed with the geodetic strain vectors reported by (Raeesi et al., 2017), which are depicted using the same block arrangement. Our comparison reveals that, except for the Makran and Alborz regions, there is a notable correlation between the orientations of the vectors in the two maps, indicating a general consistency in the directional alignment of tectonic movements and strain distribution across these regions. Crustal movements To estimate the various components of the Earth's crust displacement velocity, it is essential to analyze the different components of the seismic moment tensors of the earthquakes (M_ij in Eq. 1). The six independent components of this tensor can be calculated using Eqs 8-13, as described by Jackson et al. (1995).
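Before moving on to the crustal-movement estimates, the P- and T-axis construction of Eqs 6 and 7 can be sketched as follows: the fault normal and slip vector are built from strike, dip and rake and then combined in the usual way. The north-east-down (Aki and Richards) convention used here is an assumption about the exact formulation followed in the paper.

```python
import numpy as np

# Build the fault normal n and slip vector u from strike (S), dip (D), rake (R)
# in north-east-down coordinates, then T ~ (u + n)/sqrt(2), P ~ (u - n)/sqrt(2).
def p_t_axes(strike_deg, dip_deg, rake_deg):
    s, d, r = np.radians([strike_deg, dip_deg, rake_deg])
    n = np.array([-np.sin(d) * np.sin(s), np.sin(d) * np.cos(s), -np.cos(d)])
    u = np.array([np.cos(r) * np.cos(s) + np.cos(d) * np.sin(r) * np.sin(s),
                  np.cos(r) * np.sin(s) - np.cos(d) * np.sin(r) * np.cos(s),
                  -np.sin(r) * np.sin(d)])
    t_axis = (u + n) / np.sqrt(2.0)
    p_axis = (u - n) / np.sqrt(2.0)
    return p_axis, t_axis

# Example: a pure thrust fault striking east-west and dipping 45 degrees gives
# a horizontal, north-south P axis and a vertical T axis.
p, t = p_t_axes(90.0, 45.0, 90.0)
print("P:", np.round(p, 3), "T:", np.round(t, 3))
```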
These equations utilize three key parameters: the strike (S), dip (D), and slip direction or rake (R) of the seismic fault. These parameters were sourced from (Jackson et al., 1995) and the GCMT catalog. Since the internal strain within the Earth's crust partially manifests as displacement between crustal blocks, it is feasible to estimate the displacement velocity of the crust in various directions. This can be achieved by using the components of the strain tensor derived from the moment tensors of earthquakes, as outlined by (Jackson and McKenzie, 1988). They also established equations linking the displacement velocity of the crust in different directions to the components of both the strain tensor and the moment tensor of earthquakes. Once the independent components of the earthquake moment tensor are determined using the aforementioned equations, the displacement speed of the crust in the horizontal and vertical directions can be estimated using Eqs 14, 15, as suggested by (Pondrelli et al., 1995). In these equations, x, y, and z represent the length, width (both equal to one degree for regular blocks), and thickness of the selected block, respectively. Here, the thickness corresponds to that of the seismogenic layer for which the velocity is being estimated. Vh and Vz denote the horizontal and vertical rates of crustal displacement within the selected block, as illustrated in Figure 4. Since the dimensions of the blocks in Figures 4A, B are oriented in the north and east directions, x, y, and z correspond to the north, east, and vertical directions, respectively. The maps presented in Figure 4 indicate that blocks #204 and #295 exhibit the highest rates of horizontal displacement. Specifically, block #204 was the site of the significant Tabas Golshan earthquake (M_S 7.4) and its subsequent aftershocks (Berberian, 1979). This area shows pronounced horizontal displacement rates. Additionally, block #295 demonstrates the maximum horizontal displacement speed, which can be attributed to the compressive forces generated by the earthquakes in this region, as depicted in Figure 4A. Berberian (Berberian, 1995) analyzed the active tectonics in the Zagros region, attributing the area's geological features to activity along two primary types of faults: the main longitudinal thrust faults and the transverse vertical faults. The longitudinal thrusts play a significant role in the region's tectonic structure (Ghanbarian and Derakhshani, 2022). Additionally, the transverse vertical faults, exemplified by the Kazeroun and Sabzpoushan faults, contribute to the area's geological complexity. The High Zagros region, in particular, has experienced uplift due to activity along the High Zagros fault, evidenced by historical seismic events such as the earthquake of 18 November 1226 AD, which had a magnitude of M_W = 6.4 and a maximum intensity of I0 = VII.
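Returning to the moment-tensor decomposition used above, a hedged sketch of the step from (S, D, R) to the six independent components of Eqs 8-13 is given below. The expressions are the standard double-couple formulas in north (x), east (y), down (z) coordinates; the paper's exact sign and axis conventions may differ.

```python
import numpy as np

# Standard double-couple moment-tensor components from strike, dip and rake
# (Aki & Richards convention, assumed here), scaled by the scalar moment m0.
def moment_tensor(strike_deg, dip_deg, rake_deg, m0=1.0):
    s, d, r = np.radians([strike_deg, dip_deg, rake_deg])
    mxx = -m0 * (np.sin(d) * np.cos(r) * np.sin(2 * s) + np.sin(2 * d) * np.sin(r) * np.sin(s) ** 2)
    mxy =  m0 * (np.sin(d) * np.cos(r) * np.cos(2 * s) + 0.5 * np.sin(2 * d) * np.sin(r) * np.sin(2 * s))
    mxz = -m0 * (np.cos(d) * np.cos(r) * np.cos(s) + np.cos(2 * d) * np.sin(r) * np.sin(s))
    myy =  m0 * (np.sin(d) * np.cos(r) * np.sin(2 * s) - np.sin(2 * d) * np.sin(r) * np.cos(s) ** 2)
    myz = -m0 * (np.cos(d) * np.cos(r) * np.sin(s) - np.cos(2 * d) * np.sin(r) * np.cos(s))
    mzz =  m0 * np.sin(2 * d) * np.sin(r)
    return np.array([[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]])

# Example: unit-moment thrust event (strike 90, dip 45, rake 90) gives a
# vertical tension axis and a north-south pressure axis, consistent with the
# P/T sketch above.
print(np.round(moment_tensor(90.0, 45.0, 90.0), 3))
```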
Based on the current position of Paleozoic rocks along the High Zagros belt, Berberian (Berberian, 1995) estimated the cumulative vertical displacement along this fault to be more than 6 km (Berberian, 1981). Although the mechanisms of earthquakes do not confirm the thinning of the Earth's crust in the subsiding Zagros foreland, Berberian (Berberian, 1981) has pointed out the uplift of the High Zagros since the Lower Miocene along the Zagros fault. Figure 4. Estimation of the displacement speed of the crust in the main horizontal (A) and vertical (B) directions using the independent components of the moment tensors of earthquakes. In panel (A), red, brown, light blue, dark blue and green colors indicate speed ranges of 0.001 to 0.01, 0.01 to 0.1, 0.1 to 1, 1 to 10, and 10 to 12.6 mm per year, respectively; in panel (B), red, brown, light blue and dark blue colors indicate speed ranges of 0.001 to 0.01, 0.01 to 0.1, 0.1 to 1.0, and 1 to 8.7 mm per year, and the yellow color indicates subsidence, between 0.001 and 9.1 mm per year. In these maps, the gray blocks are without earthquakes. According to Figure 4B, red, brown, and blue colors can be interpreted as crustal uplift or subduction, and the yellow color can be interpreted as subsidence or thinning of the Earth's crust in that area. Therefore, the uplifted Zagros area is associated with uplift or subduction of the crust, and the Zagros foredeep is associated with subsidence or thinning of the Earth's crust, which is in fair agreement with the tectonics of these areas. Other subsidence can also be seen on the western coast of Makran, in eastern Jazmurian, northern Lut, the southern coasts, and the central subsidence of the Caspian Sea, which is consistent with the tectonics of the mentioned areas. Thinning of the crust in the areas of Kopeh Dagh, western Alborz, and Talash, where we expect uplift and crustal deformation, is not consistent with the tectonics of those areas. Discussion Tectonic processes such as crustal thinning and subsidence of certain zones are not precisely timed. Using the methods in this research, we can only estimate stress, strain, and crustal velocity. While large blocks are beneficial for stress and strain estimation due to their compatibility with the area's tectonics, they introduce more errors due to the necessity of averaging. The average stress, strain, and velocity values in blocks are attributed to specific points in differently shaped blocks. This averaging assumes uniformity across a block, which is not entirely accurate. An optimal estimation balances the influences of tectonics, topography, and uniformity errors. The estimated strain rate of 0.01-411 nstrain/yr in this research contrasts with the findings of Tesauro et al. (2006) for Central Europe, likely due to differing tectonic regimes and data ranges. This discrepancy also reflects the variance in convergence rates between the Iran-Arabia and Africa-Europe regions.
Comparisons with other studies in Iran (Ansari and Zamani, 2014; Zarifi et al., 2014; Raeesi et al., 2017) reveal inconsistencies in the estimated strain rates, likely due to different block classifications, earthquake counts, calculation methods, and data ranges. The tectonic areas in this study were chosen from a seismotectonic perspective, considering factors such as fault directions, dominant mechanisms, and earthquake distributions. Notably, the direction of mountain ranges such as Alborz and Zagros has not significantly changed, although Alborz shows a sinusoidal form. Crustal deformation has been investigated using seismic stress and strain analysis in various studies, such as tectonic stress and the spectra of seismic shear waves from earthquakes (Brune, 1970); horizontal stress orientations (Lund and Townend, 2007); active crustal deformation in two seismogenic zones of the Pannonian region (Bus et al., 2009); the crustal deformation map of Iran (Khorrami et al., 2019); stress-strain characterization of seismic source fields (Jordan and Juarez, 2021); evolution of the stress and strain field and seismic inversion of fault zones (Khoshkholgh et al., 2022); active deformation patterns in the Northern Birjand Mountains, Iran (Ezati et al., 2022a); tectonic evolution of fault splays in the East Iran Orogen (Rashidi et al., 2023b); and seismic strain and seismogenic stress regimes in the crust of the southern Tyrrhenian region (Neri et al., 2003). The significant variation in strain rate estimations across different zoning methods suggests that intermediate zoning might provide a more accurate estimation. However, the zoning shape should be consistent with the area's topography, and the block size must take into account the length of coseismic faults. Incorporating small earthquakes (M < 5.0) in the strain estimation shows their significant contribution, challenging the convention of focusing only on larger earthquakes. This is the first time seismic strain in Iran has been estimated using Molnar's method, which accounts for small earthquakes. GPS data show interseismic deformation, as in Figure 8 of (Zarifi et al., 2014), while the focal mechanisms of earthquakes show the seismic strain caused by the earthquakes; hence, these two parameters are different from each other. GPS data processing is a common method for estimating shortening of the Earth's crust. Discrepancies between velocity rates estimated from seismic data and actual tectonic movements in Iran, particularly in the Zagros foreland subsidence, may be attributed to salt layers in the sedimentary cover. The research of Talebian and Jackson on Zagros (Talebian and Jackson, 2004) suggests that most earthquake foci are within the sedimentary cover, possibly influenced by salt layers. This might explain the vertical displacement in this area.
In regions where crustal thickening occurs, reverse faults with corresponding mechanisms are expected. Consequently, earthquake mechanisms, maximum stress directions, and seismic strains must be perpendicular to these reverse faults. Thus, the crustal thickening in the High Zagros region is attributed to the activity of the High Zagros reverse fault, as evidenced by historical seismic events such as the earthquake of 18 November 1226 AD, which had a magnitude of M_W = 6.4 and a maximum intensity of I0 = VII. In this region, the seismic strain rate is high, and the direction of seismic strain is perpendicular to the High Zagros reverse fault. The lithologic units of the High Zagros differ from those of the Zagros foreland. The latter includes salt layers in the sedimentary cover, which undergo ductile deformation. Consequently, in the Zagros foreland, deformation predominantly manifests as folding rather than fracturing (earthquakes); furthermore, in this region the seismic strain rate values are low, and the magnitudes of earthquakes are generally moderate to low. This is because the earthquakes occur at a depth of about 10 km; as their waves move toward the surface and encounter salt domes, their energy is reduced. Therefore, there is no crustal thickening in the Zagros foreland, and the return period of the earthquakes is very long. Thus, crustal thickening in the High Zagros results from high rates of seismic strain, while crustal thinning in the Zagros foreland is due to low rates of seismic strain. Conclusion In conclusion, this study offers valuable insights into the diverse tectonic and seismic dynamics of Iran. By categorizing the region into seven distinct parts based on the directions of tension, pressure, and crustal displacement velocities, we gain a deeper understanding of its complex geological landscape. These regions encompass the northeastern and southwestern Zagros belt, the northeastern and southwestern sectors of eastern Iran, the eastern and northeastern Kopeh Dagh, as well as the eastern and middle Alborz, along with the Talash region, which spans western and eastern Alborz, and Azerbaijan. This comprehensive categorization serves as a crucial tool for unraveling the intricate interplay of tectonics and seismicity in this geologically complex region.
6,438.6
2024-06-20T00:00:00.000
[ "Geology" ]
Exploration and Research of a Human Identification Scheme Based on Inertial Data Identification based on inertial data is not limited by space and offers high flexibility and concealment. Previous research has shown that inertial data contain information related to behavior categories. This article discusses whether inertial data contain information related to human identity. A classification experiment based on neural-network feature fitting achieves 98.17% accuracy on the test set, confirming that inertial data can be used for human identification. The accuracy of the classification method without feature extraction on the test set is only 63.84%, which further indicates the need to extract identity-related features from the changes in the inertial data over time. In addition, a study of classification accuracy based on statistical features discusses the effect of different feature extraction functions on the results. The article also discusses dimensionality reduction and visualization of the collected data and the extracted features, which helps to intuitively assess the existence of features and the quality of different feature extraction schemes. Introduction Inertial data are data obtained by inertial sensors (gyroscopes, accelerometers, magnetometers), including triaxial acceleration, triaxial angle or angular velocity, attitude angles, etc. With the development of MEMS (Micro-Electro-Mechanical System) technology, inertial data can be measured by a magnetometer and an IMU (Inertial Measurement Unit), which is a combination of gyroscopes and accelerometers. Magnetometers and IMUs have been widely used in wearable sensors due to their small size, light weight, low power consumption, and portability [1,2]. Because an individual's posture during movement is unique, inertial data can be used to distinguish the current movement state of the individual [3,4], and such data have become more and more widely used in medical rehabilitation, virtual reality, somatosensory games and other fields [5,6]. The unique movement characteristics of individuals can also be used as biometrics to identify them. In the field of intelligent surveillance, compared with face recognition, fingerprint recognition, and iris recognition, which are limited by resolution, space, and distance, identity recognition based on inertial data has higher concealment and is difficult to prevent [7,8]. At the same time, iOS- or Android-based smartphones, smart bracelets, smart watches, etc., integrate magnetometers and IMUs. With the improvement of computing performance, they are capable of acquiring and processing individual motion data, thereby analyzing and identifying the current motor behavior and physiological status of users. Table 1. Summary of typical methods of human identification based on motion characteristics.
Method Category | Data Sources | Feature Extraction | Advantages | Disadvantages: Joint position changes [16] | Position of joints in the image | Statistics of positions | Simple data processing | Complex image acquisition method and low accuracy. Extraction of limb angle information from images [17] | Image sequence | Analysis of the change in silhouette width | No human body required, high accuracy | A still background is required. Recognition using area-based metrics [18] | Image sequence. Description of Research Content and Dataset In order to study the possibility of human identity recognition based on inertial data, the first question is whether the data themselves are distinguishable across human identities. Dimensionality reduction analysis was performed on the data to examine their distinguishability from the perspective of data visualization, and a classification algorithm was applied to further verify it. The experimental results show that the data themselves are indistinguishable, so the second part attempts to find methods to extract and classify features related to human identity. This part verifies whether identity recognition is feasible for the selected inertial data set and which feature extraction scheme should be selected. Before the data preprocessing and the experimental details are introduced, the collected inertial data and the data acquisition scheme are described. In the experiment, 10 subjects were selected to collect walking data using the inertial sensors built into their mobile phones. The mobile phone was tied to the leg as shown in Figure 1a. The collected data include the X, Y, Z three-dimensional outputs of the gyroscopes, accelerometers and magnetometers. The gyroscopes output angular velocity information, the accelerometers output acceleration information, and the magnetometers output attitude angle information. The phyphox software [20] was used for data collection, with a sampling frequency of 100 Hz and a sampling time of about 30 s. The software is a toolkit for physics experiments on mobile phones provided by RWTH Aachen University, and it can be downloaded from the application gallery on the phone. Figure 1b,c show the software operation interface and the data acquisition interface, respectively. In the experiment, the collected data set contains the inertial data of 10 people in a total of 34,440 groups, with 9 values in each group. Data Preprocessing The inertial data collected by the mobile phone sensors contain not only the feature information of each pedestrian but also various noise interferences [21]. In order to accurately extract the required features in the subsequent recognition process, data preprocessing is conducted [22]. The preprocessing steps include noise elimination and normalization. In the experiment, moving average filtering [21] was applied, which is easy to implement and has high robustness. The arithmetic average of the data at each sampling time, the data at the previous two sampling times, and the data at the next two sampling times is taken as the processed sample at that time: X(k) = (1/N) Σ_{i=−2}^{2} x(k + i), with N = 5, (1) where k represents one sampling time, N represents the number of sampling points averaged, and X(k) represents the filtered data. Different kinds of inertial data have different ranges, and the variation of the data reflects biological characteristics better than the data range itself. The normalization method [23] is used to map all data to the range [0, 1], normalizing the feature scale and making possible features distinctive: X_s = (X − X_min)/(X_max − X_min), (2) where X is the data at a certain time, X_min is the minimum value of the data in the continuous time range, X_max is the maximum value, and X_s is the normalized data. All data are processed for noise elimination and normalization before further processing; subsequent research and experiments are performed on this preprocessed data. Verification of Inertial Data Separability The purpose of the data separability study is to confirm that biometric information related to identity may be extracted from the time series of the data, but not from the data at a single sampling time. A dimensionality reduction method, Principal Component Analysis (PCA), together with visual analysis, was applied to the data directly, and a classification algorithm, the K-Nearest Neighbor (KNN) algorithm, was applied without considering the time series characteristics, to further verify that the features, if they exist, are contained in the time series.
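Returning for a moment to the preprocessing steps above, a minimal Python sketch is shown below, assuming the 5-point centred moving average of Eq. (1) and the min-max normalisation of Eq. (2); the synthetic signal is only a stand-in for one inertial channel.

```python
import numpy as np

def moving_average(x, n=5):
    """Centred moving average over n samples (n assumed odd), cf. Eq. (1)."""
    return np.convolve(x, np.ones(n) / n, mode="same")

def min_max_normalise(x):
    """Map a signal to [0, 1], cf. Eq. (2)."""
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0.0, 10.0, 300)) + 0.1 * rng.normal(size=300)
clean = min_max_normalise(moving_average(raw))
print(clean.min(), clean.max())   # 0.0 and 1.0 after normalisation
```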
PCA-Based Data Separability Verification The PCA algorithm is an effective method for reducing the dimension of a feature space, first proposed by K. Pearson in 1901 [24]. It is an important statistical method for extracting fewer features from multiple features while still accurately expressing the object's characteristics [25]. It projects high-dimensional data into a low-dimensional space and forms a group of new principal components, which are independent of each other and have no redundancy, thereby achieving dimension reduction. It is often used for feature extraction and data visualization. The inertial data collected in the experiment contain three-axis acceleration, angle, and attitude angle information; that is, each set of data is a 9-dimensional vector. The data were reduced to two dimensions with the PCA algorithm and then visualized. If the inertial data are separable, different classes of data should be located at different positions on the plane, and the classes can be distinguished according to position. The dimensionality reduction process can be simplified as Y_{N×2} = X_{N×9} A_{9×2}, where N represents the number of groups of data and each group of raw data contains nine elements, A is the transformation matrix that needs to be calculated in the PCA algorithm, and X and Y represent the data before and after the transformation, respectively. The calculation of the transformation matrix A can follow the method in [26]. Figure 2 shows the visualization results of the PCA dimensionality reduction on the inertial data of three people, and the visualization results for the inertial data of all 10 people. The coordinate axes have no actual physical meaning, and different colors correspond to different people's data.
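The separability check can be sketched as follows with scikit-learn's PCA, projecting the 9-dimensional samples onto two principal components and colouring the points by subject; random data stands in for the real recordings, so the plot will not reproduce Figure 2.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Placeholder data: 3000 samples of the 9-D inertial vector and 10 subject IDs.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 9))
labels = rng.integers(0, 10, size=3000)

# Project to two principal components and visualise, one colour per subject.
Y = PCA(n_components=2).fit_transform(X)
plt.scatter(Y[:, 0], Y[:, 1], c=labels, s=2, cmap="tab10")
plt.title("PCA projection of per-sample inertial data")
plt.show()
```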
From the visualization results of the PCA dimensionality reduction, the inertial data of different people are mixed after dimensionality reduction and are difficult to distinguish, so the data themselves are not separable according to the visualization results. KNN-Based Data Separability Verification In order to further verify the indistinguishability of the raw data and to explain the necessity of feature extraction, the KNN classification algorithm was used for a data separability analysis. The principle of KNN is to find the K training samples closest to the testing sample according to a certain distance measure, and then predict the category of the sample based on the information of these K "neighbors". Generally, the category occurring most frequently among the K samples is taken as the prediction result [27]. In the experiment, the Euclidean distance is used to describe the distance between two samples. Each sample is a vector with nine elements. The Euclidean distance of samples Y_i and Y_j is defined as d(Y_i, Y_j) = sqrt(Σ_{k=1}^{9} (Y_{i,k} − Y_{j,k})²), where Y_{i,k} represents the k-th element of sample Y_i. The first 60% of the entire data set is used as the training set and the last 40% as the test set; this division of the data set is also used in the subsequent studies. The number of "neighbors" in the KNN algorithm is varied from 1 to 100. The classification accuracy is defined as the number of correctly classified test samples divided by the total number of test samples, and the results are shown in Figure 3. From the experimental results, when the number of "neighbors" is five, the classification accuracy is highest, at 0.6384. According to the data reduction and visualization results of PCA, the inertial data, without considering the time series changes, are not distinguishable and cannot be used to judge a person's identity.
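The per-sample KNN baseline described above can be sketched as follows, with a chronological 60/40 split, Euclidean distance and a sweep over the number of neighbours; the random stand-in data will of course not reproduce the 63.84% figure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: N x 9 preprocessed samples and subject labels for 10 people.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 9))
y = rng.integers(0, 10, size=3000)

# Chronological split: first 60% for training, last 40% for testing.
split = int(0.6 * len(X))
X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]

for k in (1, 5, 20, 100):
    clf = KNeighborsClassifier(n_neighbors=k, metric="euclidean").fit(X_train, y_train)
    print(f"k = {k:3d}  accuracy = {clf.score(X_test, y_test):.4f}")
```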
After applying the KNN classification algorithm to the inertial data, the highest accuracy obtained is only 63.84%, further verifying the indistinguishability of the raw data. This means that, if identity-related features exist, they should be extracted from the changes of the data over time. Classification Experiments Based on Feature Extraction If the changes between the inertial data at different times are not considered and identification is based only on the data at each sampling instant, the obtained classification accuracy is low. This part serves as a control experiment and shows the classification results after extracting information from a period of inertial data. Feature Extraction Based on Statistical Data and Identity Identification Based on the SVM Algorithm This section introduces the experimental results of extracting statistical features and using the SVM algorithm for classification. Since the data collected by the sensor are inertial data in continuous time, the sliding window technique [28] is used to segment the data, and statistical features are extracted from the data in each window. The number of overlapping sampling points when the window slides is set to 1, and the effect of the window size on the classification results was studied in the experiment. Eighteen statistical features are extracted from each window of data.
The main concern in the experiment is not whether the most suitable statistical features are selected, but to verify the separability of the data based on the selected statistical features. The statistics of the data in each sliding window are calculated in the time domain and in the frequency domain and used as statistical features. In the time domain, the average, variance, standard deviation, maximum, minimum, number of zero crossings, the difference between the maximum and minimum values, and the mode [29,30] are calculated and used as time-domain features. In the frequency domain, the Fast Fourier Transform (FFT) algorithm is applied to obtain frequency-domain information and to extract the DC component, average amplitude, amplitude variance, amplitude standard deviation, amplitude deviation, amplitude kurtosis, shape average, shape variance, shape standard deviation and shape kurtosis [31,32]. For the extracted statistical features, the SVM algorithm is used for classification. SVM is a supervised classifier whose basic principle is to find the separating hyperplane with the largest geometric margin between different classes of data [33]. The SVM algorithm is widely used in pattern recognition problems such as portrait recognition and text classification [34]. A kernel function is usually used in the SVM algorithm to map linearly inseparable data into another space in which they become linearly separable; commonly used kernel functions include the linear, polynomial and radial basis kernel functions [35,36]. In the experiment, these three kernel functions were each used to conduct SVM classification experiments, with the window size set from half the sampling frequency to twice the sampling frequency; the experimental results are shown in Figure 4. Different kernel functions are used to avoid the influence of whether the statistical features happen to be linearly separable, so that attention stays on the distinguishability of the statistical features themselves. The maximum accuracy of each kernel function on the test set is also marked in Figure 4.
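As a rough illustration of the sliding-window feature extraction and the kernel comparison, the sketch below computes a subset of the listed time- and frequency-domain statistics per window and scores an SVM with each kernel. It assumes X holds consecutive nine-channel samples and y the corresponding person labels; the window step, the particular statistics chosen and the omission of the remaining features (e.g. zero-crossing count, mode, shape statistics) are simplifications, not the paper's exact 18-feature set.

import numpy as np
from sklearn.svm import SVC

def window_features(X, win, step=1):
    """Return one statistical feature vector per sliding window of X (shape (N, 9))."""
    feats = []
    for start in range(0, len(X) - win + 1, step):
        w = X[start:start + win]
        spec = np.abs(np.fft.rfft(w, axis=0))          # per-channel amplitude spectrum
        feats.append(np.concatenate([
            w.mean(axis=0), w.var(axis=0), w.std(axis=0),
            w.max(axis=0), w.min(axis=0),
            w.max(axis=0) - w.min(axis=0),              # range = max - min
            spec[0],                                    # DC component
            spec.mean(axis=0), spec.std(axis=0),        # average amplitude and its spread
        ]))
    return np.array(feats)

def svm_kernel_comparison(F_tr, y_tr, F_te, y_te):
    return {kernel: SVC(kernel=kernel).fit(F_tr, y_tr).score(F_te, y_te)
            for kernel in ("linear", "poly", "rbf")}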
According to the experimental results, the classification results using the linear kernel function are the best on the test set. When the window size is 70, the accuracy on the training set is 1 and the accuracy on the test set is 0.8402. The result on the test set is better than that of the KNN algorithm, which shows that statistical features, which contain information about the changes in the inertial data, are to some extent more reflective of identity-related information. Dimensionality reduction and visualization analysis of the statistical features were then performed with the PCA algorithm, in order to observe intuitively whether the separability of the statistical features has improved compared with the raw inertial data; the results are shown in Figure 5. Compared with Figure 2, the visualization results show that the distinguishability between different colors, that is, between different people's data, has improved. From the visualization results and the results of the SVM algorithm, statistical features can be used as a basis for human identification on the collected data set, achieving 84.02% accuracy; if the parameters of the algorithm were tuned further, this result might improve. To further examine the identity-related features present in the inertial data, the next section introduces another, black-box-like feature extraction method that achieves even higher accuracy on the test set, as well as better dimensionality reduction and visualization results. Machine Learning-Based Identity Recognition This section introduces the experimental results of training a neural network as a feature extraction function and the classification results based on the extracted features. The neural network used in the study is an MLP; the network parameter values are obtained through training so that the network serves as a fitting function for the specific input-output system [37]. The model of the MLP is shown in Figure 6a; it is composed of neurons, which form the input layer, the hidden layer and the output layer. The neuron model is shown in Figure 6b.
Assuming that the input layer is represented by a vector X, the output of a neuron in the hidden layer is f(W·X + b), in which W is the weight (also called the connection coefficient), b is the output bias and f is the activation function [37]. In the experiment, the Rectified Linear Unit (ReLU) [38] was used as the hidden-layer activation function, and the softmax function [38] was used as the output-layer activation function. The hidden layer acts as a feature extraction function, and the output layer acts as a classifier that participates in the network optimization and in the calculation of the classification accuracy. The parameters of the MLP are the connection weights and output biases between the layers, and the process of solving these parameters is the training of the network; training is an iterative learning process that uses the Adam [39] optimization algorithm. In the experiment, the sample data are obtained with a sliding window, and the data in each window are fed directly into the network for parameter fitting, which differs from the previous classification experiment in which statistical features were extracted from the data in the window. Different sliding-window sizes were tested, and the results are shown in Figure 7. According to the experimental results, larger sliding windows give higher accuracy on the test set; when the window size is 300, the classification accuracy on the test set is 0.9817.
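A compact sketch of this MLP-based scheme follows: flattened windows of raw samples are fed to a single-hidden-layer network with ReLU units trained with Adam, and the trained hidden layer is then reused as a feature extractor for an RBF-kernel SVM (as discussed next). The hidden-layer size, iteration count and window-labelling rule are placeholders, not the values used in the paper.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def sliding_windows(X, y, win=300, step=1):
    """Flatten each window of raw 9-channel samples into one network input vector."""
    idx = range(0, len(X) - win + 1, step)
    Xw = np.array([X[i:i + win].ravel() for i in idx])
    yw = np.array([y[i + win - 1] for i in idx])       # label each window by its last sample
    return Xw, yw

def mlp_then_svm(X_tr, y_tr, X_te, y_te):
    mlp = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                        solver="adam", max_iter=300, random_state=0)
    mlp.fit(X_tr, y_tr)                                # softmax output layer for multi-class labels
    print("MLP test accuracy:", mlp.score(X_te, y_te))

    # Hidden-layer activations ReLU(X·W1 + b1), reused as learned identity features.
    hidden = lambda X: np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])
    svm = SVC(kernel="rbf").fit(hidden(X_tr), y_tr)
    print("RBF-SVM on MLP features:", svm.score(hidden(X_te), y_te))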
In order to compare with the classification results based on statistical features, SVM classification based on statistical features was also performed with a window size of 300. With the radial basis kernel function, the statistical-feature scheme achieves its highest accuracy on the test set, 0.8199, which is lower than the accuracy of the MLP scheme. The output of the MLP hidden layer is regarded as the extracted features related to human identity, and dimensionality reduction and visualization analysis of these extracted features were also performed; the results are shown in Figure 8.
Compared with Figures 2 and 5, the visualization results show that after feature extraction the characteristics of different people can be clearly distinguished. From the visualization results and the accuracy of the algorithm, the feature function fitted with the MLP can be used as a basis for human identification on the data set used. The extracted features were classified with the SVM algorithm using a radial basis kernel function, and the accuracy on the test set is 0.9881. Excluding the influence of the classifier, these results indicate that the features extracted by the MLP reflect information related to human identity more accurately than the statistical features. Conclusions Through analysis and experiments, this paper confirms that the collected inertial data can be used for human identification. Experiments were carried out both with and without feature extraction. The results show that the information related to human identity is contained in the changes of the inertial data, and this information needs to be extracted. Statistical functions and a neural network fitting function (MLP) were used for feature extraction; the classification results based on the latter reached 98.17%, further indicating that the identity-related features in the inertial data can be extracted by a feature extraction function, and the visualization of the extracted features indicates that they can be used to distinguish people's identities. Compared with image-processing-based identity recognition studies, the proposed scheme is not affected by the environment, whereas image-processing schemes generally need to remove the background and are affected by the environment. The highest accuracy of image-based recognition schemes under laboratory conditions is 95.0% [19], which is lower than that of the proposed method. Freedom from background and environmental interference is an advantage of the proposed scheme, but the lack of a clear feature modeling method is a disadvantage. This paper did not find an explicit expression for the most suitable feature extraction function, but it verified on the data set the feasibility of fitting the feature extraction function with a neural network. From the experimental results, the inertial data contain features related to human identity. In order to further verify the role of these features in identifying people, we plan to expand the size of the data set to distinguish more people. When recognizing many more identities, simplifying the recognition task as a classification problem will no longer be applicable; human identity matching and recognition based on feature extraction functions will be the subject of future work.
8,755.6
2020-06-01T00:00:00.000
[ "Computer Science" ]
Massive Dual Spinless Fields Revisited Massive dual spin zero fields are reconsidered in four spacetime dimensions. A closed-form Lagrangian is presented that describes a field coupled to the gradient of its own energy-momentum tensor. Introduction As indicated in the Abstract, the point of this paper is to find an explicit Lagrangian for the dual form of a massive scalar field self-coupled in a particular way to its own energy-momentum tensor. This boils down to a well-defined mathematical problem whose solution is given here, thereby completing some research initiated and published long ago in this journal [1]. After first presenting a concise mathematical statement of the problem, and then giving a closed-form solution in terms of elementary functions, the field theory that led to the problem is re-examined from a fresh perspective. The net result is a very direct approach that leads to both the problem and its solution. Some History Here I reconsider research first pursued in collaboration with Peter Freund, in an effort to tie up some loose ends. In the spring of 1980, when I was a post-doctoral fellow in Yoichiro Nambu's theory group at The Enrico Fermi Institute, Peter and I were confronted by a pair of partial differential equations (see [1] p 417). where m and g are constants. We noticed in passing that these PDEs imply the secondary condition [3] and we then looked for a solution to (1-3) as a series in g beginning with To simplify the equations to follow, I will rescale g = mκ so that the constant m always appears in (1)(2)(3) only in the combination v/m. Thus I may as well set m = 1, and hence κ = g. I can then restore the parameter m in any subsequent solution for L by the substitution L (u, v) → m 2 L (u, v/m). Clearly, there is a two-parameter family of exact solutions to these PDEs which depends only on v, namely, where a and b are constants. However, for the model field theory that gave rise to the partial differential equations (1,2), this linear function of v amounts to a topological term in the action and therefore gives no contribution to the bulk equations of motion. Moreover, L 0 (v) contributes only a (cosmological) constant term to the canonical energy-momentum tensor. So, in the context of our 1980 paper [1] where solutions of (1,2) were sought which gave more interesting contributions, this L 0 (v) was not worth noting. Nevertheless, it reappeared in another context, somewhat later [2]. Completing Some Unfinished Business It so happened in 1980 that Peter and I did not find an exact L (u, v) to solve the PDEs (1,2). In fact, we reported then only the terms given in (4). Here I wish to present an exact, closed-form solution to all orders in g. The crucial feature leading to this particular solution is that the dependence on v is only through the linear combination v − gu. The result is where as a series Fortunately, the 3 F 2 hypergeometric function in (7) reduces to elementary functions. For real w, Nevertheless, the solution (6) was first obtained in its series form (7) and only afterwards was it expressed as a special case of the hypergeometric 3 F 2 , with its subsequent simplification to elementary functions. More generally, it is not so difficult to establish that solutions to (1-3) necessarily have the form where the function G is differentiable, and H is integrable, but otherwise not yet determined, as befits the general solution of a more easily solvable 1st-order PDE, albeit nonlinear: Note in (9) the return of an explicit term linear in v. 
This term arises as the particular solution of the inhomogeneous 1st-order PDE that results from integrating (10) and exponentiating, namely, The functions G and H are now constrained by additional conditions that lie hidden within (1) and (2). I will leave it to the reader to flesh out those additional conditions. I will not go through that analysis here. Instead, I will reconsider the model field theory that led to the partial differential equations (1,2) in light of the exact solution (6). That solution provides a good vantage point to view and analyze the model. The Model Revisited Consider a Lagrangian density L (u, v) depending on a vector field V µ through the two scalar variables, This vector field is to be understood in terms of an antisymmetric, rank 3, tensor gauge field, V αβγ , i.e. the four-dimensional spacetime dual of a massive scalar [1], with its corresponding gauge invariant field strength, The bulk field equations that follow from the action of L (u, v) by varying V µ are simply where the partial derivatives of L are designated by L u ≡ ∂L (u, v) /∂u and L v ≡ ∂L (u, v) /∂v. An obvious inference from these field equations is that the on-shell vector V µ is a gradient of a scalar Φ, if and only if L u is a function of L v . For example, if L u has a linear relation to L v with L u = a + bL v for constants a and b, the field equations give But in any case, on-shell the combination U µ = V µ L u is a spacetime gradient. An additional gradient of the field equations then gives Thus the vector V µ is a gradient of a scalar, as in (15), such that if and only if for some scalar function Ω, Simplification Now for simplicity, demand that L u = a + bL v for constants a and b, in accordance with V µ being a gradient, as in (15) and (20). This linear condition is immediately integrated to obtain where L (v + bu) is a differentiable function of the linear combination v + bu. The field equations (14) are now That is to say, the scalar in (21) is Ω = 2ab + 2b 2 L ′ . Energy-momentum tensors In [1] Peter and I say that, given (1-3), the field equations for V µ amount to (20) along with the "simple, indeed elegant" statement where g has units of length, and θ is the trace of the conformally improved energy-momentum tensor. Be that as it may, there is a less oracular method to reach this form for the field equations in light of the simplification (22). As is well-known, there may be two distinct expressions for energy-momentum tensors that result from any Lagrangian. From (22) the canonical results for Θ µν , and its trace Θ = Θ µ µ , are immediately seen to be Although not manifestly symmetric, it is nonetheless true that Θ Surprisingly different results follow from covariantizing (22) with respect to an arbitrary background metric g µν , varying the action for − det g αβ L with respect to that metric, and then taking the flat-space limit. This procedure gives the "gravitational" energy-momentum tensor and its trace: The unusual structure exhibited in this tensor follows because in curved spacetime V µ as defined by (13) is a relative contravariant vector of weight +1 with no dependence on the metric, so ∂ µ V µ is a relative scalar of weight +1 also with no dependence on g µν , and V µ V µ = g µν V µ V ν is a relative scalar of weight +2 where all dependence on the metric is shown explicitly. Hence the absolute scalar version of L (u, v) is given by where again all the metric dependence is shown explicitly. 
It is straightforward to check on-shell conservation of either (25) or (26), separately. However, it turns out the flat-space equations of motion can now be written in the form (24) provided a linear combination of Θ µν . (28) The trace is then Field equation redux Since various scales have been previously chosen to set m = 1, the field equations (20) and (23) give for the left-hand side of (24) On the other hand, from (29) for any constant c, The choice 2ac = b reconciles the spurious ∂ µ u term to give the desired form provided the function L satisfies the second-order nonlinear equation But note, the constant c can be set to a convenient nonzero value by further rescalings. For example, if (a, L) → ab 2c , aL 2bc , along with the previous choice 2ac = b → a = 1, the equation for L becomes 1 + 1 2b Finally, rescaling z → w/b gives 1 + 1 2 The solution of this equation for L ′ with initial condition L ′ (0) = 0 is Imposing the additional initial condition L (0) = 0, this integrates immediately to Comparison with (8) shows that Given the previous rescalings, namely, L (u, v) = au + L (v + bu) → ab 2c u + 1 2bc L (w = bz) a=1 , the Lagrangian density for the model becomes As before, v = ∂ µ V µ , u = V µ V µ , and z = v + bu. Note that the term linear in z in (39) cancels out upon power series expansion, so the result agrees with (4) up to and including all terms of O V 3 . To comport to the conventions in [1], choose b = −g and c = g, so that z = v − gu, to find Now restore m via the coordinate rescaling x µ → mx µ , hence v → v/m and L (u, v) → m 2 L (u, v/m), thereby converting (32) into the form (24), with θ = m 2 Θ. Discussion The conventional integral equation form of (24), including a free-field term with + m 2 V (0) µ = 0, is given by where Θ (y) depends implicitly on the field V ν (y) and G is the usual isotropic, homogeneous, Dirichlet boundary condition Green function that solves + m 2 G (x − y) = δ 4 (x − y). The free-field term must be a gradient, is also a gradient. Integration by parts followed by an overall integration then gives where now Θ (y) depends implicitly on Φ (y). That is to say, On the one hand, this is not surprising, since there is a long-known construction of an explicit local Lagrangian that leads directly to this form for the scalar field equations [6]. (It amounts to the Goldstone model after scalar field redefinition.) Taking a gradient to reverse the steps above then leads back to (43). On the other hand, it is far from obvious that Θ [Φ (x)] can be re-expressed as a local function of V µ = ∂ µ Φ, and that Θ [V µ (x)] follows in turn from a local, closed-form Lagrangian for V µ . The main point of this paper was to show that, indeed, there is an L such that all this is true. Were Θ due to anything other than V µ , field equations of the form (24) would easily follow from i.e. a simple direct coupling of the vector to the gradient of any other traced energy-momentum tensor. With a pinch of plausibility, this calls to mind the axion coupling, albeit without the group theoretical and topological underpinnings, not to mention the phenomenology. In any case, Peter and I certainly did not have axions in mind in 1980 when we wrote [1]. As best I can recall, we had only some embryonic thoughts about massive gravity. In that context we speculated (see [1] p 418) that g/m ∼ L Hubble L Planck = 4.7 × 10 −5 m 2 = 1/ 4.2 × 10 −3 eV 2 . 
In retrospect, we were both struck by the fact that this guess is approximately the same as phenomenological lower limits for 1/m 2 axion . There is one more noteworthy piece of unfinished business in [1], namely, a closed-form Lagrangian for a massive spin 2 field coupled to the four-dimensional curl of its own energy-momentum tensor, where the spin 2 field is not the usual symmetric tensor, but rather the rank three tensor T [λµ]ν [7]. For progress on this additional unfinished business, please see [8]. With enough effort, perhaps a complete formulation of this spin 2 model will also be available soon, along with a few other variations on the theme of fields coupled to Θ µν . In closing, so far as I can tell, Peter had little if any interest in totally antisymmetric tensor gauge fields prior to our paper [1]. But he quickly pursued the subject in stellar fashion with his subsequent work on dimensional compactification [2]. While all this work is still conjectural, at the very least it provided and continues to provide fundamental research problems in theoretical physics, especially for doctoral students.
3,017
2019-07-26T00:00:00.000
[ "Physics" ]
Two new species of Hippolyte from the Tropical Central and East Atlantic (Crustacea, Decapoda, Caridea) Two new species of the caridean shrimp genus Hippolyte Leach, 1814 [in Leach, 1813-14] are described from the Tropical Central and East Atlantic. Hippolyte cedrici sp. nov. , from Príncipe and São Tomé, can be distinguished from both the related H. holthuisi Zariquey Álvarez, 1953 and H. varians Leach, 1814 on the basis of rostral dentition, as well as mer-istics of the ambulatory pereiopods. Hippolyte karenae sp. nov. , from St. Helena, is morphologically similar to H. coerulescens (Fabricius, 1775) and H. obliquimanus Dana, 1852, by having a well-developed tooth on the outer angle of the first peduncular article of the antennula. It differs from these species, amongst other characters, primarily in the armature of the ambulatory dactyli. Specimens were collected from hydrozoan, In his 2007 paper, d'Udekem d'Acoz briefly described and illustrated a damaged ovigerous female specimen of Hippolyte collected from the gorgonian Muriceopsis tuberculata (Esper, 1792) (as M. truncata) from São Tomé, as well as two juvenile specimens collected from the antipatharian Antipathella sp. at Príncipe. Although confident it was a new species, the limited and damaged material precluded naming the species. During a 2017 collecting trip to São Tomé by Dr. Peter Wirtz, more material of this Hippolyte from antipatharian hosts became available. On the basis of this new material, the species is herein fully described and illustrated. A further new species of Hippolyte, from antipatharians and hydroids, was collected in January 2014 by Dr. Judith Brown (then at the Environment Management Division of St. Helena Government) and Dr. P. Wirtz from several dive sites on the south-central Atlantic Island of St. Helena. This new species is herein also described and illustrated. Description. Carapace stout. In females ( Fig. 1A-C), rostrum moderately narrow, as long as or slightly longer than carapace, exceeding antennular peduncle, with postrostral tooth, 3 dorsal teeth on rostrum proper of which distalmost close to tip of rostrum; 2 ventral teeth, distalmost just in front of level of distal dorsal tooth, proximal tooth between level of distal two dorsal teeth. Rostrum in males (Fig. 1B) more slender, slightly shorter than in females, with 3 dorsal teeth and usually one subdistal ventral tooth. Hepatic tooth robust, reaching anterior margin of carapace. Antennal tooth small, just below slightly protruding infraorbital angle. Pterygostomial angle slightly protruding. Third pleonite ( Fig. 1) dorsal outline in lateral view distinctly curved. Fifth pleonite without tooth above tergite-pleuron junction. Ratio between dorsal length and height of sixth pleonite: 3.4. Telson apex ( Fig. 6B) with 6 strong terminal cuspidate setae (external ones distinctly shorter than intermediate and median ones); one short cuspidate seta present on each side between intermediate and median ones; usually 4 (sometimes 2 or 3) short setae present between long median ones. Proximal pair of dorsolateral cuspidate setae between proximal third and middle of telson (Fig. 6A); distal pair of dorsolateral cuspidate setae usually between first pair and telson apex. Unpigmented part of eyestalk ( Fig. 1A, C) (measured dorsally from where it begins to broaden to base of cornea) longer than broad and longer than cornea. Cornea overreaching stylocerite. Antennular peduncle (Fig. 1D) reaching 0.7 of scaphocerite in mature females. 
First joint of antennular peduncle without distal outer tooth. Stylocerite moderately long, reaching 0.7 of first joint of antennular peduncle in mature females. Outer antennular flagellum about as long as inner antennular flagellum. Outer antennular flagellum with 6-9 joints in females: 5-6 thick proximal and 1-3 thin distal joints; first thick joint 1.7 times as long as wide, other thick joints slightly longer than broad or about as long as broad. Inner antennular flagellum with 9-10 joints. Outer antennular flagellum in males usually with more thick joints than in females. Scaphocerite of antenna ( Fig. 1E) 3.6 times as long as wide. Distolateral tooth of scaphocerite far from reaching tip of blade. Distolateral tooth and blade separated by distinct notch. Basicerite with distinct ventrolateral tooth. Carpocerite short, falling short of distal margin of basal segment of antennular peduncle. Mandible ( Fig. 2A) with incisor and molar process, without palp. Incisor process with 5 teeth. Molar process with several bristles of short and robust setae. Maxillula ( Fig. 2B) with upper lacinia broadly rectangular with two rows of stout spines medially and few long plumose setae anteriorly and posteriorly. Lower lacinia slender, curled inwards, with few distal serrulate setae. Palp distally with one long, scarcely plumose seta. Maxilla ( Fig. 2C) with basal endite bilobed; distal lobe medially with slender serrulate setae and few long plumose setae anteriorly; proximal lobe slightly larger than distal lobe, medially with serrulate setae. Coxal endite short, medially with row of long plumose setae. Scaphognathite well developed. Palp short, distally with one plumose seta. Second maxilliped ( Fig. 3A) with dactylar segment of endopod about twice as broad as long, densely fringed medially with long serrulate setae. Propodal segment anteriorly with few long simple and plumose setae. Carpal segment short, unarmed, triangular. Meral segment short, unarmed, triangular. Ischial segment slightly longer than broad, ventromedially with row of simple sort setae and dorsomedial row of longer plumose setae. Basal segment medially with long plumose setae; exopod about twice as long as bent endopod, distally with few plumose setae. Coxal segment fused with basal segment, laterally with bilobed epipod. Third maxilliped ( Fig. 3B) reaching about mid-length of scaphocerite when extended forward. Distal segment medially with few rather short serrulate setae, with about 10 large conical teeth on apex and distal third of medial border. Penultimate segment 0.5 times length of distal segment. Antepenultimate segment about as long as distal two segments together, with small distolateral spine, with simple setae in distal 2/3 rd of medial margin and plumose setae in proximal third of mesial margin; exopod reaching mid-length of antepenultimate segment, distally with few plumose setae. Coxal segment medially expanded with row of plumose setae along medial margin, without epipod nor arthrobranch. First pereiopod (Fig. 3C) short, compact. Mesial side of chela not deeply concave. Fingers about as long as palm, spatulate, cutting edges entire. Carpus as long as chela, tapering proximally, unarmed. Merus as long as carpus, about twice as long as width, unarmed. Ischium short, basal segment slightly longer, ischial and basal segment combined slightly shorter than merus, medially both with several long plumose setae. Coxal segment almost as long as wide, medially with long plumose seta. Second pereiopod (Fig. 
3D) long and slender, reaching mid-length of scaphocerite when extended. Chela with fingers slightly longer than palm, with entire cutting edges. First joint of carpus about as long as second and third joints combined; first joint 3.0-3.5 times as long as wide, second joint 1.6-1.7, third joint 1.7-1.8 respectively. Merus slender, slightly shorter than carpus. Ischium about half length of merus, unarmed. Basal segment short, half length of ischium, unarmed. Coxal segment medially with few long simple setae. Ambulatory pereiopods rather long and slender. Third pereiopod (Fig. 3E) almost reaching or slightly overreaching distal margin of scaphocerite when extended forward. Merus in mature females about 7.3 times as long as wide, carpus of third pereiopod 3.6 times as long as wide, propodus 8.7 times as long as wide. Merus with 1 subdistal outer spine. Carpus with 1 proximal outer spine. Propodus with 3 single ventral spinules in proximal 2/ 3 and 3 pairs of ventral spinules in distal third; lateral one in each pair longest. Dactylus (Fig. 5a) about third of propodus length, corpus slightly curved, tapering distally, flexor margin with row of 6 spinules increasing in length distally; unguis slender, twice as long as distalmost spine on corpus (secondary unguis). Eggs small (diameter variable, depending on their developmental stage). First pleopod of male ( Fig. 6D) with endopod less than third length of exopod; medial margin of endopod with row of simple setae; lateral margin of endopod with row of long plumose setae. Distribution. Presently only known from São Tomé and Príncipe in the Gulf of Guinea, tropical East Atlantic. Systematic Remarks. The new species can be easily differentiated from the majority of Atlantic Hippolyte species by the following characters: absence of tooth above tergite-pleuron junction on fifth pleonite (vs. present in H. coerulescens); absence of teeth on outer distal corner of first peduncular article of antennula (vs. Acoz 1996, 2007, García-Raso et al. 1998), but whilst both taxa have previously been considered to be the same species (d'Udekem d'Acoz 1996), they are clearly genetically distinct (Terossi et al. 2017). Hippolyte cedrici sp. nov. can easily be distinguished from both H. holthuisi and H. varians on the basis of rostral dentition, with 2 (3 in a single specimen) well-developed proximal dorsal teeth in addition to the subdistal dorsal tooth in H. cedrici sp. nov., vs. 1 (very rarely 2) less developed teeth in the other two species, as well as the lower number of spinules on the dactyli of the ambulatory pereiopods (5-6 vs. Description. Carapace stout. In females ( Fig. 7A-C), rostrum moderately narrow, shorter than carapace, slightly exceeding antennular peduncle, with 3 dorsal teeth on rostrum proper of which distalmost close to tip of rostrum; 2 ventral teeth, distal tooth subapical, proximal tooth in front of level of distalmost dorsal tooth. Rostrum in males (Fig. 7B) more slender and shorter than in females, with 1-3 dorsal teeth and usually one subdistal ventral tooth. Hepatic tooth robust, reaching anterior margin of carapace. Antennal tooth small, just below slightly protruding infraorbital angle. Pterygostomial angle slightly protruding. Third pleonite of abdomen (Fig. 7A) dorsal outline in lateral view distinctly curved. Ratio between dorsal length and height of sixth pleonite: 2.7. Telson ( Fig. 
12B) with proximal pair of dorsolateral cuspidate setae between proximal third and middle of telson length; distal pair of dorsolateral cuspidate setae usually between first pair and telson apex. Telson apex (Fig. 12C) with 6 strong terminal cuspidate setae (external distinctly shorter than intermediate and median ones); 2 short ones present between long median ones. Unpigmented part of eyestalk (Fig. 7A, C) (measured dorsally from point where it begins to broaden to base of cornea) longer than broad and longer than cornea. Cornea not overreaching stylocerite. Antennular peduncle (Fig. 7D) reaching 0.6 of scaphocerite in mature females. First joint of antennular peduncle with strong distal outer tooth. Stylocerite long, reaching 0.8-0.9 of first joint of antennular peduncle in mature females. Outer antennular flagellum about as long as inner antennular flagellum. Outer antennular flagellum with 8-9 joints in females: 5-6 thick proximal and 2-3 thin distal joints; first thick joint 1.6 times as long as wide, other thick joints about as long as broad. Inner antennular flagellum with about 10 joints. Outer antennular flagellum in males usually with sturdier joints than in females. Scaphocerite of antenna (Fig. 7E) 2.7 times as long as wide. Distolateral tooth of scaphocerite far from reaching tip of blade. Distolateral tooth and blade separated by distinct notch. Basicerite with distinct ventrolateral tooth. Carpocerite short, falling short of distal margin of basal segment of antennular peduncle. Second maxilliped (Fig. 9A) with dactylar segment of endopod almost twice as broad as long, medially densely fringed with long serrulate setae. Propodal segment anteriorly with few long plumose setae. Carpal segment short, unarmed, triangular. Meral segment short, unarmed, triangular. Basal and ischial segment fused, slightly longer than broad, medially with row of long plumose setae; exopod about twice as long as endopod, with few plumose setae distally and proximally. Coxal segment partly fused with basal segment, laterally with bilobed epipod. Third maxilliped (Fig. 9B) reaching about mid-length of scaphocerite when extended forward. Distal segment medially with few, rather short serrulate setae, with about 8 large conical teeth on apex and distal third of medial border (Fig. 9C). Penultimate segment 0.3 times length of distal segment. Antepenultimate segment almost as long as distal segment, without distolateral spine, with few plumose setae on mesial margin. Basal segment medially with row of plumose setae; exopod as long as basal segment, with row of few plumose setae in distal half. Coxal segment without epipod nor arthrobranch. First pereiopod (Fig. 9D) short, compact. Fingers about as long as palm, spatulate, with serrate cutting edges. Carpus shorter than chela, tapering proximally, excavate distally, unarmed. Merus as long as chela, about twice as long as wide, mesially with row of long plumose setae. Ischium much shorter than merus, medially with several long plumose setae. Basal segment about as long as wide, medially with long plumose seta. Second pereiopod (Fig. 9E) long and slender, reaching mid-length of scaphocerite when extended. Chela with fingers as long as palm, cutting edges entire. First joint of carpus about as long as third joint, second joint distinctly shorter; first joint 2.0 times as long as wide, second joint 1.1, third joint 1.6 respectively. Merus slender, slightly shorter than carpus. Ischium about half length of merus, unarmed, two-jointed. 
Basal segment short, half length of ischium, unarmed. Coxal segment medially with few long simple setae. Ambulatory pereiopods rather long and robust. Third pereiopod (Fig. 10A) almost reaching or slightly overreaching distal margin of scaphocerite when extended forward. Merus in mature females about 5.3 times as long as wide, carpus of third pereiopod 2.3 times as long as wide, propodus 6.0 times as long as wide. Merus with 1-3 subdistal outer spines. Carpus with 1-2 proximal outer spines. Propodus with 2 single ventral spinules in proximal half and 3 pairs of ventral spinules in distal third; lateral one in each pair longest; one distolateral spinule. Dactylus (Fig. 11A) about 0.45 of propodus length, corpus slightly curved, tapering distally, flexor margin with row of 9 spinules increasing in size distally except for unguis, which is slightly smaller than subdistal spinule. Eggs small (diameter variable, depending on their developmental stage). First pleopod of male with endopod less than half length of exopod; medial margin of endopod with row of simple setae; lateral margin of endopod with row of long plumose setae. Second pleopod of male with endopod slightly shorter than exopod; appendix masculina and appendix interna subequal; appendix masculina distally with 6 finely serrulate long setae. Ambulatory pereiopods in males distoventrally broadened with series of paired ventral serrulate spinules. Colour. Not known. Etymology. This species is named in honour of Karen van Dorp, who for more than 10 years was the exemplary collection manager of the crustacean collection of Naturalis Biodiversity Center in Leiden, the Netherlands. Host. All known specimens were collected from the hydroid Macrorhynchia filamentosa (Lamarck, 1816) and the antipatharian Plumapathes pennacea (Pallas, 1766). It is not known at this stage whether these records represent obligate or facultative associations. Distribution. Presently only known from St. Helena in the tropical South-Central Atlantic. Systematic remarks. The new species differs from all previously described Atlantic species of the genus, except H. coerulescens and H. obliquimanus, by having a well-developed tooth on the outer angle of the first peduncular article of the antennula. Hippolyte karenae sp. nov. can be easily distinguished from H. coerulescens by the absence of a postero-dorsal tooth on the fifth pleonite (vs. present in H. coerulescens), as well as by the shape of the rostrum, the dorsal outline of the third pleonite and the different armature of the dactyli of the ambulatory pereiopods. In general morphology, the new species is somewhat reminiscent of the western Atlantic H. obliquimanus, but can be distinguished from that species by having one tooth on the outer angle of the first peduncular article of the antennula in adults (vs. 2-3 in H. obliquimanus, but sometimes only 1 in juveniles, see d'Udekem d'Acoz 1997). A further clear difference between the two species is the armature and shape of the dactyli of the ambulatory pereiopods: the dactyl is robust in H. karenae sp. nov. (vs. gracile in H. obliquimanus), the accessory spinules are much more strongly developed in H. karenae sp. nov., and two secondary ungui are present in H. obliquimanus (vs. absent in H. karenae sp. nov., although several distal accessory spinules are strongly developed).
3,721.2
2019-01-24T00:00:00.000
[ "Biology" ]
A comparison of amplification methods to detect Avian Influenza viruses in California wetlands targeted via remote sensing of waterfowl Abstract Migratory waterfowl, including geese and ducks, are indicated as the primary reservoir of avian influenza viruses (AIv) which can be subsequently spread to commercial poultry. The US Department of Agriculture's (USDA) surveillance efforts of waterfowl for AIv have been largely discontinued in the contiguous United States. Consequently, the use of technologies to identify areas of high waterfowl density and detect the presence of AIv in habitat such as wetlands has become imperative. Here we identified two high waterfowl density areas in California using processed NEXt generation RADar (NEXRAD) and collected water samples to test the efficacy of two tangential flow ultrafiltration methods and two nucleic acid based AIv detection assays. Whole‐segment amplification and long‐read sequencing yielded more positive samples than standard M‐segment qPCR methods (57.6% versus 3.0%, p < .0001). We determined that this difference in positivity was due to mismatches in published primers to our samples and that these mismatches would result in failing to detect in the vast majority of currently sequenced AIv genomes in public databases. The whole segment sequences were subsequently used to provide subtype and potential host information of the AIv environmental reservoir. There was no statistically significant difference in sequencing reads recovered from the RexeedTM filtration compared to the unfiltered surface water. This overall approach combining remote sensing, filtration and sequencing provides a novel and potentially more effective, surveillance approach for AIv. surveillance of AIv's in waterfowl habitat play a vital role in the transmission of AIv (Ito et al., 1995;Keeler, Berghaus, & Stallknecht, 2012;Lang, Kelly, & Runstadler, 2008;Markwell & Shortridge, 1982). While current national surveillance of AIv in commercial and backyard poultry is rather extensive temporally and spatially, the source waterfowl population, remains relatively under-surveilled. Specifically, in 2018 the USDA discontinued the interagency HPAI Wild Bird Early Detection System and currently only a minimal level of active surveillance in Alaska and a few isolated regions is being implemented (Liberto, 2019). When wetland habitat maintains the optimal conditions of low temperatures (<17°C), slightly basic pH (7.4-8.2), and low salinity (0-20,000 parts per million (ppm)), the potential for AIv to persist and remain infectious in the environment exists (Brown, Goekjian, Poulson, Valeika, & Stallknecht, 2009). Specifically, the faecal/oral excretion of AIv into the environment leads to heavy contamination and seeds the pathway to indirect transmission of AIv to susceptible birds from water and sediment (Lang et al., 2008;Nazir, Haumacher, Ike, & Marschang, 2011). Surveillance efforts in aquatic environments are necessary to understand the environmental persistence of AIv (Pepin et al., 2019), but sampling efforts must consider complex factors when analysing natural water samples such as accessibility to large volumes of water and maintaining conditions of water (e.g. pH, temperature) to ensure virus is not degraded in transport (Keeler et al., 2012). 
Viral particles within aquatic environments are thought to set the patterns of transmission within waterfowl (Roche et al., 2009) indicating a need for high surveillance of water and sediment of these wetland habitats (Ronnqvist et al., 2012). Detection of virus in these aquatic environments could offer a complementary surveillance approach and provide a novel predictive level of AIv molecular ecology (Pepin et al., 2019). However, current detection methods for AIv in water lack sensitivity, are very limited, and are not representative of AIv ecology within whole habitats (Stallknecht, Goekjian, Wilcox, Poulson, & Brown, 2010). Specifically, detection of AIv in wetlands is typically done via the collection and PCR based analysis of multiple small (~1 ml or less) surface water samples with no concentration methods (Henaux, Samuel, Dusek, Fleskes, & Ip, 2012). For example, to quantify the prevalence of AIv in California's Central Valley wetlands, Henaux et al. (2012) collected a total of 597 surface water samples and performed RNA extractions on a 50 μl aliquot from each 45 ml sample (Henaux et al., 2012). LPAI was detected in 2% of their samples by matrix gene real time Reverse Transcription-Polymerase Chain Reaction (RT-qPCR) (Spackman et al., 2002) and no virus was isolated from surface water samples (Henaux et al., 2012). Although this experimental design yields a higher sample size, small volume samples may not be representative of the entire wetland ecosystem. Combining a more sensitive environmental AIv sampling technique with more targeted sampling of wetlands where waterfowl occur in high densities could lead to greater efficiency and effectiveness of surveillance. NEXt generation RADar (NEXRAD) is a remote sensing tool that offers the ability to quantify waterfowl density and distribution (Buler et al., 2012). Specifically, NEXRAD provides an instantaneous measure of radar reflectivity at the onset of highly synchronized flights of waterfowl departing their daytime roosting locations as they fly to their night-time feeding locations (Buler et al., 2012). The goal of this study was to develop a sensitive and targeted detection method for AIv in wetlands with high waterfowl density as a foundational step to improving environmental surveillance. To reach this goal, we: (a) used NEXRAD observations to identify two wetlands with high waterfowl density, (b) tested two filtration methods to concentrate AIv in water samples from those wetlands, (c) tested two nucleic acid detection methods (e.g. Whole-segment amplification and long-read sequencing versus matrix segment RT-qPCR), and (d) provided sequence information detailing the molecular viral ecology of AIV in sampled wetlands. | NEXRAD wetland selection Historic (i.e. 2014) radar reflectivity from three NEXt generation RADar (NEXRAD) stations (KBBX, KDAX and KHNX) in the Central Valley of California were used to identify wetlands with high waterfowl density and distribution ( Figure 1). NEXRAD is a remote sensing tool proven to quantify waterfowl density and distribution near the ground using an instantaneous measure of radar reflectivity at the onset of highly synchronized flights of waterfowl departing their daytime roosting locations as they fly to night-time feeding locations (Buler et al., 2012). Data between 7.5 and 100 km range from the radar were considered for analysis. 
For each sampling night free of precipitation and anomalous propagation of the radar beam, we interpolated reflectivity measurements to the instant when the onset of bird flight reaches its peak rate of increase (i.e. typically near the end of evening civil twilight) and estimated the vertically integrated reflectivity from 0-2 km above the ground for each sample volume following (Buler et al., 2012). This approach produces a continuous surface map of the relative density of birds aloft across the radar domain at the peak of flight exodus to maximize the spatial correlation with their diurnal ground roosting density. The Yolo Bypass Wildlife Area in Yolo County and a private hunting club in Butte County were selected as representative wetland habitats. The wetland in Butte County operates as a private hunting club with adjacent rice and agricultural fields. Yolo Bypass Wildlife Area is public land with private agricultural lands surrounding it. Permission to collect water from each wetland was obtained by land managers prior to sampling. Both locations are used for seasonal hunting due to the abundance of birds that utilize the habitat throughout fall and winter. | Sample collection Water samples were collected between June 2018 and September 2018. During each sampling interval, five locations were chosen randomly with GPS marking. Samples were collected between the surface and approximately 1m of depth. Measurements of pH, temperature, and salinity were recorded with the YSI Professional Plus sensor at each of the five locations within a wetland. Due to equipment error, the pH, temperature salinity were not recorded at the first sampling interval in Butte County. At each of the five locations, a 10-litre water sample was collected according to the lower limit of large volumes considered to be adequate for determining pathogen presence in water (Morales-Morales et al., 2003). At the third sampling location within a wetland, a second 10-L carboy was collected for a total of six 10-litre carboys to be filtered with ultrafiltration ( Figure 2). A single 45 ml surface water sample was collected from each of the five locations to compare with previous sampling methods. A total of five 45 ml sediment samples were collected at each wetland interval to compare the presence and persistence of AIv in sediment to water samples (Nazir et al., 2011). Water samples were stored on ice and taken back to the lab for same day filtration. | Tangential flow ultrafiltration The rationale of ultrafiltration is to concentrate large volumes of wetland water for a more representative sample that is indicative of overall AIv presence in the environment. Conventional tangential flow ultrafiltration separates solutes that differ by tenfold in size through membrane pore size, qualifying this method of filtration as an appropriate approach for AIv detection in larger volumes of water ( Figure 3) (Christy, Adams, Kuriyel, Bolton, & Seilly, 2002). Viral particles were retained by molecular weight cut-offs and concentrated in the retentate while molecules smaller than the filter's pore size flowed through the membrane (Figure 3) (Hill et al., 2005). The single use Asahi Kasei Rexeed TM 25s filter with a 30 kDa molecular weight cut-off and membrane area of 2.5 m 2 was compared with the GE TM UFP-3-C-4X2MA autoclavable column with a 3 kDa molecular cut-off and membrane area of 0.14 m 2 (Partyka, Bond, Chase, Kiger, & Atwill, 2016) (Figure 3). 
Prior to filtration, each filter was primed with 1 litre of blocking solution of NaPP and deionized water. Five of the 10-litre carboys were filtered using individual Asahi Kasei Rexeed TM 25s columns, and the sixth 10-litre carboy collected at the third sampling interval was filtered using the GE TM UFP-3-C-4X2MA column. Each 10-litre carboy was filtered down to a 45 ml retentate to be comparable with the 45 ml unfiltered surface water sample. Flow rates were observed based on the manufacturer's recommendations. Pressure of the filtration system did not exceed 20 psi. Upon completion, each filter was eluted with a 500 ml solution of NaPP, Tween, Antifoam, and deionized water. FIGURE 1 Location of NEXRAD radar stations (100 km radius coverage areas), KBBX, KDAX, KHNX, in relation to California poultry facilities. FIGURE 2 Radar-observed mean daily waterfowl density from November 2014 to February 2015 in conjunction with sampling locations in Colusa County and Yolo County, California, USA. | PCR and sequencing RNA from water and soil samples were extracted using the QIAamp Viral RNA Mini Kit (QIAgen) on a QIAcube and the PowerViral Environmental DNA/RNA Isolation kit (QIAgen), respectively. Following extraction, two methods were used to detect the presence of AIv in samples: reverse-transcriptase quantitative polymerase chain reaction (RT-qPCR) and whole segment amplification followed by sequencing. With respect to the RT-qPCR, a conserved ~100 bp fragment of the matrix protein is amplified according to Spackman et al. (2003). RT-qPCR was performed using this method in order to determine the limit of detection (Figure S1). We reference this method as RT-qPCR hereafter. Whole-segment amplification was attempted using multi-segment RT-PCR (Zhou et al., 2009). This procedure uses primers that are complementary to genome segment packaging regions (uni12 and uni13), which are conserved among all influenza A viruses, including AIv. Thus, this procedure amplifies entire gene segments if they are present in the sample. We conducted gel electrophoresis to assess genome segment amplification and completed multiplexed sequencing of amplicons using the Oxford Nanopore MinION sequencer (Oxford Nanopore Technologies). The primers included overhangs with 5′ 22 bp barcodes (shared among all samples) and 3′ 8 bp barcodes that were unique for each sample. The MinION sequenced single DNA molecules and allowed for the recovery of entire influenza genome segments (Imai et al., 2018; Wang, Moore, Deng, Eccles, & Hall, 2015). We reference this method as amplification/sequencing hereafter. | Bioinformatics analyses Output from the MinION sequencer was analysed using a custom pipeline that is openly available online. Briefly, raw signal files (.fast5 format) were base-called using Guppy in high accuracy base calling mode (HAC). After quality filtering using NanoFilt (De Coster, D'Hert, Schultz, Cruts, & Van Broeckhoven, 2018), reads were demultiplexed (i.e. assigned to a sample) and primers trimmed using cutadapt (Martin, 2011) with exact matches for sample-identifying barcodes. We used a single brand-new flow cell and included negative and positive controls throughout the sample workflow. NCBI command line BLAST using GNU Parallel (Tange, 2011) was used to search demultiplexed files against all avian influenza whole genome sequences available in the NIAID Influenza Research Database (IRD) (Zhang et al., 2016).
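The bioinformatic steps just described (quality filtering with NanoFilt, barcode demultiplexing and primer trimming with cutadapt, then BLAST against IRD genomes) can be orchestrated in a few shell calls. The sketch below is an illustrative outline only: file names, the seqtk FASTA-conversion step, the local BLAST database name and the exact flags are assumptions and may differ from the authors' openly available pipeline.

import subprocess

def run(cmd):
    # thin wrapper so each stage of the pipeline is logged before execution
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1) Quality filter basecalled reads (Guppy output) with NanoFilt (q > 9)
run("cat basecalled/*.fastq | NanoFilt -q 9 > filtered.fastq")

# 2) Demultiplex by sample barcode and trim primers with cutadapt
#    (exact barcode matches; 'barcodes.fasta' holds the sample-identifying tags)
run("cutadapt -e 0 --no-indels -g file:barcodes.fasta "
    "-o 'demux/{name}.fastq' filtered.fastq")

# 3) Convert each sample to FASTA and BLAST it against an IRD-derived AIv
#    nucleotide database, one job per sample via GNU Parallel
run("ls demux/*.fastq | parallel "
    "\"seqtk seq -A {} > {.}.fasta && "
    "blastn -query {.}.fasta -db ird_aiv -outfmt 6 -max_target_seqs 1 "
    "-out {.}_hits.tsv\"")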
Sample metadata from IRD was used to annotate likely subtypes and hosts of AIv sequences detected in each collecting location, based on the closest match in the IRD. | In-silico evaluation of M segment RT-qPCR To evaluate the efficiency of the specific M protein RT-qPCR procedure we used herein (Spackman et al., 2003), we conducted two computational analyses on all fully sequenced avian influenza genomes in IRD as of August 12, 2019. First, we searched for exact matches to the primers and probe and used this as an initial indicator of the probability of a successful assay. Second, to conduct a more realistic analysis of assay success (as PCR often tolerates primer mismatches), we conducted a thermodynamic simulation of the TaqMan assay using ThermonucleotideBLAST (Gans & Wolinsky, 2008), which outputs whether an amplicon was generated (together with its length and sequence) for each sequence under the specified conditions. To verify discrepancies between positive samples detected with RT-qPCR versus amplification/sequencing, we employed the same two analyses to examine results for our field-collected samples. FIGURE 3 As water is fed through the membrane, the lumen retains the viral particles circulating within the system. Simplified tangential flow schematic extracted from GE's Hollow Fiber Filter Cartridge Operating Handbook. | Statistical analysis To compare the efficiency of RT-qPCR with amplification/sequencing in detecting AIv from samples, we calculated the number of positive samples under each method and compared them using a proportion test. To test whether filtration increased the proportion of AIv positive samples or the number of AIv sequences detected in samples, we conducted a proportion test and a t-test for each filtration method. All statistical analyses were conducted using R. | NEXRAD data The NEXRAD-generated surface map allowed for the selection of wetland environments where we would expect to see viral presence due to the waterfowl population density. Using the NEXRAD data collected and analysed, two watersheds in the California Central Valley were selected as our study sites (Figure 4). | Water quality measurements aligned with stability threshold Water quality variables taken from sample intervals B and C indicated that pH and temperature were at the upper limits of the thresholds while salinity was within the lower range of the threshold (Table 1). Previous studies suggest that virus stability can be observed between neutral pH and pH 8.5, lower temperatures around 17°C, and low saline conditions (Brown et al., 2009; Stallknecht, Shane, Kearney, & Zwank, 1990). The pH and temperature conditions we measured therefore sat near the upper edge of this stability range. | Recovery of avian influenza sequences from water samples with and without filtration We did not detect influenza virus in sediment samples; thus, we focus on water samples hereafter. We compared the two filtration methods with unfiltered surface water. One unfiltered sample from the Yolo Bypass area yielded over an order of magnitude more reads than any other sample (Figure 5). After removing this outlier sample, average reads from Rexeed filtration were higher (23.6 ± 39.2, n = 8) than from unfiltered surface water (4.11 ± 4.31, n = 9), but a t-test indicated this difference was not statistically significant (t = 1.401, df = 7.151, p = .203) (Figure 6). In the in-silico primer evaluation, when exact matches were required only for the first 3′ 16 nt of the primers/probe, the overall success rate increased to 51.16%. If exact matches are required for amplification, at best, half of known fully sequenced avian influenza viruses would be detectable.
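The first of these two analyses, the exact-match screen, is straightforward to reproduce in outline: scan each M-segment sequence for verbatim occurrences of the forward primer, the probe, and the reverse complement of the reverse primer. The sketch below is illustrative only; the oligo strings must be filled in from the published Spackman et al. sequences, and the input file name is a placeholder rather than part of this study's code.

from Bio import SeqIO
from Bio.Seq import Seq

FWD, REV, PROBE = "...", "...", "..."   # published M-segment oligos (not reproduced here)

def exact_assay_hit(seq, fwd=FWD, rev=REV, probe=PROBE):
    # True only if all three oligos match the sequence exactly
    s = str(seq).upper()
    return (fwd in s) and (str(Seq(rev).reverse_complement()) in s) and (probe in s)

records = list(SeqIO.parse("ird_m_segments.fasta", "fasta"))
hits = sum(exact_assay_hit(r.seq) for r in records)
print(f"{hits}/{len(records)} M segments carry exact matches to all three oligos "
      f"({100 * hits / len(records):.2f}%)")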
However, the success rate could be higher given that PCR is able to tolerate mismatches. | Whole-segment amplification/sequencing yielded more positive samples than M-segment RT-qPCR A thermodynamic assay simulation suggests no amplification under the published PCR conditions (60°C annealing). Under more permissive conditions (50°C annealing), the analysis suggests an overall success rate of 52.20%, and at a very low 40°C annealing temperature 91.83% of sequences should generate positives under the TaqMan assay. | Sequencing data and AIv database match summary The R 9.4.1 MinION flow cell yielded 222,316 total reads, of which 143,716 passed quality control (q > 9). After removing non-target sequences (achieved through trimming of known primer sequences), we obtained a total of 47,962 reads, of which 4,782 matched sequenced AIv genomes in the IRD. Samples that yielded electrophoretic bands corresponded well with the number of sequencing reads obtained, providing further evidence confirming the specificity of our primers (Figure S2). We verified that the pipeline correctly assigned reads by (a) monitoring that multiple negative controls (n = 4) obtained no reads and by (b) aligning reads from the positive control to its reference genome, of which >92% aligned to the reference (see the accompanying code for details). Among the database matches, we had 71 sequences >2,300 base pairs with an average of 90.83% identity. The majority of segments matched in the database (Table 2) were M1/2 (segment 7 = 392 reads) and NA (segment 6 = 395 reads). An additional 4 segments (PB2, PA, HA, NS1) had at least 10 database matches. We found no matches to PB1 (segment 2) or NP (segment 5). | California Wetlands harbour avian viruses from multiple potential host origins and subtypes We determined the likely subtype of HA and NA segments (n = 454) based on the annotated subtype of the genome segment matched in the database (Figure 7). We found that N6 (n = 392) and H7 (n = 59) composed all the confirmed subtype matches; 3 NA matches were annotated in IRD as 'mixed' subtype. In order to understand potential avian hosts of specific AIv's, we examined the host annotation of the virus in the IRD that each of our sequences matched, to gain insight into the potential hosts of the AIv sequences found in samples (Figure 8). Among the database matches, the distribution of potential hosts across the two sampling locations is summarized in Figure 8. DISCUSSION The results of this study provide evidence for the feasibility of an AIv monitoring approach that combines various methods and technologies with respect to waterfowl surveillance and AIv detection. Specifically, the ability to remotely identify targeted wetlands for water sampling (filtered versus unfiltered) linked to PCR-based AIv analyses (matrix segment RT-qPCR versus whole-segment amplification/sequencing) is a unique approach that should be considered for AIv surveillance. Initial tests suggested AIv sequence recovery was not different in filtered samples compared to unfiltered surface water. However, our data set included an unfiltered sample that yielded over an order of magnitude more sequencing reads than any other sample. Excluding this outlier sample, filtration of environmental water samples by the Asahi Kasei Rexeed TM 25s yielded more sequence data, on average, compared to unfiltered surface water samples, presumably by retaining more viral particles. However, this difference was not statistically significant, likely due to the small number of samples. The outlier sample possibly represents a 'jackpot' scenario of sampling an area of high AIv concentration.
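The two headline comparisons in this study (1 of 33 samples positive by M-segment RT-qPCR versus 19 of 33 by amplification/sequencing, and Rexeed-filtered versus unfiltered read counts after removing the outlier) can be reproduced from the summary statistics reported above. The sketch below uses SciPy rather than the R functions the authors used, so the exact test choices are illustrative; the Welch t-test from summary statistics recovers the reported t = 1.401 and p = .203.

import numpy as np
from scipy import stats

# Positivity: 1/33 RT-qPCR positives vs 19/33 amplification/sequencing positives
chi2, p, dof, _ = stats.chi2_contingency(np.array([[1, 32], [19, 14]]))
print(f"proportion comparison: chi2 = {chi2:.1f}, p = {p:.1e}")   # p well below .0001

# Read recovery (outlier removed): Welch's t-test from the reported summary stats
t, p = stats.ttest_ind_from_stats(23.6, 39.2, 8, 4.11, 4.31, 9, equal_var=False)
print(f"Welch t = {t:.3f}, p = {p:.3f}")   # study reports t = 1.401, df = 7.151, p = .203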
While more sampling is needed to better establish the effect of filtration on viral recovery over multiple seasons, the results point to the potential efficacy of targeted wetland surveillance without filtration. The two filters likely performed differently due to the molecular weight cut-off (MWCO) and surface area/flow rate differences of the two filters. The larger pore size and overall design of the Asahi Kasei Rexeed TM 25s was proven to be better fit at retaining viral particles. While filtration did not improve read recovery, the whole-segment amplification/sequencing approach yielded sensitive detection from unfiltered surface water samples that were missed using the standard, published M-segment RT-qPCR approach (Spackman et al., 2003). Regardless of filtration method, RT-qPCR yielded one positive sample (out of 33 or 3.0%), versus 19 samples (57.6%) using sequencing. Thus, this amplification/sequencing approach could be a powerful alternative to RT-qPCR, whether filtration is used or not. While the comparison is not completely appropriate due to differences in sample location and time, a previous efforts using RT-qPCR in California wetlands detected LPAI in 2% of the 597 samples collected (Henaux et al., 2012). An additional advantage of the amplification/sequencing approach is that it permits the sequence data to be used to produce a more detailed characterization of the AIv environmental reservoir. For instance, we were able to determine likely subtypes of HA and NA segments (Figure 7) without additional tests. However, it is important to understand the nature of the sequencing data and its limitations for drawing conclusions. This approach sequences entire genome segments, but the linkage between these segments (i.e. which segments occupied the same capsid) is lost during RNA amplification. This means that inferences from database annotations for subtype cannot use the M-segment, but must be restricted only to HA and NA segments as we have done here. We also used available data in FluDB to gather a list of potential hosts for the AIv sequences we characterized ( Figure 8). We note that many of these host species are present in these California wetlands and correspond to typical reservoir species for AIv. It is important to emphasize that many AIv's can have a wide host range among avian and nonavian species, so these are just potential hosts. To that point, the most prevalent species noted in Figure 8 bring MinION sequencing per-base accuracy up to Illumina levels (Karst et al. 2020). In sum, the relevance of the advantages and disadvantages of any given detection method depend on the research goals and it is important to have an ample toolkit for AIv surveillance. One potential implication of the amplification/sequencing results compared to the RT-qPCR results is that AIv prevalence in the wetlands is higher than previously supposed (Henaux et al., 2012) and hence the role of wetlands in seeding new infections may also be larger than previously supposed. While we did not do virus isolation in order to confirm infectivity, the higher prevalence of positives in amplification/sequencing compared to RT-qPCR in addition to our pH, temperature and salinity data (Table 1) suggests that conditions for infectivity of the viruses are largely met (Brown et al., 2009). 
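As a concrete illustration of how the subtype and host summaries in Figures 7 and 8 can be assembled, the sketch below joins each read's best BLAST hit to database metadata and tabulates the annotations. The file layout, column names and metadata fields are assumptions for illustration rather than the study's actual code; the >500 bp filter follows the criterion stated in the Figure 8 caption.

import pandas as pd

blast_cols = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
              "qstart", "qend", "sstart", "send", "evalue", "bitscore"]
hits = pd.read_csv("all_samples_hits.tsv", sep="\t", names=blast_cols)

# keep the single best hit per read, then require >500 bp aligned to one segment
best = hits.sort_values("bitscore", ascending=False).drop_duplicates("qseqid")
best = best[best["length"] > 500]

meta = pd.read_csv("ird_metadata.csv")            # assumed: sseqid, segment, subtype, host
joined = best.merge(meta, on="sseqid", how="left")

# subtype counts restricted to HA/NA segments; host counts across all segments
print(joined[joined["segment"].isin([4, 6])]["subtype"].value_counts())
print(joined["host"].value_counts())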
This would support our current understanding of one route of transmission where the excretion of infected faeces into the environment leads to ingestion by susceptible birds, completing the faecal environmental transmission route (Breban, Drake, Stallknecht, & Rohani, 2009; Khalenkov, Laver, & Webster, 2008; Ronnqvist et al., 2012; Stallknecht et al., 2010; Zhang, Li, Chen, Chen, & Chen, 2014). This sequence data provides information on the genetic diversity and composition of influenza viruses in the water column. Sequence data could be used to determine influenza virus subtypes present, infer time-space influenza virus sequence dynamics, and relate these sequence patterns to larger scale, ongoing influenza virus dynamics using independent surveillance data. Furthermore, assessing the strength of the link between remotely sensed waterfowl density and viral load in the environment is needed. This could be done by testing spatio-temporal correlations of AIv prevalence in environmental samples with concurrent radar-observed bird density at multiple sampling locations repeated over short time intervals (e.g., bi-weekly or monthly). Such analyses will enhance our knowledge of the nature of the AIv waterfowl reservoir and allow us to couple remotely sensed patterns of bird movements to the risk of specific AIv groups for a strong surveillance tool. CONFLICT OF INTEREST The authors declare no conflict of interest. FIGURE 8 Potential hosts for AIv sequences recovered by location (inset). The y axis denotes the number of sequences from our study that matched a genome associated with a particular host in a database including all the AIv in FluDB. Positive reads that match one segment in >500 bp from one fully sequenced viral genome in FluDB were kept for analysis. Filtered and unfiltered samples were pooled for site breakdown. ETHICAL STATEMENT No samples were collected from animals and no surveys were gathered from human subjects for this study. DATA AVAILABILITY STATEMENT The data and code that support the findings of this study are openly available. Sequence data was deposited in the NCBI Sequence Read Archive (Accession SRX7014890) and is available at https://www.ncbi.nlm.nih.gov/sra/SRX7014890. Data and code for analysis are available on GitHub at https://github.com/sociovirology/aiv_detection_environment
5,702.2
2020-06-27T00:00:00.000
[ "Biology" ]
Spacetime Metrics and Ringdown Waveforms for Galactic Black Holes Surrounded by a Dark Matter Spike Theoretical models suggest the existence of a dark matter spike surrounding the supermassive black holes at the core of galaxies. The spike density is thought to obey a power law that starts at a few times the black hole horizon radius and extends to a distance, R sp, of the order of a kiloparsec. We use the Tolman–Oppenheimer–Volkoff equations to construct the spacetime metric representing a black hole surrounded by such a dark matter spike. We consider the dark matter to be a perfect fluid, but make no other assumption about its nature. The assumed power-law density provides in principle three parameters with which to work: the power-law exponent γ sp, the external radius R sp, and the spike density ρ sp DM at R sp. These in turn determine the total mass of the spike. We focus on Sagittarius A* and M87, for which some theoretical and observational bounds exist on the spike parameters. Using these bounds in conjunction with the metric obtained from the Tolman–Oppenheimer–Volkoff equations, we investigate the possibility of detecting the dark matter spikes surrounding these black holes via the gravitational waves emitted at the ringdown phase of black hole perturbations. Our results suggest that if the spike to black hole mass ratio is roughly constant, greater mass black holes require relatively smaller spike densities to yield potentially observable signals. We find that it is unlikely for the spike in M87 to be detected via the ringdown waveform with currently available techniques unless its mass is roughly an order of magnitude larger than existing observational estimates. However, given that the signal increases with black hole mass, dark matter spikes might be observable for more massive galactic black holes in the not too distant future. In [1], we used the Tolman-Oppenheimer-Volkoff (TOV) equations to calculate the effects of an isotropic dark matter spike on the ringdown waveform and shadow of the supermassive black hole at the core of M87. The assumption of isotropy in this context, also used in earlier work [2], is suspect in the case of non-interacting dust since near the photon sphere, where the motion is highly relativistic, non-zero radial pressure necessarily implies a flow of matter into the black hole and renders the solution non-static¹. While the isotropic TOV equations used in [1] imply that the radial pressure has negligible impact on the spacetime geometry for physical parameters relevant to galactic black holes, this result is not strictly justified for non-interacting dust. However, the validity of neglecting the pressure has been confirmed in a separate calculation that starts from non-isotropic pressure [3]. The assumption of isotropy can be viable in certain regions of the dark matter halo [4,5] and in certain scenarios such as self-interacting dark matter spikes [6]. While the numerical results and overall conclusions about observability of the dark matter spikes were qualitatively correct, the calculation was done in a frame in which the metric of the vacuum inside the spike was Schwarzschild. In reality, the effect is measured in the frame of an asymptotic observer and was therefore underestimated. There exists an overall redshift for the asymptotic observer which can increase the effect by anywhere from 3% to 40%, depending on the magnitude of the density of the spike. Details of the redshift calculation can be found in [3]. We also note a typographical error in Eqs.
( 30) and (31) of [1].The term (1 + r 0 /r) should be (1 + r/r 0 ).The numerical results were not affected. Introduction There is solid observational evidence from the spiral galaxy rotation curves and massluminosity ratios of elliptical galaxies that a dark matter (DM) halo encompasses every galaxy and fills the intergalactic medium.The shape of the DM density profile of this halo is less known, but could play a vital role in determining the geometry of spacetime near the galactic center.Multiple models exist for the spacetime metric around a static and spherically symmetric black hole with a DM halo based on the Newtonian approximation, including the Navarro-Frenk-White (NFW) profile (Navarro et al. 1996;Navarro et al. 1997) and Burkert-Salucci profile (Burkart 1995;Burkart & Salucci 2000).See also Xu et al. (2018), Jusufi et al. (2019), Xu et al. (2020), Jusufi et al. (2020), and Konoplya & Zhidenko (2022). It has been argued that the adiabatic growth of a black hole immersed in cold DM can lead to the formation of high density regions of DM known as "spikes" around supermassive (Quinlan et al. 1995;Gondolo & Silk 1999;Ullio et al. 2001) and intermediate mass (Bertone & Merritt 2005;Zhao & Silk 2005;Bertone 2006) black holes.The first DM spike model was described by Gondolo & Silk (1999), who proposed a power law density distribution for the DM.Sadeghian et al. (2013) included general relativistic corrections to the model of Gondolo and Silk (G-S) and found that the density distribution of the DM around Schwarzschild black holes would begin at around twice the horizon radius instead of four times as proposed initially by G-S.Sadeghian et al. (2013) also found that the peak density of the DM spike was 15 percent higher as compared to the Newtonian approximation used by G-S.This suggests that the DM spike may have important implications for observations. The relativistic corrections to the spacetime metric for a black hole surrounded by a DM spike was constructed in Xu et al. (2021) and Nampalliwar et al. (2021) starting from the power law density profile proposed by G-S.Xu et al. (2021) assume g tt = −g −1 rr and use perturbative approximations while Nampalliwar et al. (2021) calculate the metric components to leading order of spike density at the outside edge. The prospects of detecting the DM spike using gravitational waves have been investigated in Eda et al. (2013), Eda et al. (2015), Yue & Han (2018), Yue et al. (2019), Hannuksela et al. (2020), andKavanagh et al. (2020), which focus mainly on the waveform of extreme and/or intermediate mass ratio inspirals.The ringdown waveform and quasinormal modes (QNMs) of a black hole in a cold DM halo were studied in Zhang et al. (2021), Liu et al. (2021), Cardoso et al. (2022), andKonoplya (2021).In the present paper, we focus on the impact of DM spikes on the metric, ringdown waveforms, and QNMs of supermassive black holes, specifically those at the center of Sagittarius A* (Sgr A*) and M87.The hope is to discover signals detectable at least in principle.There may not exist a mechanism, such as extreme mass ratio inspiral or galaxy collision, to emit detectable gravitational waves from these two galaxies.However, this issue is not a deterrent since it is inevitable that some of the many galaxies in the universe have the right conditions to produce waves that can be detected by the current or next generation of gravitational wave experiments. 
Following previous work, we assume a power law density for the DM spike.The assumed power law density provides three parameters with which to work, namely, the power law exponent γ sp , as well as the radius R sp and spike density ρ sp DM at one of the spike boundaries normally taken to be the outside edge.These in turn determine the total mass of the spike.If one assumes a power law density profile, the pressure and metric components must be derived from the full Tolman-Oppenheimer-Volkoff (TOV) equations.The main features that emerge from our analysis are: • It turns out that the pressure is small enough to be neglected in the TOV equations.This allows us to obtain a self-consistent2 analytic expression for the metric components.• For Sgr A*, the parameters have to be pushed well beyond the accepted ranges in order to produce significant differences from the Schwarzschild ringdown waveform. • For M87, the parameters are less known, but there is an observational bound on the total mass within 50 kpc of the center, which in turn provides an upper bound on the spike mass.We show that there exist values for the spike parameters, consistent qualitatively with those of Sgr A* and producing a total spike mass within the bound for M87, that significantly enhances the differences from the Schwarzschild ringdown waveform in comparison to Sgr A*. • Assuming that the ratio of the DM spike mass grows roughly linearly with the black hole mass, the relative effect on the ringdown waveforms increases with total mass. • One might wonder about the impact of the regular mass, in the region near the black hole, on the ringdown waveform.The lowest estimate for the radius of the galactic bulge, surrounding Sgr A*, is approximately 2 kpc and the highest estimate for the bulge mass is 2 × 10 10 solar masses (see Zoccali & Valenti 2016).Using these values, we can find an upper bound for the average density of the bulge, which is approximately 4.0×10 −23 g/cm3 .This is an order of magnitude less than the average density of the spike, surrounding Sgr A*, in the region r < R sp .In addition, the dark matter density at the inner edge of the spike, i.e. near the black hole horizon, is approximately 10 19 times higher than the average spike density.As can be seen in Figs. 4 and 5 of this paper, the effective potential that determines the ringdown waveform drops rapidly to zero for large r.Therefore, in the region that produces the dominant effect on the ringdown waveform, one can safely ignore the bulge.We assume this is also true for M87, for which less is known about the mass distribution. We structure the paper as follows.In Sec. 2, we set up the problem by reviewing the relevant TOV equations and associated boundary conditions.We then solve for the pressure and metric assuming a power law density for the DM spike.In Sec. 3, we briefly review the wave equation for scalar field perturbations in the black hole background.Sec. 4 calculates the ringdown waveform and the lowest QNM for the multipole number l = 2 for the SgrA* DM spike, while Sec. 5 does the same for M87.We conclude in Sec.6 with a summary of the results. 
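The order-of-magnitude argument about the bulge in the last bullet, and the later conversions of spike parameters into black hole units, are simple enough to check numerically. The sketch below is an illustrative back-of-the-envelope script (cgs constants; variable names ours, not the authors'); it reproduces the quoted bulge mean density of roughly 4 × 10⁻²³ g cm⁻³ and the Sgr A* conversions used in Sec. 4.

import numpy as np

G, c = 6.674e-8, 2.998e10            # cgs units
MSUN, PC = 1.989e33, 3.086e18        # g, cm

def mean_density(mass_g, radius_cm):
    # average density of a uniform sphere of the given mass and radius
    return mass_g / (4.0 / 3.0 * np.pi * radius_cm**3)

# Galactic bulge around Sgr A*: M ~ 2e10 Msun inside ~2 kpc
print(mean_density(2e10 * MSUN, 2e3 * PC))          # ~4e-23 g cm^-3

# Sgr A* black hole scales and spike parameters (gamma_sp = 9/4 case)
M_BH = 4.1e6 * MSUN
r_BH = 2 * G * M_BH / c**2                          # horizon radius
rho_BH = mean_density(M_BH, r_BH)                   # mean density inside r_BH
print(0.91e3 * PC / r_BH)                           # R_sp ~ 2.3e9 r_BH
print(1.39e-24 / rho_BH)                            # rho_sp ~ 1.3e-27 rho_BH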
Solving the Tolman-Oppenheimer-Volkoff Equations We start with the most general 4-D spherically symmetric static metric (up to coordinate transformations) and assume a perfect fluid stress tensor for the DM spike This yields the TOV equations (Carroll 2019) in the spike region: We have three equations in four unknowns [µ(r), M (r), ρ(r), p(r)] so they need to be supplemented by a fourth equation.Normally this is taken to be the equation of state relating ρ to p.In the present case, we wish to assume a particular density profile for the DM spike, which provides the extra equation.There is no freedom left to specify the equation of state.We show, however, that one can assume the pressure is negligible when solving for µ(r) in Eq. ( 4).We now introduce the density profile for the DM spike.Given a black hole with a mass M BH at a galactic center surrounded by a DM halo with an initial power law density profile where γ is the power law index and ρ 0 and r 0 are the halo parameters, it has been shown (Gondolo & Silk 1999) that a DM spike will form adiabatically with a density profile where Here, ρ sp and R sp are the density and radius of the spike, respectively, at the outer edge.Instead of ρ sp and R sp , one can use ρ b and r b , which are the density and radius of the spike at its inner edge.In this paper, we will use the former.Using Eq. ( 8), one can show that α γ is related to the spike parameters according to We can substitute the DM spike density profile (7) into Eq.( 3) to solve for the metric or mass function M (r) in the spike region (r b ≤ r ≤ R sp ).The overall mass function at different regions can be summarized as (Nampalliwar et al. 2021) where is the mass function of the DM spike.M DM is the combined mass of the spike and the DM halo surrounding the spike within a radius r > R sp .The impact of this region on ringdown waveforms is negligible.See Sections 4 and 5 for more details.Here, we use geometrized unit system where c = G = 1.Note that the total mass of the spike, M sp total = M sp DM (R sp ), can be increased by increasing R sp , ρ sp , or both.The total mass is proportional to ρ sp and R 3 sp so that, according to Eq. ( 9), increasing the mass of the spike requires α γ to increase.In this paper, we increase M sp total by increasing ρ sp and keeping R sp fixed. Given the large variety of different parameters used to describe the spike and halo in the literature, we summarize our general framework as follows: In addition to the mass M BH of the black hole, four parameters are required.We take these to be the exponent γ sp , the location r b of the inner edge of the spike, the density ρ sp at the outer edge, and α γ .These are sufficient to determine all other spike parameters, including the location R sp of the outer edge via Eq.( 9) and the total mass of the spike via Eq.( 11).Experiment provides an upper bound on the total mass of the spike plus halo, but not on the other parameters. 4ext, we need to solve Eq. ( 5) for p(r).This is not possible analytically, but we have solved it numerically using the built-in Mathematica commands for solving differential equations.It turns out that the term 4πr 3 p(r) can be neglected compared to M (r) in Eq. ( 4).Using this approximation, Eq. ( 4) can be written as where a = We first take the case where γ sp = 7/3 (γ = 1). 5We can now integrate Eq. 
( 12) to get where we have used the change of variable y = r 1/3 .Here, y 0 is the real root of the equation y 3 −2(M BH +ay 2 −b) and y 1 and y 2 are the two complex conjugate roots.After integration, the final result for the metric function, f (r) = e µ(r) , is We have chosen the constant of integration, C, so that We also want to show that pressure is negligible in the spike region.Assuming p(r)/ρ(r) ≪ 1 and 4πr 3 p(r)/M (r) ≪ 1, one can rewrite Eq. ( 5) as where we have replaced ρ and M with the spike parameters.We then find the approximate pressure by integrating Eq. ( 15), For this approximation to be valid/consistent, the pressure from the above equation should satisfy the same conditions, i.e. p(r)/ρ(r) ≪ 1 and 4πr 3 p(r)/M (r) ≪ 1.The integration in Eq. ( 16) can be handled analytically by writing the integrand as a sum of terms with minimal denominators similar to what we do in Eq. ( 13).We plot the pressure, density, and the ratio of the two as a function of the radial coordinate, in Figure 1, to show p(r) ≪ ρ sp DM (r).In Figure 1, we also plot the DM spike pressure obtained numerically, using built-in Mathematica commands for differential equations, by solving Eq. ( 5) with no approximation.Our numerical and analytical solutions are more or less the same.In Figure 2, we plot 4πr 3 p(r)/M (r) as a function of the radial coordinate in the spike region to show that this term is also negligibly small.Therefore, the spike pressure can be ignored in the TOV equations.Figure 1: On the left, we plot the DM spike pressure p(r) obtained analytically using Eq. ( 16) in dashed blue and numerically using Eq. ( 5) in dotted green.For comparison, we include the DM spike density ρ sp DM (r) in solid red.We take γ sp = 7/3, r b = 2r BH , and use the Sgr A* data where R sp = 0.235 kpc and ρ sp = 6.7 × 10 −22 g cm −3 (≈ 8 times the expected value).On the right, for the same spike parameters, we plot pressure [from Eq. ( 16)] divided by the density of the DM spike to show that pressure stays negligible everywhere.All our variables are expressed in terms of black hole parameters (r BH and ρ BH defined in Sec.4.) 4) is valid.We take γ sp = 7/3, r b = 2r BH , and use the Sgr A* data where R sp = 0.235 kpc and ρ sp = 6.7×10 −22 g cm −3 (≈ 8 times the expected value).The radius r is in units of the black hole horizon radius. We can also consider the case where γ sp = 9/4 (γ = 0).We integrate Eq. ( 12) to get where we have used the change of variable y = r 1/4 .This integration can be handled analytically by writing the integrand as a sum of terms with minimal denominators similar to what we do in Eq. ( 13).The final result for the metric function, f (r) = e µ(r) , is exp 4 where y 0 and y 3 are the real roots of the equation y 4 − 2(M BH + ay 3 − b) and y 1 and y 2 are the two complex conjugate roots.We have chosen the constant of integration, C, so that f (r b ) = 1 − 2M BH /r b . 
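Because the displayed equations in this section did not survive text extraction, the following LaTeX block restates, in the notation used above and in geometrized units (G = c = 1), the standard equations being referred to: the static spherically symmetric metric, the TOV system, the pressure-neglected equation for μ(r), and a pure power-law spike profile with its enclosed mass. This is a schematic reconstruction for the reader's convenience; the exact prefactors and conventions of the paper's numbered equations (e.g. the Gondolo-Silk profile in Eqs. (7)-(9)) may differ, and the scalar-field potential quoted at the end anticipates Sec. 3.

% Schematic reconstruction (assumed forms, not the paper's exact equations)
ds^2 = -e^{\mu(r)}\,dt^2 + \left(1 - \frac{2M(r)}{r}\right)^{-1}\!dr^2 + r^2\,d\Omega^2 ,
\qquad G = c = 1 .

\frac{dM}{dr} = 4\pi r^2 \rho , \qquad
\frac{d\mu}{dr} = \frac{2\,[\,M(r) + 4\pi r^3 p\,]}{r\,[\,r - 2M(r)\,]} , \qquad
\frac{dp}{dr} = -\,\frac{(\rho + p)\,[\,M(r) + 4\pi r^3 p\,]}{r\,[\,r - 2M(r)\,]} .

% With 4\pi r^3 p \ll M(r), the equation integrated for the metric function becomes
\frac{d\mu}{dr} \simeq \frac{2M(r)}{r\,[\,r - 2M(r)\,]} , \qquad f(r) = e^{\mu(r)} .

% Pure power-law spike normalized at the outer edge (r_b \le r \le R_{sp}):
\rho^{\rm sp}_{\rm DM}(r) = \rho_{\rm sp}\left(\frac{R_{\rm sp}}{r}\right)^{\gamma_{\rm sp}} ,
\qquad
M^{\rm sp}_{\rm DM}(r) = \frac{4\pi\,\rho_{\rm sp} R_{\rm sp}^{\gamma_{\rm sp}}}{3-\gamma_{\rm sp}}
\left(r^{\,3-\gamma_{\rm sp}} - r_b^{\,3-\gamma_{\rm sp}}\right) ,
\qquad M(r) = M_{\rm BH} + M^{\rm sp}_{\rm DM}(r) .

% Scalar perturbations (Sec. 3), with \Phi = \frac{\psi(r)}{r}\,Y_{l}(\theta,\phi)\,e^{-i\omega t}
% and g(r) \equiv 1 - 2M(r)/r:
\frac{d^2\psi}{dr_*^2} + \left[\omega^2 - V(r)\right]\psi = 0 ,
\qquad
\frac{dr_*}{dr} = \frac{1}{\sqrt{f(r)\,g(r)}} ,
\qquad
V(r) = f(r)\,\frac{l(l+1)}{r^2} + \frac{1}{2r}\,\frac{d}{dr}\!\left[f(r)\,g(r)\right] .

In the Schwarzschild limit f = g = 1 - 2M_BH/r this potential reduces to the familiar Regge-Wheeler form for scalar perturbations, which is the consistency check the authors mention when validating their waveforms.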
Wave Equation We wish to investigate the ringdown waveform emitted from a black hole surrounded by a DM spike.For simplicity, we look at scalar perturbations with the assumption that the graviton modes will have similar behavior.This assumption is based on the similarity of the Regge-Wheeler potential for scalar and gravitational perturbations.See the explicit form of the potentials provided in, for example, Leaver (1985).A massless scalar field in the background of a black hole spacetime obeys the Klein-Gordon equation where g µν is the metric and g is its determinant.In a completely general spherically symmetric and spacetime with a line element we apply the separation of variables where Y l (θ, ϕ) are spherical harmonics with the multipole number l = 0, 1, 2, . . ., to obtain the QNM wave equation In the above equation, r * is the tortoise coordinate linked to the radial coordinate according to and is the Regge-Wheeler or QNM potential.Since the fundamental QNM of geometric perturbations in a black hole spacetime has the multipole number l = 2, in the rest of the paper, we will focus on scalar perturbations with l = 2. Sagittarius A* Supermassive Black Hole As it is pointed out by Nampalliwar et al. (2021), a realistic model supported by the observational data for the Sgr A* supermassive black hole at the center of the Milky Way galaxy leads us to the following information.The mass of this black hole is M BH = 4.1 × 10 6 M ⊙ .For γ sp = 9/4 (γ = 0), we have R sp ≈ 0.91 kpc and ρ sp ≈ 1.39 × 10 −24 g cm −3 .In terms of black hole parameters, R sp ≈ 2.32 × 10 9 r BH and ρ sp ≈ 1.26 × 10 −27 ρ BH , where is the horizon radius and is the mass density of the black hole.In the case of γ sp = 7/3 (γ = 1), we have R sp ≈ 0.235 kpc and ρ sp ≈ 8.00 × 10 −23 g cm −3 .In terms of black hole parameters, R sp ≈ 6.00 × 10 8 r BH and ρ sp ≈ 7.27 × 10 −26 ρ BH .We also present this information in the table below where we include the values for α γ and the total mass of the the spike, M sp total . Table I: DM Spike surrounding Sgr A* Supermassive Black Hole 2021) also obtain upper bounds on ρ sp using the conditions that have to be satisfied everywhere outside the black hole horizon.These conditions are 1. the metric determinant is always negative 2. g ϕϕ is always greater than zero, and 3. g rr remains finite.For r b = 2r BH , these upper bounds are calculated numerically in Nampalliwar et al. (2021) for the two cases in Table I.These bounds are: γ sp = 7/3, R sp = 0.235 kpc : ρ sp < 2.37 × 10 −18 g cm −3 . (27) To compare the metric function f (r) obtained in this paper with the one provided by Nampalliwar et al. (2021), we plot both functions [Eq.( 16) of Nampalliwar et al. (2021) and our function provided in Eq. ( 14)] in Figure 3, where all parameters are expressed in units of black hole parameters.The two functions differ significantly from each other for larger values of ρ sp .This is presumably related to the fact that the authors in Nampalliwar et al. (2021) derive an approximate metric function from Eq. ( 12), whereas ours is exact.In solid blue, we plot f (r) given in Eq. ( 14).In dashed green, we plot the function f (r) given in Eq. ( 16) of Nampalliwar et al. (2021).For comparison, we include the Schwarzschild metric function in dotted red. To see how the DM spike influences the shape of the QNM potential given in Eq. 
( 24), we plot the potential for the case of γ sp = 7/3 in Figure 4.A noticeable difference begins to appear when ρ sp is roughly 840 times bigger than the expected value presented in Table I.We also plot the the potential for 6000 times bigger than the expected value of ρ sp .All these density values are far less than the upper bound presented in Eq. ( 27). In Figure 5, we plot the potential (24) for the case of γ sp = 9/4 .A noticeable difference begins to appear when ρ sp is roughly 8400 times bigger than the expected value presented in Table I.We also plot the the potential for 84000 times bigger than the expected value of ρ sp .As one can see, higher values of ρ sp is required to observe noticeable change in the potential for γ sp = 9/4 in comparison to the case of γ sp = 7/3.All these ρ sp values are still less than the upper bound presented in Eq. ( 28).To generate the ringdown waveform, we numerically solve the time-dependent wave equation ( 22) using the initial data where we use σ = 1 r BH , r * = −40 r BH , and A = 10 r −2 BH .We choose the observer to be located at r * = 90 r BH .In all the cases studied here, the height of the QNM potential at r * = 90 r BH , which is inside the spike region, is small (⪅ 10 −3 r −2 BH ) compared to the peak.Therefore, we do not expect a significant difference in the results if the observer is further away. To carry out the calculations, we use the built-in Mathematica commands for solving partial differential equations.We check the accuracy of our results by computing the waveforms for Schwarzschild and comparing them to known results.The resulting ringdown waveforms for the potentials shown in Figure 4 are plotted in Figures 6 and 7. We also extract the first (n = 0) QNM frequency from these waveforms using the Prony method (de Prony 1987), a numerical procedure that fits N data points by as many purely damped exponentials as necessary.To test the Prony method, we first calculate the QNM of scalar perturbations for the Schwarzschild case for l = 2.The result is 0.967442−0.193137i,which is in good agreement with the value 0.967284 − 0.193532i found using the continued fraction method (Daghigh et al. 2020).We find 0.966059 − 0.193151i for the case presented in Figure 6 and 0.956060 − 0.194191i for the case in Figure 7.It is clear that as we increase ρ sp (and consequently M sp total ), the real part (oscillation frequency) of the QNM decreases and the imaginary part (damping) increases. M87 Supermassive Black Hole In this section, we use the data provided by Lacroix et al. (2017).The authors use M BH = 6.4 × 10 9 M ⊙ (r BH = 6 × 10 −4 pc) and fix the initial halo power law parameter r 0 to be 20 kpc (as for the Milky Way).They assume α γ = 0.1.The authors then determine ρ 0 ≈ 2.5 GeV cm −3 for γ = 1 (γ sp = 7/4) based on the observational data provided in Merritt & Tremblay (1993).With these values, we can use Eq. ( 8) to evaluate R sp ≈ 0.219 kpc and ρ sp ≈ 4.10 × 10 −22 g cm −3 .In terms of black hole parameters, we have R sp ≈ 3.59 × 10 5 r BH and ρ sp = 9.1 × 10 −19 ρ BH .The DM density profile in different regions around the M87 black hole is summarized in Lacroix et al. (2017) It is important to note that in Nampalliwar et al. (2021) α γ ≈ 1.94.If we choose this value for α γ , together with the r 0 and ρ 0 values mentioned above, we can use Eqs.( 8) and ( 11) to calculate the total mass of the DM spike to be M sp total = 4.54 × 10 11 M ⊙ .We can also use Eq. 
( 30) to calculate the mass of the DM halo outside the spike region (r ≥ R sp ) to obtain Adding the masses of the DM spike and black hole to the mass of the halo, we find the total mass of 3.94 × 10 12 M ⊙ .This mass is within an acceptable range based on the observational data that estimate the total mass of M87 within 50 kpc radius to be 6 × 10 12 M ⊙ (Merritt & Tremblay 1993).Also note that the spike mass, for the α γ ≈ 1.94 case, is 100 times bigger than the mass of the M87 black hole.A similar DM spike to black hole mass ratio holds for the values presented in Nampalliwar et al. (2021) for the Sgr A* case.Therefore, to have a sensible comparison between the Sgr A* and M87 cases, one should use the same α γ , and consequently the same spike to black hole mass ratio, for both.We summarize the parameters for M87 in Table II, where we also include the parameters for α γ = 1.94.To the best of our knowledge there is no observational bound on α γ .Both values that we use in Table II are consistent with the data provided in Merritt & Tremblay (1993).To compare the shape of the black hole QNM potential in the presence of the DM spike with the Schwarzschild case, in Figure 8, we plot the potential for the case of α γ = 0.1.A noticeable difference begins to appear when ρ sp is roughly 840 times bigger than the expected value presented in Table II.We also plot the potential for 6000 times bigger density than the expected value.Figure 9: Scalar QNM potential as a function of radial coordinate for l = 2 and γ sp = 7/3 for the M87 black hole surrounded by a DM spike.In dashed red, ρ sp = 1.8 × 10 −21 g cm −3 (≈ 84 times the expected value) and in solid blue, ρ sp = 6.8 × 10 −21 g cm −3 (≈ 320 times the expected value).In both cases, R sp = 4.26 kpc.For comparison, we include the Schwarzschild potential in dotted green.All our variables are expressed in terms of black hole parameters. As it was discussed, for a sensible comparison between the Sgr A* and M87 cases, one should use the same value for α γ .Therefore, in Figure 9, we plot a similar graph to Figure 4 for the M87 case when α γ = 1.94.A noticeable difference in the potential begins to appear when ρ sp is roughly 84 times bigger than the expected value presented in Table II.Note that in Figure 4, a noticeable difference in the potential appears only when ρ sp is roughly 840 times bigger.In Figure 9, we also plot the potential for ρ sp with a value of 320 times bigger than the expected value, which is more or less has the same impact on the potential as the 6000 times bigger ρ sp in the Sgr A* case.This shows it is easier to detect the DM spike in M87 using the ringdown waveform in comparison to the Sgr A* black hole assuming the spike to black hole mass ratio is roughly constant for both galaxies.We generate the ringdown waveforms for the potentials shown in Figure 9.These waveforms are plotted in Figures 10 and 11.We use the Prony method (de Prony 1987) to extract the first (n = 0) QNM frequency from the waveforms shown in Figures 10 and 11.We find 0.964669 − 0.193195i for the case presented in Figure 10 and 0.955183 − 0.194375i for the case in Figure 11.It is clear that as we increase ρ sp , the real part (oscillation frequency) of the QNM decreases and the imaginary part (damping) increases. 
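The QNM extraction described above, fitting the late-time ringdown by damped exponentials via the Prony method, can be illustrated compactly. The sketch below is a generic least-squares Prony implementation applied to a synthetic damped sinusoid, not the authors' Mathematica code; the sampling step, model order and frequency convention are illustrative choices.

import numpy as np

def prony_frequencies(x, dt, p):
    # Fit x[n] ~ sum_k A_k z_k^n and return complex frequencies omega_k,
    # using the convention psi(t) ~ exp(-i omega t), i.e. omega = i ln(z)/dt.
    N = len(x)
    # linear-prediction system: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p])
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))          # poles of the fitted model
    return 1j * np.log(z) / dt

# synthetic ringdown carrying the Schwarzschild l = 2 scalar fundamental mode
# (omega ~ 0.9673 - 0.1935i in units of 1/M_BH); noise-free for clarity
dt, n = 0.1, 400
t = dt * np.arange(n)
omega_true = 0.9673 - 0.1935j
signal = np.real(np.exp(-1j * omega_true * t))

# model order 2 suffices for a single real damped cosine; real analyses use more terms
omega_fit = prony_frequencies(signal, dt, p=2)
best = omega_fit[np.argmin(np.abs(omega_fit - omega_true))]
print(best)   # recovers ~0.9673 - 0.1935i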
Summary and Conclusion We have used the TOV equations to construct the spacetime metric representing a black hole surrounded by a perfect fluid DM spike.Following previous work, we assumed a power law density for the DM spike, which was therefore completely specified by three independent parameters: the power law exponent γ sp , the radius R sp and spike density ρ sp DM , the latter two chosen to lie at the outer edge of the spike.These in turn determine the total mass of the spike.Given the black hole mass, the TOV equations then determined uniquely the metric of the spacetime containing the spike.With this metric, we were able to calculate the ringdown waveform of the gravitational waves associated with black hole perturbations, as well as the real and imaginary parts of the lowest damping QNM. The main features that emerge from our analysis were: • The pressure inside the spike is negligible in all the cases we studied. • The presence of the DM spike modifies the ringdown waveform.More specifically, it decreases the real part (oscillation frequency) of the least damped QNM and increases its imaginary part (damping). • For Sgr A* the parameters have to be pushed well beyond the accepted ranges in order to produce significant differences from Schwarzschild ringdown waveform.The prospects of detection are therefore remote. • For M87, the parameters are less known, but there is an observational bound on the total mass within 50 kpc of the center, which in turn provides an upper bound on the spike mass.We find that while the departures from Schwarzschild for the ringdown waveforms are significantly greater for M87 than for Sgr A*, the spike mass needs to be an order of magnitude or two above the proposed upper bound in order to have hopes of detecting it with current gravitational wave technology. • Our results also suggest that if the ratio of DM spike mass to black hole mass is roughly constant for galactic black holes, greater mass black holes require smaller spike densities in order to yield potentially observable signals. We conclude that a significant gravitational wave detection associated with perturbations of a supermassive black hole more massive than the M87 black hole might provide the means to detect the presence of a DM spike or at least put a model dependent bound on its parameters.This suggests that the effects of DM spikes on the ringdown waveforms of supermassive black holes are worthy of further study. In this paper, we focused on static spherically symmetric black holes.It would be interesting to see what happens if spin is introduced.Ferrer et al. (2017) argued that the spike will be enhanced by the presence of spin.We therefore expect that the inclusion of the black hole spin will improve our results in terms of observational viability.This is currently under investigation. Figure 2 : Figure 2:We plot 4πr 3 p(r)/M (r) in the DM spike region to show that neglecting pressure in Eq. (4) is Figure 4 : Figure4: Scalar QNM potential as a function of radial coordinate for l = 2 and γ sp = 7/3 for the Sgr A* black hole surrounded by a DM spike.In dashed red, ρ sp = 6.7 × 10 −20 g cm −3 (≈ 840 times the expected value) and in solid blue, ρ sp = 4.7 × 10 −19 g cm −3 (≈ 6000 times the expected value).In both cases, r b = 2r BH and R sp = 0.235 kpc.For comparison, we include the Schwarzschild potential in dotted green.All our variables are expressed in terms of black hole parameters. 
Figure 5 : Figure5: Scalar QNM potential as a function of radial coordinate for l = 2 and γ sp = 9/4 for the Sgr A* black hole surrounded by a DM spike.In dashed red, ρ sp = 1.2 × 10 −20 g cm −3 (≈ 8400 times the expected value) and in solid blue, ρ sp = 1.2 × 10 −19 g cm −3 (≈ 84000 times the expected value).In both cases, r b = 2r BH and R sp = 0.91 kpc.For comparison, we include the Schwarzschild potential in dotted green.All our variables are expressed in terms of black hole parameters. Figure 6 : Figure 6: In solid blue, ringdown waveform Ψ (left) and ln |Ψ| (right) as a function of time for l = 2, γ sp = 7/3, R sp = 0.235 kpc, and ρ sp = 6.7 × 10 −20 g cm −3 (≈ 840 times the expected value) for the Sgr A* black hole surrounded by a DM spike.For comparison, we include the Schwarzschild ringdown waveform in dotted red.All our variables are expressed in terms of black hole parameters. Figure 7 : Figure 7: In solid blue, ringdown waveform Ψ (left) and ln |Ψ| (right) as a function of time for l = 2, γ sp = 7/3, R sp = 0.235 kpc, and ρ sp = 4.7 × 10 −19 g cm −3 (≈ 6000 times the expected value) for the Sgr A* black hole surrounded by a DM spike.For comparison, we include the Schwarzschild ringdown waveform in dotted red.All our variables are expressed in terms of black hole parameters. Figure 8 : Figure8: Scalar QNM potential as a function of radial coordinate for l = 2 and γ sp = 7/3 for the M87 black hole surrounded by a DM spike.In dashed red, ρ sp = 3.4 × 10 −19 g cm −3 (≈ 840 times the expected value) and in solid blue, ρ sp = 2.4 × 10 −18 g cm −3 (≈ 6000 times the expected value).In both cases, r b = 2r BH and R sp = 0.219 kpc.For comparison, we include the Schwarzschild potential in dotted green.All our variables are expressed in terms of black hole parameters. Figure 10 : Figure 10: In solid blue, ringdown waveform Ψ (left) and ln |Ψ| (right) as a function of time for l = 2, γ sp = 7/3, R sp = 4.26 kpc, and ρ sp = 1.8 × 10 −21 g cm −3 (≈ 84 times the expected value) for the M87 black hole surrounded by a DM spike.For comparison, we include the Schwarzschild ringdown waveform in dotted red.All our variables are expressed in terms of black hole parameters. Figure 11 : Figure 11: In solid blue, ringdown waveform Ψ (left) and ln |Ψ| (right) as a function of time for l = 2, γ sp = 7/3, R sp = 4.26 kpc, and ρ sp = 6.8 × 10 −21 g cm −3 (≈ 320 times the expected value) for the M87 black hole surrounded by a DM spike.For comparison, we include the Schwarzschild ringdown waveform in dotted red.All our variables are expressed in terms of black hole parameters.
8,426.6
2022-06-08T00:00:00.000
[ "Physics" ]
Activation of AMPK inhibits cervical cancer cell growth through AKT/FOXO3a/FOXM1 signaling cascade Background Although advanced-stage cervical cancer can benefit from current treatments, approximately 30% patients may fail after definitive treatment eventually. Therefore, exploring alternative molecular therapeutic approaches is imperatively needed for this disease. We have recently shown that activation of AMP-activated protein kinase (AMPK), a metabolic sensor, hampers cervical cancer cell growth through blocking the Wnt/β-catenin signaling activity. Here, we report that activated AMPK (p-AMPK) also inhibits cervical cancer cell growth by counteracting FOXM1 function. Methods Effect of the activation of AMPK on FOXM1 expression was examined by hypoxia and glucose deprivation, as well as pharmacological AMPK activators such as A23187, AICAR and metformin. RT Q-PCR and Western blot analysis were employed to investigate the activities of AMPK, FOXM1 and AKT/FOXO3a signaling. Results Consistent with our previous findings, the activation of AMPK by either AMPK activators such as AICAR, A23187, metformin, glucose deprivation or hypoxia significantly inhibited the cervical cancer cell growth. Importantly, we found that activated AMPK activity was concomitantly associated with the reduction of both the mRNA and protein levels of FOXM1. Mechanistically, we showed that activated AMPK was able to reduce AKT mediated phosphorylation of p-FOXO3a (Ser253). Interestingly, activated AMPK could not cause any significant changes in FOXM1 in cervical cancer cells in which endogenous FOXO3a levels were knocked down using siRNAs, suggesting that FOXO3a is involved in the suppression of FOXM1. Conclusion Taken together, our results suggest the activated AMPK impedes cervical cancer cell growth through reducing the expression of FOXM1. Background Cervical cancer results from uncontrolled growth of malignant cells started within the uterine cervix and is one of the most common malignancies in women worldwide [1][2][3]. Although this disease is almost preventable with routine genetic screening and vaccination, more than 80% of cervical cancers with a majority in the advanced stage are currently found in developing countries including China, leading to a high risk of recurrence and poor survival [2,4]. Thus, there is a compelling need to explore novel therapeutic interventions for this disease. Emerging evidence suggests that targeting cancer cell metabolism is a promising therapeutic approach in human cancers. AMP-activated protein kinase (AMPK) is a known cellular metabolic sensor and plays an important role in the control of energy homeostasis in response to external stresses [5][6][7][8]. Recent studies have documented that pharmacological activation of AMPK is able to block cancer cell growth in various human cancers [8][9][10][11]. Indeed, we have previously reported that pharmaceutical AMPK activators such as AICAR (ATP-dependent) and A23187 (ATP-independent) could suppress cervical cancer cell growth in the presence or absence of LKB1, an upstream kinase of AMPK [10]. We also proposed mechanistic evidence showing that metformin, AICAR and A23187 suppress cervical cancer cell growth through reducing DVL3, a positive effector of Wnt/β-catenin signaling cascade which has been shown to be constitutively active during cervical cancer development [12]. Yet, it is still believed that there are other molecular mechanisms by which these pharmaceutical AMPK activators suppress cancer cell growth. 
The understanding of these mechanisms will assist in exploring better therapeutic regimes when using these drugs. Forkhead Box M1 (FOXM1) is a member of the Forkhead Box transcription factors which is essential for cell proliferation and apoptosis in the development and function of many organs [13][14][15][16][17]. We previously reported that aberrant upregulation of FOXM1 is associated with the progression and development of human cervical squamous cell carcinoma (SCC) [18]. Biochemical and functional studies confirmed that FOXM1 is critically involved in cervical cancer cell growth through upregulating cyclin B1, cyclin D1 and cdc25B and downregulating p27 and p21 expressions. These findings suggest that FOXM1 plays a vital role in cervical cancer cell growth and oncogenesis. In this study, we reported that the activated AMPK inhibits the cell growth by reducing FOXM1 expression in human cervical cancer cells upon treatments with hypoxia, glucose deprivation and pharmaceutical AMPK activators. We provided both biochemical and functional evidence to support our findings that the repression of FOXM1 expression of AMPK is dependent on the AKT/FOXO3a/FOXM1 signaling cascade. Cell lines and reagents Cervical cancer cell lines HeLa, CaSki, C33A and SiHa (American Type Culture Collection, Rockville, Md., USA) (cell line authentication was done by in-house STR DNA profiling analysis) were employed in this study. They were maintained in Dulbecco's Modified Eagle Medium (DMEM) (Invitrogen, Carlsbad, CA) supplemented with 10% (v/v) fetal bovine serum (Gibco), 100 units/ml penicillin/streptomycin (Gibco) at 37°C in an incubator with humidified atmosphere of 5% CO 2 and 95% air. AMPK activators AICAR, A23187 and metformin and AKT inhibitor LY294002 were obtained from Tocris Bioscience (Bristol, UK). FOXM1 inhibitor Thiostrepton was purchased from Calbiochem (La Jolla, CA, USA). Plasmids and cell transfection To study the effects of enforced FOXM1 expression, the FOXM1c-expressing plasmid pcDNA3-FOXM1c was used because the c isoform has higher transactivating activity and is expressed dominantly in cells as well as tissues. Whereas, the pcDNA3 empty vector was used in mock transfections as control. Besides, the vector-based shRNA plasmid pTER-FOXM1 was used to knockdown endogenous FOXM1. All of these plasmids had been described previously [18]. As controls in knockdown assays, the p-super GFP and pcDNA3 vectors were used in mock transactions. To knockdown human FOXO3a, the TriFECTa RNAi Kit which contains three siRNAs targeting human FOXO3a was purchased from IDT (Integrated DNA Technologies, Inc., Iowa, USA). Cell transfection was carried out using LipofectAMINETM 2000 (Invitrogen) according to the manufacturer's instructions. Expression patterns were analyzed by Western blotting. The parental vector pEGFP-C1 was used as empty vector control. Cell proliferation assay Cell proliferation kit (XTT) (Roche, Basel, Switzerland) was used to measure cell viability according to the manufacturer's protocol. Three independent experiments were performed in triplicates. RNA extraction and quantitative reverse transcriptase-PCR (Q-PCR) According to the instruction of the manufacturer, total RNA was extracted using TRIzol reagent (Invitrogen). Complementary DNA (cDNA) was subsequently synthesized using a reverse transcription reagent kit (Applied Biosystems, Foster City). 
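Ahead of the detection details in the next paragraph, the quantification that follows from this workflow (Taqman Ct values for FOXM1 normalized to the 18S rRNA internal control, with treated and untreated groups compared by Student's t test) is conventionally summarized by the ΔΔCt method. The sketch below is a generic illustration under those assumptions; the Ct values are made-up placeholders, not data from this study.

import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-(ddCt): target gene normalized to the reference gene, relative to control
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)

# hypothetical triplicate Ct values (FOXM1 vs 18S rRNA), untreated vs drug-treated
ctrl = relative_expression([24.1, 24.3, 24.0], [9.8, 9.9, 9.7],
                           [24.1, 24.3, 24.0], [9.8, 9.9, 9.7])
treated = relative_expression([26.0, 26.3, 25.9], [9.9, 9.8, 9.8],
                              [24.1, 24.3, 24.0], [9.8, 9.9, 9.7])
t, p = stats.ttest_ind(treated, ctrl)   # Student's t test, as in the Methods
print(ctrl.mean(), treated.mean(), p)   # treated fold-change falls well below control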
The expression level of FOXM1 was then evaluated by Q-PCR in an ABI PRISM™ 7500 system (Applied Biosystems) using TaqMan® Gene Expression Assays for human FOXM1 (Assay ID: Hs00153543_m1). Human 18S rRNA (Assay ID: Hs99999901_m1) was used as an internal control. Western blot analysis Proteins in cell lysates were separated by 10% SDS-PAGE and transferred to polyvinylidene difluoride (PVDF) membranes. The membranes were blocked with 5% skimmed milk and subsequently probed overnight at 4°C with primary antibodies specific for p-AMPKα, AMPKα, p-AKT, AKT, p-FOXO3a, FOXO3a (Cell Signaling, Beverly, MA, USA), FOXM1 (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) and β-actin (Sigma-Aldrich, St. Louis, MO, USA), and then incubated with horseradish peroxidase-conjugated goat anti-rabbit or anti-mouse secondary antibody (Amersham, Uppsala, Sweden). Immunodetection was performed with enhanced chemiluminescent reagent solution (Amersham ECL) and visualized using medical X-ray film. Data analysis Student's t-test was used for data analysis. All data are expressed as mean ± SEM. A P-value of less than 0.05 was considered significant. Results Increased AMPK activity inhibits cervical cancer cell growth by suppressing FOXM1 expression Consistent with our previous findings [12], this study also showed that the growth of cervical cancer cell lines such as CaSki and SiHa was significantly inhibited by the AMPK activator metformin in a time- and dose-dependent manner (Figure 1A). Similarly, other AMPK activators such as AICAR and A23187 displayed a remarkable inhibitory effect on cervical cancer cells (data not shown), confirming that activation of AMPK is able to reduce cervical cancer cell growth. As FOXM1 is a master regulator of cancer cell growth, it is of interest to examine whether increased AMPK activity has any functional impact on FOXM1 in cervical cancer oncogenesis. Upon treatment of the cervical cancer cell lines CaSki, C33A and HeLa with AICAR (1 mM), A23187 (2 μM), metformin (25 mM), glucose deprivation or hypoxia, we found that FOXM1 expression was drastically decreased while AMPK activity [p-AMPK (Thr172)] was concomitantly elevated (Figure 1B and Figure 2). Interestingly, Q-PCR analysis also demonstrated that the mRNA level of FOXM1 in C33A and SiHa cells was remarkably reduced upon treatment with metformin (25 mM) or A23187 (2 μM) in a time-dependent manner (Figure 1C). This finding indicates that AMPK activated by these pharmaceutical activators can suppress FOXM1 expression at both the protein and mRNA levels. Activated AMPK represses FOXM1 expression through blocking the AKT/FOXO3a signaling pathway FOXO3a is well known to be one of the FOXO transcription factors that function downstream of PI3K-PTEN-AKT (PKB) signaling in modulating cell growth [19]. Along this signaling cascade, FOXO3a is a negative regulator of FOXM1 expression [20]. To investigate whether the suppressive effect of AMPK on FOXM1 is mediated via FOXO3a, we first examined the extent of FOXO3a dephosphorylation induced by metformin. Upon treatment of the cervical cancer cell line C33A with metformin (25 mM), not only FOXM1 expression but also the phosphorylation of AKT and the AKT-specific phosphorylation of FOXO3a (Ser253) diminished remarkably (Figure 3A). We then examined whether the PI3K/AKT inhibitor LY294002 could reduce FOXM1 expression in cervical cancer cells. 
As expected, C33A cells exhibited decreased intensity of AKT-specific phosphorylation of FOXO3a (Ser253) as well as FOXM1 expression upon the treatment of LY294002 (10 μM) ( Figure 3B). To further assess whether FOXO3a is primarily involved in the reduction of FOXM1 induced by AMPK activation but not an off-target pharmaceutical effect, siRNA-based knockdown of FOXO3a in C33A cells was carried out. Western blot analysis revealed that cervical cancer cells with depletion of endogenous FOXO3a did not show altered FOXM1 expression even when AMPK was activated subsequently by metformin (25 mM) ( Figure 3C). Taken together, our data support that inhibition of FOXM1 by AMPK activation is attributed to the repression on AKT and its downstream, AKT-specific phosphorylation of FOXO3a (Ser253) in cervical cancer cells. Ectopic expression of FOXM1 rescues AMPK-mediated cell growth inhibition Given that activation of AMPK leads to growth inhibition of cervical cancer cells through reduction of both the mRNA and protein levels of FOXM1, we sought to determine whether enforced expression of exogenous FOXM1 could counteract the AMPK-induced suppressive effect. C33A and SiHa cells were transiently transfected with FOXM1c-expressing plasmid and treated with metformin (20 mM). Consistent with our previous findings [18], XTT cell proliferation analysis showed that enforced expression of FOXM1c significantly promoted cell proliferation in both C33A (P = 0.00007) and SiHa (P = 0.0004) cells as compared with the vector control ( Figure 4A). Importantly, C33A and SiHa cells with ectopic expression of FOXM1c could significantly reduce the effect of AMPK-mediated cell proliferation inhibition as compared with their vector controls upon treatment of metformin (20 mM) ( Figure 4A). Such counteracting effect of ectopic FOXM1c was particularly evident in SiHa cells ( Figure 4A). Indeed, Western blot analysis confirmed that there was no reduction in the expression of FOXM1 in FOXM1c-transfected SiHa cells upon treatment of metformin (25 mM) for 24 hrs ( Figure 4B). Collectively, these findings confirm that activation of AMPK by hypoxia and glucose deprivation, as well as pharmacological AMPK activators inhibits cervical cancer cell growth, and this effect is dependent on the endogenous expression level of FOXM1. FOXM1 acts as an AMPK downstream effector Previous experiments have demonstrated that reduction of FOXM1 is a common scenario when AMPK is activated in cervical cancer cells. As FOXM1 is a key transcription factor, we sought to determine whether alteration of FOXM1 levels causes a feedback control on AMPK activity. To this end, we treated C33A cells with the FOXM1 specific inhibitor thiostrepton to suppress FOXM1 expression. Upon treatment of thiostrepton (5 μM), FOXM1 expression was significantly suppressed, whereas the expression of p-AMPKα (Thr172) was unchanged ( Figure 4C). Similarly, knockdown of endogenous FOXM1 using shRNAs did not reveal any discernible change on the expression level of p-AMPKα (Thr172) in C33A cells ( Figure 4D). Taken together, p-AMPKα activity per se is not altered by FOXM1 suppression induced by thiostrepton treatment or shRNA knockdown, implying that AMPK is acting upstream of FOXM1 and there is no feedback loop. Our analysis strongly supports that increased AMPK activity down-regulates FOXM1 through the AKT/FOXO3a/FOXM1 signaling cascade ( Figure 5). 
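The FOXM1 mRNA reductions reported above were measured by TaqMan Q-PCR with 18S rRNA as the internal control, but the quantification formula is not spelled out in the text. A common choice for such relative quantification is the 2^(-ΔΔCt) method; the short Python sketch below illustrates it under that assumption, with purely hypothetical Ct values.

```python
# Minimal sketch of relative quantification by the 2^(-ddCt) method.
# Assumption: the paper does not state its quantification formula, and the
# Ct values below are hypothetical, for illustration only.

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold change of a target gene (e.g. FOXM1) in treated vs control cells,
    normalized to a reference gene (e.g. 18S rRNA)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: FOXM1 Ct rises by two cycles after metformin while
# 18S rRNA stays flat, giving a four-fold reduction in FOXM1 mRNA.
print(fold_change_ddct(26.0, 12.0, 24.0, 12.0))  # 0.25
```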
Discussion Recent studies have suggested that targeting cancer cell metabolism is an alternative therapeutic approach in cancer treatment. AMPK is a pivotal energy sensor governing normal and cancer cell metabolism. Our previous research has shown that pharmaceutical AMPK activators are able to repress cervical cancer cell growth through targeting DVL3 in the Wnt/β-catenin signaling pathway [12]. In this study, we report another molecular mechanism by which AMPK can retard cervical cancer cell growth: inhibition of FOXM1 function via AMPK/AKT/FOXO3a signaling. We demonstrated that AMPK activated by either micro-environmental stresses or pharmaceutical AMPK activators could reduce FOXM1 expression through blocking the AKT/FOXO3a signaling pathway, which in turn impaired cervical cancer cell growth. The Forkhead box transcription factor FOXM1 regulates a number of key cell cycle regulators that control the G1-to-S and G2-to-M transitions [21][22][23][24][25][26]. Accumulating evidence has shown that upregulation of FOXM1 is often involved in the development of various human cancers [27][28][29][30][31][32]. We previously reported that there is a progressive increase in FOXM1 level during the progression of human cervical cancer [18]. The inhibition of FOXM1 by genetic or pharmaceutical approaches significantly impairs tumor growth of this cancer in vitro and in vivo [18,25,27,33], suggesting that targeting FOXM1 is an appealing potential approach for anticancer therapeutics. On the other hand, we and others have shown that activation of AMPK is able to inhibit the growth of various human cancers including cervical cancer [10,[34][35][36]. The pharmacological activation of AMPK using AICAR or metformin has been shown to inhibit cell growth and induce apoptosis in a wide spectrum of cancer cells through modulation of p53 [37], p27 [38,39], or p21 [18,40], or of DVL3 in Wnt/β-catenin signaling in cervical cancer [12]. Herein, we demonstrated that the activation of AMPK by various AMPK activators or by hypoxia and glucose deprivation stresses induces a remarkable reduction of FOXM1, which in turn leads to a marked decrease in cervical cancer cell growth in both HPV-positive (CaSki, HeLa and SiHa) and HPV-negative (C33A) cell lines. On the other hand, ectopic expression of FOXM1c could counteract the suppressive effect of activated AMPK. These findings indicate that FOXM1 is a key oncogenic factor associated with cervical cancer cell growth, while activated AMPK inhibits cervical cancer cell growth through downregulation of endogenous FOXM1. In fact, we demonstrated that the reduction of FOXM1 occurred at both the mRNA and protein levels in cervical cancer cells. This is suggestive of a transcriptional suppression of FOXM1 by its upstream effectors. Previous studies have reported that FOXM1 is transcriptionally suppressed by FOXO3a, which is a critical downstream effector of the PI3K/AKT/FOXO signaling pathway [19,20]. For example, it has been reported that FOXO3a represses estrogen receptor α (ERα) activity in breast cancer cells through an alternative mechanism by which FOXO3a interacts with and downregulates the expression of FOXM1 [29,41]. This evidence implies that the reduction of FOXM1 at both the mRNA and protein levels is due to the presence of FOXO3a in cervical cancer cells. In fact, our data using siRNA-mediated FOXO3a knockdown showed that FOXO3a expression is required for FOXM1 reduction upon activation of AMPK. 
How does activated AMPK lead to FOXO3a accumulating in the nucleus and blocking the transcription of FOXM1 mRNA? FOXO3a, which belongs to class O of the Forkhead/winged helix box (FOXO) transcription factors, is a key tumor suppressor involved in different cellular processes [42]. FOXO3a is modified by phosphorylation, acetylation and ubiquitination, which in turn affect its nuclear/cytoplasmic shuttling, transcriptional activity and stability [43][44][45][46]. It is known that PI3K/AKT signaling is the main regulatory pathway of FOXO3a [44,47,48]. When PI3K/AKT signaling is activated, FOXO3a is not only phosphorylated at the Thr32, Ser253 and Ser315 residues and thereby inactivated, but is also exported from the nucleus to the cytoplasm, where it is ubiquitinated and subjected to proteasome-dependent degradation [43,48]. Therefore, nuclear FOXO3a functions as a transcriptional regulator, whereas cytoplasmic FOXO3a is considered inactive [46]. On the other hand, AKT is a signaling kinase known to be inactivated by activated AMPK [49,50]. In our study, treatment with either an AMPK activator (metformin) or a PI3K/AKT inhibitor (LY294002) showed significant inhibition of p-AKT and a remarkable reduction of p-FOXO3a (Ser253), an AKT-specific phosphorylation site, suggesting that the AKT-mediated suppression of FOXO3a is relieved. As a result, FOXO3a becomes more nuclear-localized and active, and inhibits FOXM1 mRNA expression in cervical cancer cells. As described above, AMPK activation commonly inhibits FOXM1 expression in cervical cancer cells. However, whether there exists a feedback loop on the activity of AMPK is still unknown. To test this notion, cervical cancer cells were treated with the FOXM1 inhibitor thiostrepton to investigate the effect on AMPK activation. The results showed that thiostrepton treatment only reduced the expression of FOXM1 but not the activity of AMPK. In addition, depletion of endogenous FOXM1 using shRNAs gave findings similar to those obtained with thiostrepton treatment, implying that FOXM1 acts downstream of AMPK without any feedback regulation. Conclusion In summary, our findings show that activation of AMPK inhibits cervical cancer cell growth. More importantly, we demonstrated that activated AMPK reduces FOXM1 through the AKT/FOXO3a/FOXM1 signaling axis. Our findings shed light on the application of AMPK activators in the treatment of human cervical cancer.
4,023.6
2013-07-03T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
Foot-and-mouth disease virus O/ME-SA/Ind 2001 lineage outbreak in vaccinated Holstein Friesian cattle in Saudi Arabia in 2016 Abstract Background: Foot-and-mouth disease virus (FMDV) is a highly contagious viral infection of large ruminants. Despite the massive application of vaccines against FMDV, several outbreaks are still being reported in Africa and Asia. Aim: To perform molecular characterization of FMDV in an outbreak in a cattle herd in Saudi Arabia in 2016. This herd had been vaccinated with a polyvalent FMDV vaccine. Methods: To investigate this outbreak, we collected specimens from 77 animals showing typical clinical signs of FMDV. Specimens including sera, nasal swabs, and tissues (tongue, coronary bands, hooves, and hearts) were collected. We tested the collected cattle sera for the presence of FMDV antibodies with commercial ELISA kits. In addition, we tested the swabs for the presence of the most common FMDV strains (O, A, Asia-1 and SAT-2) with RT-PCR using serotype-specific oligonucleotides. Results: Serology showed that 22% of the tested sera were positive. Molecular testing of the examined swabs confirmed that 24% of the tested animals were positive. Our sequencing analysis confirmed that the circulating strains of FMDV belonged to FMDV serotype O. The phylogenetic tree based on the FMDV VP-1 gene revealed high nucleotide identity between the circulating strains and the Bangladesh strain (99%). These strains were distinct (shared 89% nucleotide identity) from the FMDV-O strains used for the preparation of the vaccine administered to the animals in this herd. Moreover, they showed a 7% nucleotide difference from the FMDV-O strains reported in Saudi Arabia in 2013. Conclusion: More in-depth molecular characterization of these FMDV strains is warranted. Introduction Foot-and-mouth disease virus (FMDV) is one of the most devastating viral infections of cloven-hoofed animals. FMDV infection usually results in high economic losses for the animal industry for many reasons, including a sharp drop in milk yield, a decrease in the feed conversion rate, lameness of the affected animals and death, particularly in young animals. FMDV belongs to the genus Aphthovirus and the family Picornaviridae. The viral genome is a single-stranded positive-sense RNA. The viral genome ranges from 6.9 to 8.3 kilobases in size and is flanked by two untranslated regions at its 5′ and 3′ ends. There is a polyadenylation tail downstream of the 3′ UTR (Fry et al. 2005). The 5′ end is a large fragment of the viral genome, which is divided into several regions with various functions during viral replication (Fry et al. 2005). The virion is icosahedral in symmetry and consists of 60 capsomeres. Each capsomere contains four structural proteins (VP-1 through VP-4). The first three proteins, VP-1 through VP-3, are exposed on the outer surface of the virus, while VP-4 is located inside the virion (Fry et al. 2005). There are seven distinct immunological serotypes of FMDV: South African Territories (SAT) 1, 2, and 3, in addition to serotypes A, O, and Asia 1 (Carrillo et al. 2005). There is no cross-protection between the different serotypes (Brito et al. 2014; Cao et al. 2014); thus, infection or vaccination with one FMDV strain does not provide protection against other strains (Kitching et al. 1989). Based on VP-1 phylogenetic trees, the FMDV A serotype is divided into 10 major genotypes (I-X). Furthermore, there are 10 topotypes of the FMDV O serotype. 
Moreover, there are six genotypes of the FMDV Asia 1 serotype (I-VI) (Valarcher et al. 2009). Although various FMDV vaccines are available in most countries, ongoing outbreaks are still reported in some of these animal populations (Eble et al. 2015). In Saudi Arabia as well, several FMDV outbreaks have been reported (Samuel et al. 1988; Abd El-Rahim et al. 2016; Mahmoud and Galbat 2017). One study during 1986-1987 reported the detection of serotype A in some animals from Saudi Arabia and Iran (Samuel et al. 1988). The circulating strains were closely related to each other and were classified as the FMDV A22 variant (Samuel et al. 1988). Another study analyzed the available data on the vaccination regimes of some dairy farms in Saudi Arabia (Woolhouse et al. 1996). This study developed a mathematical model for predicting the protective efficacy of those FMDV vaccines and their intervals of administration. It revealed that neither the vaccines used nor the vaccination interval provided a high degree of protection for the herds against FMDV field infection (Woolhouse et al. 1996; Mahmoud and Galbat 2017). The major goals of the current study were to perform molecular characterization of an FMDV outbreak in a cattle herd in Eastern Saudi Arabia in 2016, as the currently circulating FMDV strains in Saudi Arabia are not well characterized at the molecular level. Outbreak description We investigated an FMDV outbreak in a cattle herd located in Eastern Saudi Arabia during the winter of 2016. We approached this outbreak by examining 77 cows out of 780 and collected serum and nasal swabs from these animals for further testing. Ethical animal research assessment All animal utilization and sample collection was carried out as per the King Abdul-Aziz City of Science and Technology, Royal Decree No. M/59 (http://www.kfsh.med.sa/KFSH_WebSite/usersuploadedfiles%5CNCBE%20Regulations%20ENGLISH.pdf). This animal utilization protocol was approved by the King Faisal University Animal Ethics Committee and the National Committee of Bioethics (NCBE). Herd description All animals under study were Holstein Friesian cattle. Both adult cattle and young calves were housed within the same barn with some partitions. The herd consisted of 4000 cows including 1250 calves. Lactating animals were milked four times per day. Animals showing obvious FMDV clinical signs were housed in a quarantine area. We examined the quarantined animals and recorded the clinical signs (Figure 1). Samples were collected from 77 animals before slaughtering. During carcass inspection after slaughtering, we selected the organs showing typical FMDV lesions from each animal. These organs were subjected to further processing for histopathology as described in the Materials and Methods section. Sera We collected 77 whole blood samples from the selected animals in this outbreak. In addition, we tested 92 archived cattle serum samples from our laboratory from 1993. These specimens had been collected as part of a large nationwide surveillance for Rift Valley fever across Saudi Arabia (Al-Afaleq et al. 2003). We kept the collected blood samples at 4 °C overnight and centrifuged them at 5000 rpm for 5 min. We separated the sera with pipettes and then transferred them to sterile tubes. We heated the sera at 56 °C for 30 min to inactivate nonspecific inhibitors. The collected sera were stored at −20 °C for further testing. Swabs We collected 77 nasal swabs from the investigated cattle herd. 
Swabs were collected into a transport medium containing Dulbecco's modified Eagle's medium, 10% fetal bovine serum, and an antibiotic cocktail (100 U/mL penicillin and 100 µg/mL streptomycin). We processed the collected swabs by centrifugation at 5000 rpm for 5 min in a cooling centrifuge. The supernatants were separated and stored at −80 °C for further testing. Tissues We examined 45 animals at necropsy and selected the animals showing typical FMDV lesions. We collected tissue specimens from affected organs (tongue, lips, dental pad, and the skin of the coronary bands) of the suspected FMDV-infected animals and processed these tissues for histopathological examination. Briefly, these tissue specimens were immediately immersed in 10% neutral buffered formalin. Fixed tissues were processed in increasing gradients of ethyl alcohol and xylene and then embedded in paraffin blocks. Five-micrometer paraffin sections were cut and stained with H&E stain for histopathological examination. Enzyme-linked immunosorbent assay (ELISA) We tested the collected sera for the presence of FMDV antibodies by using the ID Screen® FMD NSP Competition kit (FMDNSPC-10P) (ID Vet, Grabels, France). The ELISA technique was carried out according to the kit's instructions and has been previously described (OIE 2012). RNA extraction We extracted the total viral RNA from the collected swabs by using the QIAamp RNA extraction kit (Qiagen, Hilden, Germany) according to the kit's instructions. The RNA concentration was measured with a NanoDrop instrument (Thermo Scientific NanoDrop 2000), and then the RNA samples were stored at −80 °C until testing. Oligonucleotides We used partial FMDV VP-1 gene oligonucleotides to test the specimens collected in Eastern Saudi Arabia in 2016 for the common FMDV strains (O, A, Asia 1, and SAT 2) previously reported in the country. We used previously published FMDV-specific oligonucleotides (Le et al. 2011). The details of these primers are listed in Table 1. Synthesis of the cDNAs and PCR reactions The extracted RNA samples were subjected to two-step RT-PCR. The technique was carried out as previously described (Brunner et al. 2014) with some modifications. The RT reactions were performed in 20-µL volumes including 2 µL of the RNA sample and 1 µL of the sense FMDV primer (Brunner et al. 2014). The VP-1 region was then amplified by PCR. Fifty-microliter reactions were prepared, containing 1 µL each of the template cDNA, the FMDV sense and antisense primers, and the PCR master mix, and 1 µL of Taq DNA polymerase (TaKaRa, Beijing, China). We used the following cycling parameters: initial denaturation for 5 min at 95 °C; then 94 °C for 1 min and annealing at 55 °C for 30 s, repeated for 30 cycles; and a final extension at 72 °C for 10 min. Gel electrophoresis Ten microliters of the amplified RT-PCR reactions were separated on 1% agarose gels containing SYBR® Safe DNA Gel Stain (Invitrogen, Thermo Fisher Scientific, Waltham, MA). Amplified reactions were visualized under ultraviolet light. The gel pictures were photographed with a gel documentation system (Bio-Rad Laboratories, Inc., Hercules, CA). Purification of the amplified PCR amplicons The target amplified PCR bands were excised from the gel and purified using the QIAquick Gel Extraction Kit (Cat. No. 28704) according to the kit's instructions. The purified reactions were eluted in 50 µL of elution buffer. Sequencing and sequencing analysis We selected some positive RT-PCR specimens and sequenced them using the Sanger approach. 
Sequencing of the amplified PCR products was performed using an Applied Biosystems® 3500 sequencer. The purified PCR products were sequenced in both directions using the original oligonucleotides used in the PCR amplification. We assembled the obtained sequences into one contig using the Sequencher 5.4.6 sequence analysis software (© 2017 Gene Codes Corporation, Ann Arbor, MI) and performed a nucleotide BLAST search in NCBI (https://blast.ncbi.nlm.nih.gov/Blast.cgi?CMD=Web&PAGE_TYPE=BlastHome). These sequences were aligned and compared to other FMDV sequences available in GenBank. Phylogenetic analysis We constructed phylogenetic trees (maximum likelihood) based on the obtained sequences. Multiple alignments of these sequences with other sequences from GenBank were performed using the MEGA 7 package, and phylogeny was performed using the neighbor-joining method with 1000 bootstrap replicates, as previously described (Kumar et al. 2016). Statistical analysis We applied a nonprobability sampling strategy for our sample collection with an incidental assignment approach, as previously described (Smith 1983). Results were considered significant when the P value was less than 0.05. Outbreak description We observed a recent FMDV outbreak in a dairy herd in Eastern Saudi Arabia. The examined animals showed typical signs of FMDV infection. The affected cattle population showed high morbidity (85%) with minimal mortality (<1%). The inspected animals had high fever (above 39.5 °C), increased respiratory rates, inappetence, recumbency, and profuse salivation. Postmortem investigation Gross and postmortem examinations of the affected animals revealed the presence of lesions in different parts of the body, such as the external nares, muzzle, lips, dental pad, gums, hard palate, tongue, and coronary bands. The lesions first appeared as hyperemic shallow eroded areas, and then became pale and blanched. Vesicle formation was usually noticed in many locations, especially on the dorsum of the tongue, and vesicles ranged from 0.5 to 2 cm in diameter. Vesicles ruptured, leaving an ulcerated surface that was covered with a whitish pseudomembrane, representing the remnant of the vesicle wall. Occasionally, in severe cases, the hooves were sloughed from the digits, exposing the underlying surface. Cross-sections of the heart revealed a moderate amount of clear straw-yellow fluid in the pericardial sac and the presence of an irregular grayish-white area of necrosis within the myocardium. Histopathology of tissues from FMDV-infected animals Various histopathologic changes were observed in the tissue specimens (tongue, lips, dental pad, and skin of the coronary bands) collected from animals showing typical clinical FMDV infection. These lesions were found separately or collectively in the same specimen. The stratified squamous epithelium was moderately thickened and irregular because of hyperkeratosis and acanthosis, with anastomosing rete ridges (Figure 2(A)). Many cells of the stratum spinosum had clear vacuoles within their cytoplasm and hydropic degeneration, indicating intracellular edema (Figure 2(B)). Intercellular edema was also noticed as prominent intercellular bridges and spongiosis (Figure 2(C)). 
It was severe enough to dissociate keratinocytes from each other through keratinolysis. Microvesicles were seen multifocally within the stratum spinosum as small empty spaces that were sometimes filled with acellular homogenous eosinophilic fluid (Figure 2(D)). Keratinocytes were randomly necrotic, as evidenced by a hypereosinophilic cytoplasm with pyknotic nuclei (Figure 2(E)). The epithelium was eroded and ulcerated in several locations and was overlaid with a serocellular crust composed of cellular and karyorrhectic debris, neutrophils, and fibrin (Figure 2(F)). The dermis/submucosa was slightly edematous and was infiltrated with many inflammatory cells including lymphocytes, macrophages, and neutrophils ( Figure 2(G)). Moreover, neutrophils were observed transmigrating across the stratified epithelium or forming aggregations of intracorneal pustules. The skeletal myocytes of the tongue were occasionally infiltrated with few inflammatory cells, and they showed a variable degree of degeneration and necrosis. The myocardium exhibited multifocal areas of cardiomyocyte degeneration and necrosis that was associated with fragmentation, a hypereosinophilic cytoplasm, pyknotic nuclei and a loss of striation (Figure 2(H)). The lost fibers were replaced by lymphocytes and histiocytes (Figure 2(I)). Molecular surveillance of FMDV We tested the 77 collected nasal swabs for the presence of the FMDV nucleic acids with RT-PCR using the VP-1 oligonucleotides shown in Table 1. We tested these specimens for the most common FMDV serotypes previously reported in Saudi Arabia including O, A, Asia 1, and SAT 2. Figure 3 shows an example of the gel-based PCR testing for some of the tested nasal swabs of cattle. The amplified amplicons were 641 nucleotides in length (Figure 3). Our results clearly showed that 13 of the 77 animals tested were positive (24%) ( Table 2). Serological surveillance of FMDV We tested the collected animal sera for the presence of FMDV antibodies and found that 17 animals were positive (22%) ( Table 2). In addition, we tested 92 archived cattle serum samples from 1993 and found that 48 of these samples were positive (52%) ( Table 2). Discussion FMDV is still one of the most important threats to the bovine industry. Several FMDV strains are currently circulating in many parts in the world, especially in Africa, Asia, and Latin America (Nsamba et al. 2015;Dhikusooka et al. 2016;Brito et al. 2017;Mahapatra et al. 2017;Ali et al. 2018;Hayer et al. 2018;Siddique et al. 2018;Souley Kouato et al. 2018). FMDV infection usually produces lesions in many organs especially the mouths and feet of affected animals. We reported one recent outbreak that occurred late in 2016 in a dairy herd in Saudi Arabia. The affected population showed a high morbidity rate and low mortality. Affected animals showed typical clinical signs of FMDV such as the presence of vesicles on the mouth, tongue, and interdigital spaces. Erosions and ulcerations of the tongue, muzzle, palate, and coronary bands were also reported. These clinical and necropsy findings are very typical and in accord to those previously reported in FMDV infections in cattle (Arzt et al. 2011). We reported the histological progression of FMDV infection in a cattle population during the active course of the viral infection in this particular herd. 
Lesions started in the form of small papules, which then progressed to vesicles that ruptured and led to erosions and ulcerations on the mucosa of different organs, especially the lips, muzzle, tongue and coronary bands, similar to those previously reported in many FMDV outbreaks under both natural and experimental settings (Pacheco et al. 2016; Arzt et al. 2017). Several viral infections are associated with such lesions in the oral cavity of cattle, including bovine viral diarrhea/mucosal disease, bluetongue, rinderpest, malignant catarrhal fever, vesicular stomatitis, and FMDV. Unlike the other diseases, lesions associated with vesicular stomatitis and FMD usually start as vesicles that subsequently rupture, leaving an eroded and ulcerated surface. Our results revealed the presence of intact vesicles or at least the remnants of ruptured ones (Uzal and Hostetter 2016; Gelberg 2017). Both the clinical signs and the histological changes observed in the infected animals were quite similar to other previously described infections with FMDV serotype O in cattle and pigs (Oem et al. 2008). The presence of multifocal lymphocytic and necrotizing myocarditis is a characteristic finding in the hearts of calves and lambs infected with FMDV. Death is mostly attributed to myocarditis, which is usually not accompanied by vesicular lesions (Alexandersen et al. 2003; Gulbahar et al. 2007). To our knowledge, this is the first study to describe the histological changes in FMDV-infected animals in Saudi Arabia in detail. Our serology data showed that 22% of the animals had antibodies against the FMDV 3ABC protein. This indicates that those animals had been exposed to a recent natural FMDV infection. Our ELISA testing of the archived cattle sera revealed that 52% of the samples were positive for FMDV 3ABC antibodies. This is in accordance with other FMDV serological surveys performed in Saudi Arabia since 1988 (Samuel et al. 1988; Bronsvoort et al. 2004; Brito et al. 2017; Mahmoud and Galbat 2017). This suggests that many FMDV strains have circulated in Saudi Arabia for several decades. This highlights the importance of careful monitoring of FMDV strains by conducting regular molecular and serological surveillance, which will help to monitor the emergence of new strains and to fine-tune the vaccination campaigns across the country. Despite the massive application of FMDV vaccines in endemic regions, several outbreaks have still been reported (Arzt et al. 2011; Stenfeldt et al. 2015; Pacheco et al. 2016). Several FMDV outbreaks have been previously reported in wild and domestic ruminants in the Gulf area and the surrounding countries such as Pakistan, Iraq, Turkey, Iran, and Bangladesh (Klein et al. 2006; Knowles et al. 2009; Baba Sheikh et al. 2017; Mahapatra et al. 2017; Hayer et al. 2018; Siddique et al. 2018). The detection of FMDV in Saudi Arabia began in 1988 when a surveillance study was conducted. Several FMDV outbreaks with different serotypes had been reported previously (Samuel et al. 1988). This subtype was modified to generate a new sublineage called A-Iran-05 (ARD-07), which was reported in Turkey and Georgia and became the most predominant subtype in Turkey during 2008. One study reported the presence of mixed-serotype infection in some dairy herds in Saudi Arabia between 1988 and 1991 (Woodbury et al. 1994). This study revealed the circulation of the FMDV serotypes O and Asia 1 among the infected cattle population (Woodbury et al. 1994). 
The Saudi isolates of the Asia 1 serotype were closely related to Asia 1/Tadzhikistan/64, which was reported in Russia, and Asia 1/TUR/15/73, which was reported in Turkey (Woodbury et al. 1994). Furthermore, the FMDV SAT 2 serotype was detected in Saudi Arabia during 2001 and was closely related to the strains isolated from Eritrea (Bronsvoort et al. 2004). The mismatch between the field strains circulating in Saudi Arabia and the vaccine strains used may have contributed to this recent outbreak. We believe the preparation of FMDV vaccines should be based on the most recent circulating homologous strains to achieve maximum protection among vaccinated animals. The occurrence of an FMDV outbreak in vaccinated animals could be related to many factors. This herd belongs to a group of large dairy farms, which share a common source of ration as well as employees and feed trucks. One possible route of mechanical transmission of the virus from one premises to another is the free movement of feed trucks between herds. Veterinarians and other employees could also transmit the virus mechanically from one group of animals to another. The introduction of new animals to a herd during the viral incubation period may be another potential source of FMDV transmission. Similar studies have reported the transmission of FMDV by air and mechanically by feed trucks (Paton et al. 2018). The presence of specific antibodies against FMDV serotype O in dromedary camels in the central region of Saudi Arabia has been described (Yousef et al. 2012). About 6.3% of the tested camel sera were positive for FMDV-O by commercial ELISA (Yousef et al. 2012). Dromedary camels move freely across the desert and may come into close contact with FMDV-infected cattle or sheep populations. This may have contributed, at least in part, to the persistence and spread of FMDV in certain regions. However, the exact role of dromedary camels in the epidemiology of FMDV still needs further clarification. We believe the occurrence of FMDV infection in vaccinated animals may be related to the type of vaccine applied. Presumably, the animals were vaccinated with a strain that was not homologous to the currently circulating field strains; however, this requires further investigation. Continuous monitoring of the circulating FMDV strains at the molecular level is highly recommended to ensure the selection of the right strain for the preparation of effective vaccines.
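The phylogenetic workflow described in the Methods above (multiple alignment, neighbor-joining trees with 1000 bootstrap replicates in MEGA 7) can be approximated in Python with Biopython. The sketch below is a rough, hedged equivalent rather than the authors' pipeline: the input file name is a placeholder, the identity distance model is an assumption, and the replicate count is reduced for speed.

```python
# Rough Biopython sketch of a neighbor-joining tree with bootstrap support;
# not the authors' MEGA 7 workflow. "vp1_alignment.fasta" is a hypothetical
# pre-aligned FASTA of VP-1 sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

alignment = AlignIO.read("vp1_alignment.fasta", "fasta")

# Distance-based NJ constructor (the identity model is an assumption; MEGA
# offers several nucleotide substitution models).
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, "nj")

nj_tree = constructor.build_tree(alignment)                    # point estimate
consensus = bootstrap_consensus(alignment, 100, constructor,   # 100 replicates here
                                majority_consensus)            # (the paper used 1000)

Phylo.draw_ascii(nj_tree)
```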
4,996.6
2018-01-01T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
A consistency result on long cardinal sequences For any regular cardinal $\kappa$ and ordinal $\eta<\kappa^{++}$ it is consistent that $2^{\kappa}$ is as large as you wish, and every function $f:\eta \to [\kappa,2^{\kappa}]\cap Card$ with $f(\alpha)=\kappa$ for $cf(\alpha)<\kappa$ is the cardinal sequence of some locally compact scattered space. Introduction If X is a locally compact, scattered Hausdorff (in short: LCS) space and α is an ordinal, we let I α (X) denote the α th Cantor-Bendixson level of X. The cardinal sequence of X, CS(X), is the sequence of the cardinalities of the infinite Cantor-Bendixson levels of X, i.e. CS(X) = |I α (X)| : α < ht -(X) , where ht -(X), the reduced height of X, is the minimal ordinal β such that I β (X) is finite. The height of X, denoted by ht(X), is defined as the minimal ordinal β such that I β (X) = ∅. Clearly ht -(X) ≤ ht(X) ≤ ht -(X) + 1. Let κ α denote the constant κ-valued sequence of length α. In [2] it was shown that the class C(α) is described if the classes C κ (β) are characterized for every infinite cardinal κ and ordinal β ≤ α. Then, under GCH, a full description of the classes C κ (α) for infinite cardinals κ and ordinals α < ω 2 was given. 2. Proof of theorem 1.6 Graded posets. In [3], [6], [9] and in many other papers, the existence of an LCS space is proved in such a way that instead of constructing the space directly, a certain "graded poset" is produced which guaranteed the existence of the wanted LCS-space. From these results, Bagaria, [1], extracted the notion of s-posets and established the formal connection between graded posets and LCS-spaces. For technical reasons, we will use a reformulation of Bagaria's result introduced in [10]. If is an arbitrary partial order on a set X then define the topology τ on X generated by the family {U (x), X \ U (x) : x ∈ X} as a subbase, where U (x) = {y ∈ X : y x}. So, instead of Theorem 1.6, it is enough to prove Theorem 2.4 below. We say that I is an ordinal interval iff there are ordinals α and β with I = [α, β). Write I − = α and I + = β. Note that I is a cofinal tree of intervals in the sense defined in [6]. So, the following conditions are satisfied: (i) For every I, J ∈ I, I ⊂ J or J ⊂ I or I ∩ J = ∅. (ii) If I, J are different elements of I with I ⊂ J and J + is a limit ordinal, then I + < J + . (iv) I n+1 refines I n for each n < ω. Now if ζ < δ, we define the basic orbit of ζ (with respect to I) as We refer the reader to [6,Section1] for some fundamental facts and examples on basic orbits. In particular, we have that The underlying set of our poset will consist of blocks. The following set B below serves as the index set of our blocks: The underlying set of our poset will be To obtain a (κ, λ, δ, L δ κ )-good poset we take Y = B S and Define the functions π : X −→ δ and ρ : X −→ λ by the formulas π( α, ν ) = α and ρ( α, ν ) = ν. . Finally we define the orbits of the elements of X as follows: To simplify our notation, we will write o(x) = o(π(x)) and o(x) = o(π(x)). Forcing construction. Let Λ ∈ I and {x, y} ∈ X 2 . We say that Λ separates x from y if Definition 2.6. Now, we define the poset P = P, ≤ as follows: To complete the proof of Theorem 2.4 we will use the following lemmas which will be proved later: is dense in P Since λ <κ = λ , the cardinality of P is λ. Thus, Lemma 2.7 and Lemma 2.8 above guarantee that forcing with P preserves cardinals and 2 κ = λ in the generic extension. Let G ⊂ P be a generic filter. Put A = {A p : p ∈ G}, i = {i p : p ∈ G} and = { p : p ∈ G}. 
Then A = X by Lemma 2.9(a). Finally condition 2.1(c) holds by Lemma 2.9(b). So to complete the proof of Theorem 2.4 we need to prove Lemmas 2.7, 2.8 and 2.9. Since κ is regular, Lemma 2.7 clearly holds. Proof of Lemma 2.9. (a) Let p ∈ P be arbitrary. We can assume that (b) Let p ∈ P be arbitrary. By (a) we can assume that x ∈ A p . Write β = π(x). Let K be a finite subset of [α, β) such that α ∈ K and I(γ, n) + ∈ K ∪ [β, δ) for γ ∈ K and n < ω. Let q = A q , q , i q . Next we check q ∈ P . Clearly (P 1), (P 2), (P 3) and (P 5) hold for q. (P4) also holds because if y ∈ A p and γ ∈ K then either b γ q y or they are q -incompatible. To check (P6) it is enough to observe that if Λ separates b γ and y, then z = b Λ + meets the requirements of (P6). By the construction, q ≤ p. The rest of the paper is devoted to the proof of Lemma 2.8. In the first part of the proof, till Claim 2.16, we will find ν < µ < κ + such that r ν and r µ are twins in a strong sense, and r ν and r µ form a good pair (see Definition 2.15). Then, in the second part of the proof, we will show that if {r ν , r µ } is a good pair, then r ν and r µ are compatible in P. For i ∈ σ let We put Z 0 = {δ i : i ∈ σ}. Since π ′′ A △ = {δ i : i ∈ K} we have π ′′ A △ ⊂ Z 0 . Then, we define Z as the closure of Z 0 with respect to I: By Claim 2.10(a), the sequence π(x ν,i ) : ν < κ + is strictly increasing for i ∈ D ∪M. Since |Z| < κ, and | o * (x ν,k )| ≤ κ for x ν,k ∈ B S ∩A △ , we can assume that Our aim is to prove that there are ν < µ < κ + such that the forcing conditions r ν and r µ are compatible. However, since we are dealing with infinite forcing conditions, we will need to add new elements to A ν ∪ A µ in order to be able to define the infimum of pairs of elements {x, y} where x ∈ A ν \ A µ and y ∈ A µ \ A ν . The following definitions will be useful to provide the room we need to insert the required new elements. Let Assume that i ∈ σ 1 ∪ σ 2 . Let For i ∈ σ 2 , since γ(δ i ) < δ i and δ i = lim{π(x ν,i ) : ν < κ + } by Claim 2.10(a) for all i ∈ D ∪ M, we can assume that (H) π(x ν,i ) ∈ J(δ i ) \ γ(δ i ), and so π(x ν,i ) / ∈ Z, for all i ∈ D ∪ M. We will use the following fundamental facts. Claim 2.11. If x ν,i ν x ν,j then δ i ≤ δ j . Proof. Assume that i, j ∈ K and δ i = δ j . By Claim 2.11, we have δ i < δ j . Since i ∈ F ∪ M and x ν,i ν x ν,j imply x ν,i = x ν,j and so δ i = δ j , we have that i ∈ D, and so π(x ν,i ) < δ i , cf(δ i ) = κ + and J(δ i ) + = δ i by Proposition 2.5 . Since x ν,k = x ν,j , we have x ν,k ∈ B S , and so k ∈ K ∪ D. But as π(x ν,k ) = δ i ∈ Z we obtain k / ∈ D by (H), and so k ∈ K, which implies , and x ν,i ∈ A △ ∩ B S , we have k / ∈ D ∪ M by (G). Thus k ∈ K, and so x ν,k ∈ A △ . Hence Claim 2.14. Assume that x ν,i and x ν,j are compatible but incomparable in r ν . Let x ν,k = i ν {x ν,i , x ν,j }. Then either x ν,k ∈ A △ or δ i = δ j = δ k . To finish the proof of Lemma 2.8 we will show that ( †) If {r ν , r µ } is a good pair, then r ν and r µ are compatible. So, assume that {r ν , r µ } is a good pair. Write In order to amalgamate conditions r ν and r µ , we will use a refinement of the notion of amalgamation given in [6,Definition 2.4]. Let be an order-preserving injective function for some ordinal θ < κ, and for x ∈ A ′ let Since cf(γ(δ x )) = κ and |A ′ | < κ we have So, for every x ∈ A ′ , y x ∈ B S with π(y x ) < π(x). Define functions g : Y −→ A ν andḡ : Y −→ A µ as follows: Now, we are ready to start to define the common extension r = A, , i of r ν and r µ . 
First, we define the universe A as Clearly, A satisfies (P1). Now, our purpose is to define . Extend the definition of g as follows: g : A −→ A ν is a function, We introduce two relations on A p ∪ A q ∪ Y as follows: Then, we put The following claim is well-known and straightforward. The following straightforward claim will be used several times in our arguments. Sublemma 2.19. is a partial order on Proof. We should check that ν is transitive, because it is trivially reflexive and antisymmetric. So let s t u. We should show that s u. Since x z implies g(x) ν g(z), we have g(s) ν g(t) ν g(u) and so (⋆) g(s) ν g(u). , then (⋆) implies s R1 u or s ν u or s µ u, which implies s u by (⋆). So we can assume that s ∈ A ν (the case s ∈ A µ is similar), and so u ∈ Y or u ∈ A µ . Assume that t ∈ Y . Then s R2 t, and so there is a ∈ A △ such that g(s) ν a ν g(t). Since t u implies g(t) ν g(u), we have g(s) ν a ν g(u), and so s R2 u. Thus s u. If t ∈ Y , then s R2 t, and so there is a ∈ A △ such that g(s) ν a ν g(t). Since t u implies g(t) ν g(u), we have g(s) ν a ν g(u), and so s R2 u. Thus s u. Assume that t ∈ A ν ∪ A µ . Then t R2 u, and so there is a ∈ A △ such that g(t) ν a ν g(u). Then g(s) ν a ν g(u), and so s R2 u. Thus s u. So, by the previous Sublemma 2.19 and by the construction, (P2) and (P3) hold for . Next define the function i : A 2 −→ A ∪ {undef} as follows: Let i{s, t} = undef if s and t are not -compatible. If s and t are compatible, then so are g(s) and g(t) because x y implies g(x) ν g(y) by Claim 2.18. Moreover i ν {s, t} = i µ {s, t} for {s, t} ∈ A △ 2 by condition (C)(e), so the definition above is meaningful, and gives a function i. If x / ∈ B S then x ∈ M and γ(δ x ) < π(x) < δ x by (H), and so Then x ∈ F and so Proof. We should distinguish two cases. To check (P4) we should prove that i{s, t} is the greatest common lower bound of s and t in A, . Assume first that s and t are not twins. Note that by Claim 2.18, g(s) and g(t) are ν -compatible. Write v = i ν {g(s), g(t)}. Since v = g(v) ν g(s) and v ∈ A △ , we have v R2 s. Similarly v R2 t. Thus v is a common lower bound of s and t. To check that v is the greatest lower bound of s, t in A, let w ∈ A, w s, t. Then g(w) ν g(s), g(t). Thus g(w) ν i ν {g(s), g(t)} = v. To check (P5) observe that g(s) and g(t) are incomparable in A ν . Indeed, g(s) ν g(t) implies v = g(s) ∈ A △ and so g(s) ν g(t) implies s R2 t, which contradicts our assumption that s and t are -incomparable. Thus, by applying (P5) in r ν , π(v) ∈ f{g(s), g(t)}. If g(s) and g(t) are ν -comparable then δ g(s) = δ g(t) , because otherwise we would infer from Claim 2.12 that s, t are -comparable, which is impossible. Now assume that g(s) and g(t) are ν -incomparable. If δ v < δ g(s) , then there is a ∈ A △ ∩ B S with v ν a ν g(s) by Claim 2.12. Thus v = i ν {a, g(t)} and so v ∈ A △ by Claim 2.13. Thus δ v = δ g(s) , and similarly δ v = δ g(t) . To check (P4) first we show that y v s, t. Indeed g(v) ν g(s) implies y v R1 s. We obtain y v R1 t similarly. Let w s, t. Assume first that δ g(w) < δ v . Since w s, t we have g(w) ν g(s), g(t) by Claim 2.18 and hence g(w) ν i ν {g(s), g(t)} = v. By Claim 2.12 there is a ∈ A △ such that g(w) ν a ν v. Thus w R2 y v . If s and t are twins, then s ∈ A ′ implies that i{s, t} = y s and we can proceed as above in Case 2.2. We should find v ∈ A such that s v t and π(v) = Λ + . Note that since s t, we have δ g(s) ≤ δ g(t) by Claim 2.11. We can assume that {s, t} / ∈ A ν 2 ∪ A µ 2 because r ν and r µ satisfy (P6). 
Thus Λ separates a from g(t). Applying (P6) in r ν for a and g(t) and Λ we obtain b ∈ A ν such that a ν b ν g(t) and π(b) = Λ + . Thus g(s) ν b ν g(t) implies s R2 b R2 t, and so s b t. If Λ + = π(a), then we are done because g(s) ν a ν g(t) implies s a t. We will see that this case is not possible. As s t and [γ(δ s ), J(δ s ) + ) ∩ Z = ∅ we have that t / ∈ A µ . Since s ∈ A ν , s t and δ s = δ g(t) we have t / ∈ Y , and so t ∈ A ν , which was excluded. By means of a similar argument, we can show that s ∈ A µ is also impossible. Thus we proved that r is a common extension of r ν and r µ . This completes the proof of Lemma 2.8, i.e. P satisfies κ + -c.c.
3,855.6
2019-01-25T00:00:00.000
[ "Mathematics" ]
Deep Regularized Discriminative Network Traditional linear discriminant analysis (LDA) approach discards the eigenvalues which are very small or equivalent to zero, but quite often eigenvectors corresponding to zero eigenvalues are the important dimensions for discriminant analysis. We propose an objective function which would utilize both the principal as well as nullspace eigenvalues and simultaneously inherit the class separability information onto its latent space representation. The idea is to build a convolutional neural network (CNN) and perform the regularized discriminant analysis on top of this and train it in an end-to-end fashion. The backpropagation is performed with a suitable optimizer to update the parameters so that the whole CNN approach minimizes the within class variance and maximizes the total class variance information suitable for both multi-class and binary class classification problems. Experimental results on four databases for multiple computer vision classification tasks show the efficacy of our proposed approach as compared to other popular methods. Introduction Linear discriminant analysis (LDA) is a method from multivariate statistics which attempts to find a linear projection of high-dimensional observations onto a lower-dimensional space [10]. It finds the optimal decision boundaries in the resulting lower dimensional subspace. LDA is an efficient way to separate the features on the basis of class information, but since it requires inverse operation it often becomes problematic if the dimension becomes very high as compared to the number of available training samples. Thereby it ignores the eigenvectors corresponding to zero eigenvalues so as to have the within class scatter matrix non-singular. In Sharma et al. [29], an improved regularized LDA is proposed which is carried out by adding a perturbation term to the diagonal elements of within class matrix to make it non-singular and invertible. However, the eigenvectors corresponding to zero eigenvalues also contain the important class discriminatory information as reported in [6,17,19,27]. Thus, we aim to utilize both the principal as well as nullspace eigenvalues and extend the beneficial properties of the proposed regularized fisher method (low intra-class variability, high totalclass variability, optimal decision boundaries). This is done by reformulating its objective to learn linearly separable representations based on a deep neural network (DNN) for both binary as well as multi-class problem. LDA is used widely as a supervised dimensionality reduction method in computer vision and pattern recognition. Its recent generalization to non-Euclidean Grassmann manifolds can be found in [33]. This aims to impose the highest possible variance among classes, by maximizing the between-class distances, whilst minimizing the within-class scattering. Recently, deep learning combined with various multivariate statistics methods have achieved great success [12]. Andrew et al. [4] introduced a deep canonical correlation analysis (DCCA) which can be viewed as a nonlinear extension of CCA . In their evaluations, they argued that DCCA learns representations with significantly higher correlation than those learned by CCA and Kernel (nonlinear) CCA. They experimented using the MNIST handwritten data and simultaneous recording of articulatory and acoustic data. Ghassabeh et al. [13] presents new adaptive algorithms for online feature extraction using principal component analysis (PCA) and LDA for classification purpose. 
In Al-Waisy et al. [2], they have merged the advantages of local handcrafted feature descriptors with the Deep Belief Networks for the face recognition problem in unconstrained conditions and have obtained better performances. PCANet proposed by Chan et al. [5] which includes cascading of PCA, binary hashing and block histogram computations. This can be seen as an unsupervised convolutional deep learning approach. Due to computational complexity these multi-stage filter banks are limited to two stages but can be extended to any number. They also experimented further modifications on PCANet as RandNet and LDANet. RandNet and LDANet share the same methodology like PCANet, but their cascaded filters are either selected randomly as in RandNet or learned from LDA in case of LDANet. Lifkooee et al. [24] combines regular deep convolutional neural network with the Laplacian of Gaussian filter (LoG) right before fully connected layer and they have shown that the proposed feature descriptor along with LoG introduced in CNN further improves the performance of deep learning. Stuhlsatz et al. [31] initially proposed the idea of combining LDA with neural networks. In their proposed approach, they pre-train a stack of restricted Boltzmann machines and this pre-trained model is finetuned with respect to a linear discriminant criterion. LDA has the disadvantage that it overemphasizes large distances at the cost of confusing neighbouring classes. Thus, to tackle this problem, they introduced a heuristic weighing scheme for computing the within-class scatter matrix required for LDA optimization. The LDA based objective function proposed by Dorfer et al. [9] is a non-linear extension of classic LDA where the objective function is obtained from the general LDA eigenvalue problem while still allowing to train the CNN architecture with stochastic gradient descent and back-propagation. In this paper, we propose to modify the LDA based objective function which would utilize both the principal as well as nullspace eigenvalues onto its latent space representation for both multi-class as well as binary class problem. Extensive experimental results on multiple computer vision classification tasks illustrates the superiority of our proposed approach as compared to other popular methods. Below, we describe our proposed method in details. Proposed Approach The approaches mentioned so far are based on the study of multi-variate statistics. In our work, we propose to train a CNN architecture in an end-to-end fashion with a new objective function which would enable the network to inherit the property of maximizing the total variation and minimizing the within class variation. Deep Learning has become state-of-the-art for many image based applications of classification, object recognition, segmentation, image captioning and natural language processing [14,26]. The mathematical model of Convolutional Neural Network (CNN) is explained by Kuo et al. [22] where the fundamental questions about the structure of the convolutional neural networks is explained. There are many variations of deep convolutional neural networks for various vision tasks. The intuition behind our approach is to use the proposed regularized Fisher method as the objective function on top of a powerful feature learning model. The optimization of parameters is carried by backpropagating the error of the proposed objective function through the entire network. 
One of our objectives in this work is to come up with a CNN architecture that can be generically applied to many computer vision classification tasks. For experimental evaluation, we evaluated our proposed objective function on various benchmark databases, namely MNIST (handwritten digit recognition), CIFAR-10 (natural image classification) and ISBI (skin cancer detection into melanoma and non-melanoma cases), to show that the objective function is effective for both multi-class and binary-class classification problems. Deep Regularized Discriminative Network over simple ConvNet Deep learning networks differ from simple single-hidden-layer neural networks by their depth. Deep-learning networks effectively learn the features automatically without human intervention, unlike most traditional machine-learning algorithms. A neural network with P hidden layers is represented as a non-linear function f(Θ), where Θ = {Θ_1, …, Θ_P}. In supervised learning with N samples, we have x = {x_1, …, x_N} as training data and labels y = {y_1, …, y_N} with y_i ∈ {1, …, C}, where C is the number of classes. In the last layer, we have softmax as the classifier, which gives the normalized probability that a sample belongs to a particular class. The network is optimized using stochastic gradient descent or any other optimizer, such as Adam, with the goal of finding optimal model parameters Θ by minimizing the per-sample objective l_i(Θ), where l_i(Θ) = f((x_i, Θ), y_i). For categorical cross entropy (CCE), the loss function is defined as l_i(Θ) = −Σ_j y_{i,j} log(p_{i,j}), where p_{i,j} is the network output probability and y_{i,j} is 1 if observation x_i belongs to class j (i.e., j = y_i) and 0 otherwise. Figure 1 shows the deep regularized network, whose objective differs from CCE in that it maximizes the eigenvalues of the total scatter matrix and minimizes the eigenvalues of the within-class scatter matrix. In the following subsections, a detailed description of the proposed objective function and the related analysis is given. Proposed Objective Function Linear discriminant analysis tries to find the axes which maximize the between-class scatter matrix S_b while minimizing the within-class scatter matrix S_w in the projective subspace A ∈ ℝ^{l×d}. The projective subspace is a lower-dimensional subspace, i.e., l = C − 1, where C is the number of classes. The resulting projections of the samples onto this subspace, x_i A^T, are maximally separated in this space [10]. The Fisher criterion is defined as the ratio of between-class and within-class variances, J(W) = (W^T S_b W) / (W^T S_w W), where W is the weight vector. The within-class scatter matrix S_w (5) is computed from the class-centered data X̄_c = X_c − m_c, where m_c is the mean of class c and N_c is the number of samples in that class, and the total scatter matrix S_t (6) is computed from the globally centered data X̄ = X − m, where m is the mean of all N samples and X is the input data matrix; in our case it is the output of the CNN model. The predicted output values of the CNN model (y_pred) are used as X for the computation of S_w, as in (5). To extract discriminative features, we first perform an eigendecomposition of the within-class scatter matrix S_w, S_w = Φ Λ Φ^T, where Φ contains the eigenvectors and Λ the eigenvalues of S_w. The eigenvectors are then sorted according to the eigenvalues in descending order. The matrix Φ is then split into W_1 and W_2, where W_1 contains the eigenvectors corresponding to those eigenvalues which are greater than a certain minimum variance. 
For our experimentation, we took the minimum variance value as 1e−2. The W_2 matrix contains the eigenvectors corresponding to the eigenvalues whose variance is smaller than this minimum variance. The W_1 matrix is divided by the square root of the corresponding eigenvalues and the W_2 matrix is divided by the square root of the minimum variance. These two matrices are concatenated to form Ψ, as shown in (8), and Ψ is multiplied with y_pred to form the model output y, i.e., Ψ = [W_1 Λ_1^{−1/2}, W_2 λ_min^{−1/2}], y = y_pred Ψ (8). Then, we compute the total scatter matrix S_t using (6). After computing this covariance matrix, the projection matrix Ω is selected by eigendecomposition of S_t, choosing the eigenvectors in Φ_wy according to the most significant eigenvalues Λ_wy; the eigendecomposition of S_t is written as S_t = Φ_wy Λ_wy Φ_wy^T. Using the eigenvalues of the S_t matrix, we formulate the objective as the maximization of these eigenvalues. (Fig. 1: schematic sketch of the deep regularized discriminative network, which learns the linear separability property in the latent representation; the objective is to maximize the eigenvalues so that the class separability also increases.) The purpose of combining this with the deep neural net is the maximization of the individual eigenvalues of S_t and the minimization of the eigenvalues of S_w. In particular, we expect that maximizing (minimizing) the eigenvalues of S_t (S_w) maximizes (minimizes) the separation along the respective eigenvector directions. Thus we achieve the target of minimizing the within-class variation and maximizing the total variation. A deep neural network trained with the categorical cross entropy (CCE) or binary cross entropy loss function does not take this aspect of discriminatory power into account: the main objective of CCE is to maximize the likelihood of the class labels according to the target labels. Here the objective function is designed to consider only the k eigenvalues that do not exceed a certain threshold for variance maximization, i.e., the objective maximizes Σ_{i=1}^{k} v_i over the k eigenvalues (out of n) lying below the threshold, where for notational convenience we denote Λ_wy by v, and n is the rank of the covariance matrix, which is equal to one less than the number of samples (n − 1). This formulation of the objective function allows the deep network to be trained with backpropagation in an end-to-end fashion. It is similar to classic LDA, but it lifts the constraint that arises for binary classification, where C (the number of classes) is 2 and the l-dimensional projection of the classic LDA method has l = C − 1, i.e., 2 − 1 = 1. The proposed objective function can therefore be used for both multi-class and binary classification problems. Experimental Results One of the key objectives of our work is to propose a CNN architecture that can be generically applied to many vision tasks. For our experimental evaluation we considered four publicly available databases, namely MNIST (handwritten digit recognition), CIFAR-10 (natural scene classification), ISBI 2016 (skin cancer classification) and ISBI 2017 (skin cancer classification). We compare our results with various other similar approaches available for vision classification. Experimental Setup The general structure of the CNN model is based on the VGG model using 3 × 3 convolutions [30]. We experimented with and without a BatchNormalization layer after each convolutional layer [18]; this layer helps to increase the convergence speed and also the performance of the model. For non-linearity, ReLU is used, since it greatly accelerates the convergence of stochastic gradient descent (or any other optimizer) compared to the sigmoid/tanh functions [20].
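A hedged sketch of one possible reading of the proposed objective follows, reusing the scatter_matrices helper from the previous sketch: the outputs are whitened by the eigenstructure of S_w (the split into W_1/W_2 is emulated by flooring small eigenvalues at the minimum variance), and only the eigenvalues of the resulting total scatter that fall below a threshold are maximised. The threshold rule, the minimum-variance floor and the normalisations are assumptions, not the paper's exact formulation.

```python
import torch

def regularized_fisher_loss(y_pred, labels, num_classes, min_var=1e-2, thresh=1.0):
    """One possible reading of the proposed objective; not the authors' code."""
    S_w, _ = scatter_matrices(y_pred, labels, num_classes)
    # Whiten y_pred by the eigenstructure of S_w; eigenvalues below min_var are
    # floored, mimicking the split of Phi into W_1 and W_2 described above.
    evals_w, evecs_w = torch.linalg.eigh(S_w)               # ascending eigenvalues
    Psi = evecs_w / torch.sqrt(torch.clamp(evals_w, min=min_var))
    y = y_pred @ Psi                                        # model output y = y_pred * Psi
    # Eigenvalues of the total scatter matrix of the whitened output.
    _, S_t_y = scatter_matrices(y, labels, num_classes)
    v = torch.linalg.eigvalsh(S_t_y)
    # Maximise only the eigenvalues that do not exceed the variance threshold.
    small = v[v < thresh]
    if small.numel() == 0:
        small = v
    return -small.mean()                                    # minimised by the optimiser
```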
All the networks are trained using the Adam optimizer, and the learning rate is halved after every 200 epochs. The batch size for MNIST and CIFAR-10 is 1000, while for ISBI 2016 and ISBI 2017 the batch size is 400, as the training data are quite small in the case of the ISBI databases. Related methods show that mini-batch learning on distribution parameters (in this case covariance matrices) is feasible if the batch size is sufficiently large to be representative of the entire population [32]. Even though a large batch size is required for stable estimates, it is limited by data availability, image size and the memory available on the GPU. Table 1 shows the detailed CNN model specifications for the CIFAR-10 and MNIST databases. The total number of trainable parameters is 5,752,414 for the CIFAR-10 model and 467,486 for MNIST. In all our experiments, the proposed method is validated against existing ones using the same corresponding datasets and protocols. The experiments are implemented on a system with an Intel Core i7 processor, 16 GB RAM, and an NVIDIA GeForce GTX-1050Ti GPU card. MNIST The MNIST database consists of 28 × 28 grayscale images with labels 0 to 9. It contains 60,000 samples, of which 50,000 are training data and 10,000 are validation data; the test set consists of 10,000 images, following the same protocol as in [9]. Since the proposed method requires a large batch size, for MNIST we took 1000 as the batch size. The optimizer is Adam and the initial learning rate is halved every 200 epochs. For final classification, we use a linear support vector machine (SVM) classifier. Table 2 compares our proposed approach with various relevant methods on the MNIST database. From the results, it can be seen that our proposed method with the new cost function is second best and comparable with the other state-of-the-art reported performances. Therefore, it is evident that adding the latent space representation into the cost function, by maximizing the between-class and minimizing the within-class eigen representation, efficiently learns the features required for classification. The training is thus done in an unsupervised manner, and using a linear SVM we perform the final classification on the test data. Figure 2a shows the evolution of the mean eigenvalues of the total scatter matrix over the training epochs. Figure 2b shows the eigenvalues of the within-class scatter matrix over the epochs, which initially increase but later decrease; we thus achieve our objective of minimizing the within-class variation and maximizing the total variation among different classes, as shown in Fig. 2a, b. CIFAR-10 The CIFAR-10 database consists of 32 × 32 images from 10 different classes. It contains 50,000 training samples and 10,000 testing samples, the same as in [9]. We normalize the pixel values between 0 and 1. Table 1 describes the network structure and, similar to the MNIST setup described above, the initial learning rate is halved every 200 epochs. Table 3 summarizes the comparison of our proposed approach with various relevant methods on this database. It can be seen that our proposed methodology achieves the second best accuracy for this natural image classification task.
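The training recipe described above (Adam, learning rate halved every 200 epochs, large mini-batches) could be wired up roughly as in the sketch below, reusing regularized_fisher_loss from the earlier sketch; the tiny placeholder model and the initial learning rate are assumptions, not the Table 1 architecture.

```python
import torch
from torch import nn

# Placeholder MNIST-sized model (1-channel 28x28 inputs), not the paper's network.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.MaxPool2d(2), nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # initial LR is an assumption
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

def train(train_loader, epochs=1000, num_classes=10):
    for epoch in range(epochs):
        for x, y in train_loader:                            # DataLoader with batch_size=1000
            optimizer.zero_grad()
            loss = regularized_fisher_loss(model(x), y, num_classes)
            loss.backward()
            optimizer.step()
        scheduler.step()                                     # halves the LR every 200 epochs
```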
(Table 2: comparison of test errors (%) on the MNIST database for our proposed approach and other relevant methodologies.) ISBI 2016 and ISBI 2017 To show the efficacy of the proposed objective function, we have conducted experiments on both multi-class (MNIST and CIFAR-10) and binary classification databases (ISBI 2016 and 2017). The ISBI databases consist of dermoscopic lesion images for the diagnosis of melanoma skin cancer versus non-melanoma cases. The ISBI 2016 database consists of a 900-image training set and a 379-image testing set; the database is unbalanced, with 727 benign images and 173 melanoma images. Similarly, the ISBI 2017 database consists of 2000 training samples and 600 testing samples. As stated by Wang et al. [32], mini-batch learning with covariance estimates requires a batch size large enough to represent the entire population. Thus, to overcome the batch size problem due to the limited availability of ISBI training and testing data, as well as the large size of these images (224 × 224) and the limited amount of memory available on the GPU, we first fine-tuned a pretrained ResNet-50 model, which has 25,636,712 parameters, and then extracted the features from the last convolutional layer. We used these two-dimensional features as inputs to train an MLP (multi-layer perceptron), i.e., fully connected layers, with the proposed objective function. The sigmoid activation function is the most favoured activation function for shallow networks; we experimented with ReLU and tanh as well, but there was no significant improvement using them. An activation function adds non-linearity to the nodes of the network. For deeper networks, ReLU is the preferred activation function, since it increases the convergence rate; its disadvantage is that ReLU units can be fragile during training and can die easily [1]. The following performance criteria are used for comparison of the proposed approach with the existing methodologies. Accuracy: the ratio of correct predictions to total predictions, Accuracy = (TP + TN)/(TP + TN + FP + FN), where TP is the number of true positives, TN the true negatives, FP the false positives and FN the false negatives. Sensitivity: the ability of the algorithm to correctly predict the diseased cases (i.e., malignant), SE = TP/(TP + FN) (15). Specificity: the ability of the algorithm to correctly predict the non-diseased cases (i.e., benign), SP = TN/(TN + FP). AUC: the area under the receiver operating characteristic curve, i.e., the curve of the true positive rate against the false positive rate. Average precision: average precision (AP) is the area under the precision-recall curve; a detailed explanation can be found in [16]. (Table fragment: [15] 88.32; PCANet-2 [5] 78.67; DeepLDA [9] 92.42; Proposed method 90.04.) Since the DeepLDA approach uses traditional LDA, we can obtain at most the number of classes minus one principal eigenvalues, which for this database would be 2 − 1 = 1; thus, in the end there would be only one eigenvalue to maximize so as to obtain maximum inter-class separation and minimum within-class separation. In our approach, we use the variance information of the total scatter matrix to find the optimal projection among all the training data samples. This enables us to select up to n − 1 eigenvalues, where n is the total number of training samples. The model loss with respect to the number of epochs is shown in Fig. 3a for the ISBI 2016 and Fig. 3b for the ISBI 2017 databases, respectively.
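For reference, the evaluation criteria listed above can be computed as in the following sketch; scikit-learn is assumed for AUC and average precision, and this is a convenience illustration, not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity, AUC and average precision for a
    binary (melanoma vs. benign) problem, following the definitions above."""
    y_true = np.asarray(y_true).astype(int)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, y_score),
        "average_precision": average_precision_score(y_true, y_score),
    }
```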
The plots show that in both cases the loss decreases steadily as the number of epochs increases and finally converges. Tables 4 and 5 show the comparison of this approach with existing ones on the ISBI 2016 and 2017 databases, respectively. The results obtained on these databases do not exceed the best accuracy reported so far, but they demonstrate a new way to proceed, by building the class separability into the deep neural network through a change of the objective function. We implemented DeepLDA [9] on these databases for comparison. Conclusions In this paper, we have proposed an objective function which works for both binary and multi-class classification problems. The proposed loss function minimizes the within-class variance and maximizes the total class variance. We evaluated our method on popular databases for various applications such as MNIST (handwritten digit recognition) and CIFAR-10 (natural image classification), and we have shown that the proposed approach achieves competitive performance on these databases compared to other methods. For the application of melanoma detection (skin cancer detection into melanoma and non-melanoma cases), since the number of images is small we trained the network using a multi-layer perceptron and were able to achieve an accuracy of 84.9% on the ISBI 2016 and 83.3% on the ISBI 2017 databases. These experimental results show the efficacy of our proposed approach compared to other methods across many computer vision classification tasks. Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
5,209.4
2021-04-24T00:00:00.000
[ "Computer Science" ]
Sensing Coherent Phonons with Two-photon Interference Detecting coherent phonons poses different challenges compared to coherent photons due to the much stronger interaction between phonons and matter. This is especially true for high frequency heat-carrying phonons, which are intrinsic lattice vibrations experiencing many decoherence events with the environment and are thus generally assumed to be incoherent. Two-photon interference techniques, especially coherent population trapping (CPT) and electromagnetically induced transparency (EIT), have led to extremely sensitive detection, spectroscopy and metrology. Here, we propose the use of two-photon interference in a three-level system to sense coherent phonons. Unlike prior works which have treated phonon coupling as damping, we account for coherent phonon coupling using a full quantum-mechanical treatment. We observe a strong asymmetry in the CPT absorption spectrum and a negative dispersion in the EIT susceptibility in the presence of coherent phonon coupling, which cannot be accounted for if only pure phonon damping is considered. Our proposal has applications in sensing heat-carrying coherent phonon effects and in understanding coherent bosonic multi-pathway interference effects in three-coupled-oscillator systems. Phonons are packets of vibrational energy that share many similarities with their bosonic cousins, photons. Advances in nanofabrication have enabled many parallels between the development of photon and phonon control. Parallel developments in passive control techniques include photonic 1 versus phononic crystals 2, optical 3 versus acoustic metamaterials 4, etc. Developments in the active manipulation of electromagnetic waves through light-matter interaction have led to the creation of nanoscale optical emitters 5 and gates 6, and similar progress has been made in controlling phonons through their interaction with matter, especially in the realms of optomechanics 7 and phononic devices 8,9. Phonons span a vast frequency range, and while techniques to control and sense lower frequency coherent phonons are well developed [10][11][12][13][14][15][16][17][18][19], heat-carrying coherent terahertz acoustic phonons have been harder to measure directly due to their small wavelength and the numerous scattering mechanisms at these small wavelengths 20. In the past, THz crystal phonons have been generated and detected in low temperature experiments with defect-doped crystals [21][22][23][24], with experimental evidence of coherent phonon generation [25][26][27]. At the same time, the interpretation of non-equilibrium phonon transport, with the advancement of nanoscale electrical heating and ultrafast optical pump-probe techniques, has allowed us to infer phonon coherence from broadband thermal conductivity measurements [28][29][30][31][32]. There has also been interest in using defect-based techniques as a thermal probe, using the perturbation of energy levels due to changes in temperature 33. Furthermore, surface deflection techniques with ultrafast optics have also been used to generate phonons close to THz frequencies in materials [34][35][36][37]. Defect-based techniques are attractive compared to both thermal conductivity measurements and deflection techniques due to their ability to directly access the atomic length scales where THz phonon wavelengths reside. Also, the energy levels in the excited state electron manifold of these defects can match the phonon energy precisely 21,38,39, resulting in a narrow-band phonon detector.
In light of the success of defect-based optical absorption techniques in coupling directly to high frequency phonons, we propose the use of two-photon interference to measure the coherence properties of these phonons. Two-photon interference techniques, the most famous being coherent population trapping (CPT) 40 and electromagnetically induced transparency (EIT) 41, have been widely adopted in spectroscopy and metrology in atomic 42,43 and defect-based systems [44][45][46][47][48]. However, CPT and EIT treatments usually exclude the possibility of a ground-state coupling 49 or merely treat the ground-state coupling as a thermal bath 47. In this paper, we propose to use the coherent coupling of the two ground states of a Λ system by THz acoustic phonons of the host material as a coherent phonon sensor. We show two experimentally observable effects, namely an asymmetric excited state population lineshape in CPT and anomalous dispersion profiles in EIT measurements, which only occur in the presence of coherent phonon coupling to a lattice phonon mode. Our proposal has the potential for direct implementation in the defect-based phonon detection experiments mentioned earlier 21,38,39 and extends the traditional two-coupled-oscillator models of two-photon interference to three-coupled-oscillator models 50,51. Our results will also be applicable to three-way coupled systems such as microwave-driven quantum-beat lasers 52,53, designed opto/electro-mechanical schemes 54,55 or phonon-based quantum memories 14,56,57. In the schematic of our proposal in Fig. 1(a), a two-photon interference is created in a localized region of a medium that carries an ensemble of identical emitters with electronic energy levels resembling a typical Λ system used in CPT or EIT. The optical fields driving the |2⟩−|1⟩ and |3⟩−|2⟩ transitions have detunings δa and δb with respect to the electronic energy levels of the emitters. The total Hamiltonian of the system can be written as in Eqs. 1a–1d, where the electronic part satisfies the eigenvalue equation (Eq. 1b) for the electronic eigenstates |m⟩, and the field part (Eq. 1c) is the usual expression comprising the sum over the photon modes, indexed by λ with raising and lowering operators c_λ†, c_λ, and the phonon modes, indexed by k with raising and lowering operators b_k†, b_k. The interaction Hamiltonian in Eq. 1d has two parts, the first being the original two-photon interference Hamiltonian that realizes the effects of CPT and EIT, and the other responsible for the phonon interaction. Using the procedure outlined in the Supplementary Information (SI), similar to the method of Whitley and Stroud 49, one arrives at the set of Eqs. S13, which specifies the equations of motion for the elements of the density matrix. Note that in Eqs. S13 we are able to obtain the spontaneous rates Γ (Eqs. S11, S12) and the stimulated rates G_i and W (Eq. S15) directly from the equations of motion (Eq. S4), without having to add damping terms by hand as in semi-classical approaches; this is the merit of the approach of Whitley and Stroud 49. The spontaneous damping terms are defined as sums over all mode contributions in both the optical (Eq. S11) and phonon (Eq. S12) cases, while the coherent optical coupling terms G_{a,b} are defined for coupling to specific modes α, β (Eqs. S15a and S15b) and W for the specific phonon mode γ.
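The displayed forms of Eqs. 1a–1d are not reproduced in this extract. As a hedged reconstruction, consistent with the verbal description here and in the SI (free photon and phonon fields, dipole couplings g_a^λ and g_b^λ on the two optical transitions, and an electron–phonon coupling ζ_k on the |3⟩−|1⟩ transition without the rotating wave approximation), the field and interaction parts might read:

```latex
H_{\text{field}} = \sum_{\lambda} \hbar\omega_{\lambda}\, c_{\lambda}^{\dagger}c_{\lambda}
                 + \sum_{k} \hbar\omega_{k}\, b_{k}^{\dagger}b_{k},
\qquad
H_{\text{int}} = \hbar\sum_{\lambda}\left( g_{a}^{\lambda}\, c_{\lambda}\,|2\rangle\langle 1|
               + g_{b}^{\lambda}\, c_{\lambda}\,|2\rangle\langle 3| + \text{h.c.}\right)
               + \hbar\sum_{k} \zeta_{k}\left(b_{k} + b_{k}^{\dagger}\right)
                 \left(|3\rangle\langle 1| + |1\rangle\langle 3|\right).
```

The exact mode labels and sign conventions of Eqs. 1c–1d may differ; this is only meant to fix the structure implied by the text.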
A very important feature of our system is that we have now included the possibility of a coherent phonon coupling of strength W on the |3⟩−|1⟩ transition, instead of a pure phonon damping term, and examining this feature is the main theme of the subsequent results and discussion. We would especially like to draw attention to the definition of W in Eq. S15c, where the ensemble average of the phonon annihilation operator only yields a non-zero value if the detected phonons are coherent 49, because an incoherent or thermal ensemble yields a zero ensemble average 58. Thus, our proposed technique offers a rigorous detection of phonons, rather than the indirect evidence obtained from thermal conductivity measurements. The diagonal terms ρ11, ρ22 and ρ33 are the populations of the energy levels. We first solve for the steady state of Eq. S13, which gives ρ11, ρ22 and ρ33 in the long-time limit. We first consider CPT, where the optical field for the |2⟩−|1⟩ transition is tunable while that for the |3⟩−|2⟩ transition is fixed, and both fields are of equal strength G_a = G_b = G. Under the condition of no phonon damping Γp = 0, unity optical damping Γa = Γb = Γ0 and coupling W = 0, we obtain the expressions for ρ11, ρ22 and ρ33 given in Eqs. 2a–2c. The dashed lines in Fig. 1(b) plot the populations of level |1⟩ (Eq. 2a) and level |3⟩ (Eq. 2c), which are in the ground state manifold. There is a broad resonance that peaks at zero detuning, where almost half of the population is in each of the ground states. The excited state population of level |2⟩ (Eq. 2b) in Fig. 1(c) is small for all detunings; the dashed line also shows a broad resonance peak, but there is a sudden dip to zero population at δa = 0, a feature of complete two-photon resonance in CPT 40,42,59. Now, let us add some phonon damping Γp = 0.1Γ0 but assume no phonon coupling, i.e. W = 0. The solid lines in Fig. 1(b) show the populations of levels |1⟩ and |3⟩ again: adding phonon damping reduces the population transfer between |1⟩ and |3⟩ at δa = 0, leaving only 10% of the population in level |3⟩ on resonance. Figure 1(c) shows that the two-photon interference effect in the excited state |2⟩ is reduced on resonance when phonon damping is present (solid line). This is physically expected, as Γp is a source of decoherence which degrades the ideal result in CPT or EIT. Next, we introduce coherent phonon coupling W and ignore phonon damping Γp; the excited state population of level |2⟩ is then given by Eq. S19. Figure 2 shows that the turning points of this population shift with W, following a linearly decreasing trend for W ≲ 0.1Γ0 (red solid line in Figs. 2(b,c)). However, when W is increased further, the higher order terms in Eq. S20 start to dominate, increasing the positive maximum value and decreasing the negative maximum, consistent with the observed shift in detuning as W increases in Figs. 2(b,c). Next, we examine how the phonon coupling W creates asymmetry in the peak heights in Fig. 2(b). We substitute the linear term of Eq. S20 into the steady state solution for ρ22 (Eq. S19) to obtain the difference between the peaks at positive and negative detuning, given by Eq. 3. Equation 3 is plotted as a function of W in Fig. 2(d) to show that the linear regime agrees well with the actual data from Fig. 2(a) for small values of W.
Experimentally, this linearity allows direct retrieval of the value of the phonon coupling W from measurements of the excited state population ρ22, provided the optical field couplings are much stronger than the phonon coupling. The third observation is the preservation of the resonance dip to zero occupation in Fig. 2 for all W, indicating that the dark state is preserved just as in the CPT case of Fig. 1(c). The dressed state picture allows us to identify the eigenstates by diagonalizing the Hamiltonian in Eq. 4, where the dressed states are obtained from the eigenvectors and eigenvalues of Eq. 4. In the absence of phonon coupling, W = 0, we obtain the familiar dressed state result of a CPT system 41, where the eigenvalues are (0, ±√2 G) and the eigenvectors are given in Eq. 5. Equation 5a is the dark state, as it does not contain any excited state |2⟩ component. Physically, this means that the ground states are mixed with no population in the excited state when the system is in a dark state. When W is non-zero, the eigenvalues are modified to (−W, 1/2(W ± √(8G² + W²))) and the eigenvectors become those of Eq. 6. Equation 6 shows that the dark state |a0⟩ is preserved even when W is non-zero. This is demonstrated in Fig. 3(a): the populations ρ11(t) and ρ33(t) tend to 0.5, which is the steady-state value in Fig. 1, and likewise ρ22(t) settles after t = 300/Γ0. The Fourier transform of ρ11(t) (blue solid line in Fig. 3(c)) shows a peak at ∼0.28Γ0. The peak almost matches the value √2 G with G = 0.2Γ0, as expected in CPT 40 and from Eq. 5 41. However, with a non-zero phonon term W = 0.01G_a, ρ11(t) and ρ33(t) both show a slower modulation on top of the faster optical oscillation, as shown by the blue and yellow population traces of levels |1⟩ and |3⟩ in Fig. 3(b). If we take the Fourier transform of ρ11(t) again, we obtain the red dashed spectrum in Fig. 3(c), where the first peak now shows a frequency splitting with respect to the undisturbed case. The splitting into two frequencies at ω− ∼ 0.27Γ0 and ω+ ∼ 0.29Γ0 resembles the splitting of the eigenvalues 1/2(W ± √(8G² + W²)) of the eigenvectors in Eq. 6. Physically, the phonon coupling W results in non-degenerate eigenvalue magnitudes, such that |a+⟩ and |a−⟩ oscillate at different eigenfrequencies. This in turn modulates the populations ρ11(t) and ρ33(t), causing a splitting of the frequency compared to the case where the phonon coupling W = 0.
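The dressed-state eigenvalues quoted above (Eqs. 5–6) can be checked numerically. The following is a small sketch, assuming that on two-photon resonance the coupling matrix of the Λ system with equal optical couplings G and an extra |1⟩−|3⟩ phonon coupling W takes the symmetric form below (in units of ħ; signs and conventions are illustrative, as Eq. 4 is not reproduced here).

```python
import numpy as np

def dressed_states(G, W):
    # Assumed on-resonance coupling matrix in the basis (|1>, |2>, |3>):
    # G couples |1>-|2> and |2>-|3>, W couples |1>-|3>.
    H = np.array([[0.0, G,   W],
                  [G,   0.0, G],
                  [W,   G,   0.0]])
    vals, vecs = np.linalg.eigh(H)       # ascending eigenvalues
    return vals, vecs

G = 0.2
print(dressed_states(G, W=0.0)[0])       # ~ (-sqrt(2)G, 0, +sqrt(2)G)

W = 0.01 * G
vals, vecs = dressed_states(G, W)
print(vals)                              # ~ {-W, (W ± sqrt(8G^2 + W^2))/2}, up to ordering

# The eigenvector at eigenvalue -W has no |2> component and equal/opposite
# weights on |1> and |3>: the dark state survives for W != 0.
dark = vecs[:, np.argmin(np.abs(vals + W))]
print(dark)                              # ~ (0.707, 0, -0.707) up to a global sign
```

With W = 0 the spectrum is (−√2 G, 0, +√2 G); for W ≠ 0 the dark eigenvalue moves to −W while the bright pair splits to (W ± √(8G² + W²))/2, and the eigenvector at −W retains no |2⟩ component, consistent with Eqs. 5–6.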
Having looked at the CPT case, one wonders whether the EIT technique can also be used to sense coherent phonons. In EIT, the condition on the optical fields becomes G_a ≪ G_b, where the |2⟩−|1⟩ optical field is now a weak probe with detuning δa, compared to a strong resonant driving field on the |3⟩−|2⟩ transition. The quantity of interest in EIT is the susceptibility of the medium 41 under the incidence of the probe beam, which is related to the off-diagonal steady state density matrix term χ21 in Eq. S14. Under the condition of no phonon field and no damping, W = 0 and Γp = 0, we can obtain the linear susceptibility by Taylor expansion of the steady state solution of Eq. S17 for χ21 for small G_a, giving Eq. 7. Figure 4(a,b) plots the real and imaginary susceptibility for different values of W. The shapes of the real and imaginary susceptibility for W = 0 in Eq. 7 are those of a typical EIT susceptibility 41, showing a sharp inflection at zero detuning δa = 0 for the real part and a sharp dip for the imaginary part. The dip to zero of the imaginary part (blue solid line in Fig. 4(b)) physically indicates zero absorption, which is what the transparency window in EIT refers to. When we have phonon coupling W > 0, we see changes in the dispersion in Fig. 4(a,b). The change in the real part in Fig. 4(a) is a decrease in the sharpness of the inflection, which could also be caused by damping. However, the negative anomalous imaginary part on resonance in Fig. 4(b) cannot be caused by damping: damping would only reduce the size of the dip, similar to the result for the excited state population of |2⟩ in Fig. 1(c). Thus, the presence of an anomalous imaginary susceptibility at resonance is another good measure of the strength of the phonon coupling W. Physically, a negative imaginary susceptibility should indicate gain rather than loss, which means that we not only have transparency but possibly amplification; the details of this possibility will be discussed in a future study. Experimentally, this scheme offers a rigorous way to detect coherent phonons in the THz frequency range, which is responsible for heat conduction. As mentioned earlier, these defect-based detection techniques are narrow band and yet tunable 38,39, and they have been employed successfully in understanding many aspects of phonon transport in crystals 23 and across interfaces 60. These crystals can be interfaced with other materials to act as phonon detectors 61, making our proposed method directly applicable to detecting coherent phonons in thermal transport. To experimentally realize our proposal, three challenges need to be addressed. Firstly, to our knowledge, CPT or EIT has not yet been experimentally demonstrated with a THz energy separation within the ground state manifold. However, we believe that with the advent of frequency combs, locking two lasers in the THz range is certainly possible 62, and we may soon see such an experiment being performed. Secondly, phase fluctuations in any of the optical or phonon fields will affect the quality of the photon-phonon interference. Experimental demonstrations of CPT and EIT typically use the same laser source to generate the two frequencies 40,41, leading to the same phase fluctuations in both optical fields. Dalton and Knight 63 specifically addressed this issue for two-photon interference, showing that a Λ system is spared from this decoherence while a ladder system is not. Here, our two-photon-phonon interference is a composite of Λ and ladder systems, and the net effect will be a reduced interference effect. Lastly, because of phase fluctuations, the coherent phonon field must carry the same phase fluctuation as the optical field, so we must generate the phonons in a coherent manner with the same laser field used for the |2⟩−|1⟩ and |3⟩−|1⟩ transitions. This is possible with the advent of coherent phonon sources in defect-based systems [25][26][27], material systems 34-37 and nanofabricated systems 11,[13][14][15][16][17][18][19]. Our work differs from the fields of optomechanics and non-linear coherent phonon control 64. Optomechanics primarily relies on coupling a mechanical mode to a designed optical cavity for coherent phonon control. It is remarkable that the quantum coherence of phonons has been explored in these settings, and that phonon coherence in thermal transport has been characterized using correlation functions 31. It is thus evident that characterizing high frequency coherent acoustic phonons in materials using a quantum mechanical description is only starting to be explored.
Lastly, we would like to mention that the relevance of our work is not limited to phonon sensing, but extends to three-way interference problems 54,55 and coupled oscillator systems 50,51. Our theory is not limited to phonon coupling of the ground state manifold but applies to any bosonic field. Thus, the predicted asymmetry in the excited state population, the modulation of the population time dynamics and the anomalous EIT dispersion will also be observable in any of the above systems, paving the way to understanding and engineering multiple interference pathways in more complex multilevel systems. In conclusion, we have proposed a coherent phonon sensing scheme that utilizes existing two-photon interference techniques to rigorously test for the presence of coherent phonons. This derivation follows closely the work of Whitley and Stroud [49]. Consider the Λ system described in the main text, where the frequency of the |2⟩−|1⟩ transition is expressed as Ωa = (E2 − E1)/ħ and that of the |2⟩−|3⟩ transition as Ωb = (E2 − E3)/ħ. The total Hamiltonian of the system can be written as in Eqs. 1a–1d, where the atomic part satisfies the eigenvalue equation Eq. 1b, and the field part (Eq. 1c) is the usual expression comprising the sum over the photon modes, indexed by λ with raising and lowering operators c_λ†, c_λ, and the phonon modes, indexed by k with raising and lowering operators b_k†, b_k. The interaction Hamiltonian in Eq. 1d has two parts, the first being the original two-photon interference Hamiltonian that realizes the effects of CPT and EIT, and the other responsible for the phonon coupling. Notice that we did not use the rotating wave approximation for the phonon part. The coupling coefficients g_a^λ and g_b^λ stand for the photon dipole interaction on the |2⟩−|1⟩ and |2⟩−|3⟩ transitions in Fig. 1(a), respectively, and the coupling coefficient ζ_k stands for the electron-phonon interaction. The magnitudes of the coupling constants are given by Eqs. S1 and S2, where ε_λ is the unit polarization vector of mode λ, d_a = ⟨2|r|1⟩ and d_b = ⟨2|r|3⟩ are the dipole moments, with ⟨3|r|1⟩ = 0, and V_l stands for the quantization volume for photons. However, we allow for electron-phonon coupling between |1⟩ and |3⟩, and the coupling coefficient is defined by Eq. S3 [IYS64, Toy03], where Ξ is the deformation potential, ρ is the density, v_k is the group velocity of mode k and V_p stands for the quantization volume for phonons. Using the equation of motion for a single-time operator, Ȯ(t) = (iħ)^{-1}[O, H], and the Hamiltonian in Eqs. 1a–1d, we find that the atom-field system evolves according to the equations of motion in Eqs. S4. Eqs. S4f and S4g can be integrated from the initial time t = 0 to give Eqs. S5a and S5b, which can then be substituted into Eqs. S4a–S4e to express the electronic density matrix operators in terms of the initial conditions of the field operators. We first make the harmonic approximation to Eqs. S5a and S5b, obtaining Eqs. S6. The validity of this approximation is discussed by Whitley and Stroud [49], where we also assume that the time intervals are much shorter than the Rabi or natural lifetimes for both the photon and phonon fields. Using the harmonic approximation in Eqs. S6, we simplify the photon field in Eq. S5a, since only the values of t′ within a few optical periods of t are important to the integral. Likewise, we apply the harmonic approximation to the phonon operator in Eq. S4g.
As we did not adopt the rotating wave approximation for the phonon field, the combined phonon operator takes the form given in Eq. S9. Note that Eq. S9 is invariant with respect to its conjugate; we can thus ignore conjugation considerations on the phonon operators later in our derivation. RELATIONSHIP BETWEEN TURNING POINTS AND DETUNING Solving the steady state of Eq. S17 for χ11, χ22 and χ33, one obtains the populations ρ11, ρ22 and ρ33 of each level. Here, we examine the coherent population trapping (CPT) case, where G_a = G_b = G and Γ_a = Γ_b = Γ0, which is the scaling factor for the entire system [49]; thus, all quantities in our calculations are expressed in units of Γ0. In the CPT calculations, we assume that δb = 0 and δa is varied, as described in the main text. In CPT experiments, ρ22 is usually the main indicator, and we also focus on this observable here. The solution for the excited state population ρ22 under the condition W = 0, Γp = 0 is given in Eq. 2b of the main text. Here, we show the solution for Γp = 0 but W ≠ 0, which gives Eq. S19. The turning points of Eq. S19 can be obtained by setting its first derivative to zero. One of the roots is at δa = 0, the same as in CPT. Two roots are imaginary and two roots are real, one for the turning point at δa > 0 and one at δa < 0. Here, we Taylor expand the roots for small W up to fourth order to obtain δa,max ≃ ±2^{3/4} G − (1/2)(1/2 + 1/√2) W plus higher order terms (Eq. S20); when W → 0, we recover the turning points at ±2^{3/4} G, as described in Fig. 1(c) of the main text. The linear shift term in Eq. S20, −(1/2)(1/2 + 1/√2) W, is independent of the sign of the root and is valid for small W, as shown in Fig. 2. TIME DEPENDENCE OF POPULATION The time dependence of the population is obtained by solving for the eigenvalues s_m and eigenvectors ν(m) of the matrix A in Eq. S18, together with its reciprocal eigenvectors w(m). The time-dependent solution of Eq. S17 is then given in terms of these eigenvalues and eigenvectors [49].
5,511.8
2017-04-03T00:00:00.000
[ "Physics" ]