The Oomycete Pythium oligandrum Can Suppress and Kill the Causative Agents of Dermatophytoses

Abstract
Pythium oligandrum (Oomycota) is known for its strong mycoparasitism against more than 50 fungal and oomycete species. However, the ability of this oomycete to suppress and kill the causal agents of dermatophytoses has not yet been studied. We provide a comprehensive study of the interactions between P. oligandrum and dermatophytes representing all species dominating in developed countries. We assessed its biocidal potential by performing growth tests on both solid and liquid cultivation media and by conducting a pilot clinical study. In addition, we studied the molecular background of mycoparasitism using expression profiles of genes responsible for the attack on the side of P. oligandrum and the stress response on the side of Microsporum canis. We showed that dermatophytes are efficiently suppressed or killed by P. oligandrum in the artificial conditions of cultivation media between 48 and 72 h after first contact. Significant intra- and interspecies variability was noted. Of the 69 patients included in the acute regimen study, symptoms were completely eliminated in 79% of the patients suffering from foot odour, hyperhidrosis disappeared in 67% of cases, clinical signs of dermatomycoses could no longer be observed in 83% of patients, and 15% of persons were relieved of symptoms of onychomycosis. Our investigations provide clear evidence that the oomycete is able to recognize and kill dermatophytes using recognition mechanisms that resemble those described in oomycetes attacking fungi infecting plants, albeit with some notable differences.

Electronic supplementary material: The online version of this article (10.1007/s11046-018-0277-2) contains supplementary material, which is available to authorized users.

Introduction
Dermatophytic fungi of the genera Trichophyton, Nannizzia, Microsporum and Epidermophyton cause infections of keratinized structures such as the skin, hairs and nails of healthy individuals [1]. Although infections by dermatophytes are usually not life-threatening, they are widespread and difficult to eliminate completely [2].
According to World Health Organization statistics, the global prevalence of dermatophytoses approaches 20-25%, making them one of the most frequent infectious diseases, with treatment costs estimated at half a billion dollars annually [3,4]. Dermatophytes are grouped ecologically according to their habitat as anthropophilic (human-associated), zoophilic (animal-associated) or geophilic (soil-dwelling). Virulence factors of dermatophytes remained unknown until recently, when comparative genome analyses revealed candidate genes possibly involved in the infection process [5][6][7]. Three classes of genes and their products are cited most often as critical factors: proteases secreted to degrade skin, kinases involved in the signalling necessary for the interaction between the host and the fungus, and LysM adhesins that appear to bind to surface carbohydrates of dermatophytes and mask them from the immune response of the host. These latter factors, in particular, appear to be responsible for the poor recognition of dermatophytes by keratinocytes and macrophages, and the subsequent inhibition of an effective immune response of infected organisms against dermatophytes. Considering the low efficiency of chemical antifungals against dermatophytes [8,9], their elimination by biological means appears an attractive alternative. However, a biological enemy would have to offer universal and safe elimination mechanisms, considering the physiological and etiological variability among individual dermatophytes. Pythium oligandrum is a non-pathogenic soil-inhabiting peronosporomycete (oomycete) colonizing the root ecosystems of many crop species [10]. This microorganism exhibits strong mycoparasitism against more than 50 fungal and oomycete species, including some of its relatives [11]. To become such an efficient parasite, P. oligandrum has developed a number of traits allowing it to recognize, engage and destroy target fungi [12,13], and it is assumed that it acquired its parasitism genes from the three eukaryotic kingdoms and from bacteria [14]. Hydrolytic enzymes secreted by P. oligandrum, such as chitinases, cellulases, proteases and glucanases, are often cited as molecular tools critical for its mycoparasitic success [10,15]. Competition for space and nutrients is another mechanism used by P. oligandrum for biological control [16]. The unique possibilities of this microorganism have been used extensively for the protection of plants from fungi [11,[16][17][18][19]. In addition, P. oligandrum has found practical use in human and veterinary medicine for the elimination of dermatophytes [19,20]. Despite the practical medical observations cited above, the exact cellular and molecular mechanisms behind the elimination of the causal agents of dermatophytoses have not been investigated. Here we provide a systematic study of the interactions between P. oligandrum and dermatophytes representing all ecological groups and species that dominate in developed countries [21,22]. We assessed its biocidal potential by performing growth tests, both on solid and on liquid cultivation media, and by conducting a clinical study. We studied the molecular background of mycoparasitism using expression profiles of genes responsible for the attack on the side of P. oligandrum and the stress response on the side of Microsporum canis.
Our investigations provide clear evidence that the oomycete is able to recognize and kill dermatophytes using recognition mechanisms that resemble those described in oomycetes attacking fungi infecting plants, albeit with some notable differences.

Microbial Strains and Media
The M1 strain of P. oligandrum was provided by the company Biopreparáty, Ltd (Únětice, Czech Republic) and corresponds to strain ATCC 38472. This oomycete was isolated from sugar beet [18,23]. Ten different species and 23 different strains of dermatophytes were obtained from the CCF collection (Culture Collection of Fungi, Charles University, Prague, Czech Republic), from the CCM collection (Czech Collection of Microorganisms, Masaryk University, Brno, Czech Republic) or from the working collection kept at the Institute of Microbiology of the Czech Academy of Sciences. Competition tests on solid media were carried out for the most common species of dermatophytes, including one strain of Epidermophyton floccosum, one strain of M. canis, two strains of Nannizzia fulva (syn. Microsporum fulvum), two strains of N. gypsea (syn. Microsporum gypseum), two strains of N. persicolor (syn. Microsporum persicolor), five strains of Trichophyton benhamiae (syn. Arthroderma benhamiae), one strain of Trichophyton erinacei, four strains of Trichophyton interdigitale, three strains of Trichophyton rubrum and one strain of Trichophyton tonsurans (Online Resource 1). The strain M. canis CCM 8353 was used for the gene expression study. The identity of all strains was verified using the ITS rDNA barcode sequence and PCR fingerprinting comparisons with reference strains according to Hubka et al. [24].

Interaction Studies on Plates
The interaction between the dermatophytes and P. oligandrum was studied on malt extract agar (MEA; malt extract 20 g/l, D-glucose 20 g/l, peptone 1 g/l, agar 20 g/l) and potato dextrose agar (PDA, HiMedia) incubated at 25°C in the dark. First, the dermatophytes under examination were inoculated on one side of the plate and allowed to grow for 3-10 days, producing colonies 20-25 mm in size. Thereafter, an agar block with P. oligandrum was placed onto the opposite side of the Petri dish, and the continued growth of the dermatophyte and of Pythium was evaluated every 2 days for up to 10 days. The measured parameter was the percentage occupancy of the plate, calculated as the distance of the front of the growing microorganism from the edge of the Petri dish (in mm) divided by the diameter of the plate (80 mm) and multiplied by 100. Each experiment was performed in triplicate on each of the media used. At the end of the experiment, the viability of the tested dermatophytes was also evaluated. An agar block (1 × 1 cm) was cut from the interaction zone, where both organisms were visibly present, and transferred to Czapek-Dox agar (CDA; sucrose 30 g/l, agar 20 g/l, NaNO₃ 3 g/l, K₂HPO₄ 1 g/l, MgSO₄ 0.5 g/l, KCl 0.5 g/l, Fe₂(SO₄)₃ 0.01 g/l, pH 6.5), which enables the growth of the dermatophytes but not of P. oligandrum.
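For readers who want to reproduce the occupancy metric, the calculation reduces to a one-line ratio. The following Python sketch is purely illustrative; the function name and the example value are ours, not taken from the study:

```python
# Plate occupancy as defined above: distance of the colony front from the
# opposite plate edge (mm), divided by the plate diameter (80 mm), times 100.
def plate_occupancy(front_distance_mm: float, plate_diameter_mm: float = 80.0) -> float:
    """Percentage of the plate occupied by the growing microorganism."""
    if not 0 <= front_distance_mm <= plate_diameter_mm:
        raise ValueError("front distance must lie within the plate diameter")
    return front_distance_mm / plate_diameter_mm * 100.0

# Example (hypothetical): a front 52 mm from the edge occupies 65% of the plate.
print(plate_occupancy(52))  # 65.0
```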
Interaction Studies in Liquid Suspension
Test suspensions of dermatophytes were prepared by washing the spores with 0.05% polysorbate 80 in water, gentle shaking with glass beads and filtration through a fritted filter with a porosity of 40-100 µm. The tested preparation of P. oligandrum (batch No. 060217.3, BARD s.r.o.) was resuspended in distilled water to obtain a 1% suspension with a concentration ranging from 100 to 200 CFU/ml and activated for 30 min at 20°C. During the suspension test, 0.5 ml of dermatophyte spore suspension (density of 3.04 × 10⁶/ml for T. rubrum, 0.88 × 10⁶/ml for T. interdigitale and 0.26 × 10⁶/ml for M. canis) was mixed with 0.5 ml of the P. oligandrum suspension and incubated at 20 ± 1°C for 1 h, 24 h and 48 h. Subsequently, tenfold serial dilutions were prepared from the incubated suspensions and the number of CFU was evaluated by cultivation on Sabouraud agar for 7-10 days at 25°C. Results are expressed as the logarithmic microbial viability reduction for each test microorganism, designated log R (reduction of vitality). The log R is calculated with the formula log R = log N₀ − log N_E, which accounts for the concentration (CFU/ml) of the dermatophyte at the beginning (N₀) and at the end of the contact time (N_E) with P. oligandrum.
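As a worked example of the log R metric, the sketch below (Python; the helper name and the example counts are hypothetical, chosen only to illustrate the order of magnitude reported later in Table 2) computes the viability reduction from two plate counts:

```python
import math

def log_reduction(n0_cfu_per_ml: float, ne_cfu_per_ml: float) -> float:
    """log R = log10(N0) - log10(NE): viability reduction over the contact time."""
    return math.log10(n0_cfu_per_ml) - math.log10(ne_cfu_per_ml)

# Example with invented counts: 1.5e6 CFU/ml at t = 0 dropping to 5.0e2 CFU/ml
# after 48 h gives log R ~ 3.5, i.e. within the 3-4 log reduction range.
print(round(log_reduction(1.5e6, 5.0e2), 2))  # 3.48
```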
Gene Expression Profiles on Agar Plates and in Liquid Suspension
Three genes connected with mycoparasitism on the side of P. oligandrum [10,15] and with aggressivity and stress response on the side of the dermatophyte [6] were selected to understand the molecular mechanisms behind the interaction of the two organisms (Table 1). For gene expression profiles on agar plates, samples of both the dermatophyte and P. oligandrum were taken by cutting agar blocks (4 × 8 mm) from the interaction zone and from zones with pure P. oligandrum or dermatophyte, as shown in Fig. 4. The lower part of the block containing the agar was removed, and the upper part with the mycelium was used for DNA and RNA extraction using the protocol of Berendzen et al. [25]. Gene expression profiling started on day 4 (just before the physical contact of both microbes) and proceeded until day 7. For gene expression profiles in liquid suspension, conidia of M. canis CCM 8353 were prepared and counted using the method described by Saunte et al. [26]. A suspension of P. oligandrum was obtained from a liquid culture and counted on the basis of all reproductive forms: sporangia, zoospores and oospores [23]. To initiate the interaction experiment, 4 ml of MEA (without agar) medium was mixed with 0.5 ml of M. canis conidia (5 × 10⁶ conidia/ml) and 0.5 ml of P. oligandrum (5 × 10⁶ cells/ml, as the sum of reproductive forms) in six-well plastic plates (BioTech, Czech Republic) and incubated at 30°C. In control experiments, one of the microorganisms was omitted and replaced by pure medium. From each well, 50 µl of the liquid medium was collected at 6-h intervals for subsequent culturing on CDA plates for viability testing and for gene expression profiling. Good aeration and homogeneity of the sample were assured by frequent agitation. The primers were based on the cited literature or designed with the Primer-BLAST tool (http://ncbi.nlm.nih.gov/tools/primer-blast) (Table 1). The specificity of the PCR primers was confirmed by sequencing their PCR products. All DNA/RNA amplifications were performed using the CFX Connect Real-Time PCR Detection System operated with CFX Manager™ Software ver. 3.0 (Bio-Rad) as described previously [27]. Briefly, reverse transcription reactions were performed in Hard-Shell® 96-well PCR plates (Bio-Rad) in a total reaction volume of 10 µl composed of 5 µl of the individual, stabilized RNA samples and 5 µl of the reverse transcription master mix A + B (Generi Biotech, Czech Republic). Each plate was sealed and incubated at 42°C for 60 min. Thereafter, the reaction mixtures were diluted 50× with RNase-free water according to the instructions of the manufacturer. The qPCR amplifications were performed in Hard-Shell® 96-well PCR plates in a total reaction volume of 10 µl composed of 4 µl of diluted cDNA (see above), 1 µl of the primer mixture containing 5 µM of each of the corresponding forward and reverse primers, and 5 µl of SsoAdvanced Universal SYBR® Green Supermix (Bio-Rad). The plates were placed into the CFX Connect Real-Time PCR System (Bio-Rad) operated with CFX Manager™ Software. The cycling parameters were: 95°C for 3 min followed by 50 cycles of 95°C for 10 s and 60°C for 30 s. The data were evaluated using the 2^−ΔΔCt (Livak) method as described in the Real-Time PCR Application Guide (Bio-Rad). The expression of inducible genes was related to the expression of two constitutive reference genes, beta-tubulin and glyceraldehyde-3-phosphate dehydrogenase, whose expression varies within ±50% over the range of tested conditions [28,29]. Gene expression on the interaction plate was corrected using gene expression on the control plates containing the single microorganisms under examination [27].
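The 2^−ΔΔCt evaluation referenced above follows the standard Livak calculation. A minimal sketch, assuming a single target gene normalized to one reference gene; the Ct values in the example are invented for illustration:

```python
def livak_fold_change(ct_target_sample: float, ct_ref_sample: float,
                      ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-DDCt (Livak) method.

    DCt = Ct(target) - Ct(reference gene), computed per condition;
    DDCt = DCt(sample) - DCt(control); fold change = 2^-DDCt.
    """
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(delta_ct_sample - delta_ct_control)

# Example: target gene Ct drops from 28 to 25 cycles while the beta-tubulin
# reference stays at 20 -> 2^-((25-20)-(28-20)) = 8-fold upregulation.
print(livak_fold_change(25, 20, 28, 20))  # 8.0
```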
Clinical Study
This study is a retrospective clinical trial conducted at the Department of Dermatology and Venereology of the Pardubice Regional Hospital in Pardubice between 1 June 2007 and 1 June 2014. No randomization of patients was performed. Informed consent was obtained from all individual participants included in the study. Patients were mostly outpatients included in the study based on the following criteria. For the study of acute patients (n = 69), the criteria were: clinical symptoms of foot mycoses confirmed by either microscopic observation or a positive microbial cultivation test. For the recurrent infection study (n = 29), the criteria were: recurring problems with tinea interdigitalis infection at least twice yearly, an acute attack of tinea interdigitalis present upon entering the study, and confirmation of foot mycosis by both microscopy and cultivation. No patients had to be excluded from the study on the basis of their age or clinical status. From the set of 29 patients with recurrent infection, 16 had an acute attack of tinea interdigitalis at least twice yearly, 9 of them three times a year, 3 patients stated four attacks annually, and 1 person suffered problems with the disease continuously. In acute patients, the profile of the causative agents revealed by cultivation was consistent with published data, with T. rubrum and T. interdigitale (8) representing by far the two most common species (Online Resource 2). In patients with recurrent infection, the most common causal agent was T. rubrum (72% of patients), sometimes in combination with candidiasis. Pythium oligandrum was applied in the form of the cosmetic product Biodeur® (Bio Agens Research and Development, BARD s.r.o., Únětice, Czech Republic), which is composed of P. oligandrum M1 dried spores (≥ 2 × 10⁵ oospores/g) stored in the presence of a silica desiccant and dried millet (Panicum miliaceum), which provides the substrate for the revitalization of the microorganism. For the foot baths, 1 g of Biodeur® was reconstituted in 2-3 l of tap water warmed to 34°C. Patients' feet were washed in this solution for 30 min and allowed to dry spontaneously. For acute infections, the foot bath was applied on two alternate days in three consecutive weeks, and the medical evaluation was performed 1 month after the last bath. The protocol for recurrent tinea interdigitalis patients was based on an initial bolus, identical to the acute situation, with additional applications in weeks 5, 7 and 10, followed by maintenance applications 6 weeks apart. As an additional preventive measure, shoe spraying with Biodeur® suspension was applied twice per week in weeks 1-5, followed by once-weekly sprays in weeks 6-9, and additional sprays every second week thereafter. In the study of patients with recurrent tinea interdigitalis, continuous monitoring in the form of regular checks followed; the time of cessation of the disease was evaluated based on clinical evaluation supplemented by mycological evaluation by microscopy and cultivation.

Patterns and Kinetics of Interaction Between P. oligandrum and Dermatophytes on Plates and in Suspension
Three types of growth pattern scenarios were observed. Firstly, growth of P. oligandrum over the dermatophyte was observed in the case of M. canis, N. persicolor, T. benhamiae, T. rubrum and T. tonsurans (Fig. 1). Concerning the kinetics of the interaction, T. benhamiae, T. rubrum and T. tonsurans were rapidly overgrown by the oomycete. In M. canis, the curve is biphasic, indicating a certain degree of adaptation to Pythium in the initial stage (Fig. 2). In other species, the interaction started with the formation of a contact inhibition zone of varying intensity, from wide (N. fulva, T. erinacei, T. interdigitale and E. floccosum) to narrow (N. gypsea) (Fig. 1). This interaction occurred at the level of the substrate mycelium and was later followed by the production of aerial mycelium by P. oligandrum, which overgrew the dermatophyte species. The last category of dermatophytes includes N. fulva and some strains of N. gypsea, N. persicolor and T. interdigitale that were able to adapt to the attack mounted by Pythium and, eventually, to stop its action. The competitive ability of P. oligandrum on MEA was generally better than on PDA, with the exception of N. fulva and certain strains of T. interdigitale. Excluding species from which only a single strain was available, notable intraspecies variability was found in N. fulva, N. gypsea and N. persicolor (Fig. 3). Viability tests were conducted after the dermatophyte overgrown by P. oligandrum was transferred to CDA medium, which allows the growth of the dermatophyte but not of the Pythium. Using this approach, it could be shown that the dermatophytes were actually dead after their interaction with the Pythium. Under the conditions of the liquid culture, a 3-4 log viability reduction of the target dermatophyte was observed after 48 h (Table 2).

Gene Expression Profiles and Viability During the Elimination of Microsporum canis on the Plate
The results regarding the expression of inducible aggressivity genes in P. oligandrum and M. canis are detailed in Fig. 4. Only expression relative to beta-tubulin is shown, since expression relative to glyceraldehyde-3-phosphate dehydrogenase provided very similar data (not shown). Concerning the oomycete, on day 4 the expression of all genes was very low compared to the control situation. After direct contact on day 5, the gene for the critical digestion enzyme cellulase (POCELL) was switched on, followed by the expression of the cell wall-lysing endo-β-1,3-glucanase (POENDO) and the sporulation marker POSTRU, with a peak on day 6. On day 7, only POSTRU expression persisted, when the oomycete had most probably exhausted the available nutrients and switched its metabolism to the sporulation mode.
Fig. 1: Examples of the time course of direct interactions representing all five interaction patterns. Type I: exponential single phase of the ascending type. (a) Trichophyton rubrum CCF 4933. (b) Trichophyton benhamiae CCF 4918. Type II: exponential two-phase pattern of the ascending type. (c) Trichophyton erinacei CCF 4472. Type III: exponential single phase with an ascending and a descending phase. (d) Nannizzia persicolor CCF 4542. (e) Epidermophyton floccosum PL 231. Type IV: two-phase pattern with an ascending and a descending phase. (f) Nannizzia gypsea CCF 4626. Photographs were taken on days 4, 6, 8 and 10 of the experiment, ordered from left to right.

In the case of the dermatophyte, there was considerable upregulation of all genes before the contact of the two taxa on day 4 and after the contact on day 5. Then, on day 6, the biological antagonism between the oomycete and the dermatophyte was decided in favour of the oomycete. This was documented by the dramatic cessation of the expression of all analysed genes, with only negligible amounts of transcripts being detected. The gene expression profiles under the suspension conditions showed the same pattern as observed in the plate experiment (Figs. 4, 5). The notable differences were that the expression of the POENDO gene preceded the expression of the POCELL gene and that the normalized expression levels were generally much lower. Furthermore, while the dermatophyte retained considerable overexpression of the MCLYSM and MCMETA genes, we observed very little expression of the MCCAMK gene.

Clinical Efficacy of Pythium oligandrum in Patients with Tinea Pedis
The efficacy of relief of foot mycoses symptoms is summarized in Fig. 6. Of the 69 patients included in the acute regimen study, 42 had odour symptoms, 43 exhibited hyperhidrosis, 58 had dermatomycosis, and 59 suffered from onychomycosis. Symptoms were completely eliminated in 79% of the patients suffering from foot odour, hyperhidrosis disappeared in 67% of cases, clinical signs of dermatomycoses could no longer be observed in 83% of patients, and 15% of persons were relieved of symptoms of onychomycosis (Fig. 6b). In patients with recurring infections, clinical symptoms disappeared in 28 (97%) of all 29 patients within 6 weeks after the first application. Concerning the long-term follow-up of all 29 patients, only three patients that finished the 12-month application protocol had a single further episode of tinea interdigitalis within the next 3 months. The application of the biological cosmetic product containing P. oligandrum was well tolerated. We did not observe a single episode of an allergic reaction or any other side effect in either the acute or the recurrent group of patients (Fig. 6c).

Discussion
The ability of P. oligandrum to parasitize other fungi and oomycetes has been known since the 1940s [30]. However, this important aspect of the biology of this peronosporomycete has so far been analysed only in relation to its ability to provide protection against plant pathogens (reviewed in [10,31]). We used three approaches in our investigation: direct observations of interactions in dual cultures, molecular analyses of gene expression profiles of both interacting microorganisms, and clinical tests in real conditions. For the dermatophytes investigated on Petri dishes, we could distinguish at least four patterns of interaction (Fig.
2): pattern I, defined as an exponential single phase of the ascending type (T. benhamiae, T. rubrum and T. tonsurans); pattern II, defined as an exponential two-phase pattern of the ascending type (M. canis and T. erinacei); pattern III, defined as an exponential single phase with an ascending and a descending phase (E. floccosum, N. fulva and N. persicolor); and, lastly, pattern IV, defined as a two-phase pattern with an ascending and a descending phase (e.g. N. gypsea and T. interdigitale). It may be assumed that such patterns reflect features of the more detailed mechanisms of mutual interaction, controlled by consecutive waves of recognition of the target dermatophyte by the oomycete, or consecutive waves of diffusion of soluble molecular factors (the detailed nature of which is yet to be ascertained). In general, the result of the biological fight between the oomycete P. oligandrum and the target fungus is expected to be both species- and medium- (environment-) specific, and the presented observations confirm this (Fig. 3). Nevertheless, the entire group, including T. rubrum, a dominant dermatophyte species, is mostly killed and eliminated by P. oligandrum with high efficiency. The high efficiency of the biological elimination of the target dermatophyte species is further corroborated by the suspension interaction results, where the suppression reached the 3-4 log reduction within 48 h (Table 2) that is required of chemical biocides [32]. Such uniformly high killing efficiency is not common in other groups of fungi, where more extensive variation in the efficiency of killing individual fungi is observed [30]. Our molecular analyses provided additional understanding of the molecular nature of the antagonism between the oomycete and the dermatophytes. It was interesting to observe that while in the case of P. oligandrum direct contact with its prey appeared to initiate very few changes in the gene expression profiles, the dermatophytes must have been informed about the presence of the biological enemy long before the actual contact (dramatic gene upregulation on day 4, Fig. 4). We hypothesize that volatile or diffusible compounds produced by P. oligandrum (reviewed in [16]) mediate this kind of early response in the attacked fungus. In the case of the gene expression profiles of P. oligandrum, the sequence of gene upregulation can be explained based on our knowledge of the biological functions of the individual gene markers. Both cellulase (POCELL) and endo-β-1,3-glucanase (POENDO) belong to the group of glycohydrolases, which are able to digest the cell wall of the attacked fungus [33,34]. However, remodelling of the oomycete's own cell wall allows the endoglucanase to serve also as an important marker of sporulation [35]. The tyrosine-rich structural protein (POSTRU) has recently been identified as one of the most specific markers of sporulation and is thus expressed during the late stages [36]. Indeed, the sequence of gene expression in the plate experiment started with the upregulation of cellulase on day 5 of the experiment, especially in the samples taken from proximal areas of the contact zone between the oomycete and the dermatophyte. All three monitored genes of the oomycete were significantly upregulated on day 6 of the experiment, albeit the cellulase only in the contact areas (Fig. 4).
On day 7 of the experiment, sporulation of the oomycete was expected, which was supported by the persistent upregulation of the tyrosine-rich structural protein (Fig. 4). The time sequence of gene expression of the oomycete during the suspension experiment was somewhat different, reflecting different cellular populations and a different environment (Fig. 5). Here, the endoglucanase gene was expressed earlier than in the plate experiments, at 12 h after mixing, while both the cellulase and endoglucanase genes were highly expressed at 24 h after mixing. The tyrosine-rich structural protein was expressed only at 48 h after mixing, when the sporulation phase occurred under the suspension experimental conditions (Fig. 5). The three dermatophyte genes monitored in the present study are typical aggressivity- and stress-related genes [6]. The LysM cell surface effector (MCLYSM) is an important surface effector and masking protein that covers the surface of dermatophytes to block access of soluble protein recognition factors of the immune system. This mechanism operates in plant-pathogenic fungi and in the response of dermatophytes to the human immune system [6]. We have shown that dermatophytes use this stress-induced pathway also in their response to mycoparasitism. Metalloprotease (MCMETA) is the principal digestion enzyme that allows dermatophytes to digest the skin protein keratin and thus mediates both the attachment of the fungi to skin structures and their nutrition. Ca-dependent kinase (MCCAMK) is one of the most important signalling enzymes triggering and orchestrating the response of the dermatophytes. These genes were used because the genes that are upregulated during the biological struggle of dermatophytes with the oomycete were not known. They were all switched on very early, showing that the dermatophytes were able to sense the enemy and react, and they remained switched on until the moment of death (Fig. 4). We again noticed some principal differences between the plate and the suspension experiments. In the plate experiment, the upregulation was much more dramatic and concerned all three monitored genes on days 4 and 5 of the experiment. In suspension, only the MCLYSM gene was upregulated at 12 h, accompanied by MCMETA at 24 h (Fig. 5). In 2002, Mencl described the use of the cosmetic biopreparation Biodeur, containing a fermented millet substrate with a surface growth of P. oligandrum, for the suppression of hidrotic foot syndrome (foot sweating) and interdigital mycoses [19]. This author reported an effect 1 month after its application on patients having infections with T. rubrum, T. interdigitale, and other dermatophytes and yeasts. In patients subjected to that study, there was 78.6% elimination of odour symptoms, 67.4% elimination of the hidrotic symptom and 82.8% elimination of the dermatophyte infection, evidenced by the absence of the dermatophyte upon microbial cultivation. Our clinical results from the initial study can be viewed as remarkable, considering that dermatophytoses in humans are notoriously difficult to treat and often recurring. We noticed a high efficiency of the cosmetic product containing oospores of the oomycete P. oligandrum in the elimination of clinical signs of dermatophytosis, odour symptoms and hyperhidrosis (67-83% of patients, n = 69). Onychomycoses are known to take much longer to resolve compared to other symptoms.
Indeed, although notable recovery from interdigital damage could be observed as soon as 20 days after the first application of the biological product, as long as 9 months was needed to observe definitive signs of recovery in cases of onychomycosis (not shown). In conclusion, the present study demonstrates the ability of the oomycete P. oligandrum to suppress or eliminate dermatophytes, emphasizing its efficiency against a broad spectrum encompassing virtually all clinically important species and demonstrating a susceptibility profile that has not been observed for the classes or species of fungi previously targeted by this oomycete. By using viable P. oligandrum propagules, it was possible to prove that their suppressing effect on dermatophytes starts within hours of the mutual encounter and interaction. Our detailed description of this aggressive type of parasitism provides scope for the practical use of the findings presented here.
A Data-Driven Method for Vehicle Speed Estimation

Resume
This paper presents a method based on Artificial Neural Networks for estimation of the vehicle speed. The technique exploits the combination of two tasks: a) speed estimation by means of regression neural networks dedicated to different road conditions (dry, wet and icy); b) identification of the road condition with a pattern recognition neural network. The training of the networks is conducted with experimental datasets recorded during driving sessions performed with a vehicle on different tracks. The effectiveness of the proposed approach is validated experimentally on the same car by deploying the algorithm on a dSPACE computing platform. The estimation accuracy is evaluated by comparing the obtained results to the measurement of an optical sensor installed on the vehicle and to the output of another estimation method, based on the mean value of the velocity of the four wheels.

Introduction
Automotive industry technologies witnessed a rapid evolution in the recent period, supported by the constant developments in the fields of electronics, actuation, automation and connectivity. Nowadays, commercial cars are highly performing and, at the same time, intelligent and sustainable [1]. Many benefits of the latest advancements are already tangible in terms of improved safety and comfort, reduction of emissions and traffic congestion, lower stress for the car occupants and more confidence of the driver in the vehicle [2][3]. In this context, active strategies relying on the real-time assessment of the vehicle dynamics assume a crucial importance, and the knowledge of the car states is a fundamental task that is typically performed by direct measurement or, alternatively, by estimation and other indirect approaches [4]. However, some of the vehicle parameters (e.g.
speed and sideslip angle) can be directly measured only with expensive, bulky and poorly robust devices, whose adoption in large-production vehicles is not a viable solution. This motivates the considerable research effort that has recently been dedicated to the investigation of alternative methods, such as the application of artificial intelligence to the assessment of vehicle dynamics [5][6]. In this paper, the attention is focused on the estimation of the vehicle speed, a parameter that plays a key role in several active systems dedicated to the control of the wheel slip, yaw rate and sideslip angle [7][8]. The direct measurement of the speed is commonly obtained using optical or GPS-based sensors [9]. However, these solutions may present some limitations according to the operating conditions. Optical sensors suffer problems of cost, size and sensitivity to dirt and environmental conditions. On the other hand, GPS-based sensors may not be sufficiently robust and reliable in specific atmospheric conditions, as well as in situations with limited sky visibility, such as tunnels and urban environments with tall buildings. These problems can be partially overcome by analyzing signals coming from more satellites, using the Differential GPS technique or by exploiting two GPS antennas [10]. However, these approaches are still characterized by high signal latency during the broadcasting of corrections. Typically, this latency is in the range of seconds, which might be far from the requirements of the active solutions implemented on board a vehicle [11][12]. Alternative solutions are based on extraction of the speed information from vehicle analytical models [13]. A solution presented in [14] is based on the measurements of an Inertial Measurement Unit (IMU) using a slip detection estimator. This technique is typically implemented considering the unknown road condition as a bounded uncertainty and employing the estimated friction-independent tire forces for correcting the estimate. Nevertheless, the need for an accurate assessment of the road friction coefficient and of parameters related to the tires, which are highly time-varying, represents a relevant limitation of this technique. Further approaches exploit the Kalman Filter (KF) [15], the Adaptive Kalman Filter (AKF) [16] and its nonlinear versions, the Extended Kalman Filter (EKF) [17] and the Unscented Kalman Filter (UKF) [18].
Other methods rely on similar filter/observer-based techniques [19][20]. However, these model-based techniques may suffer accuracy problems if the reference model is inaccurate or unable to reproduce the vehicle dynamics in all the driving conditions. An alternative class of techniques is based on Fuzzy Logic (FL) [21][22][23][24], which is strongly dependent on the designer's experience and requires a highly refined definition of the rules and membership functions [25]. Finally, a common solution computes the speed as the average of the velocities of the four wheels. Although simple and cheap, this method may be inaccurate when one or more wheels lock during a sudden braking or start spinning and skidding, i.e. while driving on wet or icy roads and during extreme manoeuvres. This paper proposes a method to estimate the vehicle speed by using artificial intelligence to mitigate the limitations of model-based techniques and to have an effective solution also in conditions that are difficult to represent in the models. Specifically, the presented method exploits a combination of regression and classification Artificial Neural Networks (ANNs). As is well known, ANN-based approaches do not rely on any model and, if the networks are appropriately trained, may guarantee good levels of accuracy and robustness, as demonstrated by the growing attention that these methods are gaining in several engineering fields [26][27][28]. To the authors' knowledge, although ANNs are widely documented as effective in system modeling and time-series estimation, few research studies using ANNs for vehicle speed estimation have been reported in the literature so far, since most rely on other techniques [29]. In this work, the proposed architecture includes two tasks: a) speed estimation computed by three parallel regression ANNs, dedicated to three different road conditions (dry, wet and icy road) and b) identification of the road condition with a pattern recognition neural classifier. The classifier output is used to select the correct estimation among the three outputs of the regression networks. Typically, the problem of road condition detection is tackled with estimation of the friction coefficient with model-based approaches [30], regression ANNs [31][32][33], or by exploiting radar measurements [34][35]. In this study, on the contrary, the aim is not to provide the value of the friction coefficient, but the information on the class of the road condition: dry, wet or icy. The ANNs' training datasets have been collected on a real vehicle, equipped with an optical sensor for acquisition of the reference speed, which is the target adopted during the supervised learning phase of the regression networks. Each regression network is trained with the dataset relative to the corresponding road condition. On the other hand, the input of the classifier is a single set of features extracted from the measured vehicle parameters and including all the road conditions. The validation of the method has been conducted experimentally on the same vehicle by deploying the designed algorithm on an auxiliary electronic control unit. The effectiveness of the approach is demonstrated by comparing the output of the estimator to the direct optical measurement and to another estimation computed as the average of the velocities of the four wheels. This estimation is extracted from an algorithm that was already deployed on the vehicle control unit. The main contribution of this paper is the proposal of a data-driven method to estimate the vehicle speed. This approach has not been investigated yet in the literature and allows obtaining accurate results if the training datasets effectively include all the significant behaviors of the vehicle in the widest possible set of handling manoeuvres and driving conditions. The good level of accuracy is quantified with the evidence obtained during the experimental validations. The results obtained on a high-performance vehicle allow highlighting the estimation behavior in extreme driving conditions. The paper is structured as follows. The first section is dedicated to the description of the vehicle setup and of the regression and classification tasks. Afterwards, the design of the neural networks for the speed estimation and for the road condition identification is illustrated. The last section presents the discussion of the experimental results obtained on the real vehicle in different driving conditions and in correspondence of road condition transients.
Estimation method and vehicle setup
The architecture of the proposed method is illustrated in Figure 1 and consists of two interconnected stages dedicated to the speed estimation and to the identification of the road condition. The former exploits three parallel Non-linear Autoregressive with Exogenous Input (NARX) neural networks and provides three outputs, one per road condition: dry (v̂_xD), wet (v̂_xW) and icy (v̂_xI). The regression networks are fed with the eight measurements listed in Table 1 (parameters 1 to 8) and trained with a supervised learning procedure using the speed measured by the optical sensor (v_x) as the target output. Inputs 5 to 8 are computed as v_ij = ω_ij · r_i (with the appropriate unit conversion), where i is F or R in the case of front or rear wheels, respectively, j is L or R in the case of left or right wheels, respectively, ω_ij is the angular speed of the ij-wheel, expressed in revolutions per minute, and r_i is the wheel radius, measured in meters. The total steering angle δ_TS (input 4) is computed as the sum of the steering wheel angle (defined as the angle between the vehicle's direction of motion and the steered wheel direction) and the active front steering input, which is obtained from the electronic control unit of the vehicle. The second stage identifies the road condition by exploiting a classifier based on a pattern recognition feed-forward ANN. The classification process generates an output (s) allowing the selection of the correct estimation among the outputs of the three regression networks described above. The datasets adopted for the training of the regression and classification networks were collected on an instrumented vehicle on different test tracks and road conditions. The vehicle is a four-wheel drive (4WD) sport car with a power-to-weight ratio of around 0.35 kW/kg and a weight of about 1700 kg. The vehicle is equipped with standard inertial sensors and a two-axis optical sensor Correvit S-Motion from Kistler, which is exploited for the measurement of the speed and features a precision of ±0.5 km/h, declared by the manufacturer. A dSPACE MicroAutoBox is interfaced with the CAN-bus of the vehicle to allow collecting the training datasets in the design phase and deploying the designed algorithm for the experimental validation. The sampling rate of the data acquisition is 100 Hz. The training dataset collection and experimental validation have been conducted in all the possible combinations of the different conditions that are reported in Table 2. The following driving conditions have been explored in three different adherence conditions (dry, wet and icy road): different tire-road friction coefficients; summer and winter tires; new and used tires; tests performed with and without the active safety system; tests performed by selecting different car driving modes; tests performed with different driving styles and by professional and common drivers.

Design of the estimator
This section describes the design of the regression neural networks for the estimation of the speed and of the classifier for the identification of the road condition.

Regression task for the vehicle speed estimation
A NARX ANN architecture is adopted for the regression task. This network allows modelling a discrete non-linear system. The output of the network is defined as y(n) = f(y(n-1), ..., y(n-d_y), x(n), ..., x(n-d_x)), where x(n) and y(n) are the inputs and outputs of the network at the discrete timestep n, respectively, d_x and d_y are the input and output buffer memory, respectively, and f is the non-linear model represented by the network.
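To make the closed-loop NARX mechanics concrete, the sketch below implements one possible forward pass in Python with NumPy. It is an assumption-laden illustration, not the authors' implementation: the buffer sizes, hidden-layer width and weights are placeholders (the real values come from Table 3 and Levenberg-Marquardt training, neither of which is reproduced here):

```python
import numpy as np

# Minimal closed-loop NARX forward pass: y(n) = f(past outputs, buffered inputs),
# with a tanh hidden layer (HAF) and a linear output layer (OAF).
rng = np.random.default_rng(0)
d_x, d_y, n_inputs, n_hidden = 2, 2, 8, 10   # illustrative sizes, not Table 3 values

W_h = rng.normal(scale=0.1, size=(n_hidden, n_inputs * (d_x + 1) + d_y))
b_h = np.zeros(n_hidden)
w_o = rng.normal(scale=0.1, size=n_hidden)   # placeholder untrained weights
b_o = 0.0

def narx_step(x_buffer: np.ndarray, y_buffer: np.ndarray) -> float:
    """One step: regress y(n) on buffered exogenous inputs and past outputs."""
    z = np.concatenate([x_buffer.ravel(), y_buffer])
    h = np.tanh(W_h @ z + b_h)    # hidden activation function (tanh)
    return float(w_o @ h + b_o)   # linear output activation function

# Closed-loop estimation over a stream of 8 measured signals (Table 1, inputs 1-8)
x_stream = rng.normal(size=(100, n_inputs))
x_buf = np.zeros((d_x + 1, n_inputs))
y_buf = np.zeros(d_y)
speed_estimates = []
for x_n in x_stream:
    x_buf = np.vstack([x_n, x_buf[:-1]])             # shift input buffer
    y_n = narx_step(x_buf, y_buf)
    y_buf = np.concatenate([[y_n], y_buf[:-1]])      # feed estimate back (closed loop)
    speed_estimates.append(y_n)
```

During training the feedback buffer would instead hold the measured target y*(n) (open loop), which is the distinction drawn in the text.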
During the regression procedure, the value of the dependent output signal y(n) is regressed on the previous d_y values of the output signal, considering the previous d_x values of the independent (exogenous) input signal. In the proposed solution, the NARX is adopted in open-loop during the training process and in closed-loop during the estimation phase, i.e. when the network is deployed on the electronic unit in the real application. The target input is y*(n), which is provided to the ANN during the supervised training phase. The Hidden Activation Function (HAF) is a hyperbolic tangent sigmoid and the Output Activation Function (OAF) is a linear function. Table 3 reports the training parameters of the three networks, the number of neurons in the hidden layer and the input and feedback buffer sizes. The networks are trained with the Levenberg-Marquardt backpropagation algorithm. These characteristics are the result of a trial-and-error procedure, since the design and training of a neural network does not follow a standard procedure, as discussed in detail in [36] and [37].

Classification task for the road condition identification
The road condition identification is performed by a two-layered (one hidden and one output layer) feed-forward pattern recognition ANN. This architecture connects an input feature space to an output space of multiple pattern classes, and it has already been applied in the literature to solve classification problems in different engineering fields [38][39]. After a trial-and-error procedure, the hidden layer has been designed with a size of 50 neurons. The HAF is a hyperbolic tangent sigmoid and the OAF is a normalized exponential function. The adopted training procedure is based on the Levenberg-Marquardt backpropagation algorithm. The input of the classifier is a set of 64 predictors, extracted from seven of the acquired signals, namely the longitudinal and lateral accelerations (a_x and a_y), the yaw rate and the longitudinal speeds of the four wheels (v_FL, v_FR, v_RL, v_RR) [6]. Features 1 to 22 have a straightforward definition (mean, standard deviation, peak-to-RMS value and variance of the acquired signals). Features 23 to 64 result from a spectral analysis performed on the input signals, where PSD stands for Power Spectral Density, computed using the periodogram technique [40] by dividing the considered signal into multiple overlapping blocks and computing the average of their squared-magnitude Fast Fourier Transforms (FFT) [41]. The average spectral power (features 51 to 64) is computed as the integral of the PSD over two adjacent frequency bands: 0.5 to 1.5 Hz (frequency band 1) and 1.5 to 5 Hz (frequency band 2).
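A minimal sketch of the band-power features (51 to 64) described above, using SciPy's Welch estimator as a stand-in for the paper's periodogram-averaging scheme; the segment lengths and the test signal are our assumptions, not the authors' settings:

```python
import numpy as np
from scipy.signal import welch

FS = 100.0                          # CAN data acquisition rate (100 Hz)
BANDS = [(0.5, 1.5), (1.5, 5.0)]    # frequency bands 1 and 2 from the text

def band_power_features(signal, fs=FS, nperseg=100, noverlap=50):
    """Average spectral power per band: the Welch PSD (average of squared-
    magnitude FFTs over overlapping blocks) integrated over each band."""
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    df = freqs[1] - freqs[0]
    return [float(psd[(freqs >= lo) & (freqs < hi)].sum() * df) for lo, hi in BANDS]

# Example: one 2 s buffer (200 samples) of a 1 Hz steering-like oscillation;
# nearly all the power falls in band 1 (0.5-1.5 Hz).
t = np.arange(200) / FS
buf = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.default_rng(1).normal(size=200)
print(band_power_features(buf))
```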
The predictors are collected in buffers with a time length of 2 s and refilled at a frequency of 10 Hz, which is the output rate of the classifier and hence of the overall estimation output v̂_x. The set of predictors was selected by a trial-and-error procedure, performed to maximize the accuracy of the classification task. A more refined selection phase could be performed after quantifying the influence of each predictor. This aspect is currently a hot research topic [42][43]; nevertheless, this analysis is beyond the scope of the present study and would require a dedicated work.

Results and discussion
The results are presented in different driving and road conditions. A dedicated subsection analyses the estimation behavior in correspondence of road condition transients.

Speed estimation
The accuracy of the proposed method is evaluated by comparing the estimation (v̂_x) to the measurement of the optical sensor mounted on board the vehicle (v_x) and to a further estimation computed as the average of the velocities of the four wheels (v̂_xAVG). The latter is an algorithm that was already deployed in the electronic control unit of the vehicle. The more relevant experimental results are reported in the following figures. The graphs illustrate the comparison between v_x, v̂_x and v̂_xAVG in subplot a, the absolute errors of the two estimation methods with respect to the measured value (ε₁ for the proposed method and ε₂ for the wheel-average method) in subplot b, and the behavior of the ANN input signals, specifically the longitudinal and lateral accelerations a_x and a_y (subplot c), the total steering angle (subplot d), the wheels' speeds (subplot e) and the yaw rate (subplot f). The main characteristics of the absolute errors are reported in Table 4, where MSE and MSE_AVG are the mean square errors of the proposed method and of the method based on the average of the four wheels' velocities, respectively. They are computed as follows: MSE = (1/N) Σ (v̂_x(n) − v_x(n))², with MSE_AVG defined analogously for v̂_xAVG (4). Figure 2 shows results obtained on dry asphalt with the Electronic Stability Control (ESC) system off and the car set in the racing driving mode. At 20 s, a sine-sweep manoeuvre is performed with the frequency of δ_TS increasing from 0.5 Hz to 1.5 Hz and the speed v_x equal to around 50 km/h. Three additional sine-sweep manoeuvres are performed at 40, 60 and 90 s, and a step-steer manoeuvre is performed at 130 s, when the vehicle longitudinal speed goes to zero while δ_TS reaches -550 deg. The speed is estimated accurately by the ANN-based algorithm. On the other hand, v̂_xAVG is affected by a relevant error at 155 s during a sudden braking, whereas the estimation of the proposed ANN-based method remains accurate. The error ε₂ (dashed line) presents high peaks, confirming that the estimate provided by the wheels' velocity average may not be completely reliable during some extreme manoeuvres. Figure 3 represents results obtained on wet asphalt with the ESC system switched off and the car set in sport driving mode, during a lap on a handling circuit. During this acquisition, the driver has performed successive demanding manoeuvres, reaching 160 km/h and steering from -150 deg to 150 deg. The speed is estimated accurately by the proposed method, whereas the error ε₂ reaches peaks of 7 km/h. Figure 4 shows results obtained on icy asphalt with the ESC system enabled and the car set in normal driving mode. At the beginning of this acquisition, the driver performed a sine-sweep manoeuvre at an almost constant speed equal to about 40 km/h, while steering from -80 deg to 100 deg for about 30 s. The frequency of δ_TS increases from 0.5 Hz to about 2 Hz during the sine-sweep manoeuvre. Afterwards, the driver performed a step-steer manoeuvre with δ_TS equal to -200 deg, once the vehicle had reached a maximum speed equal to about 90 km/h. The speed is estimated accurately by the investigated ANN-based method. The error ε₂ reaches 70 km/h in Figure 4b during the last manoeuvre, while the error ε₁ is limited to less than 5 km/h in correspondence of the demanding manoeuvres. On the contrary, the estimation v̂_xAVG is not accurate during the last manoeuvre, since the four wheels start skidding and locking on the icy road surface, as represented in Figure 4d.
Road condition identification
The performance of the road condition identification is evaluated with the Confusion Matrix (CM) reported in Figure 5. The classification accuracy for each road condition is computed as the fraction of buffers of that condition that are correctly classified.

Validation during the road condition transients
The accuracy of the road condition identification has also been validated in correspondence of the transients between different conditions: a) from dry to wet and from wet to dry, and b) from wet to icy and from icy to wet. Figure 6 reports the acquisition recorded with the car in normal driving mode and the ESC system disabled. During the initial part of the acquisition, the road is dry. Then, the road condition becomes wet at about 40 s. At around 80 s, the road is dry again. The output of the classifier s is reported in Figure 6, where the zoomed regions report the buffers along with the classification outputs. The second zoomed area is reported because it represents the occurrence of a misclassification. In this case, s indicates a wet road condition, although the asphalt is dry. However, this misclassification does not affect the longitudinal speed estimation, as represented in Figure 6. In Figure 7, the results obtained during the road condition transients from wet to icy and from icy to wet are represented. The car has the ESC system enabled and is set in racing driving mode. The classification output s is reported in the zoomed portions. All the buffers are correctly classified and the final value of the estimation is accurate, as represented in Figure 7. The number of misclassifications in correspondence of the road condition changes is very limited. This result has been achieved by reducing the length of the buffers considered for the feature extraction in the classification task. As a matter of fact, the larger the buffer, the higher is the possibility of incurring misclassifications. The high rate of the classification output is also advantageous to limit the effect of estimation inaccuracies due to misclassifications. The estimation error is indeed recovered within a period of 0.1 s, corresponding to the output rate of the classifier. This motivates the absence of major estimation inaccuracies in correspondence of the misclassifications.

Conclusions
In this paper, a data-driven method for the vehicle speed estimation has been presented. The proposed technique exploits a combination of regression and classification neural networks to estimate the speed and identify the road condition. The solution is presented as a reliable alternative to existing methods to mitigate the limitations due to the model representation. The performance of the method was evaluated experimentally in several driving conditions. The good match between the estimated and measured values of speed demonstrates the effectiveness of the method, also in correspondence of transients between different road conditions.
Structural Design and Analysis of the RHOA-ARHGEF1 Binding Mode: Challenges and Applications for Protein-Protein Interface Prediction

The interaction between two proteins may involve local movements, such as small side-chain re-positioning, or more global allosteric movements, such as domain rearrangement. We studied how one can build a precise and detailed protein-protein interface using existing protein-protein docking methods, and how it can be possible to enhance the initial structures using molecular dynamics simulations and data-driven human inspection. We present how this strategy was applied to the modeling of the RHOA-ARHGEF1 interaction using similar complexes of RHOA bound to other members of the Rho guanine nucleotide exchange factor family for comparative assessment. In parallel, a cruder approach based on structural superimposition and molecular replacement was also assessed. Both models were then successfully refined using molecular dynamics simulations, leading to protein structures in which the major data from the scientific literature could be recovered. We expect that the detailed strategy used in this work will prove useful for other protein-protein interface designs. The RHOA-ARHGEF1 interface modeled here will be extremely useful for the design of inhibitors targeting this protein-protein interaction (PPI).

INTRODUCTION
Precise interactions between proteins allow a tight control of many functions and pathways, eventually leading to gene expression or silencing, to protein release or degradation, and even to cell death. To date, between 130,000 and 650,000 protein-protein interactions (PPIs) have been described (Ottmann, 2015), but only a fraction of PPIs is validated experimentally, ranging from 14,000 (Rolland et al., 2014) to 125,000 PPIs (http://interactome3d.irbbarcelona.org/, Mosca et al., 2013). This structural gap comes from the difficulties of obtaining full-length interacting proteins experimentally and then resolving their structures using crystallography, NMR, or electron microscopy (EM). As a result, in most cases for a specific PPI, one has to combine existing incomplete experimental structures with in silico approaches.
This virtual step is critical for drug discovery, as exemplified by the successful targeting of 50 PPIs by small molecules (Skwarczynska and Ottmann, 2015). When no protein-protein structure is available, one thus has to perform protein-protein docking predictions. The binding mode prediction of two proteins is very challenging since: (i) minor to major local structural rearrangements may be triggered upon protein recognition, (ii) one protein may recognize multiple proteins, and (iii) cofactors or nucleic acids may be involved to enhance or stabilize the interaction. The analysis of the various protein-protein interfaces available in the Protein Data Bank also indicates that (i) the interface area varies greatly between protein families, (ii) the amino acid composition of the interface may be biased, and (iii) the binding lifetime is often transient. Due to the complexity of modeling these diverse PPIs, many methods have been developed, and a global evaluation called CAPRI is performed periodically (Lensink et al., 2007, 2018, 2019). We used the most robust and successful methods validated in this competition to evaluate, by protein-protein docking, the optimal binding mode of our example. To illustrate the process of modeling a protein-protein interface, we selected a protein complex for which some unbound and bound experimental structures are available, together with complexes between other members of the family. The first protein partner in our study is RHOA (gene RHOA); the second is Rho Guanine nucleotide Exchange Factor 1 (gene ARHGEF1). RHOA is a member of the RAS superfamily of small GTPases recognized as a master regulator of the actin cytoskeleton, thus driving multiple cellular processes such as cell contraction, migration, proliferation, and gene transcription. While basal and controlled RHOA activity is required for homeostatic functions in physiological conditions, its uncontrolled overactivation plays a causative role in the pathogenesis of several diseases, such as cancer, neurodegenerative, or cardiovascular diseases (Guilluy, 2010; Cherfils and Zeghouf, 2011; Vetter, 2014; Loirand, 2015; Prieto-Dominguez et al., 2019; Arrazola Sastre et al., 2020). RHOA is a molecular switch that couples cell surface receptors to intracellular effector pathways by cycling between a cytosolic inactive state bound to guanosine 5′-diphosphate (GDP) and an active GTP-bound state that translocates to the membrane. The activation of RHOA is mediated by Rho nucleotide guanine exchange factors (GEFs), which promote the exchange of GDP for GTP and are themselves turned on by the activation of upstream membrane receptors (Cherfils and Zeghouf, 2013). ARHGEF1 is the Rho GEF responsible for the activation of RHOA by angiotensin II through the type 1 angiotensin II receptor in vascular smooth muscle cells (Loirand and Pacaud, 2010; Luigia et al., 2015). This signaling pathway participates in the physiological control of vascular tone and blood pressure, and is causally involved in the pathophysiology of hypertension (Guilluy, 2010). The structure of small GTPases consists of a six-stranded β-sheet (β-strands B1 to B6) linked by helices and loops (Ihara et al., 1998). In RHOA, the β-sheet is made up of the antiparallel association of B1 and B2 and the parallel association of B3, B1, B4, B5, and B6, and there are five α-helices (A1, A3, A3′, A4, and A5) and three 3₁₀ helices (H1-H3).
RHOA possesses two hinge regions: a loop called switch I (residues 29-42) and a helix called switch II (residues 62-68), which are described as more flexible than the core β-sheet (Dvorsky and Ahmadian, 2004), as shown in Figure 1 for various bound and unbound experimental structures. (Abbreviations: RMSD, root mean square deviation; SAS, solvent accessible surface.) RHOGEF proteins catalyze the exchange of GDP for guanosine 5′-triphosphate (GTP) on RHOA (Felline et al., 2019). Two domains of these proteins, Pleckstrin Homology (PH) and Dbl Homology (DH), are involved in the nucleotide exchange mechanism, the RHOA-bound DH domain being more rigid than the PH domain (Figure 1C). The DH domain consists of six α-helices arranged in an oblong shape, which interact with the switch I and switch II regions of RHOA. The PH domain contains seven antiparallel β-strands forming a roll architecture, connected to helix α6 of the DH domain. The mechanism of nucleotide exchange involves large displacements of the PH domain relative to the DH-RHOA interaction (Felline et al., 2019). In this article, we show how to combine virtual approaches with experimental data to reliably predict the formerly unavailable structure of a PPI. Using either a rough structural superimposition or more advanced protein-protein docking methods, we build, analyze, and refine the RHOA-ARHGEF1 model and discuss the strengths and weaknesses of this approach. We then derive general recommendations for reproducing our approach to model other PPIs.

Sequence Analysis A BLAST sequence search on the Non-Redundant (NR) database with the RHOA or ARHGEF1 sequence as query was performed in June 2018. The resulting sequences were aligned using Clustal Omega (Madeira et al., 2019); amino acid conservation was estimated using Jalview (Waterhouse et al., 2009) and in-house scripts in Biopython (Cock et al., 2009). The phylogenetic analysis of human conserved sequences was performed in Geneious version 2019.0.4 (http://www.geneious.com/).

Structure and Interface Analysis We used all structures of RHOA complexed with all ARHGEFs available in the Protein Data Bank (Berman et al., 2003) in January 2018, and their respective unbound forms when available. Only one representative chain per crystallographic structure was taken as reference (Supplementary Table 1). Experimental structures were analyzed using PDBePISA version 1.54 (Krissinel and Henrick, 2007; Krissinel, 2010). Two methods were selected to analyze protein-protein interfaces in order to obtain useful insights into the important residues involved in the interaction. The first one was 2P2I Inspector, version 2.0 (http://2p2idb.cnrs-mrs.fr/2p2i_inspector.html) (Basse et al., 2016), which computes a series of 51 chemical and physical descriptors from three-dimensional (3D) structures. The second one, PPCheck (http://caps.ncbs.res.in/ppcheck/), is a webserver for quantifying the strength of a protein-protein interface (Sukhwal and Sowdhamini, 2013). It can also be used to predict hotspots, perform computational alanine scanning, and differentiate possible native-like conformations from non-native ones given a set of decoy ensembles obtained through protein-protein docking, since it computes the strength of non-bonded interactions between any two proteins/chains present in the complex.

FIGURE 1 | Representation of the major structural changes described for the unbound form of RHOA (1FTN, in white surface) and for the ARHGEF members. (A) Location of switch I (loop, 29-42) and switch II (alpha helix, 62-68), with representative residues indicated and green arrows indicating their orientation. The GDP nucleotide is shown as orange, blue, and red sticks; the magnesium is shown as a green sphere.
(B) Diversity of switch I and switch II positions as found in representative crystallographic structures. The most mobile switch I is represented as a tube for the unbound RHOA (green: 1FTN; yellow: 5EZ6, unpublished), and in salmon when bound to the GAP domain of MgcRacGAP (5C2K, unpublished). (C) Superimposition on the DH domain of ARHGEF8 (4XH9, yellow), ARHGEF11 (3T06, cyan), ARHGEF12 (1X86, orange), and ARHGEF25 (2RGN, gray). The PH domain is highlighted with an oval shape. (D) Orientation of RHOA (green surface) on ARHGEF11 (cyan surface) with important conserved residues shown as spheres (see text). Only switch I and switch II on RHOA and the DH domain of ARHGEF11 are indicated for clarity.

The Robetta server was used to perform virtual alanine scanning of the interface (Kortemme et al., 2004). Models from docking or molecular dynamics simulations were visually assessed and analyzed in the PyMOL Molecular Graphics System, version 1.8 (Schrödinger, LLC). Root Mean Square Deviation (RMSD) was computed using the PyMOL rms_cur command. As RMSD is a global measure, we used two specific measures for the interface analysis of rigid-body docking and molecular dynamics simulations, as described in Takemura and Kitao (2019): (i) Ligand-RMSD (L-RMSD), where one protein is Fixed (F) and the second is Mobile (M); the Fixed protein is first superimposed on the same structure in the crystallographic reference, then the RMSD is computed on the Mobile protein alone. (ii) Interface-RMSD (i-RMSD), where only the amino acids known to be involved in the interface between both proteins are evaluated; again, a first step consists of superimposing one protein (RHOA or the PH domain of the ARHGEFs) on the reference crystallographic structure to remove the translational and rotational degrees of freedom potentially introduced by the docking methods. The angle between helices was computed using a plugin by Thomas Holder, which computes the angle between two vectors created from the coordinates of the Cα atoms of each helix.

Superimposition Model of RHOA-ARHGEF1 A preliminary structure of RHOA bound to ARHGEF1 was derived from the co-crystal of RHOA-ARHGEF11 (PDB ID: 3T06) (Bielnicki et al., 2011). The complexes were created by superimposition of ARHGEF1 (3ODO) (Chen et al., 2011) on ARHGEF11 in PyMOL. This superimposition was submitted to MolProbity (Williams et al., 2018) for analysis, and steric clashes were removed using Chiron (Ramachandran et al., 2011). Chiron performs rapid energy minimization of protein molecules using discrete molecular dynamics with an all-atom representation of each residue, a process that removes most of the steric clashes.

Molecular Dynamics Simulation All-atom simulations of unbound and bound proteins were performed using GROMACS 2016.3 (Abraham et al., 2015); the starting structures are detailed in Supplementary Table 1. Each system was prepared with the AMBER force field FF99SB-ILDN (Lindorff-Larsen et al., 2010) in explicit solvent (TIP3P) (Jorgensen et al., 1983), with specific attention to protonation states as predicted by PROPKA. An NVT protocol followed by anisotropic pressure coupling (NPT ensemble) was applied until equilibration was reached, and the production molecular dynamics simulations were computed for 500 ns up to 1 µs. All simulations were run on the CCIPL cluster facility at the University of Nantes using GPUs. The force field parameters for GDP and GTP were gathered from the AMBER parameters database (http://research.bmh.manchester.ac.uk/bryce/amber) and converted to GROMACS format files using acpype (da Silva and Vranken, 2012). The resulting trajectories were visualized in VMD version 1.9.1 (Humphrey et al., 1996), and GROMACS tools were used for various measurements. For ATTRACT, the ab initio protocol with the coarse-grained force field (ff2g) was used, which also provides an estimate of the binding energy between the two proteins. PyContact was used to analyze the type, strength, and lifetime of protein contacts throughout the simulations (Scheurer et al., 2018). These contacts were plotted with the R package MDplot (Margreitter and Oostenbrink, 2017).
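For readers who want to reproduce the L-RMSD measure outside PyMOL, the following is a minimal sketch with Biopython rather than the authors' rms_cur-based procedure; file names and chain IDs are placeholders, and the reference and model chains are assumed to contain the same residues in the same order.

```python
# A minimal L-RMSD sketch with Biopython, not the authors' PyMOL rms_cur
# script. File names and chain IDs are placeholders.
import math
from Bio.PDB import PDBParser, Superimposer

def ca_atoms(structure, chain_id):
    """Ordered C-alpha atoms of one chain."""
    return [res["CA"] for res in structure[0][chain_id] if "CA" in res]

parser = PDBParser(QUIET=True)
ref = parser.get_structure("ref", "reference_complex.pdb")  # crystal reference
mod = parser.get_structure("mod", "docked_complex.pdb")     # docking pose

# Step 1: superimpose the Fixed protein (chain A) of the model on the
# reference, and apply the same rotation/translation to the whole model.
sup = Superimposer()
sup.set_atoms(ca_atoms(ref, "A"), ca_atoms(mod, "A"))
sup.apply(list(mod.get_atoms()))

# Step 2: L-RMSD is the plain RMSD over the Mobile protein (chain B),
# computed without any further fitting.
pairs = list(zip(ca_atoms(ref, "B"), ca_atoms(mod, "B")))
l_rmsd = math.sqrt(sum(sum((a.coord - b.coord) ** 2) for a, b in pairs) / len(pairs))
print(f"L-RMSD: {l_rmsd:.2f} Å")
```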
RESULTS In order to determine which docking method was best suited to our specific needs, we evaluated the performance of several methods (i) on recovering existing crystallographic structures of RHOA bound to a GEF, a process called re-docking, and (ii) on assembling the bound RHOA from one crystallographic structure with a GEF from another, a process known as cross-docking.

ZDOCK Is the Best Method for Building Our Complex As docking strategies are based on different methods, it is difficult to determine a priori which one will produce the most reasonable starting complex for further studies. We assessed the top five performing docking programs from the CAPRI assessment, briefly introduced in Table 1, to produce RHOA/ARHGEF1 complexes: ATTRACT, ClusPro, HADDOCK, PyDockWeb, and ZDOCK.

Assessment of Webserver Performance in Re-docking Experiments We evaluated the performance of each program according to its ability to recover the existing structures of bound RHOA and ARHGEFs. This procedure, called re-docking, probes the accuracy of each algorithm on our system. The web servers treat the first input protein as the receptor, which stays fixed (F), and the second protein as the mobile one (M). We performed our analysis with either the small GTPase or the GEF as (F) or (M). The results are presented in Table 2. To assess the performance of each method, we computed the L-RMSD of the predicted complex by superimposing the Fixed protein onto the same protein in the crystallographic structure and computing the RMSD on the Mobile protein. When comparing individual predicted poses against the reference, ATTRACT and PyDockWeb perform equally well, with lower L-RMSD values for ATTRACT on its best predictions. ZDOCK and ClusPro present more diverse results and larger deviations. Although it should not matter, we observed that the input order of the fixed and mobile proteins affected the prediction. This is of limited importance for ATTRACT and PyDockWeb; these methods are therefore less sensitive to the size of the mobile protein (RHOA being 200 amino acids long and the GEF DH/PH domains being 600 amino acids long). For 4XH9, we observed a dramatic decrease in the quality of the prediction, although the calculations were performed three times. It is also possible to determine which model ranks best among all poses and not only the first one: ZDOCK and ATTRACT propose 10 binding modes, ClusPro offers multiple weightings of its scoring functions for clusters of poses, and PyDockWeb presents the 100 best binding poses.
Notes to Table 2: The L-RMSD values are in Å. The best prediction analyzed is the first of the best 10 poses, or the best cluster of poses ranked by each method. The best predictions are in bold, the wrong predictions in italics. HADDOCK could not be evaluated at this stage since we chose not to use data guidance. The computing time is indicative of one docking experiment. **Calculations were performed independently in triplicate to exclude any temporary issue.

All the poses identified as close to the crystallographic structure, with a global RMSD under 2 Å, were present among the top 10 solutions of the best cluster for each method. Altogether, this led us to still consider PyDockWeb at the end of the re-docking study, although its computation time is significantly higher than that of the other methods (Table 2). As ZDOCK and PyDockWeb most reliably provided poses close to the original crystallographic structures, these two methods were kept for the following analysis.

Assessment of the Best Performing Methods in Cross-Docking Experiments We verified the dependence of the protein-protein binding mode prediction on the initial structures. We applied a cross-docking experiment in which each bound RHOA from one crystal structure is evaluated against a GEF partner from another. We used the free RHOA structure as a sensitivity control for our cross-docking measurements. The docking results showed no dependence on the PDB input used for RHOA or the GEF; the results are available in Table 3. (Notes to Table 3: The RMSD value (in Å) for the mobile part is displayed for the best prediction among the first 10 poses. The results for the re-docking experiments, underlined, are identical to Table 2. The best predictions are in bold.) Considering the results of the re-/cross-docking, we selected ZDOCK as the best method to predict the most favorable binding mode between RHOA and ARHGEF1 (Figure 2).

Evaluation of the Best Models Based on Known Interactions to Select the Best RHOA Candidate Structure Since no experimental structure of the RHOA-ARHGEF1 interface is available, we used the crystallographic structures of other GEF paralogs bound to RHOA to determine which amino acids are shared in all complexes. The mapping of amino acids onto RHOA and ARHGEF1 was done using multiple sequence alignment comparison (data not shown). Four residues are conserved in GEF interfaces, namely E423, Q563, R551, and N603 in ARHGEF1. As can be seen in Figure 3, these residues shared by all GEF interfaces can be split into zones or individual amino acid contacts. By analogy to existing complexes, ARHGEF1 E423 has to be present close to Y34/T37/V38 (a region of RHOA called switch I), R551 has to be close to V43/D45/E54 of RHOA, and N603 has to be close to D67/R68/L69 (a region called switch II). Only one amino acid in RHOA, N41, seems to bind exclusively to Q563. It is well established that RHOA is very rigid due to the strong structural requirements imposed by the GTP recognition and hydrolysis mechanism (Dvorsky and Ahmadian, 2004); only the two regions called switch I and switch II are more flexible, with or without binding partners, as seen in Figure 1. Our study allows a more detailed understanding of the interactions between amino acid pairs important for RHOA-ARHGEF binding. As done for ARHGEF1, we also analyzed the interface residues of the other GEFs present in each crystallographic structure in complex with RHOA and assessed their amino acid conservation using multiple sequence alignments.
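As an illustration of this kind of conservation analysis, here is a minimal sketch in the spirit of the "in-house scripts in Biopython" cited in the Methods; the alignment file name is a placeholder, and the simple majority-fraction score is an assumption rather than the authors' exact metric.

```python
# A minimal per-column conservation sketch over a multiple sequence
# alignment; file name and scoring scheme are assumptions, not the paper's.
from collections import Counter
from Bio import AlignIO

aln = AlignIO.read("rhoa_gef_family.aln", "clustal")  # e.g. Clustal Omega output

def conservation(alignment, col):
    """Fraction of sequences sharing the most common residue (gaps ignored)."""
    residues = [str(rec.seq[col]) for rec in alignment if rec.seq[col] != "-"]
    return Counter(residues).most_common(1)[0][1] / len(residues) if residues else 0.0

for col in range(aln.get_alignment_length()):
    if conservation(aln, col) == 1.0:  # fully conserved columns, e.g. E423 or N603
        print(f"alignment column {col + 1} is fully conserved")
```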
Four residues of ARHGEF1 are present at the interface, and 10 for RHOA. As can be seen in Table 4, we listed all amino acids in interaction between the two proteins, i.e., cases where at least one amino acid of RHOA is in contact with one or more amino acids of a given ARHGEF. (TABLE 4 | Number of amino acids determined to be in interaction between both proteins at the interface, from RHOA (x/10) or its complexed ARHGEF (y/4), as found in the different binding modes generated by ZDOCK during the cross-docking experiments while the ARHGEF partner was kept fixed.) On average, the number of contact pairs recovered after docking represents at least half of the residues known to be present on both sides of the interface. This indicates that our docking strategy builds a reasonable starting structure for the RHOA-ARHGEF1 complex. As the sequence conservation of the DH+PH domains modeled here is highest among ARHGEF11, ARHGEF12, and ARHGEF1 (Supplementary Table 2), and given that the docking validation steps showed that ARHGEF11 docking recovered more contacts than ARHGEF12 docking, we chose the RHOA structure found in 3T06 and docked it using ZDOCK with the unbound structure of ARHGEF1 (3ODO). The resulting complex is referred to hereafter as complexD (Docking).

FIGURE 3 | Conserved contacts in the interface between RHOA and all its GEFs. (Top) Diagram of conserved contacts by amino acid: residues E423, Q563, R551, and N603 in ARHGEF1, and residues linked by arrows pertaining to RHOA. (Bottom) Split view of ARHGEF1 (cyan) and RHOA (green) with matching residues between the proteins highlighted in yellow, white, red, blue, and purple; the rest of the interface is indicated in pale yellow.

Template-Based Complex Modeling Since some experimental structures of RHOA bound to ARHGEF1 homologs exist, we also predicted the bound structure of both proteins with a simpler approach, based on structural superimposition in PyMOL. We first used a RHOA-bound crystal structure and superimposed the free ARHGEF1 on all the homologous GEFs. By doing so with rigid models, we could not take into account the concerted induced fit required to finely tune the interaction. We used the Chiron webserver from the Dokholyan team (http://redshift.med.unc.edu/chiron/) (Ramachandran et al., 2011), which relaxes the most important steric clashes; many rounds were necessary to resolve all the bumps. A preliminary structure of RHOA bound to ARHGEF1 was derived from the RHOA-ARHGEF11 crystal structure (3T06). The complexes were created by superimposition of ARHGEF1 (3ODO) on ARHGEF11 in PyMOL; this superimposition was submitted to MolProbity for analysis, and steric clashes were removed using Chiron. From there, we used classical descriptors (ΔSAS, RMSD, ...) to evaluate the resulting complex, with a special look at the interface size. This interface was analyzed with PDBePISA. The best binding was found when using ARHGEF1 from 3ODO and RHOA from 3T06: the interface is 2,949 Å², corresponding to around 5% of the total surface of ARHGEF1 and around 11% of the total surface area of RHOA. In the complexD, this interface comprises amino acids 3 to 181 of RHOA and 392 to 761 of ARHGEF1, for a total interface size of 2,830 Å². Those values are smaller than those found for the homolog complexes, where the interface area is 3,371 Å² on average.
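The interface areas quoted above come from PDBePISA; as a rough cross-check, the same delta-SAS quantity can be estimated directly in PyMOL. The following is a minimal sketch under the assumption of a two-chain model file with placeholder name and chain IDs, not the authors' workflow.

```python
# A minimal delta-SAS sketch with the PyMOL Python API; file name and chain
# IDs are placeholders for a RHOA-ARHGEF1 model.
from pymol import cmd

cmd.load("complexT.pdb", "cplx")
cmd.set("dot_solvent", 1)  # make get_area return solvent-accessible area
cmd.set("dot_density", 3)  # denser sampling for more stable numbers

# Each partner must be its own object so that its SASA is computed alone.
cmd.create("rhoa", "cplx and chain A")
cmd.create("gef", "cplx and chain B")

buried = cmd.get_area("rhoa") + cmd.get_area("gef") - cmd.get_area("cplx")
print(f"interface area ~ {0.5 * buried:.0f} Å² (PISA-style: half the buried surface)")
```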
Before exploring complexD and complexT further, we verified our models with PPCheck, which decomposes the interaction energy into three terms: (i) hydrogen bonding (Ehyd), (ii) inter-chain van der Waals interactions (Evw), and (iii) inter-chain electrostatic interactions (Eele). The total stabilizing energy is then divided by the total number of interface residues to obtain the energy per residue. No significant deviation requiring further refinement with the Chiron webserver was present in either model.

Molecular Dynamics Simulation Refinement of ComplexT and ComplexD Both complexes were subjected to molecular dynamics simulations to determine whether the interface could be refined during this procedure. The initial surface area of complexD, 2,830 Å² at the beginning of the simulation, stabilizes at 3,056 Å² between 200 and 1,000 ns of the simulation (Figure 4). The molecular dynamics simulation of the RHOA-ARHGEF1 complexT also remained stable for most of the time, with a rapid initial increase in the interface area followed by a plateau after 250 ns; the average interface size in this plateau is 3,150 Å² (data not shown). Starting from two different models, we observe an increase in protein surface contact driven by local adjustments. When considering each protein individually at the RMSD level, there is a higher deviation for RHOA than for ARHGEF1, implying that RHOA undergoes most of the conformational changes, as we will see in detail below.

Interface Contacts Evolution Over Time To understand the evolution of the interface during the simulations, we analyzed the hydrogen bonds between the two partners. The result is shown in Figure 5, where we plotted only hydrogen bonds with a lifetime over 15% of the simulation. With this plot, we can identify which amino acid side chains undergo important rotations. For instance, an important hydrogen bond is conserved between RHOA ARG5 and ARHGEF1 GLU544 or ARHGEF1 ASP556. The most stable contact is RHOA ARG68 with ARHGEF1 ASN603/ASP611, observed for 900 ns, or 90% of the simulation time. Among the other contacts, some were present from the beginning of the simulation, while others appeared and disappeared. Since contacts may take time to stabilize, we observe that an important interaction appears between RHOA GLN61 and ARHGEF1 GLN563 at 500 ns. Interestingly, some of these hydrogen bonds involve members of the highly conserved list of amino acids given above, for instance RHOA ARG68 and ARHGEF1 ASN603.

FIGURE 5 | Hydrogen bond lifetimes during the molecular dynamics simulation of complexT; hydrogen bonds were defined using the hbond routine in GROMACS and analyzed with the MDplot package in R.
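The paper performs this lifetime analysis with GROMACS hbond plus MDplot in R; a rough Python equivalent can be sketched with MDAnalysis, as below. The topology/trajectory names, segid selections, and cutoffs are placeholders that depend on how the system was built, so treat this as an assumed, approximate reimplementation.

```python
# A rough MDAnalysis equivalent of the GROMACS hbond + MDplot analysis;
# all file names, selections, and cutoffs are placeholder assumptions.
from collections import Counter
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

u = mda.Universe("complexT.tpr", "complexT.xtc")

hb = HydrogenBondAnalysis(
    u,
    between=["segid seg_0_Protein_chain_A", "segid seg_1_Protein_chain_B"],
    d_a_cutoff=3.5,            # donor-acceptor distance cutoff (Å)
    d_h_a_angle_cutoff=150.0,  # donor-hydrogen-acceptor angle cutoff (degrees)
)
hb.run()

# Each row of results.hbonds: frame, donor, hydrogen, acceptor, distance, angle
counts = Counter()
for frame, donor, hydrogen, acceptor, dist, angle in hb.results.hbonds:
    d = u.atoms[int(donor)]
    a = u.atoms[int(acceptor)]
    counts[(f"{d.resname}{d.resid}", f"{a.resname}{a.resid}")] += 1

n_frames = len(u.trajectory)
for pair, n in counts.most_common():
    lifetime = 100.0 * n / n_frames
    if lifetime > 15.0:  # same 15% lifetime threshold used for Figure 5
        print(pair, f"{lifetime:.0f}% of frames")
```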
DISCUSSION Since no experimental structure of RHOA-ARHGEF1 was available from X-ray, NMR, or EM studies, we had to model it. Using the protein sequences of the Rho and GEF families and the existing protein structures of the bound homologs RHOA-ARHGEF8 (Petit et al., 2018), RHOA-ARHGEF11 (Bielnicki et al., 2011), RHOA-ARHGEF12 (Kristelly et al., 2004), and RHOA-ARHGEF25 (Lutz et al., 2007), we determined which amino acids are shared at the interface of the complex (Figure 3). This sequence- and structure-based information was important to assess the validity of our models.

Initial Models of the RHOA-ARHGEF1 Complex The prediction of protein-protein interfaces using docking methods is still an important field of research (Smith and Sternberg, 2002; Lensink et al., 2007), but the predictive power of these methods varies greatly depending on the protein families (Bendell et al., 2014; Wang et al., 2017). As no GEF or RHOA experimental structure was used as a target to assess the methods in recent CASP experiments, we benchmarked how these methods perform on our specific case, using re-docking and cross-docking experiments. We selected ZDOCK after careful quantification and inspection of the re-docking/cross-docking experiments, since its results were the most robust across most predictions and in agreement with our sequence- and structure-derived data. The best model (complexD) selected from ZDOCK contains 5 of the 10 shared amino acids in RHOA and 3 of the 4 shared amino acids of ARHGEF1, with an interface surface area of 2,830 Å². As the docking experiments are time-consuming and also carry uncertainties, we additionally used a cruder approach in PyMOL. We analyzed the existing structures of RHOA bound to other members of the ARHGEF family to find the best starting template for our structural comparison. A superimposition of ARHGEF1 (3ODO) on RHOA-ARHGEF11 (3T06) was then performed in PyMOL and further refined using Chiron (Ramachandran et al., 2011). This modeled interface (complexT) correctly recovered the position of 2 of the 10 shared amino acids in RHOA and 2 of the 4 in ARHGEF1.

FIGURE 7 | Orientation of RHOA relative to ARHGEF1. (Left) Comparison of RHOA in complexT (blue) and RHOA in complexD (green) at the beginning of the simulation (T0). Only the surface of ARHGEF1 (gray) of complexT is shown for clarity. The center of one helix of RHOA is displayed as red sticks to illustrate the clockwise movement observed during the simulation, with an angle of 22.47° and a distance of 15.2 Å between the tops of the helices. (Right) Same orientation, with the same angle and distance measured between the last snapshots of complexT (yellow) and complexD (black); the shift between the two complexes is only 8.66° for a distance of 6.6 Å. ARHGEF1 of complexT is displayed as a transparent gray surface; since the ARHGEF1 proteins are aligned on the PH domain, the difference at the bottom of the figure comes from the movement of the DH domain.

Both methods allowed us to define comparable starting complexes of the RHOA-ARHGEF1 interface from rigid templates. Major steric clashes were carefully examined using Chiron (Ramachandran et al., 2011) and visual inspection, but no further amino acid adjustment was required. The RMSD difference between the two models is 0.4 Å for ARHGEF1 and 9.5 Å for RHOA. This larger difference in RHOA position comes from an alternate orientation of the protein relative to ARHGEF1, with a clockwise rotation of 22° between RHOA in complexT and RHOA in complexD (Figure 7). This alternative positioning of RHOA in comparison to other members of the GEF family is also present in crystallographic structures. Both complexT and complexD therefore seemed reasonable starting complexes, with a comparable building time of about one day for both protocols: the PyMOL superimposition is instantaneous plus one day for the removal of clashes in Chiron, versus one day for the ZDOCK prediction.

Molecular Dynamics Interface Refinement A classical method to enhance protein models is molecular dynamics simulation (MD) (Mirjalili et al., 2014). We performed MD on complexT and complexD for 1 µs each.
During the simulation of complexT, the interface area of the complex increased (3,480 Å²) in comparison to the initial complex (2,949 Å²). We identified amino acids conserved in all RHOA-ARHGEF complexes by combining structural and sequence analysis (Figure 3A). These contacts are stable throughout the simulation (R5 and R68 with E544, D556, N603, and D611). Interestingly, new contacts are observed (K27-E423, R68-D611), driven by local rearrangements of amino acids, in particular K27 and Y34 (RHOA) and E423 (ARHGEF1) (Figure 6). Both complexT and complexD lead to a similar RHOA-ARHGEF1 interface at the end of the simulation. At the beginning of the simulation (t = 0 ns), complexD and complexT have a global RMSD difference of only 1.4 Å between them, but at the end (t = 1,000 ns) the RMSD rises to 3.7 Å. Since the RMSD is a global measure of the movements, i.e., both proteins moved, it is important to understand how the proteins evolved independently during the simulations.

FIGURE 8 | Energy contribution of RHOA amino acids present at the interface for different snapshots of the simulation, taken when important shifts in the hydrogen bond networks were observed. The contribution is computed as a virtual alanine scan using the Robetta webserver.

When each trajectory is taken individually, we observe that complexD moved more (5.1 Å) from its initial structure than complexT (2.9 Å). This apparent difference comes mostly from a larger movement of the PH domain in the complexD simulation, since the RMSD between the initial structure and the end of the simulation for the ARHGEF1 protein alone is 18 Å. This apparently large difference in ARHGEF1 position for complexD is the consequence of two movements: (i) local rearrangements of the DH domain (the core RHOA-binding domain of GEFs), which are similar in complexD and complexT (6 Å), and (ii) a larger movement of the PH domain, which may be involved in the nucleotide exchange. When aligning the DH domain in both trajectories, we therefore see a more important rotation of RHOA relative to ARHGEF1 for complexD (6 Å) than for complexT (3.5 Å), as illustrated in Figure 7. The main interface enhancements thus appear locally at the ARHGEF1-RHOA interface, mostly on the DH domain of ARHGEF1, and more globally as a clockwise (+22°) rotation in complexT and a slighter anti-clockwise (−3°) rotation in complexD. Both complexes are refined by the molecular dynamics simulations: many important amino acids show an increase in contact frequency, and the position of RHOA relative to ARHGEF1, whether inherited from the superposition onto ARHGEF11 or from the docking studies with ZDOCK, converges strongly. This study confirms the interest of using molecular dynamics simulations to increase model quality.
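The rotation angles quoted above (+22°, −3°) come from the helix-vector measurement described in the Methods. The following is a deliberately simplified sketch of that idea, not Thomas Holder's actual plugin: each helix axis is approximated by the vector joining the centroids of the two halves of its Cα trace, and the placeholder coordinates stand in for real snapshots.

```python
# A simplified helix-angle measurement: axis = vector between the centroids
# of the first and second halves of the C-alpha trace. Coordinates below are
# illustrative placeholders, not structures from this study.
import numpy as np

def helix_axis(ca: np.ndarray) -> np.ndarray:
    """Approximate axis of a helix from its (N, 3) C-alpha coordinates."""
    half = len(ca) // 2
    return ca[half:].mean(axis=0) - ca[:half].mean(axis=0)

def angle_deg(v1: np.ndarray, v2: np.ndarray) -> float:
    """Angle between two vectors, in degrees."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# The same RHOA helix extracted from two snapshots (placeholder coordinates):
helix_snap1 = np.array([[0.0, 0.0, 1.5 * i] for i in range(12)])
helix_snap2 = np.array([[0.0, 0.55 * i, 1.4 * i] for i in range(12)])
rotation = angle_deg(helix_axis(helix_snap1), helix_axis(helix_snap2))
print(f"rotation between snapshots: {rotation:.1f}°")
```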
Validation of the Binding Mode To validate the binding mode obtained from the simulations, we used the interactions as a starting point and analyzed specific interactions of this complex reported in the literature. We recovered pairs of interactions, with lifetimes over 15%, that are conserved in all RHOA/GEF binding modes, along with some interactions specific to the RHOA-ARHGEF1 interface. We observed that the complex tends toward a more stable conformation as the PH domain moves to enclose RHOA, with an increase in the number of interactions and a mean SAS rising above 3,100 Å², the mean surface area of all RHOA/GEF complexes available so far. We could identify some contacts specific to RHOA-ARHGEF1 in complexT, namely D59-K567 and Q63-T566, which were not described previously (Hoffman and Cerione, 2002; Derewenda et al., 2004). In complexD, the specific contact E97-S746 has already been described by Gasmi-Seabrook et al. (2010) as an essential contact in the nucleotide exchange for PDZ-RhoGEF and RHOA; this contact is not observed in the complexT simulation. Starting from two complexes built with different strategies, we obtained full compatibility between experimental observations and our in silico methods. The additional contacts identified, specific to RHOA-ARHGEF1, will require experimental exploration, since there were differences between the two simulations, potentially coming from the overall dynamics of the interface. To guide experimental validation, we used virtual alanine scanning to study whether some amino acids could qualify as hotspots of the interface (Kortemme et al., 2004; Jiang et al., 2017) (Figures 8, 9).

FIGURE 9 | Energy contribution of ARHGEF1 amino acids present at the interface for different snapshots of the simulation, taken when important shifts in the hydrogen bond networks were observed. The contribution is computed as a virtual alanine scan using the Robetta webserver.

Only three amino acids seem to contribute strongly to the binding of the two proteins: (1) N41 in RHOA, already identified by multiple sequence alignment and structure comparison, and (2, 3) I558 and A605 in ARHGEF1. These residues seem to be robustly involved in the interface throughout all the simulations, whether starting from complexD or complexT.

Selection of the Best Model The knowledge acquired with our strategies helped us understand the most relevant elements for the binding of the two partners, together with insights for selecting and computing reasonably good refined models. In the initial models after minimization, complexT already displayed 5 of the 10 conserved contacts (E423, Q563, and N603) and complexD 4 of 10 (mostly involving E423). After simulation, complexT shows 8 of the 10 conserved contacts, and complexD 6 of 10 during a short time frame in which the interface SAS is highest. Some contacts are seen only thanks to the simulations, and one question arises: what matters more for qualifying the best model, a higher number of contacts or the presence of conserved/important contacts? In our case, both models show conserved/important contacts as well as new contacts specific to each model. The amino acids detected as hotspots are not conclusive, since they are present in both trajectories. During the simulations, even when starting from somewhat different structures, the interaction between ARHGEF1 and RHOA converges. Our model-building strategy clearly indicates that a molecular dynamics simulation starting from a rationally designed PPI improves the initial models.

Comparison of the MD Model With Information-Driven HADDOCK Docking HADDOCK is very efficient for protein-protein docking when experimental or bioinformatics constraints can be added to drive the docking. As we had determined the important residues of the binding interface, we used them in the HADDOCK webserver, first for re-docking experiments on the 3T06 crystal structure (as shown for the other methods in Table 2), and obtained an RMSD of 0.75 Å with 90% of the structures in the same first cluster, better than all other protein-protein docking methods.
We then predicted the interaction model using ARHGEF1 from 3ODO and RHOA from 3T06. This prediction gave nine clusters, and after careful analysis only two displayed the expected binding mode: cluster 1 and cluster 5. When cluster 1 is compared to ZDOCK's complexD (built without information-driven constraints), it has an RMSD of 0.62 Å, and it is also very close to complexT, with an RMSD of 0.65 Å (Figure 10). Qualitatively, the cluster 1 model displays 6 of the 10 contacts given as input. If experimental data are available, for instance from mutagenesis experiments, HADDOCK allows their incorporation to guide the binding mode. In this situation, HADDOCK is certainly the best strategy to build a PPI, provided these data can be transformed into sufficient input constraints; moreover, obtaining the resulting model from HADDOCK within one day compares very favorably with the other docking methods. The interface area for cluster 1 is 1,478 Å², slightly better than the complexD model before refinement (1,420 Å²) but far from the refined interface obtained after molecular dynamics simulations (>3,000 Å²). Even if a reliable model can be built by integrating various data in HADDOCK, a long molecular dynamics simulation, with a simulation time above 250 ns, is still required to enhance the quality of a PPI (Feig and Mirjalili, 2016).

CONCLUSION Most biological processes involve transient protein-protein interactions, in particular in cellular signaling. The RHOA-ARHGEF1 interaction is responsible for the activation of RHOA downstream of type 1 angiotensin II receptor signaling in vascular smooth muscle cells, thereby controlling vascular tone and blood pressure (Loirand, 2015). Our study aims at exemplifying how one can model a protein-protein interaction when sufficient experimental structures of the partners are present but experimental data are available only for close homologs. We set up two different strategies, summarized in Figure 11. One first has to identify the close homologs. If the members of the family have a standardized name, they should be rapidly identified directly in the Protein Data Bank (Berman et al., 2003). If not, a search on the National Center for Biotechnology Information (NCBI) structure service, in the Protein Families Database (Mistry et al., 2021), or in the PALI database (Balaji et al., 2001) should help in finding close homologs; otherwise, the protocols described in our work should be considered with caution. Once close homologs are identified, the protocols described above can be applied. The first, based on structural superimposition of the partners, allows rapid building of the complex but requires local adjustments to avoid steric clashes; we expect it to be useful for a preliminary study of how the proteins interact. The second strategy is based on more advanced methods, combining the search for the best binding mode through assessment of protein-protein docking results with refinement of the best docked model using molecular dynamics simulations. The resulting model showed not only increased shape complementarity and more contacts but also provided insights into the dynamics of the detailed amino acid interactions between the partners. This more advanced strategy is probably only accessible to experts and should only be required for atomic-level analyses and mechanistic studies.
In our study, both strategies gave close initial models, but we do not expect the results obtained on RHOA-ARHGEF1 to hold for the general case. We therefore recommend using a protein-protein rigid-body docking study (complexD) to produce the initial interaction mode; in our study, ZDOCK was the best option when precision, robustness, and time are all taken into consideration. When possible, we recommend performing long molecular dynamics simulations to enhance the network of interactions between the two proteins and to get a better overview of the lifetime of each interaction. More generally, we expect these strategies to be successfully applied to a variety of targets for which partial structural coverage of both partners is known, provided the complex to be modeled has characteristics comparable to those of the two proteins described in this article.

DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS ST and GL obtained the funding. EG did the experiments and analysis. BO and GL supervised the work. EG and ST wrote the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING This work was part of the PIRAMID project funded by the French Regional Council of Pays-de-la-Loire and the TROPIC project, supported by the ANR grant Programme d'Investissements d'Avenir (ANR-16-IDEX-0007), the French Regional Council of Pays-de-la-Loire, and Nantes Métropole.
Recombinant L. lactis vaccine LL-plSAM-WAE targeting four virulence factors provides mucosal immunity against H. pylori infection

Background Helicobacter pylori (H. pylori) causes chronic gastric disease. An efficient oral vaccine would be mucosa-targeted and offer defense against colonization and invasive infection in the digestive system. Proteolytic enzymes and the acidic environment of the gastrointestinal tract (GT) can, however, reduce the effectiveness of oral vaccinations. For the creation of an edible vaccine, L. lactis has been proposed as a means of delivering vaccine antigens. Results We developed a plSAM (pNZ8148-SAM) that extracellularly expresses a multi-epitope vaccine antigen, SAM-WAE, containing Urease, HpaA, HSP60, and NAP (named LL-plSAM-WAE) to increase the efficacy of oral vaccination. We then investigated the immunogenicity of LL-plSAM-WAE in Balb/c mice. Mice that received LL-plSAM-WAE or SAM-WAE with adjuvant showed increased levels of antibodies against H. pylori, including IgG and sIgA, and showed significant reductions in H. pylori colonization. Furthermore, we show that SAM-WAE and LL-plSAM-WAE improved the capacity to target the vaccine to M cells. Conclusions These findings suggest that recombinant L. lactis could be a promising oral mucosal vaccine for preventing H. pylori infection. Supplementary Information The online version contains supplementary material available at 10.1186/s12934-024-02321-4.

Background Chronic gastritis and peptic ulcers are caused by the microaerophilic, gram-negative bacterium H. pylori, which invades the human stomach and duodenal mucosa [1]. The primary treatment for H. pylori infection involves the use of a proton pump inhibitor or ranitidine bismuth citrate in combination with clarithromycin, amoxicillin, or metronidazole [2]. The development of effective H. pylori vaccines is of utmost importance given the diminishing efficacy of antibiotics due to increasing antimicrobial resistance [3]. However, to our knowledge, no commercial H. pylori vaccine is currently approved. Urease [4,5], NAP [6], HpaA [7], and HSP60 [8,9] proteins have recently been suggested as intriguing candidate antigens for H. pylori vaccines. We have previously reported the development of a multivalent vaccine against H. pylori based on these virulence factors, which offered therapeutic protection in Mongolian gerbils [10]. However, to make it an effective vaccine for oral delivery and to provide a more robust defense against H. pylori infection, it is essential to design a rational mucosal vaccine delivery system. Approximately 90% of pathogenic infections, including H. pylori, spread via mucosal surfaces [11,12]. Mucosal vaccination can therefore help overcome the drawbacks of the presently available injection-based vaccinations by establishing protective immunity against these illnesses. Oral vaccinations provide significant benefits over conventional injection-only vaccines, including good safety, compliance, and ease of manufacture [13,14]. However, there are currently few commercially available mucosal vaccines. The difficulty of delivering antigens into the mucosa, the hostile environment and obstacles of the gastrointestinal tract, and oral mucosal immune tolerance have been blamed for this [15,16].
Lactic acid bacteria (LAB) offer advantages as a novel oral vaccine delivery vehicle [17,18]. LAB have been utilized for oral vaccinations against viruses and pathogens because they have great stability, are resistant to stomach acid, and are generally recognized as safe (GRAS) [19]. L. lactis can act as a mucosal immune adjuvant and boost the immunological potency of mucosal vaccines [20]. Additionally, the administration of modified L. lactis onto the mucosa may effectively trigger immune responses at the mucosal and systemic levels [21]. Moreover, the NICE (NIsin-Controlled gene Expression) system has been developed for use in L. lactis for the expression of foreign proteins [22]. The most important characteristic of an effective oral vaccine is ensuring that antigens are ingested and then delivered across the mucosal barrier into the mucosa-associated lymphoid tissue (MALT). Therefore, to construct efficient oral mucosal vaccines, M cells are appropriate targets for delivering antigens and generating a mucosal immune response [23]. M cells, which are found in the nasopharynx-associated lymphoid tissue (NALT) and the follicle-associated epithelium (FAE) of Peyer's patches (PPs), are essential for the absorption of luminal antigens and the generation of antigen-specific immune responses in both systemic and mucosal compartments [24,25]. Indeed, M cell targeting has been attempted using a variety of M cell-targeting ligands, including Co1 [26,27], Cpe [28,29], and CKS9 [30]. These ligands facilitate the uptake of oral vaccines by M cells and augment the antigen-specific immune response on both systemic and mucosal surfaces. In our earlier research, we developed a multi-epitope vaccine containing four H. pylori virulence factors (Urease, NAP, HSP60, and HpaA), designated CWAE; mice orally immunized with CWAE plus a polysaccharide adjuvant (PA) demonstrated excellent protection against H. pylori infection [10]. Unlike previous studies [31], the present study uses a similar L. lactis surface expression system but with a different combination of H. pylori antigens (Urease, NAP, HSP60, and HpaA) [10]. In this investigation, plSAM (pNZ8148-SAM) was used to facilitate administration of the CWAE vaccine and to elicit immunological responses in the gastrointestinal tract. In addition, we engineered LL-plSAM-WAE (pNZ8148-SAM-WAE in L. lactis NZ9000), which expresses the CWAE multi-epitope antigens via the NICE system and targets M cells. We investigated the protective efficacy and effectiveness of LL-plSAM-WAE in a Balb/c mouse model, examining both systemic and mucosal responses.

Plasmid, bacterial strains and growth conditions L. lactis NZ9000 and the plasmid pNZ8148 (Zoonbio Biotechnology, China) were used in this study. Helicobacter pylori Sydney Strain-1 (H. pylori SS1) was stored in our laboratory. L. lactis was cultivated at 30 °C in M17 broth (Qingdao Haibo Biotechnology, China) containing 0.5% (w/v) glucose (GM17) and, where needed, supplemented with chloramphenicol (5 µg/mL) for plasmid selection. H. pylori SS1 was cultured on brain-heart infusion (BHI) plates (Qingdao Hope Biotechnology, China) containing 5% sterile defibrinated sheep blood and bacteriostatic agents, under microaerophilic conditions at 37 °C for 4 d. The bacteria were harvested and resuspended in normal saline, and the final concentration was adjusted to a density of 1 × 10¹⁰ colony-forming units (CFUs) per milliliter before inoculation.
Vaccine formulation The plSAM system, designed for L. lactis, is a synthetic plasmid that specifically targets M cells [31]. Its main constituent is SAM, which comprises several key elements, including the custom-designed M cell-targeting peptide Mtp (containing CKS9, Cpe, and Co1) and the cA binding domain. SAM was subsequently inserted into the pNZ8148 plasmid, resulting in the plSAM plasmid. The WAE gene (Urease, HpaA, HSP60, and NAP) was then amplified from the pET-CWAE plasmid via PCR and inserted into plSAM, yielding plSAM-WAE. The recombinant plasmid plSAM-WAE was then transformed into L. lactis NZ9000 to construct LL-plSAM-WAE.

Expression and identification of LL-plSAM-WAE LL-plSAM-WAE was cultivated on GM17 solid medium overnight, and a single colony was grown in 5 mL GM17 liquid medium for amplification. Then, 4 mL of the amplified culture was added to 100 mL of GM17 broth containing 5 µg/mL chloramphenicol. When the OD600 of the broth reached 0.6-0.8, the inducer nisin (Sigma-Aldrich, USA) was added at a concentration of 1 ng/mL to induce expression of the SAM-WAE proteins. LL-plSAM-WAE was incubated at 30 °C until the OD600 reached approximately 2.0. Subsequently, the cells were harvested by centrifugation, washed twice with PBS, and lysed by sonication. The lysates were mixed with 6× loading buffer and boiled in a water bath for complete denaturation. The bacterial proteins were analyzed by SDS-PAGE and Western blotting. Briefly, the protein samples were separated by 12% SDS-PAGE and transferred onto a PVDF membrane (Millipore, USA). The membrane was blocked with 5% skim milk at room temperature for 2 h, incubated with mouse anti-WAE serum (1:2500, prepared previously in our laboratory) at 4 °C overnight, and washed three times with TBST. The membrane was then incubated with HRP-labeled goat anti-mouse IgG (1:5000, Proteintech, USA) at room temperature for 1 h and washed three times with TBST. Finally, the proteins were visualized using ECL reagent (NCM Biotech, China). Furthermore, immunofluorescence analysis was performed to verify the proteins produced by LL-plSAM-WAE; mouse anti-WAE serum (1:500) and FITC-labeled goat anti-mouse IgG (1:100, Proteintech, USA) were used to stain the SAM-WAE proteins. Meanwhile, ELISA was performed to test the surface display of the SAM-WAE proteins. In brief, 100 µL/well of coating solution containing H. pylori antibody (final concentration 2 µg/mL) was added to the enzyme-labeled plates overnight at 4 °C. The next day, the wells were washed five times with 200 µL/well PBST, and 2 µg SAM-WAE protein, 1 × 10⁸ CFU LL-plSAM-WAE, or 1 × 10⁸ CFU LL-plSAM was added and incubated for 1 h; anti-WAE serum (1:2500) was then added and incubated for 1 h. Finally, HRP-labeled anti-mouse IgG was used for detection.

Immunization protocol and sample collection Animal experimentation protocols were approved by the Animal Ethical and Experimental Committee of Ningxia Medical University. A total of 18 six- to eight-week-old male SPF Balb/c mice were randomly divided into 3 groups and orally administered 1 × 10⁹ CFU/300 µL LL-plSAM-WAE, L. lactis NZ9000, or PBS on weeks 1, 2, 3 and 4. One week after the final vaccination, blood, spleen, and MLN samples were collected for subsequent testing. The experimental program was executed as described in Fig. 5a.
The procedure of the protective model is shown in Fig. 6a: 7 days after the last oral vaccination, mice received 300 µL of an H. pylori suspension on days 31, 33, and 35. Then, 15 days after the last H. pylori infection, the immunized and control mice were sacrificed for evaluation of H. pylori infection.

Measurement of antigen-specific antibody in the serum The mice were sacrificed one week after the last immunization; blood was taken through the orbital vein, allowed to stand for 30 min at room temperature, and centrifuged at 3000 rpm for 15 min to collect serum. The antigens Urease, UreA, UreB, HpaA, and NAP were separated on SDS-PAGE gels and transferred onto PVDF membranes. The membranes were then incubated with the serum, followed by HRP-labeled goat anti-mouse IgG. Additionally, enzyme-linked immunosorbent assay (ELISA) measurements of antigen-specific antibodies were made. In brief, 96-well microplates were coated overnight at 4 °C with 5 µg/well SAM-WAE, Urease, NAP, HSP60, HpaA, or BSA. The plates were washed and blocked with 3% BSA in PBS, then washed and incubated with 100 µL of mouse serum at 37 °C for 1 h. Serum from mice inoculated with either LL-plSAM-WAE or L. lactis NZ9000 was diluted 1:500 before measurement. After washing, HRP-conjugated goat anti-mouse antibody (1:1000, Proteintech, USA) was added, and the plates were incubated again for 1 h. The TMB-based color reaction (Solarbio, China) was terminated after 15 min of incubation at room temperature by the addition of 50 µL of H₂SO₄ (1 M), and the absorbance at 450 nm was measured with a microplate reader (Thermo Fisher Scientific, USA).

Immunofluorescence of the spleen The presence of CD4+ and CD8+ T cells is a crucial sign of specific immunity. Tissue was removed from each spleen, preserved in 4% paraformaldehyde, and embedded in paraffin. The frequencies of CD4+ and CD8+ cells were then detected [32].

Flow cytometry analysis of T cells from mesenteric lymph nodes and spleen tissue Vaccinated mice were sacrificed, and the mesenteric lymph nodes (MLNs) and spleens were harvested. Single-cell suspensions of lymph nodes and spleens were produced by pulverizing the tissue through a 40 µm cell strainer. ACK buffer (HyClone) was used to lyse red blood cells, and the cells were then centrifuged and resuspended in RPMI 1640 medium (Basal Media) supplemented with 10% FBS (BI, Israel) and 1% streptomycin/penicillin. Single-cell suspensions were directly stained for flow cytometric analysis. Before intracellular cytokine staining, cells were stimulated with BFA and cell stimulation cocktail (Invitrogen) for 6 h. Next, the cells were stained for extracellular markers, then fixed and permeabilized with Intracellular Fixation/Permeabilization Buffer (Invitrogen). Rat anti-mouse CD3 (clone 17A2), CD4 (clone GK1.5), IFN-γ (clone XMG1.2), IL-4 (clone 11B11), and IL-17A (clone TC11-18H10.1) antibodies were purchased from BioLegend (San Diego, CA, USA). Flow cytometry was performed on a FACSCelesta flow cytometer (BD Biosciences, USA).
ELISPOT ELISPOT experiments were carried out using a mouse IFN-γ precoated ELISPOT kit (Mabtech AB, Sweden). Briefly, the plate was incubated at room temperature for 2 h with medium containing 10% serum. A total of 3 × 10⁵ lymphocytes purified from vaccinated mouse spleens were treated with 10 µg/mL CWAE (Urease, NAP, HpaA, HSP60) antigen [10] or 10 ng/mL PMA. The cells were then incubated in RPMI-1640 at 37 °C for 30 h. Following a wash, the plates were incubated with a detection antibody at 37 °C for 2 h, washed, and incubated with streptavidin-ALP for 1 h at 37 °C. Finally, color development was stopped by washing extensively in tap water. Spots were counted using an ELISPOT reader (AID).

Analysis of M cell-targeting properties The M cell-targeting ability of LL-plSAM-WAE was investigated using closed ileal loop experiments, modified from the procedures outlined in other reports [33,34]. In brief, 100 µL of LL-plSAM-WAE, 100 µg/mL SAM-WAE protein, or 100 µg/mL WAE protein was injected into the ileal loops, as appropriate. The loops were incubated, washed, fixed, and cryo-sectioned. Alexa Fluor 647 goat anti-rabbit IgG antibody and rabbit anti-WAE antibody were used to stain the sections. Alexa Fluor 488 anti-GP2 monoclonal antibody was used to identify M cells, and DAPI (Sigma, USA) was used to stain the nuclei. Finally, the sections were examined by confocal laser scanning microscopy (LSM900, Carl Zeiss AG).

Efficiency of clearance of H. pylori infection Following oral vaccination, H. pylori infection was assessed using quantitative PCR (qPCR) and a rapid urease test. The stomach was removed aseptically, divided into 3 parts along the greater curvature, and the contents were removed. The first portion of stomach tissue was weighed, mixed with 0.5 mL normal saline, and homogenized for CFU assays or urease activity detection. The second portion was immersed in 10% formaldehyde solution and fixed for histopathological examination. The third portion was immersed in 1 mL normal saline in a 1.5 mL tube and temporarily stored in liquid nitrogen for qPCR detection. For the CFU assays, the gastric tissues were weighed, homogenized, and serially diluted; 100 µL samples were plated onto BHI blood plates (Qingdao Hope Bio-Technology, China) supplemented with antibiotics. H. pylori DNA in the stomach was then extracted with a bacterial genome extraction kit (TIANGEN DP302, China) and quantified by real-time PCR. The primers are as follows: In the rapid urease test, a specimen of stomach tissue was submerged in a solution specifically designed for the test (RUT solution) [35], then incubated at 37 °C for 4 h, and absorbance was measured at a wavelength of 550 nm.
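Since the gastric H. pylori load is quantified by real-time PCR on extracted DNA, the following is a minimal sketch of one common way to turn Ct values into genome copies per milligram of tissue via a standard curve. The slope, intercept, and example numbers are illustrative placeholders, not values from this study.

```python
# One common way to convert qPCR Ct values into H. pylori genome copies per
# mg of gastric tissue; SLOPE/INTERCEPT and the example values are assumed.
SLOPE, INTERCEPT = -3.32, 38.5  # Ct = SLOPE * log10(copies) + INTERCEPT

def copies_from_ct(ct: float) -> float:
    """Invert the standard curve to get genome copies in one reaction."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def copies_per_mg(ct: float, eluate_ul: float, template_ul: float, tissue_mg: float) -> float:
    """Scale reaction copies to the whole DNA eluate, then per mg of tissue."""
    return copies_from_ct(ct) * (eluate_ul / template_ul) / tissue_mg

# Example: Ct of 27.8, 100 µL eluate, 2 µL template per reaction, 35 mg tissue
print(f"{copies_per_mg(27.8, 100, 2, 35):,.0f} copies/mg")
```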
Immunohistochemical investigation HE staining, inflammatory scoring, and immunohistochemistry were performed on the stomach tissue. Briefly, sections of stomach tissue were fixed in 10% neutral formaldehyde solution and embedded in paraffin. The sections were stained with HE, and gastritis was assessed as previously described [36].

ELISA detection of H. pylori antigen-specific antibodies Serum IgG and mucosal secretory IgA (sIgA) concentrations were measured by ELISA. Before testing, the samples were briefly diluted with PBS. Microplates were coated overnight at 4 °C with 100 µL/well of H. pylori lysates (5 µg/well), and the diluted samples were then added. HRP-conjugated goat anti-mouse IgG (31430, Thermo Fisher) and anti-sIgA (ab97235, Abcam) antibodies were used after the plates had been washed with PBST. After washing, tetramethylbenzidine (Solarbio, China) was used to develop the plates for 15 min in complete darkness. Finally, a solution of 2 M sulfuric acid (Solarbio, China) was used to terminate the reaction, and a microplate reader was used to measure the absorbance at 450 nm.

Identification of H. pylori-specific lymphocyte reactions Lymphocytes were isolated from the removed mouse spleens. The lymphocytes were then cultivated with H. pylori lysates (5 µg/mL) for 72 h, after which the supernatant was collected; cell proliferation was quantified using the CCK-8 test, and the levels of several cytokines (IL-4, IFN-γ, and IL-17) were measured by ELISA.

Statistical analysis GraphPad Prism 8.0 software was used for statistical analysis. The results are presented as the mean ± standard deviation (SD). A t test was used to assess statistical significance. *p < 0.05; **p < 0.01; ***p < 0.001.

Construction and verification of plSAM and plSAM-WAE plasmids The core component of SAM (Fig. 1b) was synthesized and inserted into the plasmid pNZ8148 to create plSAM (Fig. 1a). It was digested with Nco I and Hind III, and the resulting 887 bp fragment matched the expected size of the SAM gene (Fig. 1c). Meanwhile, the WAE fusion gene (Fig. 1d) was amplified and introduced into plSAM to give plSAM-WAE (Fig. 1a). The plasmid was then validated by enzyme digestion and gene sequencing, and the expected fragment was observed by gel electrophoresis (Fig. 1e). These results confirmed the successful construction of plSAM and plSAM-WAE.

Confirming the expression of H. pylori antigens on L. lactis After induction with nisin, SDS-PAGE analysis indicated that recombinant L. lactis successfully produced the SAM-WAE fusion protein, with a molecular weight of 83.69 kDa (Fig. 2a). Moreover, the expression of SAM-WAE was confirmed by Western blotting with mouse anti-WAE serum, as evidenced by a specific band (Fig. 2b); conversely, the lane probed with normal mouse serum was negative (Fig. 2b). Additionally, immunolabeling with specific antibodies proved to be an effective method for detecting the expressed proteins: green fluorescence was observed in the LL-plSAM-WAE group, while no fluorescence was detected in the LL-plSAM group (Fig. 2c). We deposited LL-plSAM-WAE, LL-plSAM, and SAM-WAE onto ELISA plates at different concentrations, demonstrating that SAM-WAE was displayed by LL-plSAM-WAE (Fig. 2d-e).

Vaccination with LL-plSAM-WAE induced T-cell immune responses Immunofluorescence was used to determine the frequency of splenic CD4+ and CD8+ T-cell responses in the PBS, SAM-WAE, and LL-plSAM-WAE groups one week after the last vaccination (Fig. 5a). As shown in Fig. 3a, the proportions of CD4+ and CD8+ T lymphocytes increased considerably after oral vaccination with LL-plSAM-WAE compared with the other groups, with a greater frequency of CD4+ and CD8+ T cells in the marginal zone of the spleen. Meanwhile, we observed that the CD4+ T-cell response was relatively dominant (Fig. 3a-b). It has been suggested that a combined Th1/Th2/Th17 T-cell immune response provides the best defense against H. pylori.
pylori. Consequently, to analyze whether LL-plSAM-WAE could boost cellular immune responses, we examined the Th-cell subsets. The proportions of IFN-γ+, IL-4+, and IL-17A+ cells among splenic CD4+ T cells increased in the LL-plSAM-WAE group compared with the PBS group (Fig. 3e-g). The results showed that Th1/Th17 immune responses were activated after oral administration of SAM-WAE (Fig. 3e-g). Moreover, IFN-γ+, IL-4+, and IL-17A+ levels were considerably elevated in the mesenteric lymph nodes of mice given the LL-plSAM-WAE vaccine (Additional file 1: Figure S1), demonstrating the migration of Th1/Th2/Th17-type memory T cells. We also used ELISpot to investigate the frequency of splenic IFN-γ-producing cells specific to the CWAE antigen. Oral administration of free SAM-WAE generated 7 antigen-specific IFN-γ-producing cells per 500,000 spleen cells, as illustrated in Fig. 3c-d. In the LL-plSAM-WAE group, however, the frequency of antigen-specific IFN-γ-producing cells was considerably higher, demonstrating that the LAB delivery system significantly strengthened the IFN-γ response.

LL-plSAM-WAE or SAM-WAE targets M cells effectively

We next tested whether LL-plSAM-WAE and SAM-WAE targeted M cells using the closed ileum loop model. LL-plSAM-WAE, SAM-WAE protein, or CWAE protein was injected into ileum loops of naïve mice, and the fluorescent signals were evaluated by confocal microscopy (Fig. 4). FITC-labeled anti-Gp2 was used to mark the M cells in Peyer's patches (PPs), and the red signal indicates the fluorescent secondary antibody-labeled recombinant protein SAM-WAE. Noticeably more yellow puncta, indicating co-localization of SAM-WAE and M cells, were seen in the groups treated with LL-plSAM-WAE or SAM-WAE protein (Fig. 4). In contrast, fewer puncta were observed in Peyer's patches treated with CWAE protein (Fig. 4). These findings suggest that LL-plSAM-WAE and the SAM-WAE protein target M cells effectively owing to the additional SAM component.

BALB/c mice immunized with LL-plSAM-WAE produced neutralizing antibodies

Effective vaccines are urgently needed to generate anti-H. pylori neutralizing antibodies. BALB/c mice were vaccinated with LL-plSAM-WAE, and serum was collected one week after the last vaccination (Fig. 5a). Our results showed that the antiserum clearly recognized the antigens Urease, UreA, UreB, HpaA and NAP (Fig. 5b), whereas serum from mice vaccinated with LL NZ9000 served as a negative control (Additional file 1: Figure S2). Meanwhile, ELISA performed with the same antiserum revealed a similar outcome (Fig. 5c). This evidence indicates that the antiserum induced by LL-plSAM-WAE recognizes multiple H. pylori antigens.

Protective effect and histopathological analysis after oral vaccination

Mice were orally immunized with LL-plSAM-WAE or SAM-WAE and then infected with H. pylori to assess the protection conferred by oral vaccination (Fig. 6a). The stomach of each mouse was then removed and used to measure the H. pylori burden using several techniques. According to the results of the rapid urease test (Fig. 6d), H. pylori quantitative culture (Fig. 6c), and RT-qPCR (Fig. 6b), oral vaccination with LL-plSAM-WAE or SAM-WAE generally reduced the H. pylori burden and urease activity compared with the other groups. Notably, oral vaccination resulted in dramatic post-immunization stomach inflammation and an elevated level of leukocyte infiltration (Fig.
6e, f). Additionally, the inflammation did not vary significantly between the LL-plSAM-WAE group and the SAM-WAE group. IHC analysis with an anti-H. pylori antibody showed that stomach tissue samples from the LL-plSAM and SAM groups had H. pylori colonization. Intriguingly, only a small quantity of H. pylori was detected in the LL-plSAM-WAE and SAM-WAE groups, demonstrating the effectiveness of these treatments in preventing H. pylori invasion (Fig. 6g).

L. lactis LL-plSAM-WAE or SAM-WAE protein improves the lymphocyte response against H. pylori

We next asked whether LL-plSAM-WAE or SAM-WAE protein administered with PA could stimulate lymphocyte immune responses. Accordingly, antibodies were detected in the serum, stomach, intestine, and feces of orally inoculated mice. Serum IgG and mucosal sIgA against H. pylori were clearly increased in those samples following oral stimulation with LL-plSAM-WAE or SAM-WAE protein, as shown in Fig. 7a and b. Furthermore, lymphocytes from mice treated with LL-plSAM-WAE or SAM-WAE with PA proliferated more than lymphocytes from animals vaccinated with LL-plSAM or SAM with PA (Fig. 7c). ELISA findings also demonstrated that vaccination with LL-plSAM-WAE or SAM-WAE with PA enhanced three cytokines (IFN-γ, IL-4, and IL-17) in the splenic lymphocyte supernatant (Fig. 7d-f), which is consistent with our previous research. In summary, the LL-plSAM-WAE or SAM-WAE protein increased the production of H. pylori-specific antibodies and promoted lymphocyte responses against H. pylori invasion.

Discussion

Helicobacter pylori colonizes the human gastric mucosa and may lead to gastritis, ulcers, and even cancer [37]. Development of an effective H. pylori vaccine would be an important step toward addressing these gastrointestinal diseases. We previously developed a multivalent epitope vaccine called CWAE that contained multiple copies of selected B-cell and Th-cell epitopes, the cholera toxin B subunit (CTB), NAP, and other antigens [10]. Studies indicated that orally administered CWAE enhanced CD4+ T-cell responses and antibodies directed against H. pylori [38]. The present work closely follows our previously published work using a similar L. lactis surface expression system, but with different combinations of H.
pylori antigens.

The effectiveness of oral mucosal immunity is hampered by the poor bioavailability of protein antigens such as our polyvalent epitope vaccines, owing to the harsh environment of the gastrointestinal tract, decomposition by pepsin, and mucosal clearance of foreign antigens, which induces immune tolerance rather than immune stimulation [39]. Effective vaccine delivery systems are required to overcome this natural barrier; they should not only carry a payload antigen but also deliver it to immune effector cells so as to activate cellular and secretory IgA antibody responses [40]. Targeting M cells, which reside in the mucosa-associated lymphoid tissue, is essential to improve the bioavailability of foreign antigens [41]. A small number of M cells scattered on the follicle-associated epithelium take up antigens from the intestinal lumen through adsorption and pinocytosis and transport them to the APCs in their pockets to initiate the intestinal mucosal immune response [42]. To create effective oral mucosal vaccines, several studies have examined immunological methods that target antigens to M cells. Antigen transport and the initiation of antigen-specific immune responses are both influenced by the interaction between the M cell-targeting ligand Co1 and C5aR on M cells [43,44], and by the ligand Cpe and Claudin 4 [28,29,45].

Building on this foundation, researchers have concentrated on, and made advances in, oral vaccine delivery technologies [16]. L. lactis is a good example of an LAB that has been used to deliver oral vaccines. L. lactis has thus far been used to express a number of foreign antigens, including bacterial antigens [46], viral antigens [47,48], and parasite antigens [49]. Additionally, the display of vaccine antigens on the surface of L. lactis has drawn significant interest [50].

In this work, we created an M cell-targeting L. lactis surface display system for gastrointestinal delivery of the vaccine antigen WAE. The WAE antigen was incorporated into the system as LL-plSAM-WAE, and the NICE system was used to induce production of the SAM-WAE fusion protein. According to the SDS-PAGE data, L. lactis expressed the recombinant SAM-WAE protein after the addition of nisin, and further ELISA evidence confirmed the expression of SAM-WAE on the surface of LL-plSAM-WAE. Whether SAM-WAE is located mainly intracellularly or on the bacterial surface should be clarified in further experiments. Two possibilities may explain how LL-plSAM-WAE elicited systemic, mucosal, and cell-mediated immune responses. On the one hand, even if the recombinant protein is only rarely displayed on the surface of the lactic acid bacteria, enough of it may be secreted to activate an immune response. On the other hand, L. lactis acts as a vaccine delivery vehicle that elicits the mucosal immune response: after LL-plSAM-WAE is taken up by M cells and the bacteria die, effective antigen presentation follows, thereby triggering the immune response.

IgA-specific mucosal immune responses are effective against mucosal surface infections [51]. The molecular mechanism by which H. pylori causes infection has not been fully clarified, and there is currently no vaccine that is especially effective against H.
pylori. Novel methods of vaccination are therefore worth investigating, and oral administration is one constructive strategy, as mentioned [52]. Prior research has shown the significance of antibody-mediated humoral immunity for defense against H. pylori infection [53]. However, further research revealed that an antibody-independent mechanism may also provide protection against H. pylori infection [54]. In our study, administration of LL-plSAM-WAE or SAM-WAE protein in mice contributed to the generation of H. pylori-specific antibodies against urease, HpaA, HSP60 and NAP, and lymphocyte immune responses were promoted during H. pylori invasion. In addition, gastric, intestinal and fecal sIgA levels were elevated after oral immunization with LL-plSAM-WAE or SAM-WAE, compared with LL-plSAM or SAM. Based on these findings, we propose that LL-plSAM-WAE may play a protective role through humoral immunity by producing antibodies against H. pylori.

Given that M cells are specialized epithelial cells for the uptake of luminal antigens and are associated with a variety of APCs capable of transporting antigens to the underlying immune inductive tissue of the mucosa and inducing antigen-specific immunity, a delivery system containing M cell-targeting ligands is valuable [55]. Our investigation found that, in contrast to CWAE alone, the SAM-conjugated multi-epitope antigen demonstrated good targeting capacity to the M cells of ileal PPs. Additionally, the findings of the closed ileum loop experiment and the immunohistochemical analysis demonstrated that the WAE peptide present in LL-plSAM-WAE and the SAM-WAE protein co-localized with M cells.

The role of the CD4+ T-cell (Th cell) response in defense against H. pylori infection has been described previously [56]. Host immunity and immunopathology are fundamentally regulated by H. pylori-specific CD4+ T cells. It is known that Th1, Th2, Th9, Th17, Th22, and regulatory T (Treg) cells, either alone or in combination, may influence the outcome of an H. pylori infection [57,58]. Previous studies have shown that Treg and Th2 cells have anti-inflammatory effects during H. pylori infection, whereas Th1 and Th17 cells may be either protective or harmful [59]. Th1 cells predominate among stomach T cells obtained from H. pylori-infected individuals. More crucially, the antigen fragments included in the SAM-WAE vaccine (UreA27-53, UreB158-251 and UreB321-385) contain a variety of known and predicted CD4+ T-cell epitopes that might activate CD4+ T-cell responses specific to H. pylori. Interestingly, splenic lymphocytes isolated from LL-plSAM-WAE-immunized mice exhibited stronger proliferation after H. pylori infection, and the concentrations of IFN-γ, IL-17 and IL-4 increased to high levels. These findings suggest that the immune defense mechanism of LL-plSAM-WAE may involve the production of specific sIgA and IgG antibodies against a number of H. pylori virulence proteins, as well as a CD4+ T-cell immune response.

The ultimate goal of a vaccine is to effectively prevent or treat microbial invasion and infection. We therefore constructed therapeutic and preventive mouse models of H. pylori to evaluate the effect of the recombinant vaccine in preventing H. pylori, and the experimental results showed that oral immunization with LL-plSAM-WAE could significantly protect against H.
pylori infection. The mechanism of prevention is mainly the activation of an adaptive immune response that produces serum IgG and mucosal IgA antibodies, together with a rapid inflammatory response that defends against H. pylori infection; the cellular immune response mediated by CD4+ T cells plays a key role. Finally, at the population level, the application value of this L. lactis vaccine delivery system is mainly reflected in the following aspects. First, compared with other host vectors, the use of L. lactis as a mucosal vaccine vector is a promising alternative owing to its "generally regarded as safe" status, potential adjuvant properties, and tolerogenicity to the host [60]. Second, many investigations have used L. lactis as a cell factory for the production of specific antigens and for testing the production of pharmaceutical products [61]. Moreover, studies have found that L. lactis-TB1-Co1 can induce elevated mucosal as well as systemic immune reactions and, to a certain extent, provide protection against FMDV [48]. Therefore, this vaccine carrier has promising long-term application prospects.

Conclusion

In conclusion, we developed L. lactis LL-plSAM-WAE, a new vaccine delivery system with the SAM-WAE antigen expressed on the bacterial surface, which dramatically improved the ability to target M cells. Our research has shown that the M cell-targeting ligand SAM, comprising Co1, Cpe, and CKS9, is a crucial development for H. pylori oral mucosal vaccination. It is critical for the successful induction of mucosal and systemic immune responses in mice as well as for the efficient transport of ligand-conjugated multi-epitope antigens to mucosal immune components. Additionally, LL-plSAM-WAE oral vaccination increased the generation of antibodies against H. pylori virulence factors and the proliferation of T cells, providing immunological protection against H. pylori infection. An L. lactis-based H. pylori vaccine is a strong candidate for progression to clinical trials.

Fig. 4 M cell-targeting detection. Red signal indicates fluorescent secondary antibody-labeled recombinant protein SAM-WAE, green signal indicates FITC-labeled M cells, and blue signal indicates DAPI-labeled nuclei. White arrows indicate the co-localization signals for antigens that were directed toward M cells.

Fig. 5 Antiserum specificity test for LL-plSAM-WAE. Antisera of mice induced by LL-plSAM-WAE via oral vaccination were collected to detect anti-H. pylori neutralizing antibodies. (a) The vaccination schedule for BALB/c mice. (b) Western blot results showed that the antiserum clearly recognized the antigens Urease, UreA, UreB, HpaA and NAP. (c) Before ELISA, the antiserum was diluted 1:500, and the plates were coated with 5 µg/mL of SAM-WAE, urease, HpaA, HSP60, NAP or BSA.

Fig. 7 Detection of lymphocyte responses and antibodies specific for H. pylori. Samples of sera, stomach, intestine and feces were collected to determine the levels of serum IgG (a) and immunoglobulin A (b). Splenic lymphocytes were isolated for proliferation analysis (c). ELISA was used to measure IFN-γ (d), IL-4 (e), and IL-17 (f) in the supernatant.
7,287
2024-02-24T00:00:00.000
[ "Medicine", "Biology" ]
Explaining the Double-Slit Experiment

In response to Orion and Laitman's [1] explanation of the classic double-slit experiment of quantum mechanics, we propose an alternative explanation of that experiment by treating physical degrees of freedom as a conserved physical quantity, instead of referring to the "vague terms" used in previous explanations [1], which are not broadly applicable. The explanation in [1] refers to properties of groups of particles, even though the results of the double-slit experiment should be accounted for at the level of a single particle. By using physical degrees of freedom and an application of Hamilton's principle, we obtain a single-particle explanation of the double-slit experiment in terms of properties, and via methods, which apply equally in a quantum and a classical regime.

Introduction

The famous double-slit experiment involves either the observation or non-observation of an interference pattern in two physical situations which differ only in whether a certain measurement is taken [2,3]. An electron source projects electrons onto a screen with a double slit, through which each electron is then projected onto a second screen. When no measurement is taken to determine through which slit the electron passes, an interference pattern is observed on the second screen. When such a measurement is taken, however, no interference pattern is observed. Various explanations have been proposed, but many of these still regard this result as mysterious to varying degrees [4]. Perhaps the most recent such explanation is that proposed by Orion and Laitman [1], to which the current discussion serves as a specific response. The discussion in [1] proposes a Kevutsa or group interpretation of the double-slit experiment intended as an improvement on previous explanations, which are claimed to have used "unclear terms". Further, the interpretation in [1] depends on two proposed principles, an "equivalence of form" and "the particles connection to other particles, effectively functioning as a group." The two most noticeable problems with the explanation proposed in [1] are as follows: 1) A group-based, and hence effectively multi-particle, explanation is difficult to justify for a physical phenomenon which can be observed in a single-particle situation, as can the self-interference characteristic of the double-slit experiment [2,3]. 2) The proposed explanation simply replaces what are considered ill-defined terms with new terms, namely a principle of form and the notion of Kevutsa.

For the sake of argument, one accepts the point of view that reference to a wave-particle duality [2,3] is not especially useful in understanding the results of the double-slit experiment. After all, such a duality merely gives a name to the underlying observation that electrons and similar quantum particles have properties both of a wave and of a particle and that those properties will manifest themselves in different types of interactions; that is the definition of a wave-particle duality. If the wave-particle duality is then not useful in and of itself for understanding the results of the double-slit experiment, at least by assumption (again for the sake of argument), one needs a system by which to explain the results in terms which are well defined, and without needing to worry about how an electron can be a wave, a particle or simultaneously both.
The physical degrees of freedom, namely the characteristic minimum number of variables needed to fully describe the physical situation, provide such an explanation. If one sends a single electron in a known physical state toward the screen with the two slits, then by definition of a state that electron has no physical degrees of freedom before it reaches the slit-screen. In other words, the state of the electron is initially a known quantity. Then it encounters the slit-screen. If no measurement is taken to determine which slit the electron went through, a degree of freedom is introduced at that point. If such a measurement is taken, no such degree of freedom is introduced, because the measurement removes the ambiguity of which slit the electron could have passed through. When the electron reaches the final screen (i.e., the projection screen), then if a degree of freedom was introduced, some physical manifestation of that degree of freedom must occur (hence one sees an interference pattern); but if no such degree of freedom was introduced, the electron remains in a known state and so no interference pattern is seen. Admittedly, the two physical events, namely the electron at the slit-screen and the electron at the projection screen, do not occur at exactly the same moment in time, but a nearly trivial application of Hamilton's principle [5] surmounts that difficulty easily. Nothing anthropocentric is needed in this explanation, and all terms are clear. Moreover this same term, i.e., physical degrees of freedom, is applicable to any arbitrary physical situation, not just the double-slit experiment and similar self-interference phenomena.

The resulting understanding of the double-slit experiment can then be summarized as follows. Quantum mechanics postulates that any particle such as an electron can also be described as a wave characterized by its de Broglie wavelength [1-3]. If so, then one should be able to create an interference pattern from an electron using a screen with a double slit in it. That is in fact observed. The initially surprising aspect is the effect of a measurement taken at the slit-screen: namely, that such a measurement causes the interference pattern to disappear. That phenomenon can be understood by treating the number of physical degrees of freedom as a conserved quantity, in the sense that the number of independent variables needed to describe any given physical situation is characteristic of that physical situation, just as energy or momentum would be in collisions [5,6].

Re-Examination of the Double-Slit Experiment

The famous double-slit experiment [2,3] traditionally exemplifies the wave-particle duality of quantum mechanical systems. The double-slit experiment has been termed the fundamental quantum mechanical mystery [4,7]. Its implications for the understanding of quantum mechanical systems make it a topic of on-going research [8-11], albeit in varying forms, to this day, and by the same token it is used as a toy model for understanding more complex systems [12,13], similar to the manner in which one often uses the simple harmonic oscillator [2,3]. In short, the double-slit experiment continues to both perplex and intrigue [14,15], encapsulating what is and is not understood about the quantum mechanical world.
The present discussion attempts to understand the usual double-slit experiment in a novel manner. Namely, in any physical system, the number of degrees of freedom remains characteristic of the system [5]. For example, when describing a classical particle moving through space, one is able to define a set of coordinate axes $x'$ so that the linear momentum becomes a vector parallel to one of the major axes, thus effectively eliminating the need for the two other components of the position in space. Nevertheless, the particle retains three degrees of freedom, so that the energy and total linear momentum become effectively parameters of the system rather than derivable quantities; along with the uniaxial position, the system retains three degrees of freedom [5,6]. One has in this example transformed from one set of generalized coordinates to another without changing their number.

Conventional usage does not term the number of degrees of freedom a conserved quantity because it is not a directly measurable quantity itself, but in a sense the number of degrees of freedom is indeed a conserved quantity, albeit at a level of abstraction removed, in that one does not have a degrees-of-freedom meter with which to directly measure the total degrees of freedom. One may think of the number of degrees of freedom as a constraint upon the physical system, because this number is always constant. In the example already cited of the classical particle, the total number of degrees of freedom acts as a constraint in that it sets the number of input parameters from which other quantities may be derived. Similarly, in a quantum mechanical system, one may choose among alternate sets of what are termed "good" quantum numbers (meaning quantum numbers which refer to simultaneously measurable quantities) [2,3], but the total number of quantum numbers needed to describe a system remains the same.

More formally, any local coordinate transformation of generalized coordinates $q_h$ (defined as variables on which a specific Lagrangian depends) to another local mapping of generalized coordinates $q_k$ (defined as an alternate set of variables in terms of which that same Lagrangian can also be represented) takes the form [5,6]

$$\dot q_k = \sum_h \frac{\partial q_k}{\partial q_h}\,\dot q_h. \qquad (1)$$

Yet at the same time one's choice of coordinates $q_h$ or coordinates $q_k$ remains arbitrary, and so similarly one has

$$\dot q_h = \sum_k \frac{\partial q_h}{\partial q_k}\,\dot q_k. \qquad (2)$$

Both (1) and (2) can be written as matrix equations, and so the matrix $\partial q_h/\partial q_k$ must be the inverse of the matrix $\partial q_k/\partial q_h$. The existence of this pair of inverse matrices is only possible if the range of the index $h$ is identically the same as the range of the index $k$. For unitary transformations, this common range equals the trace of the product of the two matrices (the identity matrix), and this value is termed the number of degrees of freedom. In the arbitrary case, the total number of degrees of freedom $f$ tells one how many independent and simultaneously measurable variables $q_h$ or $q_k$ are needed to describe the given physical situation. Algebraically, one then needs a similar number of equations to solve the system in general.

A classical example [5,6] is the orbital motion of a planet about the sun. To specify the location of the planet within the plane of its orbit at any time, one may use a radial distance $r$ and an angle $\theta$, or one may use Cartesian planar coordinates $(x, y)$. One may also use the energy $E$ and the momentum $P$, but one always will need two variables in this example.
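The inverse-matrix argument can be checked symbolically in the planar example just given (a minimal sketch of our own; the coordinate choice is only for illustration):

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)   # Cartesian planar coordinates expressed
y = r * sp.sin(th)   # in terms of (r, theta)

# Jacobian of (x, y) with respect to (r, theta); its inverse is the Jacobian
# of the reverse transformation, so both index ranges must agree.
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])

print(sp.simplify(J.inv() * J))  # the 2 x 2 identity matrix
print(J.shape)                   # (2, 2): the two degrees of freedom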
Where the application becomes more interesting is in those cases where the Lagrangian describes both initial and final states. If one considers as an example a bucket of sand with a hole in the bottom, the physical situation takes the same number of variables to describe both just before the first sand begins to fall out of the bucket and throughout the process. In this example, using the energy $E$ as a single variable is simplest, but in principle one could, for example, also use the center of mass of the sand.

Yet the total number of degrees of freedom has more profound implications than as a sort of bookkeeping device used to make sure one has the right number of equations. The notion that the total number of degrees of freedom remains constant and characteristic of any physical system constrains that physical system to express all the degrees of freedom [5,6]. For example, a particle in its proper frame of reference defines the origin of the coordinate system and thus has a fixed position. Nevertheless, if in some other reference frame the motion of the particle is described by four independent equations, its motion must also be characterized by four equations in its proper frame of reference [5,6,16]. This is the aspect relevant to the double-slit experiment. Attempts to understand the connection between measurement and observation have spawned bizarre physical models such as the "many worlds" interpretation of quantum mechanics [4] and seemingly endless philosophical debate [7]. Yet, treating the total degrees of freedom as a conserved physical quantity, where measurement is viewed as extraction of a particular degree of freedom, i.e., removal of that degree of freedom from the system being considered, allows an interpretation of the results of the double-slit experiment which remains thoroughly physical and does not venture into speculative areas of discussion. This non-speculative interpretation of the double-slit experiment and of similar quantum mechanical phenomena which involve "self-interference" [2,3] is the viewpoint on the double-slit experiment expounded in this discussion.
The classic double-slit experiment consists of a source emitting single electrons (or photons), a screen with two narrow slits (to be termed the "slit-screen") and a screen onto which each electron is projected (to be termed the "projection-screen"). The three objects are placed so that each electron must pass through the slit-screen in order to be projected onto the projection-screen. Electrons may only pass through the two narrow slits in the slit-screen, but the width of the electrons (as defined by their de Broglie wavelength [1-3]) is still negligible compared to that of the slits, at least in the classic experiment [14]. The relative positions of the two screens determine the specific geometry of the interference pattern observed, as per the usual interference law [2],

$$d\sin\theta = m\lambda$$

(in terms of the separation $d$ between the two slits, the angle $\theta$ measured with respect to the central axis between the two screens, the de Broglie wavelength $\lambda$, and the integer $m$ which counts peaks of intensity from the center of the projection screen), but this level of detail can be ignored for the purposes of the present discussion. The key points of the present discussion are that when a single electron leaves the source in some known state, two possible results may be seen on the projection-screen, depending on whether or not a measurement is taken at the slit-screen (the nature of measurement as a physical interaction has been thoroughly discussed [4,7] elsewhere in the literature). When no measurement is taken at the slit-screen, an interference pattern is observed on the projection-screen. Yet, when a measurement of which slit the electron passes through is taken at the slit-screen, no interference pattern is observed on the projection-screen. These relatively qualitative results are all that are needed to frame the present discussion; they imply an effectively static, i.e., time-independent, one-dimensional physical system. At any two positions $x_1$ and $x_2$ along the path of the electron, the number of degrees of freedom $f$ must be the same, so that the degrees of freedom in the two positions $x_1$ and $x_2$ are identical: $f(x_1) = f(x_2)$. Here one is only demanding continuity.

Implicitly one has used Hamilton's principle [5] to construct an equation of generalized motion of a form which uses a generalized Lagrangian $L$ (linear in the degrees of freedom $f$) and in which one in principle allows the number of degrees of freedom $f$ to vary with position. Conservation of the number of degrees of freedom $f$ simplifies this equation of generalized motion to $\mathrm{d}f/\mathrm{d}x = 0$. The slit-screen and projection-screen are not immediately adjacent, but no interaction takes place between them. So, one can equivalently simply use $f(x_1) = f(x_2)$; i.e., the number of degrees of freedom $f$ for an isolated physical system remains invariant, regardless of the position or time at which that system is observed.
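As an aside, the interference law $d\sin\theta = m\lambda$ quoted above is straightforward to evaluate; the following sketch (our own, with assumed illustrative numbers) prints the angle and the screen position of the first few intensity peaks.

import numpy as np

d = 2e-6      # slit separation in metres (assumed)
lam = 5e-11   # de Broglie wavelength in metres (assumed)
L = 1.0       # slit-screen to projection-screen distance in metres (assumed)

for m in range(4):                       # m counts peaks from the centre
    theta = np.arcsin(m * lam / d)       # d * sin(theta) = m * lambda
    print(m, theta, L * np.tan(theta))   # peak position on the screen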
In terms of degrees of freedom as a conserved quantity, the results of the double-slit experiment mentioned above become understandable. The electron leaves the electron source in some known state, so that one may define its initial number of degrees of freedom as zero, $f^- = 0$; its final number of degrees of freedom $f^+$ will be treated as the variable to be determined. The slit-screen introduces in principle one degree of freedom into the system, i.e., through which slit the electron passes, so that one may speak of the slit-screen as having one degree of freedom when no measurement is taken. The electron and slit-screen together constitute the physical system initially, and the electron and the projection-screen constitute the final physical system.

The projection-screen introduces no degrees of freedom, since the relative geometry here does not matter, and so one defines the projection-screen's degrees of freedom as zero. Conservation of the number of degrees of freedom at the position of the slit-screen then implies $f^- + 1 = f^+ + 0$, so that the electron has a final number of degrees of freedom $f^+ = 1$. That degree of freedom must be expressed somehow. Energy-momentum conservation precludes variations in frequency and wavelength, because these are respectively proportional directly to energy and inversely to momentum by Planck's normalized constant $\hbar$. Only variation in the intensity $I(x)$ or some equivalent variable is physically permitted, because the state of the electron takes a general wave-form and the phase $\phi$ is purely arbitrary, so that it can have no direct physical manifestation. An interference pattern, by definition, is a variation in the intensity $I(x)$ of the electron on the projection screen. Similarly, when a measurement is taken at the slit-screen of which slit the electron passes through, the slit-screen does not introduce any actual degree of freedom. One may think of this alternately as taking a degree of freedom out of the system, analogous to the manner in which an unobserved neutrino carries energy out of the system although total energy is conserved [17]. Here, conservation of the number of degrees of freedom at the position of the slit-screen leads to $f^- + 0 = f^+ + 0$, so that the final number of degrees of freedom for the electron remains at zero, namely $f^+ = 0$. The intensity cannot vary, because this would require another degree of freedom which the physical system does not have. Thus, no interference pattern is observed. The double-slit experiment, the quintessentially quantum mechanical experiment, can be understood in terms of the number of degrees of freedom associated with the physical system when this is viewed as a conserved quantity.
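These two outcomes are easy to illustrate numerically. The following minimal sketch (our own illustration, not part of the original argument; all parameter values are assumed) models the unmeasured case as a coherent sum of the two path amplitudes and the measured case as an incoherent sum of the two path intensities; only the former produces a variation in the intensity $I(x)$ across the projection screen.

import numpy as np

lam = 1.0    # de Broglie wavelength (arbitrary units, assumed)
d = 5.0      # slit separation (assumed)
L = 1000.0   # distance from slit-screen to projection-screen (assumed)
x = np.linspace(-200.0, 200.0, 2001)   # position on the projection screen

r1 = np.sqrt(L**2 + (x - d / 2.0)**2)  # path length through slit 1
r2 = np.sqrt(L**2 + (x + d / 2.0)**2)  # path length through slit 2
a1 = np.exp(2j * np.pi * r1 / lam)     # amplitude for the path through slit 1
a2 = np.exp(2j * np.pi * r2 / lam)     # amplitude for the path through slit 2

# No measurement at the slit-screen: coherent sum -> I(x) varies (fringes).
I_unmeasured = np.abs(a1 + a2)**2
# Measurement at the slit-screen: incoherent sum -> I(x) is flat (no fringes).
I_measured = np.abs(a1)**2 + np.abs(a2)**2

print(I_unmeasured.std(), I_measured.std())  # large vs (numerically) zero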
Conceptually one can separate the understanding of the double-slit experiment into two parts. First, this experiment tests whether or not a particle like an electron can be treated as a wave with a characteristic de Broglie wavelength [1-3]. This discussion takes for granted that the result of this part of the experiment is positive, because the interference pattern is observed, and as discussed elsewhere [2,3] that interference pattern is exactly what one should expect from a wave characterized by the electron's de Broglie wavelength. The second part of understanding the double-slit experiment is what this discussion addresses, namely why that interference pattern vanishes when a measurement is taken of which slit the electron passes through to get to the projection-screen. All that is needed to understand this phenomenon is the notion of physical degrees of freedom as characteristic of any physical system, a notion which applies both in quantum and in purely classical situations.

Conclusions

The proposed explanation [1] of the classical double-slit experiment [2,3], while perfectly valid in other ways, does not achieve the goal set forth. Namely, [1] simply replaces the mystery of a wave-particle duality with the mystery of why one needs to treat a single particle only as a member of a group of particles in order to understand a phenomenon that occurs when only the one particle is involved. Where other particles enter into the physical situation is not sufficiently explained. To address the latter point, we propose an explanation of the results of the double-slit experiment which requires no "vague terms" and does not invoke or replace a "quantum mystery" as an explanation [7]. Namely, the number of physical degrees of freedom $f = f(x)$ at a location $x$ is treated as a conserved quantity, i.e., as a quantity characteristic of the physical system at location $x$. By definition, so long as one has a closed system, that quantity will not change.

To summarize the experiment, an interference pattern occurs when a single electron is projected onto a screen, termed the projection-screen, if the electron is made to pass through a screen with a double slit in it, termed the slit-screen, so long as no measurement is taken of which of the two slits in the slit-screen that electron passes through.

One has taken for granted that the electron can be treated as a wave characterized by its de Broglie wavelength [2,3]. The issue under discussion is simply why an interference pattern occurs if no measurement is taken at the slit-screen to determine which slit the electron passes through, but no interference pattern is observed if such a measurement is taken.
The physical system first consists of the electron and the slit-screen and then consists of the electron and the projection-screen. These physical events are of course not immediately adjacent, but since no interaction occurs between them, a nearly trivial application of Hamilton's principle [5] relates the two events directly. One assumes the electron leaves its source in a known state, because any extraneous degrees of freedom the electron might have when leaving the source have no bearing on this discussion. One then simply demands continuity. The physical degrees of freedom (DOF) in the two cases can then be summarized as in Table 1, treating the final number $f$ of DOF for the electron as a variable. One reads off from the two rows of Table 1 that $f_{nm} = 1$ and $f_m = 0$, where $f_{nm}$ and $f_m$ denote, respectively, the final DOF of the electron if no measurement is imposed or when accounting for measurement.

The value of the variable $f$ in the two cases is clear. The electron has a degree of freedom which must manifest itself at the projection-screen if no measurement was taken at the slit-screen, and otherwise it does not. The only degree of freedom potentially available to the electron at the projection-screen is its wave intensity, because its de Broglie wavelength fixes the energy and equivalent variables while the projection-screen itself is at a fixed location. The phase of the wave is arbitrary and so cannot be physically manifested. This explanation rests on the notion of physical degrees of freedom, which is well understood and applies equally in arbitrary classical and quantum physical situations.

Acknowledgement

The line of inquiry pursued in this discussion was prompted by questions posed by Arie Issar, whose contribution the authors would like to gratefully acknowledge.
4,918.2
2011-01-29T00:00:00.000
[ "Physics" ]
Entrainment in the master equation

The master equation plays an important role in many scientific fields including physics, chemistry, systems biology, physical finance and sociodynamics. We consider the master equation with periodic transition rates. This may represent an external periodic excitation like the 24 h solar day in biological systems or periodic traffic lights in a model of vehicular traffic. Using tools from systems and control theory, we prove that under mild technical conditions every solution of the master equation converges to a periodic solution with the same period as the rates. In other words, the master equation entrains (or phase-locks) to periodic excitations. We describe two applications of our theoretical results to important models from statistical mechanics and epidemiology.

Introduction

Consider a physical system that can be in one of exactly N possible configurations and let x_i(t) denote the probability that the system is in configuration i at time t. We record the probabilities of all configurations at time t by the (column) state-vector $x(t) := [x_1(t), \ldots, x_N(t)]^\top$. The master equation describes the time evolution of these probabilities. It can be explained intuitively as describing the balance of probability currents going in and out of each possible state. To formulate the master equation for a specific model, one needs to know the rates of transition p_ij from configuration i to configuration j. A rigorous derivation of the master equation for a chemically reacting gas-phase system that is kept well stirred and in thermal equilibrium is given in [1]. The master equation plays a fundamental role in physics (where it is sometimes referred to as the Pauli master equation), chemistry, systems biology, sociodynamics and more; see, for example, the monographs [2,3] for more details. In this paper, we treat the general case where the transition rates p_ij may depend on both the time t and the probability distribution x(t) at time t. The resulting system of differential equations (see (1.2) below) constitutes a time-varying nonlinear dynamical system. Note that even the special case where the transition rates do not depend on the state x (the master equation (1.2) is then linear) is of general interest, because it is intimately connected with the theory of Markov processes on finite configuration spaces. The relation is the following: such a Markov process is uniquely determined by an initial probability distribution on the configurations 1, . . . , N and by transition probabilities $(Q_\tau(t))_{ij}$ that denote the probabilities to be in configuration i at time t given that the system is in configuration j at time τ < t. These transition probabilities need to satisfy the Chapman-Kolmogorov equations, which are essentially equivalent to the condition that the columns of the matrix $Q_\tau(t)$ satisfy the linear version of the master equation, known as the forward equation in the theory of Markov processes [4,5]. In many physical systems, the number of possible configurations N can be very large. For example, the well-known totally asymmetric simple exclusion process (TASEP) model (e.g. [6,7] and the references therein) includes a lattice of n consecutive sites, and each site can be either free or occupied by a particle, so the number of possible configurations is N = 2^n. In such cases, simulating the master equation and numerically calculating its steady state may be difficult even for small values of n, and special methods must be applied (e.g. [7,8]).
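To make the balance of probability currents concrete, the following minimal sketch (our own illustration; the rate values are arbitrary) assembles the generator matrix of a linear master equation from given transition rates and verifies that its column sums vanish, which is exactly the property that conserves total probability.

import numpy as np

def generator(p):
    """p[i, j] >= 0 is the transition rate from configuration i to j."""
    N = p.shape[0]
    A = p.T.copy()                    # inflow: A[j, i] = p[i, j]
    A[np.diag_indices(N)] = 0.0
    A -= np.diag(A.sum(axis=0))       # outflow collected on the diagonal
    return A                          # master equation: dx/dt = A x

p = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 0.3],
              [0.0, 0.7, 0.0]])
A = generator(p)
print(np.allclose(A.sum(axis=0), 0.0))  # True: sum_i x_i(t) is conserved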
Here, we are interested in deriving theoretical results that hold for any N. Specifically, we consider the case where the transition rates p_ij(t, x) are periodic in time t with a common period T > 0. In this situation, we arrive at a T-periodic master equation. For such systems, we consider the problem of entrainment (or phase-locking):

Problem 1.1. Given a system described by a T-periodic master equation, determine whether for every initial condition the probabilities x_i(t), i = 1, . . . , N, converge to a periodic solution with period T. If this is so, determine whether the periodic solution is unique or not.

In other words, if we view the transition rates as a T-periodic excitation, then the problem is to determine whether the state of the system entrains, that is, converges to a periodic trajectory with the same period T. If this is so, an important question is whether there exists a unique periodic trajectory γ to which every solution converges.

Entrainment is important in many natural and artificial systems. For example, organisms are often exposed to periodic excitations like the 24 h solar day and the periodic cell-cycle division process. Proper functioning often requires accurate entrainment of various biological processes to this excitation [9]. For example, cardiac arrhythmia is a heart disorder occurring when every other pulse generated by the sinoatrial node pacemaker is ineffective in driving the ventricular rhythm [10]. Epidemics of infectious diseases often correlate with seasonal changes, and the required interventions, such as pulse vaccination, may also need to be periodic [11]. In mathematical population models, this means that the so-called transmission parameter is periodic, with a period of 1 year, and entrainment means that the spread of the epidemic converges to a periodic pattern with the same period. As another example, traffic flow is often controlled by periodically varying traffic lights. In this context, entrainment means that the traffic flow converges to a periodic pattern with the same period as the traffic lights. This observation could be useful for the study of the green wave phenomenon [12]. Another example, from the field of power electronics, involves connecting a synchronous generator to the electric grid. The periodically varying voltage in the grid may be interpreted as a periodic excitation to the generator, and proper functioning requires the generator to entrain to this excitation (e.g. [13] and the references therein).

Our main results provide affirmative answers to problem 1.1 under quite general assumptions. Basic regularity assumptions on the transition probabilities that are required throughout the paper are summarized in assumption 2.1. In theorem 2.2, we then formulate with (2.5) (see also (2.3)) a condition that guarantees entrainment. It is observed in corollary 2.3 that condition (2.5) is always satisfied in the linear case. Uniqueness of the periodic attractor is shown in theorem 2.5 under the additional assumption of irreducibility, a condition that is well known in the theory of Markov processes (a definition of irreducibility is provided just before the statement of theorem 2.5). In the special case of time-invariant rates, the rates are T-periodic for any T > 0, and thus entrainment means convergence to a solution that is T-periodic for any T > 0, i.e. a steady state. This basic observation is the content of corollary 2.4.

As the x_i s represent probabilities, they must satisfy

$$\sum_{i=1}^N x_i(t) = 1 \qquad (1.1)$$

for all t ≥ t_0.
The structure of the master equation guarantees that if x(t) satisfies (1.1) at time t = t_0, then (1.1) holds for all t ≥ t_0, even when the x_i s are not necessarily linked to probabilities (see (1.9) below). Our results of course also hold in this case. The next example demonstrates such a case.

Example 1.2. An important topic in sociodynamics is the formation of large cities due to population migration. Haag [2, ch. 8] considers a master equation describing the flow of individuals between N settlements. The transition rates p_ij in this model represent the probability per time unit that an individual living in settlement i will migrate to settlement j. A mean-field approximation of this master equation yields a model in the form (1.2), where x_i represents the average density at settlement i, and p_ij = exp((x_j − x_i)k_ij), with k_ij > 0. This models the fact that the rate of transition from settlement i to settlement j increases when the population in settlement j is larger than in i, i.e. the tendency of individuals to migrate to larger cities. Note that the rates here are state-dependent, but not time-dependent. However, it is natural to assume that migration decisions depend on the season. For example, the tendency to migrate to colder cities may decrease (increase) in the winter (summer). This can be modelled by adding time dependence, say, changing the scaling parameters k_ij to functions k_ij(t) that are periodic with a period of 1 year. Then the transition rates depend on both state and time, and are periodic.

It is important to note that, in general, nonlinear dynamical systems do not entrain to periodic excitations. Indeed, Nikolaev et al. [14] discuss two 'simple looking' nonlinear dynamical systems whose response to periodic forcing is chaotic (rather than periodic). Moreover, such systems commonly appear as components of larger sensing and signal transduction pathways in systems biology. This highlights the importance of proving that entrainment does hold in specific classes of dynamical systems. Although entrainment has attracted enormous research attention, it seems that it has not been addressed before for the general case of systems modelled using a T-periodic master equation. Here we apply the theory of cooperative dynamical systems admitting a first integral to derive conditions guaranteeing that the answer to problem 1.1 is affirmative.

In §3, we describe two applications of our approach to important systems from statistical physics. The first is the totally asymmetric simple exclusion process (TASEP). This model was introduced in the context of biocellular processes [15] and has become the standard model for the flow of ribosomes along the mRNA molecule during translation [16,17]. More generally, TASEP has become a paradigmatic model for the statistical mechanics of non-equilibrium systems [6,7,18]. It is in particular used to study the stochastic dynamics of interacting particle systems such as vehicular traffic [19]. The second application is to an important model from epidemiology called the stochastic susceptible-infected-susceptible (SIS) model.

The remainder of this paper is organized as follows. In the following subsection, we briefly explain the central mathematical concepts used in the proofs of our main results, theorems 2.2 and 2.5. The exact mathematical formulation of these results is provided in §2. The subsequent section then describes the two applications to statistical physics and to epidemiology mentioned above.
This is followed by a brief discussion of the significance of the results and an outlook on possible future directions of research in §4. The appendix includes all the proofs. These are based on known tools, yet we are able to use the special structure of the master equation to derive stronger results than those available in the literature on monotone dynamical systems.

Formulation of master equation and concepts of proof

We begin by formulating the master equation, determined by given transition rates p_ij(t, x) ≥ 0, that governs the evolution of the probability distribution on N configurations:

$$\dot x_i(t) = \sum_{j \neq i} \big( p_{ji}(t, x(t))\, x_j(t) - p_{ij}(t, x(t))\, x_i(t) \big), \quad i = 1, \ldots, N. \qquad (1.2)$$

In lemma A.4 below, it is shown in a slightly more general setting that system (1.2) defines a flow in the set of probability distributions

$$\Omega := \Big\{ x \in \mathbb{R}^N : x_i \geq 0, \ \sum_{i=1}^N x_i = 1 \Big\}.$$

Example 1.3. Consider the case N = 2 with rates p_12 = p_12(t) and p_21 = p_21(t) that depend on time only. Assume also that all the rates are periodic with period T > 0. Using the fact that x_1(t) + x_2(t) ≡ 1 yields

$$\dot x_1(t) = p_{21}(t) - (p_{12}(t) + p_{21}(t))\, x_1(t). \qquad (1.5)$$

Recall that x_1(0), x_2(0) ∈ [0, 1] with x_1(0) + x_2(0) = 1. Equation (1.5) implies that x_1(t) ∈ [0, 1] for all t ≥ 0, and thus x_2(t) ∈ [0, 1] for all t ≥ 0. Solving (1.5) yields

$$x_1(t) = \mathrm{e}^{-q(t)}\, x_1(0) + c(t), \qquad (1.6)$$

where $q(t) := \int_0^t (p_{12}(s) + p_{21}(s))\,\mathrm{d}s$ and $c(t) := \int_0^t \mathrm{e}^{-(q(t)-q(s))}\, p_{21}(s)\,\mathrm{d}s$. Imposing the periodicity condition x_1(T) = x_1(0) gives

$$\gamma_1(0) = \frac{c(T)}{1 - \mathrm{e}^{-q(T)}}. \qquad (1.7)$$

It is straightforward to show that the right-hand side in this equation is in [0, 1], so in this case there exists a unique periodic trajectory γ(t), with γ_1(0) equal to the expression in (1.7) and γ_2(0) = 1 − γ_1(0). To determine whether every trajectory converges to γ, let z(t) := x(t) − γ(t), that is, the difference between the solution emanating from x_0 and the unique periodic solution. Then

$$\dot z_1(t) = -(p_{12}(t) + p_{21}(t))\, z_1(t).$$

As p_12(t) + p_21(t) is non-negative for all t, positive on a time interval and T-periodic, z_1(t) converges to zero, and we conclude that any trajectory of the system converges to the unique periodic solution γ.

Of course, when N > 2 and the rates depend on both t and x, this type of explicit analysis is impossible, and the proof of entrainment requires a different approach. In general, proving that a time-varying nonlinear dynamical system entrains to periodic excitations is non-trivial. Rigorous proofs are known for two classes of dynamical systems: contractive systems, and monotone systems with additional structure like a tridiagonal Jacobian [20] or admitting a first integral. A system is called contractive if any two trajectories approach one another at an exponential rate [21,22]. Such systems entrain to periodic excitations [9,23]. An important special case is asymptotically stable (that is, the real part of every eigenvalue of A is negative) linear systems with an additive periodic input u, that is, systems in the form

$$\dot x(t) = A x(t) + u(t). \qquad (1.8)$$

In this case, x(t) converges to a periodic solution γ(t), and it is also possible to obtain a closed-form description of γ using the transfer function of the linear system [24]. We note that even in the case where the p_ij s in (1.2) do not depend on x, i.e. when (1.2) is linear in x, the master equation is not of the form (1.8), because the periodic influence in (1.2) enters through the transition rates p_ij and not through an additive input channel.

Next, we turn to the notion of a first integral. Define H : R^N → R by

$$H(x) := \sum_{i=1}^N x_i, \qquad (1.9)$$

so that the value of H(x(t)) remains constant under the flow; that is, H is a first integral of (1.2). A system is called monotone if its flow preserves a partial order, induced by an appropriate cone K, between its initial conditions [25]. An important special case of monotone systems is cooperative systems, for which the cone K is the positive orthant. To explain this, define a partial ordering between vectors a, b ∈ R^n by a ≤ b if every entry of a is smaller than or equal to the corresponding entry of b.
A system ẋ = f(x) is called cooperative if for any two initial conditions a, b with a ≤ b the solutions satisfy x(t, a) ≤ x(t, b) for any time t ≥ 0. In other words, the dynamics preserves the ordering between the initial conditions. Cooperative systems that admit a first integral entrain to periodic excitations. It is interesting to note that proofs of this property often follow from contraction arguments [26]. The master equation (1.2) is, in general, not contractive, although as we will show in theorem A.8 below it is on the 'verge of contraction' with respect to the ℓ_1 vector norm (see [27] for some related considerations). However, (1.2) admits a first integral and is often a cooperative system (see theorem A.9 below). In particular, when the rates do not depend on the state, i.e. p_ij = p_ij(t), then (1.2) is always cooperative.

Main results

We begin by specifying the exact conditions on (1.2) that are assumed throughout. For a set S, let int(S) denote the interior of S. For any time t, x(t) is an N-dimensional column vector that includes the probabilities of all N possible configurations. The relevant state-space is thus

$$\Omega = \Big\{ x \in \mathbb{R}^N_+ : \sum_{i=1}^N x_i = 1 \Big\}.$$

For an initial time t_0 ≥ 0 and an initial condition x(t_0), let x(t; t_0, x(t_0)) denote the solution of (1.2) at time t ≥ t_0. For our purposes, it will be convenient to assume that the vector field associated with system (1.2) is defined not only on the set Ω, but on all of the closed positive cone R^N_+. Throughout this paper, we assume that the following condition holds.

Assumption 2.1. There exists T > 0 such that the transition rates p_ij(t, x) are: continuous and non-negative on [0, T] × R^N_+; continuously differentiable with respect to x on [0, T) × int(R^N_+), with a derivative that admits a continuous extension onto [0, T) × R^N_+; and jointly periodic with period T, that is, p_ij(t + T, x) = p_ij(t, x) for all t, x and all i, j.

Let relint(Ω) denote the relative interior of Ω, that is, relint(Ω) = {x ∈ Ω : x_i > 0 for all i}. Note that if the rates are only defined on x ∈ Ω, with partial derivatives with respect to x_j on relint(Ω) that have continuous extensions to Ω, then they can be extended to R^N_+ so that the conditions in assumption 2.1 hold: for example, by defining them to be constant on rays through the origin and multiplying by a cut-off function χ(|x|_1), where χ is a smooth function with compact support in [0, ∞) satisfying χ(s) = 1 for s = 1, and where |x|_1 denotes the ℓ_1-norm of x.

We now determine the conditions guaranteeing that (1.2) is a cooperative dynamical system. Note that (1.2) can be written as

$$\dot x = f(t, x) := A(t, x)\, x, \qquad (2.2)$$

where A(t, x) is the N × N matrix with entries

$$A_{ij}(t, x) = p_{ji}(t, x) \ \ (i \neq j), \qquad A_{ii}(t, x) = -\sum_{j \neq i} p_{ij}(t, x). \qquad (2.3)$$

The Jacobian of the vector field f is the N × N matrix

$$J(t, x) := \frac{\partial f}{\partial x}(t, x). \qquad (2.4)$$

We can now state our first result.

Theorem 2.2. Suppose that

$$J(t, x) \text{ is a Metzler matrix (i.e. all its off-diagonal entries are non-negative) for all } t \geq 0,\ x \in \Omega. \qquad (2.5)$$

Then, for any t_0 ≥ 0 and any x(t_0) ∈ Ω the solution x(t; t_0, x(t_0)) of (1.2) converges to a periodic solution with period T.

If the rates depend on time, but not on the state, i.e. p_ij = p_ij(t) for all i, j, then the condition in theorem 2.2 always holds, and this yields the following result.

Corollary 2.3. If p_ij = p_ij(t) for all i, j, then for any t_0 ≥ 0 and any x(t_0) ∈ Ω the solution x(t; t_0, x(t_0)) of (1.2) converges to a periodic solution with period T.

Thus, theorem 2.2 describes a technical condition guaranteeing entrainment, and this condition automatically holds in the case where all the rates are functions of time only. If the rates depend on the state, but not on time, then we may apply theorem 2.2 for all T > 0. Thus, the trajectories converge to a periodic solution with an arbitrary period, i.e. a steady state. This yields the following result.

Corollary 2.4.
If p_ij = p_ij(x) for all i, j, and in addition condition (2.5) holds, then for any t_0 ≥ 0 and any x(t_0) ∈ Ω the solution x(t; t_0, x(t_0)) of (1.2) converges to a steady state.

In some applications, it is useful to establish that all trajectories of (1.2) converge to a unique periodic trajectory. Recall that a matrix M ∈ R^{n×n}, with n ≥ 2, is said to be reducible if there exist a permutation matrix P ∈ {0, 1}^{n×n} and an integer 1 ≤ r ≤ n − 1 such that

$$P^\top M P = \begin{bmatrix} B & C \\ 0 & D \end{bmatrix},$$

with B ∈ R^{r×r}, D ∈ R^{(n−r)×(n−r)}, C ∈ R^{r×(n−r)} and 0 the (n − r) × r zero matrix; M is called irreducible if it is not reducible.

Theorem 2.5. Suppose that the conditions of theorem 2.2 hold and that there exists a time t* such that A(t*, x) is an irreducible matrix for all x ∈ Ω. Then (1.2) admits a unique periodic solution γ in Ω, with period T, and every solution x(t; t_0, x(t_0)) with x(t_0) ∈ Ω converges to γ at an exponential rate.

Example 2.6. Consider the system in example 1.3. This is of the form (2.2) with

$$A(t) = \begin{bmatrix} -p_{12}(t) & p_{21}(t) \\ p_{12}(t) & -p_{21}(t) \end{bmatrix}.$$

If there exists a time t* such that p_12(t*), p_21(t*) > 0, then A(t*) is irreducible. We conclude that, in this case, all the conditions in theorem 2.5 hold, so the system admits a unique T-periodic solution γ and every trajectory converges to γ. This agrees of course with the results of the analysis in example 1.3 above, where we arrived at the same conclusion under the slightly weaker assumption that p_12(t*) + p_21(t*) > 0.

The next section describes an application of our results to two important models.

Entrainment in totally asymmetric simple exclusion process

The totally asymmetric simple exclusion process (TASEP) is a stochastic model of particles hopping along a one-dimensional chain. A particle at site k hops to site k + 1 (the next site on the right) after an exponentially distributed waiting time with rate h_k, provided that site k + 1 is not occupied by another particle. This simple exclusion property generates an indirect link between the particles and makes it possible to model the formation of traffic jams. Indeed, if a particle 'gets stuck' for a long time in the same site, then other particles accumulate behind it. At the left end of the chain particles enter with a certain entry rate α > 0, and at the right end particles leave with a rate β > 0 (figure 1). As pointed out in the introduction, TASEP has become a standard tool for modelling ribosome flow during translation, and is a paradigmatic model for the statistical mechanics of non-equilibrium systems. We note that in the classical TASEP model the rates α, β and h_i are constants, but several papers have considered TASEP with periodic rates [29-31], which can be used, for example, as models for vehicular traffic controlled by periodically varying traffic signals. It was shown in [32] that the dynamic mean-field approximation of TASEP, called the ribosome flow model (RFM), entrains. However, the RFM is not a master equation, and the proof of entrainment in [32] is based on different ideas. For more on the analysis of the RFM, see e.g. [33-36].

For a chain of length n, denoting an occupied site by 1 and a free site by 0, the set of possible configurations is {0, 1}^n, and thus the number of possible configurations is N = 2^n. The dynamics of TASEP can be expressed as a master equation with transition rates p_ij that depend on the values α, β and h_i, i = 1, . . . , n. For the sake of simplicity, we will show this in the specific case n = 2, but all our results below hold for any value of n. When n = 2, the possible configurations of particles along the chain are C_1 := (0, 0), C_2 := (0, 1), C_3 := (1, 0) and C_4 := (1, 1). Let x_i(t) denote the probability that the system is in configuration C_i at time t; for example, x_1(t) is the probability that both sites are empty at time t.
Then x1 may decrease (increase) due to the transition C1 → C3 [C2 → C1], i.e. when a particle enters the first site (a particle in the second site hops out of the chain). This gives

ẋ1 = −α(t)x1 + β(t)x2.

Similar considerations for all configurations lead to the master equation ẋ = Ax, with

A(t) = [ −α(t)         β(t)        0         0
           0      −(α(t)+β(t))   h1(t)       0
          α(t)          0       −h1(t)      β(t)
           0           α(t)       0        −β(t) ].

If the entry, exit and hopping rates are time dependent and periodic, all with the same period T, one easily sees that the resulting master equation satisfies assumption 2.1, so corollary 2.3 implies that every solution converges to a periodic solution with period T. Moreover, if there exists a time t* such that α(t*), β(t*), h1(t*) > 0, then A(t*) is irreducible. Hence, the conditions of theorem 2.5 are also satisfied, so we conclude that the periodic solution is unique and convergence takes place at an exponential rate. It is not difficult to show that the same holds for TASEP with any length n.

Figure 1. The TASEP model includes particles randomly hopping along a chain of n sites. Note that the particle in site 1 cannot hop forward because site 2 contains a particle.

Example 3.1. When n = 3, the possible particle configurations are C1 := (0, 0, 0), C2 := (0, 0, 1), C3 := (0, 1, 0), C4 := (0, 1, 1), ..., C8 := (1, 1, 1). Let x_i(t) denote the probability that the system is in configuration C_i at time t. The TASEP master equation in this case is ẋ = Ax, where A(t) is the 8 × 8 matrix built from the rates α, β, h1, h2 in the same way as in the case n = 2. We simulated this system with the rates α(t) = 1 + cos(t), β(t) = 1 + cos(t + π), h1 = 1/2, h2 = 1/4 and initial condition x(0) = [1/8 · · · 1/8]ᵀ. Note that all the rates here are jointly periodic with period 2π. Figure 2 depicts x1(t) (black square), x4(t) (red asterisk) and x8(t) (blue circle) as a function of t (we depict only three of the x_i's to avoid cluttering the figure). Note that as the entry rate α(t) is maximal and the exit rate β(t) is minimal at t = 0, the probability x8(t) [x1(t)] of being in state (1, 1, 1) [(0, 0, 0)] quickly increases (decreases) near t = 0. As time progresses, the probabilities converge to a periodic pattern with period 2π.

Entrainment of the probabilities x_i has consequences for other quantities of interest in statistical mechanics. For instance, an important quantity is the occupation density, i.e. the probability that site k is occupied, often denoted by τ_k, cf. [37,38]. Denoting the kth component of the configuration C_i ∈ {0, 1}^n by C_{i,k}, a straightforward computation reveals that

τ_k(t) = Σ_{i=1}^{N} C_{i,k} x_i(t).

It is thus immediate that the occupation densities also converge to a unique periodic solution. This phenomenon has already been observed empirically in [29], which studied a semi-infinite and finite TASEP coupled at the end to a reservoir with a periodic time-varying particle density. This models, for example, a traffic lane ending with a periodically varying traffic light. The simulations in [29] suggest that this leads to the development of a sawteeth density profile along the chain, and that 'The sawteeth profile is changing with time, but it regains its shape after each complete period...' [29, p. 011122-2] (see also [30,31] for some related considerations). Our results can also be interpreted in terms of the particles along the chain in TASEP. As the expectation of the occupation densities τ_k converges to a periodic solution, this means that, in the long term, the TASEP dynamics 'fluctuates' around a periodic 'mean' solution (see e.g. the simulation results depicted in fig. 5 in [32]).
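The construction of A(t) generalizes mechanically to any chain length. The following Python sketch (our illustration, not code from the paper; it assumes NumPy and SciPy are available) builds the TASEP master-equation matrix from the entry/hop/exit rules, checks the two structural properties used above (Metzler sign pattern and zero column sums), and repeats a simulation in the spirit of example 3.1; the vanishing ℓ1-gap between x(t) and x(t + 2π) illustrates entrainment.

```python
import itertools
import numpy as np
from scipy.integrate import solve_ivp

def tasep_matrix(t, n, alpha, beta, h):
    """Master-equation matrix A(t) for TASEP on n sites.

    Configurations are tuples in {0,1}^n. Each allowed transition with
    rate r adds +r to the target row and -r to the source diagonal, so
    off-diagonal entries are nonnegative (Metzler) and columns sum to 0."""
    configs = list(itertools.product((0, 1), repeat=n))
    index = {c: i for i, c in enumerate(configs)}
    A = np.zeros((len(configs), len(configs)))

    def add(src, dst, rate):
        A[index[dst], index[src]] += rate
        A[index[src], index[src]] -= rate

    for c in configs:
        if c[0] == 0:                              # entry at site 1
            add(c, (1,) + c[1:], alpha(t))
        if c[-1] == 1:                             # exit at site n
            add(c, c[:-1] + (0,), beta(t))
        for k in range(n - 1):                     # hop k -> k+1 if free
            if c[k] == 1 and c[k + 1] == 0:
                d = list(c); d[k], d[k + 1] = 0, 1
                add(c, tuple(d), h[k](t))
    return A

# Rates of example 3.1 (n = 3), jointly 2*pi-periodic.
n = 3
alpha = lambda t: 1 + np.cos(t)
beta = lambda t: 1 + np.cos(t + np.pi)
h = [lambda t: 0.5, lambda t: 0.25]

A0 = tasep_matrix(0.0, n, alpha, beta, h)
assert np.all(A0 - np.diag(np.diag(A0)) >= 0)      # Metzler
assert np.allclose(A0.sum(axis=0), 0.0)            # probability conserved

x0 = np.full(2 ** n, 1 / 2 ** n)                   # uniform initial condition
sol = solve_ivp(lambda t, x: tasep_matrix(t, n, alpha, beta, h) @ x,
                (0.0, 20 * np.pi), x0, dense_output=True,
                rtol=1e-9, atol=1e-12)

gap = np.abs(sol.sol(20 * np.pi) - sol.sol(18 * np.pi)).sum()
print(f"l1 gap between x(20*pi) and x(18*pi): {gap:.1e}")   # ~ 0: entrained
```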
Moreover, in [30,31] it was found for closely related models that the limiting periodic density profiles (whose existence is also guaranteed by our results) have an interesting structure that depends in a non-trivial way on the frequency of the transition rates.

Entrainment in a stochastic susceptible-infected-susceptible model

The stochastic SIS model plays an important role in mathematical epidemiology [39]. But, as noted in [40], it is usually studied under the assumption of fixed contact and recovery rates. Here, we apply our results to prove entrainment in an SIS model with periodic rates. Consider a population of size N divided into susceptible and infected individuals. Let S(t) [I(t)] denote the size of the susceptible (infected) part of the population at time t, so that S(t) + I(t) ≡ N. We assume two mechanisms for infection. The first is by contact with an infected individual and depends on the contact rate a(t). The second is by some external agent (modelling, say, an insect bite) with rate c(t). The recovery rate is b(t) (figure 3). We assume that a(t), b(t) and c(t) are continuous and take non-negative values for all time t.

Figure 3. The stochastic SIS model: susceptible individuals S(t) become infected by contact (rate a(t)) or by an external agent (rate c(t)), and infected individuals I(t) recover (rate b(t)).

If I(t) = n (so S(t) = N − n), then the probability that one individual recovers in the time interval [t, t + dt] is b(t)n dt + o(dt), and the probability for one new infection to occur in this time interval is a(t)n((N − n)/N) dt + c(t)((N − n)/N) dt + o(dt). For n ∈ {0, ..., N}, let P_n(t) denote the probability that I(t) = n. This yields the master equation

Ṗ_n = (a(t)(n − 1) + c(t))q_{n−1}P_{n−1} + b(t)(n + 1)P_{n+1} − ((a(t)n + c(t))q_n + b(t)n)P_n, n ∈ {0, ..., N}, (3.1)

where q_i := 1 − i/N and terms with indices outside {0, ..., N} are understood to be zero. Writing (3.1) in vector form as Ṗ = M(t)P, the matrix M(t) is (N + 1) × (N + 1) and tridiagonal. Note that M(t) is Metzler, as a(t), b(t) and c(t) are non-negative for all t. Thus, theorems 2.2 and 2.5 yield the following result.

Corollary 3.2. If a(t), b(t) and c(t) are all T-periodic, then any solution of (3.1) emanating from Ω converges to a T-periodic solution. If, in addition, there exists a time t* such that

b(t*) > 0 and c(t*) > 0, (3.2)

then there exists a unique T-periodic solution γ in Ω and every solution converges to γ.

Note that if the irreducibility condition (3.2) does not hold then the system may have several periodic solutions. To see this, consider, for example, the case b(t) = c(t) ≡ 0. Let e_i ∈ R^{N+1} denote the vector with entry i equal to one and all other entries zero. Then both x(t) ≡ e_1 and x(t) ≡ e_{N+1} are (periodic) solutions of the dynamics.

Discussion

In his 1929 paper on periodicity in disease prevalence, Soper [41] states: 'Perhaps no events of human experience interest us so continuously, from generation to generation, as those which are, or seem to be, periodic.' Soper also raised the question of whether the observed periodicity in epidemic outbreaks is the result of a 'seasonal change in perturbing influences, such as might be brought about by school break-up and reassembling, or other annual recurrences?' In modern terms, this amounts to asking whether the solutions of the system describing the dynamics of the epidemics entrain to periodic variations in the transmission parameters.

Here, we studied entrainment for dynamical systems described by a master equation. We considered a rather general formulation where the transition rates may depend on both time and state. Also, we did not assume any symmetry conditions (e.g. detailed balance conditions [3, ch. V]) on the rates. We also note that this formulation implies similar results for nonlinear systems. Indeed, consider the time-varying nonlinear system

ẋ = f(t, x), (4.1)

and assume that f(t, 0) = 0 for all t. Let J(t, x) := (∂f/∂x)(t, x) denote the Jacobian of the vector field. Then f(t, x) = (∫₀¹ J(t, sx) ds)x, so (4.1) has the form (2.3) with A(t, x) := ∫₀¹ J(t, sx) ds, and if this matrix satisfies the conditions above then the results above can be applied to (4.1).
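For concreteness, the SIS master equation (3.1) can be simulated in the same way. The sketch below (again ours, with arbitrary 2π-periodic rates chosen only for illustration) builds the tridiagonal matrix M(t) from the birth and death rates and shows that the expected number of infected individuals settles into a 2π-periodic pattern, as corollary 3.2 predicts; note that the chosen rates satisfy (3.2).

```python
import numpy as np
from scipy.integrate import solve_ivp

def sis_matrix(t, N, a, b, c):
    """Tridiagonal master-equation matrix M(t) for the SIS chain.

    State n = number of infected; births n -> n+1 occur at rate
    (a(t)*n + c(t)) * (1 - n/N), deaths n -> n-1 at rate b(t)*n."""
    M = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        q = 1 - n / N
        birth = (a(t) * n + c(t)) * q
        death = b(t) * n
        if n < N:
            M[n + 1, n] += birth
            M[n, n] -= birth
        if n > 0:
            M[n - 1, n] += death
            M[n, n] -= death
    return M

N = 30
a = lambda t: 2.0 + np.sin(t)            # contact rate
b = lambda t: 1.0 + 0.5 * np.cos(t)      # recovery rate, b(t) > 0
c = lambda t: 0.2 + 0.1 * np.sin(t)      # external infection rate, c(t) > 0

P0 = np.zeros(N + 1); P0[0] = 1.0        # initially nobody is infected
sol = solve_ivp(lambda t, P: sis_matrix(t, N, a, b, c) @ P,
                (0.0, 12 * np.pi), P0, dense_output=True,
                rtol=1e-8, atol=1e-11)

mean_infected = lambda t: np.arange(N + 1) @ sol.sol(t)
print(mean_infected(10 * np.pi), mean_infected(12 * np.pi))  # nearly equal
```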
We proved that entrainment indeed holds under quite mild technical conditions. This follows from the fact that the master equation is a cooperative dynamical system admitting a first integral. Owing to the prevalence of the master equation as a model for natural and artificial phenomena, we believe that this result will find many applications. To demonstrate this, we described two applications of our results: a proof of entrainment in TASEP and in a stochastic SIS model. The rigorous proof that the solutions of the master equation entrain is of course a necessary first step in studying the structure of the periodic trajectory (or trajectories), and its dependence on various parameters. Indeed, in many applications it is of interest to obtain more information on the periodic trajectory, e.g. its amplitude. Of course, one cannot expect in general to obtain a closed-form description of the limit cycle. However, for contractive dynamical systems there do exist efficient methods for obtaining a closed-form approximation of the limit cycle accompanied by explicit error bounds [23]. Developing a similar approach for the attractive limit cycle of the master equation may be an interesting topic for further research. In the specific case of TASEP with fixed rates, there exists a powerful representation of the steady state in terms of a product of matrices [7,37]. It may be of interest to try and represent the periodic steady state using a similar product, but with matrices with periodic entries. This could be used in particular to study the effects of periodic perturbations to the boundary-induced phase transitions that have been observed for TASEP in [42]. Data accessibility. All the relevant data are contained in this paper. Authors' contributions. M.M., L.G. and T.K. performed the research and wrote the paper together. Competing interests. We declare we have no competing interests. Funding. This research is supported, in part, by a research grant from the Israeli Science Foundation (ISF grant 410/15). Acknowledgements. We thank Yoram Zarai for helpful comments. L.G. and T.K. are grateful to Joachim Krug for very helpful discussions on interacting particle systems and for pointing out a number of references. Appendix A. Proofs of theorems 2.2 and 2.5 The proofs of theorems 2.2 and 2.5 are based on known tools from the theory of monotone dynamical systems admitting a first integral with a positive gradient (e.g. [43][44][45]). We present in this appendix a self-contained proof taking full advantage of the technical simplifications that our specific setting permits. This, in particular, allows us to prove that the results hold on the closed state-space and also that irreducibility at a single time point is enough to guarantee convergence to a unique periodic solution. Without loss of generality we always assume that the initial time is t 0 = 0. It is convenient to work with the 1 vector norm |x| 1 = i |x i |. We begin by introducing some notation. First recall the notation for the closed positive cone. Define a set of vector fields by and has a continuous extension that is, the set of vector fields in F that are also T-periodic. It is straightforward to check that f (t, x) = A(t, x)x with A defined by (2.3) belongs to F T if assumption 2.1 and the assumptions of theorem 2.2 hold. Therefore, theorem 2.2 follows from the following result. The next result is a generalization of theorem 2.5. Theorem A.2. 
If f ∈ (F T ∩ F Ω irr ) then the differential equationẋ = f (t, x) admits a unique T-periodic solution γ : R → Ω. Moreover, there exists α > 0 such that for any initial condition x 0 ∈ Ω the corresponding solution x(t; x 0 ) satisfies i.e. the solution converges to γ with exponential rate α. Complete proofs of theorems A.1 and A.2 are provided in the following seven subsections. We begin by showing in lemma A.4 that solutions ofẋ = f (t, x), f ∈ F that start in the closed state-space R N + are unique and remain in R N + for all positive times. In the second subsection, we prove that for the subset of linear vector fields f in F the flow is cooperative, and non-expansive or even contractive in the case of irreducibility. The latter property is then generalized to the nonlinear setting (theorem A.8), which is enough to prove theorem A.2 in subsection A.4. The cooperative behaviour for nonlinear vector fields is stated in theorem A.9. In subsection A.6, we argue that the non-expansiveness of the flow together with the existence of a fixed point in the ω-limit set of the period map implies the asymptotic periodicity of the solution. The proof that such a fixed point exists is deferred to the final subsection. It uses the cooperative behaviour of the flow as well as the fact that the first integral H has a positive gradient, i.e. ∇H ∈ int(R N + ). We have established thatx remains in the compact set {z ∈ R N + | H(z) = H(x 0 )} and it follows that the solutionx(t; x 0 ) exists for all t ≥ 0. As the solutions x( · ; x 0 ) andx( · ; x 0 ) coincide on R N + , this completes the proof. A.2. Linear time-varying systems The properties that are essential in the proofs of our main results are cooperativeness, non-expansiveness and contractivity of the flow. As it turns out, it is convenient to first prove these properties for linear time-varying systems. Let Then the initial value problemẋ = A(t)x, x(0) = x 0 ∈ R N + , has a unique solution x( · ; x 0 ) : [0, ∞) → R N that satisfies the following properties: (a) x(t; x 0 ) ∈ R N + and H(x(t; x 0 )) = H(x 0 ) for all t ≥ 0. (b) If x j (t * ; x 0 ) > 0 for some j ∈ {1, . . . , N} and t * ≥ 0, then x j (t; x 0 ) > 0 for all t ≥ t * . (c) If x 0 = 0 and A(t * ) is irreducible for some t * ≥ 0, then x(t; x 0 ) ∈ int(R N + ) for all t > t * . Proof. Existence and uniqueness of the solution are immediate from the linearity and continuity of A. The proof of (a) follows from lemma A.4 and the fact that f (t, x) := A(t)x belongs to F for each A ∈ A. To prove (b), assume that x j (t * ; x 0 ) > 0. Let y(t) := x j (t; x 0 ). Then y solves the scalar initial value probleṁ Thus, letting q(t) := t t * a jj (u) du yields To prove property (c), first note that irreducibility of A(t * ) implies that there exists δ > 0 such that This follows from the fact that irreducibility is equivalent to the associated adjacency graph being strongly connected, i.e. certain edges have positive weights, and the continuity of A(t). Pick x 0 ∈ R n + \ {0}. We consider two cases. with U ∈ R k×k and Z ∈ R (N−k)×(N−k) . As A(t * ) is Metzler and irreducible, every entry of Y is non-negative and at least one entry is positive, so there exists j > k such thatẋ j (t * ; x 0 ) > 0. Therefore, at least k + 1 entries of x(t; x 0 ) are positive for t > t * . Now an inductive argument and using (A 4) completes the proof. As the columns of Φ A (t) are x(t; e j ), where e j ∈ R N + denotes the jth canonical unit vector, the next result follows from properties (a) and (c) in lemma A.5. 
We now use this to prove non-expansiveness with respect to the 1 -norm and contractivity in the case of irreducibility. The first step is to note that stochastic matrices have useful properties with respect to this norm. Proof. The first statement follows from To prove the second statement, pick x = 0 such that H(x) = 0. Then there exist k 1 , k 2 ∈ {1, . . . , N} such that x k 1 < 0 < x k 2 . Thus, if Q ∈ S + then for any j ∈ {1, . . . , N}, N k=1 Q jk x k < N k=1 Q jk |x k | because the sum on the left contains both positive and negative terms. Now arguing as in (A 5) completes the proof. A.4. Proof of theorem A.2 We can now prove theorem A.2. We note that the proof proceeds without the explicit use of the cooperative behaviour of dynamical systems (though we will use this property for the proof of theorem A.1; see subsections A.5 and A.6). Note that for f ∈ F T a solutionγ = f (t, γ ), γ : In other words, P T (a) is the value of x(T) for the initial condition x(0) = a. Observe that P T (Ω) ⊆ Ω (as H is a first integral ofẋ = f (t, x)). Moreover, for f ∈ (F T ∩ F Ω irr ) there exists t * ∈ [0, T) such that J(t * , x) is irreducible for all x ∈ Ω. Then T > t * , so theorem A.8(b) implies that P T is Lipschitz on the closed set Ω with Lipschitz constant M T < 1. The Banach fixed point theorem implies that P T has a unique fixed point in Ω, that is, there exists a unique T-periodic function γ : R → Ω that solvesγ = f (t, γ ). Fix α > 0 such that max{ 1 2 , M T } ≤ e −αT . ( A 9 ) Pick x 0 ∈ Ω and t ≥ 0, and let k ∈ N 0 be such that kT ≤ t ≤ (k + 1)T. Then theorem A.8 yields where the last inequality follows from (A 9). Thus, every solution x(t; x 0 ) converges to the unique periodic solution γ (t) at an exponential rate.
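The mechanics of this proof are easy to see numerically. The sketch below (our illustration, for the two-state chain of example 2.6 with arbitrary 2π-periodic rates) integrates the fundamental solution over one period to obtain the period map P_T as a matrix Φ, verifies that Φ is column-stochastic (hence nonexpansive in the ℓ1-norm) with a contraction factor M_T < 1 on the set H(x) = 0, and runs the Banach iteration to locate γ(0).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state chain of example 2.6: xdot = A(t)x, with rates that are
# 2*pi-periodic and simultaneously positive, so theorem 2.5 applies.
p12 = lambda t: 1.0 + np.sin(t) ** 2
p21 = lambda t: 0.5 + np.cos(t) ** 2
A = lambda t: np.array([[-p12(t), p21(t)],
                        [p12(t), -p21(t)]])
T = 2 * np.pi

def monodromy():
    """Matrix of the period map P_T: its columns are the solutions at
    time T started from the canonical unit vectors at time 0."""
    cols = []
    for j in range(2):
        e = np.zeros(2); e[j] = 1.0
        sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e,
                        rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)

Phi = monodromy()
print(Phi.sum(axis=0))                    # columns sum to 1: stochastic

v = np.array([1.0, -1.0])                 # a direction with H(v) = 0
print(np.abs(Phi @ v).sum() / 2)          # contraction factor M_T < 1

x = np.array([1.0, 0.0])                  # Banach iteration of P_T
for _ in range(60):
    x = Phi @ x
print(x)                                  # the fixed point gamma(0)
```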
The Roles of GSK-3β in Regulation of Retinoid Signaling and Sorafenib Treatment Response in Hepatocellular Carcinoma Rationale: Glycogen synthase kinase-3β (GSK-3β) plays key roles in metabolism and many cellular processes. It was recently demonstrated that overexpression of GSK-3β can confer tumor growth. However, the expression and function of GSK-3β in hepatocellular carcinoma (HCC) remain largely unexplored. This study is aimed at investigating the role and therapeutic target value of GSK-3β in HCC. Methods: We firstly clarified the expression of GSK-3β in human HCC samples. Given that deviated retinoid signalling is critical for HCC development, we studied whether GSK-3β could be involved in the regulation. Since sorafenib is currently used to treat HCC, the involvement of GSK-3β in sorafenib treatment response was determined. Co-immunoprecipitation, GST pull down, in vitro kinase assay, luciferase reporter and chromatin immunoprecipitation were used to explore the molecular mechanism. The biological readouts were examined with MTT, flow cytometry and animal experiments. Results: We demonstrated that GSK-3β is highly expressed in HCC and associated with shorter overall survival (OS). Overexpression of GSK-3β confers HCC cell colony formation and xenograft tumor growth. Tumor-associated GSK-3β is correlated with reduced expression of retinoic acid receptor-β (RARβ), which is caused by GSK-3β-mediated phosphorylation and heterodimerization abrogation of retinoid X receptor (RXRα) with RARα on RARβ promoter. Overexpression of functional GSK-3β impairs retinoid response and represses sorafenib anti-HCC effect. Inactivation of GSK-3β by tideglusib can potentiate 9-cis-RA enhancement of sorafenib sensitivity (tumor inhibition from 48.3% to 93.4%). Efficient induction of RARβ by tideglusib/9-cis-RA is required for enhanced therapeutic outcome of sorafenib, which effect is greatly inhibited by knocking down RARβ. Conclusions: Our findings demonstrate that GSK-3β is a disruptor of retinoid signalling and a new resistant factor of sorafenib in HCC. Targeting GSK-3β may be a promising strategy for HCC treatment in clinic. Retinoids are very important for hepatic homeostasis, which effects are mediated by retinoic acid receptors (RARα, RARβ and RARγ) and retinoid X receptors (RXRα, RXRβ and RXRγ) [13]. Deregulated metabolism of retinoids and altered expression of their receptors are implicated in HCC development and progression [14,15]. GSK-3β can inhibit RARα-dependent differentiation of myeloid leukemia [16,17]. Paradoxically, GSK-3β protects RXRα from calpain-mediated truncation in certain solid tumors [18]. Implication of GSK-3β in retinoid signaling and HCC development need further explore. We characterized here RXRα as a direct substrate for GSK-3β. GSK-3β phosphorylates RXRα and impairs its activation of RARβ promoter. Clinically, GSK-3β is overexpressed and associated with RARβ reduction in a majority of HCC. RARβ mediates retinoid action but is frequently silenced during carcinogenesis [14]. Thus, GSK-3β may confer HCC through interfering RARβ-mediated retinoid signalling. This prompted us to further determine whether targeting GSK-3β/RARβ could be of therapeutic significance in HCC. Sorafenib, a multi-kinase inhibitor, is currently used to treat HCC [19,20]. However, its therapeutic resistance remains a significant problem in clinic [21,22]. Interestingly, sorafenib can stimulate GSK-3β activity in vitro and in vivo. 
We demonstrated that GSK-3β regulation of RARβ is involved in sorafenib resistance in HCC. HCC samples HCC samples (tumors and para-tumor tissues) were collected from The 174 th Hospital affiliated to Xiamen University. The tumors were histologically diagnosed as described [23,24]. All the use of human samples and study protocols were approved by the Hospital Ethics Committee. All patients signed an informed consent form in prior to sample collection. The clinical data were provided in Table S1. Cell culture and transfection HepG2 (HB-8065) and HEK293T (CRL-11268) were purchased from ATCC, while SMMC-7721, Bel-7402, and QGY-7703 from Institute of Biochemistry and Cell Biology (SIBS, CAS). All cell lines were obtained between 2008 and 2013 and authenticated by the vendors. The newly received cells were expanded and aliquots of less than 10 passages were stored in liquid nitrogen. All cell lines were kept at low passage, returning to original frozen stocks every 6 months. During the course of this study, cells were thawed and passaged within 2 months in each experiment. QGY-7703 was cultured in RPMI-1640 medium, while other cell lines were grown in Dulbecco's Modified Eagle's Medium. The cultured cells were supplemented with 10% fetal bovine serum. Sub-confluent cells with exponential growth were used throughout the experiments. Transfections were carried out by using Lipofectamine 2000 according to manufacturer's instructions. Generation of stable lines GSK-3β stable lines were generated with retroviral vectors. Briefly, HEK293T cells were transfected with PCDH-puro-GSK-3β together with envelope plasmid VsVg (addgene, #8454) and packaging plasmid psPAX2 (addgene, #12260). Retroviral supernatant was harvested at 48 h after initial plasmid transfection and then infected various HCC cell lines. Stable cell pools were selected with 1 μg/ml puromycin (Amresco). The expression efficiency was determined by Western blotting and RT-PCR. MTT assay and flow cytometry Cell viability was performed with MTT method as described [15]. For apoptotic analysis, control and treated cells were harvested and washed with precooled PBS twice. The cells were then stained with Annexin V-FITC and propidium iodide (PI) at room temperature for 15 min in the dark. Apoptotic cells were quantitated with flow cytometer analysis (Thermo, Attune NxT). Dual-luciferase reporter assays Cells were co-transfected with pGL6-βRARE firefly luciferase reporter constructs, renilla luciferase expression vector (renilla), and ΗΑ-RARα/myc-RXRα in the presence or absence of GSK-3β. The cells were treated with 1 µM 9-cis-RA combined with or without 5 µM tideglusib for 20 h. Cell lysates were measured for luciferase activities. The fluorescence intensity was detected in Multiskan Spectrum (PerkinElmer, USA). The renilla luciferase activity was used to normalize for transfection efficiency. Co-immunoprecipitation HCC cells and tumor tissues were lysed and sonicated in 500 µL lysis buffer containing 150 mM NaCl, 100 mM NaF, 50 mM Tris-HCl (pH 7.6), 0.5% NP-40 and 1 mM PMSF. The lysates were incubated with antibodies against endogenous or tags of ectopic proteins and purified with protein A/G beads. For detection of GSK-3β-associated RXRα phosphorylation, HCC tumor lysates were subjected to two rounds of immunoprecipitation (IP). The first round of IP (IP1) was performed with anti-RXRα (D20), which product was then subjected to secondary IP (IP2) with anti-GSK-3β antibody. 
The lysates or IP2 samples were separated by 10% SDS-PAGE and blotted with anti-p-S/T antibody. In vitro kinase assays GFP-RXRα was expressed in and purified from HepG2 cells with immunoprecipitation (IP) using anti-GFP antibody. The cell lysates and IP products were incubated with bacterially purified His-GSK-3β protein in a kinase reaction buffer (pH 7.5, 20 mM Tris-HCl, 10 mM MgCl 2 and 100 mM ATP) at 37°C for 45 min. The reactions were stopped by boiling the samples in loading buffer for 10 min and then separated with 10% SDS-PAGE. GSK-3β-induced RXRα phosphorylation was detected by anti-phospho-ser/thr (p-S/T) antibody. Animal experiments Male BALB/c nude mice were injected with HepG2/3β cells (2×10 6 cells) subcutaneously in the posterior flanks and treated with 10 mg/kg sorafenib, 2 mg/kg 9-cis-RA, and 5 mg/kg tideglusib every other day at Day 3 of post-implantation. After three weeks of treatment, the mice were sacrificed. The tumors and various organ tissues were collected for further analysis. Tumor volume was measured twice weekly with a caliper. Tumor samples were immunoblotted with antibodies against RARβ, GSK-3β, p-GSK-3β and GAPDH. Paraffin sections were immunostained using antibodies against Ki-67 and cleaved caspase 3 with DAB Detection Kit (Polymer) (MXB biotechnologies, Fuzhou, China). The expression of p-GSK-3β and RARβ was determined with fluorescent immunostaining. The sections were co-stained with DAPI and detected by Laser Scanning Confocal Microscope (Zeiss). The study protocols were approved by the Institutional Animal Care and Use Committee of University of Xiamen University. Statistical analysis Data were represented as mean ± standard deviation (SD) or median ± SEM. The statistical significances of differences were determined using an analysis of variance or Student t test. A P value of <0.05 was considered as significant. All data were acquired in at least three independent experiments. GSK-3β is overexpressed and associated with RARβ reduction in HCC To clarify the role of GSK-3β in HCC, we firstly collected HCC samples (n=18) to examine GSK-3β expression. Our results showed that GSK-3β was upregulated in 66.7% of tumors (≥1.5-fold increase) compared to adjacent liver tissues (Fig. 1A, 1B and Table S1). This was consistent with other reports [25,26]. In most tumors, increased GSK-3β expression remained significant active (with low Ser 9 phosphorylation level) (Fig. 1A). We noted that high GSK-3β was not correlated to downregulation of total and nuclear β-catenin ( Fig. S1A and B), suggesting that the tumor-suppressing effect of GSK-3β via Wnt/β-catenin was lost in HCC. To study whether GSK-3β could confer HCC growth, we overexpressed or knocked down GSK-3β with various HCC cell lines. We showed that overexpression of GSK-3β could strongly promote colony-forming capability of HCC cells, while siRNA-mediated downregulation of GSK-3β resulted in reduced colony formation (Fig. S1C). The role of GSK-3β in HCC was further strengthened in two HepG2/siβ clones (Fig. S1D). Importantly, GSK-3β-mediated tumor growth and proliferation was confirmed in in vivo experiment (Fig. S1E-G). Interestingly, tumor-associated GSK-3β was inversely correlated with RARβ expression ( Fig. 1A and C). To further evaluate the possible clinical relevance of GSK-3β/RARβ, we analyzed GEPIA (Gene Expression Profiling Interactive Analysis) database (http://gepia.cancer-pku.cn/). Agreement with our results, GSK-3β was higher and RARβ lower in HCC than adjacent liver tissue (Fig. 
1D), which phenomenon was also observed in many other malignant tumors ( Supplementary Fig. S2). Kalpen-Meier survival plot showed that the patients with high GSK-3β had a shorter overall survival (OS) than those with low GSK-3β (Fig. 1E). Together, our results suggest that GSK-3β play a role in RARβ regulation and HCC development. We thus proceeded to map GSK-3β-mediated phosphorylation site on RXRα. Deletion mutation analysis showed that GSK-3β could phosphorylate RXRα/ΔN20, ΔN40 and ΔN60, which effect was impaired in ΔN80 and lost in ΔN100, indicating that the putative phosphorylation site is located between 60~100 aa (Fig. 4A). To identify the phosphorylation site, we introduced Ala point mutation into these putative sites. Our results showed that RXRα phosphorylation by GSK-3β was kept at S49A and S66A, but abolished in S78A and S78A-containing mutations ( Fig. 4B and C), thus identifying that Ser 78 is the site for phosphorylation by GSK-3β. Since GSK-3β recognizes sequence motif in the context of S/T-X-X-X-S/T, Thr 82 was expected as a priming phosphorylation site. We thus also introduced Asp mutation into Ser 78 (S78D) and Thr 82 (T82D) to mimic their phosphorylation. As a result, GSK-3β-mediated RXRα phosphorylation was abolished in S78A and T82A, but retained in S78D and T82D (Fig. 4D). The upper-shifted band seen in S78D was due to its acidic carboxylic group-contained aspartic acid, which was unrelated to the activity of GSK-3β [28]. Thus, Thr 82 phosphorylation primes RXRα for subsequent phosphorylation of Ser 78 by GSK-3β. This phosphorylation event was confirmed with anti-p-S/T antibody ( Fig. 4E and data not shown). RXRα could also form dimer with itself. We thus also study the effect of GSK-3β on 9-cis-RA-induced TREpal luciferase reporter activity, which expression was driven by RXRα:RXRα. We showed that overexpression of GSK-3β could strongly inhibit the formation of RXRα homodimer, which could be rescued when GSK-β was inactivated by tideglusib ( Fig. S3F and S3G). Thus, our results demonstrated that GSK-3β could impair the dimeric capacity and transcriptional activity of RXRα. RARβ and p21, two direct target genes of RXRα [29,30], could be induced by 9-cis-RA. Such induction was inhibited when overexpression of GSK-3β ( Fig. 5F and Fig. S4A). GSK-3β-mediated silence of RARβ and p21 could be relieved when GSK-3β was inactivated by tideglusib or LiCl (Fig. 5G and Fig. S4C). In contrast, 9-cis-RA and LiCl alone only played minor role in modulating the expression of p27, Cyclin D1 and Cyclin B1, all of which do not contain RXRα binding sites on their promoters (Fig. S4C). Interestingly, combined treatment of 9-cis-RA and LiCl could synergistically induce expression of these genes (Fig. S4C). Biologically, sorafenib resistance was observed in various GSK-3β stable cell lines compared to their vector-transfected counterparts ( Fig. 6F and Fig. S6A). Sorafenib response was reestablished in HepG2/3β when GSK-3β activity was inhibited by tideglusib. The enhancement of sorafenib response regarding its anti-proliferation activity (Fig. 6F) and anti-colony formation (Fig. S6B) were achieved by co-treatment of 9-cis-RA and tideglusib, which combination could effectively induce RARβ expression and PARP cleavage (Fig. 6G). When RARβ was silenced by specific siRNA, the dose-dependent effect of sorafenib on inducing PARP cleavage was inhibited even in the presence of 9-cis-RA/tideglusib (Fig. 6G). 
We then used flow cytometry to evaluate the synergy of 9-cis-RA/tideglusib on enhancing the apoptotic effect of sorafenib. Sorafenib could alone induce 22.1% apoptotic cell death, which effect was promoted to 50.8% by combining with 9-cis-RA/tideglusib. The enhancement of 9-cis-RA/tideglusib on sorafenib apoptotic response was impaired when silencing RARβ (Fig. 6H, 6I and Fig. S6C). Thus, our results demonstrated that GSK-3β-mediated RARβ inhibition was responsible for sorafenib resistance in HCC cells. HepG2/3β stable cells were transfected with pGL6-βRARE, Renilla, HA-RARα and myc-RXRα. After 24 h transfection, the cells were pretreated with vehicle or with different concentrations of tideglusib (2 µM, 5 µM) for 1 h followed by 1µM 9-cis-RA for 20 h. Luciferase activities were similarly detected. ** p<0.01 (vs respective control); ## p<0.01 (GSK-3β vs mock transfection). (C) HepG2 cells were transfected with vector or Flag-GSK-3β in combination with myc-RXRα and HA-RARα for 36 h. The cells were treated with or without 1 µM 9-cis-RA for 6 h. The lysates were immunoprecipitated with anti-myc tag and blotted with anti-HA and anti-myc antibodies. (D) HepG2/3β cells were transfected with siGSK-3β or scramble (500 pmol in 10 cm dish) for 48 h. The cells were then treated with or without 1 µM 9-cis-RA for 6 h. Co-IP was performed with anti-RARα and blotted with anti-RARα and anti-RXRα antibodies. (E) HepG2/3β cells were pretreated with 5 µM tideglusib for 1 h and then treated with vehicle or 1 µM 9-cis-RA for 6 h. The cell lysates were immunoprecipitated with anti-RARα and blotted with anti-RARα and anti-RXRα (D20) antibodies. For (C), (D), and (E), the inputs were detected with 5% of whole cell lysates. (F)(G) RARβ mRNA expression. HepG2 cells were transfected with vector or Flag-GSK-3β for 24 h (F). HepG2/3β cells were pretreated with 5 µM tideglusib for 1 h (G). Both HepG2 and HepG2/3β cells were treated with 1 µM 9-cis-RA or vehicle for 24 h. RARβ and GAPDH transcripts were detected with RT-PCR. (H) CHIP assays. HepG2 cells were transiently transfected with Flag-GSK-3β, while HepG2/3β stable cells were pretreated with 5 µM tideglusib for 1 h. Both cell lines were treated with vehicle or 1 µM 9-cis-RA for 20 h. The Chromatin DNA was purified and immunoprecipitated with anti-RXRα (D20) antibody or nonspecific IgG. The IPs were subjected to RT-PCR analysis by using specific RARβ promoter primers as indicated in Materials and Methods. (I) Different GSK-3β stable cell lines were pretreated with 5 µM tideglusib and then treated with vehicle or with increasing concentrations of 9-cis-RA for 20 h. The lysates were blotted with anti-RARβ, anti-Flag, anti-Ser 9 GSK-3β and anti-GAPDH antibodies. (J) HepG2 and HepG2/3β cells were pretreated with 5 µM tideglusib and then exposed to vehicle or increasing concentrations of 9-cis-RA for 48 h. HepG2/3β cells were also transfected with RARβ siRNA or scramble siRNA and then subjected to similar treatments. The cell proliferation was detected with MTT. ** p<0.01 (vs respective control). Targeting GSK-3β enhances the anticancer effect of sorafenib The significance of GSK-3β/RARβ involved in regulation of sorafenib treatment response was finally determined in vivo. Targeting GSK-3β could significantly inhibit tumor growth of subcutaneous or orthotopical xenografts (Fig. 7A and Fig. S6D). 
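As a side note on the arithmetic behind inhibition figures like those quoted next: tumour growth inhibition is conventionally computed from group-mean volumes, with caliper measurements converted by the length × width² / 2 ellipsoid formula. The sketch below is purely illustrative (the formula choice and the dummy numbers are our assumptions, not data or methods from this study).

```python
import numpy as np
from scipy import stats

def caliper_volume(length_mm, width_mm):
    """Ellipsoid approximation commonly used for xenografts (assumed here)."""
    return length_mm * width_mm ** 2 / 2.0

# Dummy final volumes (mm^3) for two hypothetical groups -- illustration only.
control = np.array([980.0, 1105.0, 870.0, 1010.0, 950.0])
treated = np.array([430.0, 510.0, 465.0, 560.0, 480.0])

tgi = (1 - treated.mean() / control.mean()) * 100.0   # % growth inhibition
t_stat, p_value = stats.ttest_ind(control, treated)   # Student t test
print(f"TGI = {tgi:.1f}%, p = {p_value:.2g}")
```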
We found that inactivation of GSK-3β by tideglusib could significantly shrink tumor (22.3% inhibition), inhibit Ki-67 expression (14.2%) and induce caspase 3 activation (9.1%), further supporting that overexpression of GSK-3β might be a tumor promoter in HCC (Fig. 7A and B). Sorafenib treatment resulted in tumor inhibition by 48.3%, which effect could be largely enhanced to 93.4% by combining 9-cis-RA/ tideglusib (Fig. 7A). Consistently, sorafenib-induced Ki-67 inhibition and caspase 3 activation were greatly promoted by 9-cis-RA/tideglusib from 28.4% to 50.2% and from 12.4% to 30.3% respectively (Fig. 7B). 9-cis-RA was alone inefficient to improve the anti-tumor effect of sorafenib when GSK-3β remained active (Fig. 7A, B and D). Combination of 9-cis-RA/tideglusib/sorafenib did not affect the mouse weight and change the normal histological characteristics of various tissues including liver, lung, kidney, heart, and spleen, demonstrating that this strategy has less toxic side effect ( Fig. S7A and B). Mechanistically, tideglusib and 9-cis-RA/tideglusib could efficiently inactivate GSK-3β, but only combination could strongly induce RARβ expression ( Fig. 7C-E). Interestingly, RARβ expression by 9-cis-RA/tideglusib was extensively translocated into cytoplasm (Fig. 7C), implying that the nuclear export of RARβ is responsible for apoptotic induction and tumor growth inhibition. The involvement of GSK-3β/RARβ in sorafenib action was summarized in Fig. 7F. Discussion Functional GSK-3β is recently demonstrated to confer tumor development and poor prognosis in a wide range of solid tumors [34,35]. It is thus widely attempted to design GSK-3β inhibitor for cancer treatment [36][37][38]. However, the concern is its another important function in suppressing tumor growth [2]. Inhibition of active GSK-3β, whether beneficially or detrimentally, is highly dependent on contextual environment and clinical settings [35,39]. Disclosing the role and mechanism of GSK-3β in tumor will help develop new therapeutic strategy. HCC is the fourth most common tumor worldwide but with very limited treatment options [24,40]. The expression and therapeutic significance of GSK-3β in HCC remain largely unexplored. Although the samples we examined are small, we could consistently demonstrate that GSK-3β is increased in almost every tumor and upregulation of ≥1.5-fold is seen in 66.7% HCC (Fig. 1A, B and E). Increased GSK-3β is closely associated with shorter overall survival (OS) (Fig. 1E). Consistently, it was recently demonstrated that GSK-3β is overexpressed in HCC and targeting GSK-3β can induce degradation of c-FLIPL, a master anti-apoptotic regulator [25]. Our study further showed that overexpression of GSK-3β conferred HCC cell proliferation, colony formation and tumor development, while targeting GSK-3β by tideglusib can significantly induce about 22.3% of growth inhibition in HepG2/3β xenografts (Fig. 7A). Thus, we demonstrated that overexpression of functional GSK-3β supports HCC growth. Since overexpression of GSK-3β renders HCC resistant to certain chemotherapies like retinoid and sorafenib, the therapeutic significance of targeting GSK-3β may lie on its combination with other anticancer drugs. Hepatocarcinogenesis is closely linked to impaired retinoid metabolism and altered retinoid receptors [15,41,42]. GSK-3β is recently suggested to be a modulator of retinoid signaling as it strongly inhibits RARα-dependent myeloid leukemia differentiation in response to all-trans retinoic acid treatment [16,17]. 
However, the roles of retinoid receptors in leukemia and solid tumors can be quite different. The therapeutic effects of retinoids are usually less efficacy in solid tumors than leukemia. The mechanism and implication of GSK-3β-mediated impairment of retinoid signaling in solid tumors have not been reported. We demonstrated here that overexpression of GSK-3β can inhibit RARβ expression and impair retinoid signaling in HCC ( Fig. 1 and 5). RARβ expression is required for mediating retinoid action [30], but this protein is frequently down-regulated in HCC with poorly understood mechanism [43]. It was demonstrated that chromatin hypermethylation can impact negatively on RARβ expression [44]. Interestingly, GSK-3β was shown to play a fundamental role in maintaining DNA methylation [45]. There are currently no reports on GSK-3β regulation of RARβ. We demonstrated here that GSK-3β-mediated RARβ inhibition is attributed to its direct inactivation of RXRα (Fig. 5D), suggesting that a functional RXRα is required for RARβ induction. We identified RXRα as a new substrate for GSK-3β. GSK-3β can directly interact with and phosphorylate RXRα at Ser 78 within its N-terminal proline-directed context of S/T-X-X-X-S/T (Fig. 4). Such modification renders RXRα incapable of heterodimerizing with RARα to activate retinoic acid response element on RARβ promoter (Fig. 5). Targeting GSK-3β can recover the function of RXRα (Fig. 5D) and promote retinoid-induced RARβ expression in vitro (Fig. 5G) and in vivo (Fig. 7D). Our results thus disclosed a novel mechanism by which GSK-3β regulates RARβ expression in HCC. Deregulation of RARβ-mediated retinoid signaling by GSK-3β may at least partially explain why clinical trials of some classical retinoids like β-retinoic acid have no proven benefit in HCC [46]. Interestingly, clinical trial of acyclic retinoid, a synthetic analog of retinoids that target at phosphorylated RXRα, revealed a promising effect in reducing the incidence rate of secondary HCC by about 20% [47,48]. Sorafenib, a multi-kinase targeted anti-cancer drug, is being widely used to treat HCC [19,20] but with significant treatment resistance. We thus asked if GSK-3β-mediated RARβ inhibition could impact on sorafenib treatment response. We found that sorafenib can extensively activate GSK-3β both in vitro (Fig. 6A-D) [32] and in tumor microenvironment (Fig. 7E). Since GSK-3β is highly expressed in HCC, sorafenib treatment will generate abundantly hyperactive GSK-3β. Overexpression of functional GSK-3β strongly inhibits sorafenib action as indicated in various GSK-3β stable liver cell lines vs their vector-transfected counterparts ( Fig. 6F and Fig. S6A). 9-cis-RA cannot alone induce RARβ expression in GSK-3β stable cell lines, in which RARβ is silenced by GSK-3β. Targeting GSK-3β by tideglusib can greatly potentiate 9-cis-RA activation of RARβ-dependent signaling. Importantly, reactivation of RARβdependent signaling that is inhibited by overexpression of GSK-3β returns profoundly unexpected sorafenib treatment outcome (tumor inhibition raised sharply from 48.3% to 93.4%) (Fig. 7A). In this study, we only used low dose of tideglusib in animal experiment by considering that GSK-3β is normally a critical regulator of cell metabolism and homeostasis. In addition, tideglusib has been demonstrated to have fewer side effects under phase II trial in Alzheimer's disease treatment [49,50]. On the other hand, normal tissues are resistant to 9-cis-RA-induced cytoxicity [51]. 
Combination of tideglusib and 9-cis-RA does not exacerbate the deleterious effects of sorafenib in the liver and other normal tissues (Fig. S7B). In summary, our findings suggest that HCC may take advantage of GSK-3β overexpression to support its growth, possibly by interfering with RARβ-mediated retinoid signalling. The discovery of the GSK-3β/RARβ axis in the sorafenib treatment response may help in designing improved strategies to overcome the significant problem of sorafenib resistance in the clinic.
The Extremal Function and Colin de Verdi\`{e}re Graph Parameter We study the maximum number of edges in an $n$ vertex graph with Colin de Verdi\`{e}re parameter no more than $t$. We conjecture that for every integer $t$, if $G$ is a graph with at least $t$ vertices and Colin de Verdi\`{e}re parameter at most $t$, then $|E(G)| \leq t|V(G)|-\binom{t+1}{2}$. We observe a relation to the graph complement conjecture for the Colin de Verdi\`{e}re parameter and prove the conjectured edge upper bound for graphs $G$ such that either $\mu(G) \leq 7$, or $\mu(G) \geq |V(G)|-6$, or the complement of $G$ is chordal, or $G$ is chordal. Introduction We consider only finite, simple graphs without loops. Let µ(G) denote the Colin de Verdière parameter of a graph G introduced in [3] (cf. [4]). We give a formal definition of µ(G) in Section 2. The Colin de Verdière parameter is minor-monotone; that is, if H is a minor of G, then µ(H) ≤ µ(G). Particular interest in this parameter stems from the following characterizations: µ(G) ≤ 4 if and only if G is linklessly embeddable. Items 1, 2, and 3 were shown by Colin de Verdière in [3]. Robertson, Seymour, and Thomas noted in [18] that µ(G) ≤ 4 implies that G has a linkless embedding due to their theorem that the Petersen family is the forbidden minor family for linkless embeddings [19]. The other direction for 4 is due to Lovász and Schrijver [11]. See the survey of van der Holst, Lovász, and Schrijver for a thorough introduction to the parameter [7]. There is also a relation between the Colin de Verdière parameter and Hadwiger's conjecture that for every non-negative integer t, every graph with no K t+1 minor is t-colorable. One way to look for evidence for Hadwiger's conjecture is through considerations of average degree. In particular Mader showed that for every family of graphs F, there is an integer c so that if G is a graph with no graph in F as a minor, then |E(G)| ≤ c|V (G)| [12]. It follows by induction on the number of vertices that every graph G with no graph in F as a minor is 2c + 1-colorable. In fact Mader showed that: 2 . However asymptotically, as noted by Kostochka [9] and Thomason [23], based on Bollobás et at. [2]: [23] There exists a constant c ∈ R + such that for every positive integer t there exists a graph G with h(G) ≤ t + 1 and |E(G)| > ct √ log t|V (G)|. Furthermore, Kostochka showed that asymptotically in t the same is an upper bound [9]. This gives the best known bound on Hadwiger's conjecture, that graphs G with no K t minor have χ(G) ≤ O(t √ log t). We conjecture that an analog of Theorem 1.2 holds instead for the Colin de Verdière parameter: 2 . Nevo asked if this is true and showed that his Conjecture 1.5 in [15] implies Conjecture 1.1. Tait also asked this question as Problem 1 in [22] in relation to studying graphs with maximum spectral radius of their adjacency matrix, subject to having Colin de Verdière parameter at most t. We also observe that there is a relation between Conjecture 1.1 and the graph complement conjecture for the Colin de Verdière parameter. Let G denote the complement of G. The graph complement conjecture for the Colin de Verdière parameter is as follows: This conjecture was introduced by Kotlov, Lovász, and Vempala, who showed that the conjecture is true if G is planar [10]. Their result is used in this paper and will be stated formally in Section 4. Conjecture 1.2 is also an instance of a Nordhaus-Gaddum sum problem. 
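Before proceeding, the conjectured bound is easy to sanity-check. The Python snippet below (our illustration; it relies on the standard facts that µ(K_n) = n − 1 and that edge-maximal planar graphs have µ = 3 and 3n − 6 edges, together with the join construction that appears in Observation 3 below) confirms that the bound t|V(G)| − C(t+1, 2) is attained with equality on these families.

```python
from math import comb

def conjectured_bound(t: int, n: int) -> int:
    """Conjecture 1.1: |E(G)| <= t*n - C(t+1, 2) when mu(G) <= t <= n."""
    return t * n - comb(t + 1, 2)

# Complete graphs: mu(K_{t+1}) = t and |E| = C(t+1, 2) -- equality.
for t in range(1, 20):
    assert comb(t + 1, 2) == conjectured_bound(t, t + 1)

# Edge-maximal planar graphs: mu = 3 and |E| = 3n - 6 -- equality with t = 3.
for n in range(4, 50):
    assert 3 * n - 6 == conjectured_bound(3, n)

# Join of an edge-maximal planar H on h vertices with K_{t-3} (Observation 3):
# n = h + t - 3 vertices and (3h - 6) + C(t-3, 2) + (t-3)h edges -- equality.
for t in range(3, 15):
    for h in range(4, 30):
        edges = (3 * h - 6) + comb(t - 3, 2) + (t - 3) * h
        assert edges == conjectured_bound(t, h + t - 3)

print("bound attained with equality on all three families")
```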
See the recent paper by Hogben for a survey of Nordhaus-Gaddum problems for the Colin de Verdière and related parameters, including Conjecture 1.2 [6]. We observe that: Observation 1. If there exists a constant c ∈ R + so that for every graph G, |E(G)| ≤ cµ(G)|V (G)|, then there exists a constant p ∈ R + so that for every graph G, µ(G) + µ(G) ≥ p|V (G)|. This follows from noting that we would have cµ(G)|V (G)| + cµ(G)|V (G)| ≥ |E(G)| + |E(G)| = |V (G)| 2 . So our main Conjecture 1.1 would imply an asymptotic version of the graph complement conjecture for the Colin de Verdière parameter. This weaker version is currently not known. In the other direction we will show in Section 2 that: Then in particular the graph complement conjecture for Colin de Verdière parameter would imply that all graphs G are 2µ(G) + 2-colorable. We will also show in Section 2 that: Observation 3. Let H be any edge-maximal planar graph on at least 4 vertices and let t ≥ 3 be an integer. Then let G denote the join of H and So for every positive integer t, Conjecture 1.1 is tight for infinitely many graphs. We say a graph G is chordal if for every cycle C of G of length greater than 3, the induced subgraph of G with vertex set V (C) has some edge that is not in E(C). The main result we prove is Theorem 1.4: Note that it is equivalent to say that for such graphs, for every integer t with µ(G) ≤ t ≤ n, |E(G)| ≤ t|V (G)| − t+1 2 . We also note that the analog of Theorem 1.4 for the Hadwiger number is false. For n 1 , n 2 , . . . , n k ∈ Z + , let K n 1 ,n 2 ,...,n k denote the complete multipartite graph with independent sets of size n 1 , n 2 , . . . , n k . Every complete multipartite graph has chordal complement. Furthermore, as observed in the literature (see [13] and [20]), K 2,2,2,2,2 has h(K 2,2,2,2,2 ) = 7, yet |E(K 2,2,2,2,2 )| > 6|V (K 2,2,2,2,2 )| − 6+1 2 . Definitions and Preliminaries In this section we begin by briefly introducing our notation. Then we state the definition and some basic facts on the Colin de Verdière parameter, prove the observations from the introduction, and prove two lemmas that will be used in both of the next sections. In Section 3 we prove our main theorem, Theorem 1.4, for chordal graphs and the complement of chordal graphs. Finally, in Section 4 we prove Theorem 1.4 for graphs G with µ(G) ≤ 7 or µ(G) ≥ |V (G)| − 6. Let G be a graph. We will write an edge connecting vertices u and v as uv. We write δ(G) for the minimum degree, ∆(G) for the maximum degree, and ω(G) for the clique number of G. The set of vertices adjacent to a vertex v is denoted If e is an edge of G, we write G/e for the graph obtained from G by contracting e and deleting all parallel edges. We will use A := B to mean that A is defined to be B. Next we give the definition of the Colin de Verdière parameter. Let n be the number of vertices of G. It will be convenient to assume that V (G) = {1, 2, . . . , n} and that G is connected. If G is not connected, then define µ(G) to be the maximum among all connected components H of G of µ(H). We denote I := {ii : i ∈ {1, 2, . . . , n}}. Definition 1. The Colin de Verdière parameter µ(G) is the maximum corank of any real, symmetric n × n matrix M such that: 2. M has exactly one negative eigenvalue. 3. If X is a symmetric n × n matrix such that M X = 0 and X ij = 0 for ij ∈ E ∪ I, then X = 0. From the survey of van der Holst, Lovász, and Schrijver, we have: Let G be a graph, let H be a minor of G, and let v ∈ V (G). 
Then Then Observation 3, which we restate below, follows from induction on t by (iii) above and noting that for any positive integers t ≥ 3 and n, Observation 3. Let H be any edge-maximal planar graph on at least 4 vertices and let t ≥ 3 be an integer. Let G denote the join of H and K t−3 . Then µ(G) = t and To relate the extremal problem to the graph complement conjecture for Colin de Verdière parameter, and for the next two sections, it will be convenient to state the following lemma. Proof. Observe that n−t 2 + tn − t+1 We will also need the following theorem of Pendavingh (Theorem 5). Now we are ready to prove: Proof. Let G be a graph on n vertices. Since µ(G) is the maximum Colin de Verdière parameter of any connected component of G, by Theorem 2.2 either G is isomorphic to the disjoint union of K 3,3 and an independent set of vertices, or |E(G)| ≥ µ(G)+1 2 . In the . So by Lemma 2.1, we are done. If G is isomorphic to the disjoint union of K 3,3 and a set of k independent vertices, then µ(G) = 3 and by (iii) of Theorem 2.1 and since µ(K 3,3 ) = 2, µ(G) = k + 2. So then and again we are done by Lemma 2.1. We finish this section by proving some basic facts about a counterexample to the main Conjecture 1.1 such that every induced subgraph on one less vertex satisfies the conjecture. This lemma will be used in Sections 3 and 4 to help prove our main Theorem 1.4. Chordal Graphs and Complements of Chordal Graphs In this section we will show that if G is a graph such that G is chordal or G is chordal, then . Define a simplicial vertex of a graph G to be a vertex v such that G[N (v)] is a complete graph. We will use the fact that every chordal graph has a simplicial vertex. Proof. Let G be a vertex-minimal counterexample. Let u be a simplicial vertex of G. Then d(u) ≤ ω(G)−1 ≤ µ(G). This is a contradiction to Lemma 2.2 since every induced subgraph of a chordal graph is chordal and G is a vertex-minimal counterexample. For graphs with chordal complement, we need to introduce the following two theorems. Mitchell and Yengulalp showed that: For an integer t ≥ 3, let K t − ∆ denote the graph obtained from K t by deleting the edges of a triangle. Fallat and Mitchell proved that: Theorem 3.2. [5] Let G be a chordal graph. Then µ(G) = ω(G) if and only if G has We are now ready to prove the final lemma of this section. If S = ∅, then E(G) = ∅ and G would satisfy the lemma. So S = ∅. Then let u ∈ S and v ∈ S. We have uv ∈ E(G). Let uv also denote the new vertex of G/uv. Since in G the vertex u is adjacent to no vertices in S and v is adjacent to no vertices in S, the vertex uv is adjacent to every other vertex in G/uv. Also, since |S| ≥ 2, G/uv contains an edge. So by (iii) of Theorem 2.1, µ(G/uv) = µ(G − {u, v}) + 1. Graphs with Small or Large Parameter In this section we will show that graphs G such that either µ(G) First we give some definitions related to clique sums. Let k be a non-negative integer and let G 1 and G 2 be two vertex-disjoint graphs. For i = 1, 2 let C i ⊆ V (G i ) be a clique of size k of G i . Then let G denote the graph obtained from G 1 and G 2 by identifying the vertices in cliques C 1 and C 2 by some bijection. We say G is a pure k-clique sum of G 1 and G 2 . Let H be some fixed graph and let k be a non-negative integer. We say a graph G is built by pure k-sums of H if either G is isomorphic to H, or if G is a pure k-clique sum of graphs H 1 and H 2 , where H 1 and H 2 are built by pure k-sums of H. The following generalization of Theorem 1.2 is due to Jørgensen. 
For graphs with no K 9 minor, Song and Thomas proved: Then |E(G)| = 7|V (G)| − 27, and either either G is isomorphic to K 2,2,2,3,3 , or G can be built by pure 6-sums of K 1,2,2,2,2,2 . We will also make use of the following theorem due to Kotlov, Lovász, and Vempala. Kotlov, Lovász, and Vempala also characterized exactly which graphs G have µ(G) ≥ |V (G)| − 3 (Theorems 3.3 and 5.2, [10]). Let P 3,2 denote the graph formed from three disjoint paths of length two by identifying one end from each path. That is, P 3,2 is the graph in Figure 1. We will make use of the following corollary of these theorems: Now we are ready to prove the following lemma. For the next lemma we need to give some definitions related to subdivisions. Fix a graph H ′ . We say a graph H is a subdivision of H ′ if H can be formed from H ′ by replacing edges of H ′ with internally-disjoint paths with the same ends. Then we say Then since δ(G) ≥ 1, we have n ≤ 2|E(G)| ≤ 2( c 2 − 1). In total, we have 8 + c ≤ n ≤ 2( c 2 − 1). This implies that c ≥ 5. Now we will show that µ(G) ≥ c − 2. Otherwise, µ(G) ≤ c − 3 ≤ 3. Then by Theorem 4.3, n − 2 ≤ µ(G) + µ(G) ≤ n − 3, a contradiction. Now we proceed by cases. Then since µ(G) ≥ c − 2 = 3, G is not outerplanar. So G has a subgraph H that is either a subdivision of K 4 or a subdivision of K 2,3 . Let D ⊆ V (H) be the set of branch vertices of H. Then since δ(G) ≥ 1 and n ≥ 8 + c = 13, In either case we get a contradiction. Then µ(G) ≥ 4 and so G is not planar. So G has a subgraph H that is either a subdivision of K 5 or a subdivision of K 3,3 . If H is a subdivision of K 5 then similarly to before, since δ(G) ≥ 1 and n ≥ 8 + c = 14, we have 6 2 − 1 ≥ |E(G)| ≥ 1 2 (5 * 4 + 9) = 29 2 , a contradiction. So H is a subdivision of K 3,3 . Let u, v ∈ V (H) be distinct branch vertices of H that are in the same part of H such that d G (u) + d G (v) is maximum. We will show that G − {u, v} contains no P 3,2 subgraph and no cycle.
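The edge counts in the Song–Thomas characterization can be verified mechanically. The snippet below (our illustration) computes |E| for the complete multipartite graphs involved and checks that pure 6-clique sums preserve the extremal identity |E(G)| = 7|V(G)| − 27: each gluing removes C(6, 2) = 15 duplicated edges and 6 duplicated vertices, and 7 · 6 − 15 = 27 exactly compensates for the extra −27 term.

```python
from math import comb

def multipartite_edges(parts):
    """Edge count of the complete multipartite graph K_{n1,...,nk}."""
    n = sum(parts)
    return (n * n - sum(p * p for p in parts)) // 2

def is_extremal(n_vertices, n_edges):
    return n_edges == 7 * n_vertices - 27

assert is_extremal(12, multipartite_edges((2, 2, 2, 3, 3)))     # K_{2,2,2,3,3}
assert is_extremal(11, multipartite_edges((1, 2, 2, 2, 2, 2)))  # K_{1,2,2,2,2,2}

# Repeatedly take pure 6-clique sums with fresh copies of K_{1,2,2,2,2,2}:
# each sum merges 6 vertices and C(6, 2) edges, preserving extremality.
n, e = 11, multipartite_edges((1, 2, 2, 2, 2, 2))
for _ in range(5):
    n, e = n + 11 - 6, e + multipartite_edges((1, 2, 2, 2, 2, 2)) - comb(6, 2)
    assert is_extremal(n, e)

print("7*|V| - 27 holds for K_{2,2,2,3,3} and for iterated pure 6-sums")
```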
HonestBait: Forward References for Attractive but Faithful Headline Generation Current methods for generating attractive headlines often learn directly from data, which bases attractiveness on the number of user clicks and views. Although clicks or views do reflect user interest, they can fail to reveal how much interest is raised by the writing style and how much is due to the event or topic itself. Also, such approaches can lead to harmful inventions by over-exaggerating the content, aggravating the spread of false information. In this work, we propose HonestBait, a novel framework for solving these issues from another aspect: generating headlines using forward references (FRs), a writing technique often used for clickbait. A self-verification process is included during training to avoid spurious inventions. We begin with a preliminary user study to understand how FRs affect user interest, after which we present PANCO1, an innovative dataset containing pairs of fake news with verified news for attractive but faithful news headline generation. Automatic metrics and human evaluations show that our framework yields more attractive results (+11.25% compared to human-written verified news headlines) while maintaining high veracity, which helps promote real information to fight against fake news. Introduction Fake news has become a medium by which to spread misinformation (Oshikawa et al., 2020;Vicario et al., 2019).One common way to fight against fake news is to release verified news.2However, as the goal of news verification is to correct misinformation, verified news headlines are often bland, making it difficult to gain the attention of users, which works against the need to alleviate the harmful impact of fake news.Therefore, headlines for verified news articles should be rewritten to be more intriguing but still faithful, which is expected to pique reader interest in verified news.Many studies have been conducted on generating attractive headlines (Jin et al., 2020;Xu et al., 2019), among which clickbait represents the style that generates the most reads or clicks.Despite their success in attracting readers, there are several challenges in current models.First, clickbait datasets for training headline generators with sensational style transfer are commonly collected based on the amount of views or clicks, which assumes that headline popularity is always due to the writing style (Song et al., 2020).However, user reading preferences could also be motivated by trending topics or major events.For instance, "Flights cancelled as typhoon nears" was the most popular news on a day that a typhoon was coming.Although such headlines get many views and clicks, the writing style itself is not interesting, and could end up as noise in the dataset.Second, harmful "hallucinations" created by headlines exaggerated to be more sensational could distort the meaning of the original article.This is especially critical as we do not want our model itself to spread misinformation.However, as such sensational headline generation models often generate clickbait with more ambiguous words, it increases the difficulty of evaluating faithfulness by aligning title semantics with the news content. 
In this work, we propose making real news intriguing by learning what fake news is good at.We seek to learn what makes fake news eye-catching instead of simply mimicking the titles of fake news.Quantity-wise, the many circulating fake news articles serve as learning materials by which we can learn to generate more attractive headlines; stylewise, fake news is deliberately written to attract attention.To learn such attractive writing styles, we adopt the forward-reference (FR) writing technique (Blom and Hansen, 2015), which draws from psychology and journalism, and is frequently used to create attractive headlines.Specifically, FR creates an information gap between readers and the news content with the headline, motivating the reader's curiosity (Loewenstein, 1994) to investigate the news content, and hence provoking the desire to click on the headline.One example is the headline "Wanna be an enviable couple?12 things a happy couple must do... It's that simple!", which drives readers to find out what those things are. Here, to understand the relation between veracity, attractiveness, and FR types in news headlines, we conducted a preliminary user study to investigate the attractiveness of fake and real news, and analyzed the FR types used in headlines in terms of veracity.Given these results and observations, we propose HonestBait, a novel framework by which to generate attractive but faithful headlines.In this framework, we use FR to remove the need to learn directly from the click-based dataset.To ensure the faithfulness of the generated headlines, we design a lexical-bias-robust textual entailment component on the generated headline and its original content to confirm that the content infers the headline.In addition, we propose PANCO, an innovative dataset which consists of pairs of fake and verified news headlines, their content, and their FR types.We conduct experiments on PANCO and evaluate the results in terms of both automatic metrics and human evaluation.In sum, the contributions of our work are threefold: • We conduct a thorough user study to understand the relation between reading preferences and FR types on fake news and verified news. • We propose a novel framework for generating attractive but faithful headlines.In human evaluations, HonestBait largely outperforms baselines on attractiveness and faithfulness. • We propose a new dataset containing pairs of fake and verified news, including their headlines, content, and FR types in headlines. Related Work 2.1 Forward Referencing as a Lure Loewenstein (1994) shows how the desire for information motivates human curiosity.Forwardreferencing has been defined as a technique for creating curiosity gaps at a discourse level for use in headlines (Blom and Hansen, 2015;Yang, 2011). A similar concept is cataphora, in which information is forwarded as a teaser at the sentence level (Baicchi, 2004;Halliday and Hasan, 1976).Kuiken et al. (2017) investigate how editors rewrite headlines for digital platforms, and analyze the linguistic features of what makes for an attractive headline.Zhang et al. 
Zhang et al. (2018) address attractive headline generation as question headline generation (QHG), which assumes that interrogative sentences are more popular. Although this modality is indeed a type of FR, we argue that the interrogative style may not be suitable for all kinds of headlines, especially verified news. Hence in our work, we fully consider all kinds of FRs which are commonly used and seen in social media and on digital platforms. Sample headlines exhibiting FR techniques can be found in Fig. 4 in the appendix.

Headline Generation

Headline generation can be viewed as a more specific summarization task. Qi et al. (2020) propose a Transformer-based, self-supervised n-gram prediction objective. Liu (2019) proposes BERTSum, a variation of BERT (Devlin et al., 2019) for extractive summarization. See et al. (2017) propose an attention-based pointer generator with a copy mechanism, which has made great progress in summarization. Although its ability to copy text from the source context is powerful, using it directly for verified news often leads to bland titles. Hence we apply FRs and a sensationalism scorer to produce more satisfying results. Xu et al. (2019) propose auto-tuned reinforcement learning to generate sensational headlines using a pretrained sensationalism scorer; the resulting score is used as the reward to enhance attractiveness. Although generating attractive headlines has been widely explored (Song et al., 2020; Jin et al., 2020), we focus more on fidelity, ensuring that the semantics of the generated headline are faithful to the source content to avoid harmful hallucination.

Faithful Summarization

Recent work investigates how to improve the faithfulness of the generated summary or headline. Matsumaru et al. (2020) propose pretraining a textual entailment scorer to filter out noisy samples in the dataset, preventing hallucination or unfaithful generation. Maynez et al. (2020) analyze the faithfulness of current abstractive summarization systems, and discover that textual entailment correlates better with faithfulness than standard metrics. Based on such work, one major direction is to evaluate generated summaries in terms of textual entailment rather than raw metrics such as ROUGE (Lin, 2004) or BLEU (Papineni et al., 2002). Accordingly, we propose a faithfulness scorer based on textual entailment to evaluate how well the generated headlines fit the semantics of the content.

Preliminary User Study

In this section, we investigate, for a given topic, whether users are more interested in the fake or the real headline, and how often forward references are found in interesting titles. Accordingly, we seek to test the following two hypotheses:

H1: Fake news headlines motivate user reading interest more than real news headlines.
H2: Forward references are commonly seen and used in headlines which interest users.

We conducted the user study on both Chinese and English news to determine whether forward references were used across languages. For English headlines, we adopted FakeNewsNet (Shu et al., 2018), which contains fake and real news headlines about gossip and political news from GossipCop and PolitiFact. Since the real and fake news in FakeNewsNet are not paired up, we performed topical clustering to alleviate topical bias. For Chinese headlines, we directly leveraged news pairs labeled as disagreed in the WSDM fake news challenge dataset, each of which contains one fake news headline and its corresponding verified news headline.
We conducted the English user study on Amazon Mechanical Turk (Crowston, 2012). Each English pair was labeled by three turkers, whereas each Chinese pair was annotated by five native speakers we recruited. To test H1, annotators chose which headline they wanted to read further, with four options: first headline, second headline, both, and none. News veracity was not revealed during the study. Results show that both Chinese and English readers prefer fake news headlines. For Chinese headlines, 39.75% of fake titles were judged to be more interesting than the real ones, whereas only 23.60% of real titles won. For English headlines, the percentages are 34.57% and 30.33%, respectively. Note that in English, we are comparing real news with fake news due to the scarcity of paired verified and fake news data, whereas in Chinese, we are comparing verified news with fake news. This could be why the preference gap between real and fake news in English is smaller than in Chinese. Even so, both Chinese and English results show with statistical significance (p-values far below 0.05) that readers prefer fake headlines. We report the complete distribution, including ties, in Fig. 1. This result supports H1: fake news headlines motivate reading interest more than real news headlines. To test H2, we randomly sampled 1,000 preferred and 1,000 rejected headlines from the previous user study, and asked another set of three annotators to label the FR type. Results show that 73.48% of the preferred Chinese headlines and 85.32% of the preferred English headlines utilize FR techniques (at least one FR included in the headline), whereas for rejected headlines the ratios are 22.35% and 17.72%, respectively. This further supports H2: FR is commonly used in interesting headlines. In conclusion, we found that fake news headlines draw more reader interest, and the use of FR techniques is a key part of what makes headlines intriguing.
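The exact significance test used for the preference comparison is not specified above. As an illustration only, a two-sided binomial sign test over the non-tie preferences, with counts reconstructed from the reported percentages (all numbers below are therefore approximations, not the study's raw data), could be run as follows:

```python
from scipy.stats import binomtest

# Approximate counts reconstructed from the reported percentages (illustrative).
n_zh = 8424                        # Chinese headline pairs (see Fig. 1)
fake_wins = round(0.3975 * n_zh)   # pairs where the fake title was preferred
real_wins = round(0.2360 * n_zh)   # pairs where the real title was preferred

# Sign test: under H0, fake and real wins are equally likely among non-ties.
result = binomtest(fake_wins, fake_wins + real_wins, p=0.5, alternative="two-sided")
print(result.pvalue)  # far below 0.05 for counts of this size
```

With samples of this size, even a modest asymmetry between fake and real wins yields a vanishingly small p-value, consistent with the "far below 0.05" claim.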
Methodology

Having motivated the use of FRs, we propose HonestBait, a novel framework which incorporates FR techniques and veracity verification. HonestBait consists of two stages. In the first stage, we pretrain an FR predictor and an FR proposer (§ 4.1). Both take verified news titles as input. The FR predictor is trained to predict which FRs a verified headline contains; hence the gold label is the set of FRs of the current input verified headline. The FR proposer, in turn, learns to predict which combination of FRs the corresponding fake news exhibits; here the gold label is the set of FRs of the fake headline paired with the current input verified headline. The main concept in stage 1 is learning FRs from fake news to provide the direction best suited to rewriting a monotonous verified headline into an interesting one.

When the FR predictor and FR proposer are ready, we proceed to stage 2 to generate attractive but faithful headlines. Figure 2 depicts the overall architecture of the second stage. The input during stage 2 consists only of verified news headlines and their content for learning headline generation, where the pretrained FR predictor and FR proposer together provide rewards to the learning model. First, we use a sequence generator (§ 4.2) to generate headlines from the input verified news content, and utilize the FR proposer to predict which combination of FR types is best suited to rewriting the input verified news headline. During each decoding step, we use the FR predictor to predict which FR types the currently generated headline contains, and we align the prediction from the FR proposer and the FR predictor to transform the original boring verified headlines into exciting ones. This is achieved by computing the FR type reward (§ 4.3). After decoding, we make use of a faithfulness scorer (§ 4.4) and a sensationalism scorer (§ 4.5) to compute the faithfulness and sensationalism rewards by which to evaluate the generated headline; all three rewards are then combined to make the generated results attractive but faithful. During inference, given verified news headlines and their content, HonestBait generates attractive but faithful headlines using the above-mentioned components. Below we describe each major component in detail.

FR Predictor & FR Proposer

To mimic different FR types on datasets without FR type labels, we pretrain two multi-label classifiers: (1) an FR predictor, which predicts which FR types the generated headline contains; this is pretrained by taking verified news headlines as input and classifying which FR types these headlines exhibit; (2) an FR proposer, which learns what specific combination of FRs is best suited to rewriting a given verified title. This is trained by taking the verified headline as input and predicting the FR types of the corresponding fake news. Note that this setting is achievable because we have paired news data with FR labels for both real and fake headlines (see the preview sample in Fig. 3 in the appendix).

We implement these FR classifiers with a BERT-based encoder. Given a verified news headline, we obtain a sentence-level representation h_p from the hidden state of the [CLS] token. The FR type ŷ_fr is predicted by an MLP classifier followed by a sigmoid function and a softmax operation:

ŷ_fr = softmax(σ(W_p h_p + b_p)),

where ŷ_fr ∈ {0, 1}^l, l is the number of FR types, and W_p, b_p are trainable parameters. We pretrain these models using binary cross-entropy loss, yielding a micro-F1 score of 0.91 for the FR predictor and 0.65 for the FR proposer on a pretraining test set. Below we denote the FR predictor's prediction as ŷ_r and the FR proposer's prediction as ŷ_f; a minimal sketch of such a classifier follows.
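The following is a minimal PyTorch sketch of the BERT-based multi-label FR classifier described above, not the authors' implementation: the model name, the number of FR types, and the use of a plain sigmoid head (the standard choice for multi-label binary cross-entropy) are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class FRClassifier(nn.Module):
    """Multi-label classifier over the [CLS] representation of a headline."""
    def __init__(self, num_fr_types: int = 8, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_fr_types)

    def forward(self, input_ids, attention_mask):
        # h_p: hidden state of the [CLS] token (sentence-level representation)
        h_p = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return torch.sigmoid(self.head(h_p))  # per-FR-type probabilities in [0, 1]

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = FRClassifier()
batch = tokenizer(["The truth of using one drop of blood to test cancer."],
                  return_tensors="pt", padding=True)
probs = model(batch["input_ids"], batch["attention_mask"])
# Binary cross-entropy against the gold FR-type vector (zeros here as a placeholder).
loss = nn.BCELoss()(probs, torch.zeros_like(probs))
```

The FR predictor and FR proposer share this architecture and differ only in their training targets: the former is supervised with the verified headline's own FR types, the latter with the FR types of the paired fake headline.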
Predicting the fake version of the FR types from the verified news headline is more challenging, as the performance of the FR proposer is lower than that of the FR predictor (0.65 vs. 0.91). In practice, we could directly use the FR labels of the fake news acquired from our user study to replace ŷ_f, and view this setting as an upper bound on the FR proposer accuracy when calculating R_fr. However, when we are not provided with the FR labels of fake titles, we do not know which FR technique(s) should be applied to rewrite the given verified news headline. Hence, the FR proposer can be used as an auxiliary tool to help decide which FR types to use; this is especially useful when the dataset contains no FR-type labels. After pretraining the FR predictor and proposer, we proceed to the second stage.

Sequence Generator

In the second stage, we adopt a pointer network (See et al., 2017) as the sequence generator because of its ability to copy words from the source text. Given verified news content with M tokens X = {x_1, x_2, ..., x_M} and its corresponding real headline consisting of Q tokens y = {y_1, y_2, ..., y_Q}, the encoder encodes each token with a bidirectional LSTM (Hochreiter and Schmidhuber, 1997). We adopt Chinese word-level embeddings pretrained on the Weibo corpus (Li et al., 2018). The final distribution is combined with the probability computed by the copy mechanism, making words from the source content available for generation. For the objective we use the negative log-likelihood:

$$L_{MLE} = -\sum_{t=1}^{Q} \log p(y_t \mid y_{1:t-1}, X). \tag{1}$$

Forward Reference Reward

For each decoding time step, we calculate the FR reward once: the tokens generated up to the current time step, y*_{1:t}, are sent to the FR predictor to derive ŷ_r^{1:t}, and we calculate how well the generated text fits the FR proposer's prediction ŷ_f. After T steps of decoding, once the headline is generated, we calculate the average FR reward as

$$R_{fr} = 1 - \frac{1}{T}\sum_{t=1}^{T} D\big(\hat{y}_r^{1:t}, \hat{y}_f\big),$$

where D denotes a distance function (in our case the mean squared error) and R_fr ∈ [0, 1] is the average FR reward. Here ŷ_r is the set of FR types exhibited by the currently generated headline, which should align with the prediction ŷ_f of the FR proposer, which is pretrained to learn which specific combination of FRs is best suited to rewriting the given title. The closer they get, the higher R_fr is.

Faithfulness Scorer

Inspired by research showing that textual entailment correlates better with faithfulness than raw metrics (Falke et al., 2019), we use a pretrained faithfulness scorer to evaluate whether the generated headline distorts or contradicts the corresponding content. For pretraining, we use a verified news headline and its content as a positive example, and a fake news headline with the corresponding real news content as a negative example. We pretrain this as a natural language inference (NLI) task (classifying entailment and contradiction). The sentence embeddings of the headline and the content are denoted as x_f and w_f, and we apply a popular method to encode sentence pairs for the NLI model (Conneau et al., 2017):

$$[x_f;\ w_f;\ |x_f - w_f|;\ x_f \odot w_f],$$

where ";" denotes concatenation and "⊙" denotes the element-wise product. The faithfulness scorer achieves an accuracy of 0.83 on the test set.
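For concreteness, here is a sketch of the FR reward and the NLI pair encoding just described. The reward formula follows the reconstruction above (average MSE distance over decoding steps, inverted so that closer predictions give a higher reward), which is an assumption about the exact form; the pair encoding is the standard InferSent-style concatenation from Conneau et al. (2017).

```python
import torch

def fr_reward(predictor_probs_per_step, proposer_target):
    """Average FR reward over T decoding steps.

    predictor_probs_per_step: list of T tensors, each holding the FR predictor's
        per-type probabilities for the prefix y*_{1:t}.
    proposer_target: the FR proposer's prediction ŷ_f for the input headline.
    With mean squared error as the distance D over probabilities in [0, 1],
    the resulting reward also lies in [0, 1].
    """
    d = torch.stack([((p - proposer_target) ** 2).mean()
                     for p in predictor_probs_per_step])
    return 1.0 - d.mean()

def nli_features(x_f, w_f):
    """InferSent-style pair encoding for the faithfulness scorer:
    [x; w; |x - w|; x ⊙ w] (Conneau et al., 2017)."""
    return torch.cat([x_f, w_f, (x_f - w_f).abs(), x_f * w_f], dim=-1)
```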
Sensationalism Scorer

Apart from the FR type reward, we use another BERT-based binary classifier to obtain a sensationalism score, since there are headlines that are interesting without the use of FRs (around 27% according to our collected data). We first manually reviewed 100 news articles for each category in seven different news sources, selected the news categories that were consistently sensational (more than two-thirds of the articles in such a category were sensational, e.g., fashion, gossip, headlines), and collected the news headlines along with the content in these categories. We reviewed 5,000 headlines in total and collected 50,000 sensational news headlines. For non-sensational headlines, we utilized a pointer generator to produce a summary headline, and treated this as a non-sensational title, since summarization models retain only the semantics of the content. In this way we ensured a 50/50 split of sensational and non-sensational headlines for training. We trained the sensationalism scorer using binary cross-entropy with a sigmoid output to produce a sensationalism score in [0, 1]:

R_sen = σ(W_s h_s + b_s),

where h_s is the aggregated representation of the [CLS] token produced by BERT, and W_s and b_s are learnable weights. The accuracy on the test set is 0.86, indicating its ability to discriminate sensational headlines.

Hybrid Training

We adopt reinforcement learning (RL) (Williams, 1992) to train our model with the weighted sum of R_fr, R_faith and R_sen as the reward R. Following Xu et al. (2019) and Ranzato et al. (2015), we use a baseline reward R̄_t to reduce variance, where R̄_t is the mean reward estimated by a linear layer for each time step t during training; the RL objective rewards sampled headlines in proportion to the advantage R − R̄_t. Similar to Xu et al. (2019), we compute the final loss as a combination of L_MLE and L_RL, where α and λ ∈ [0, 1] are hyperparameters that balance the weight of each component; this composite design ensures that we produce headlines that satisfy all objectives. In sum, we use the FR reward to estimate whether the generated headline matches the FR types of its fake version, the faithfulness scorer to evaluate the textual entailment between the generated headline and the verified news content, and the sensationalism scorer to measure the sensationalism of the generated headline (an illustrative sketch of the combined objective follows below).
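The exact weighting of the three rewards and the precise roles of α and λ are not fully specified above, so the following is a minimal, assumption-laden sketch of the hybrid objective: the reward weights, the baseline estimator, and the way λ trades off the RL and MLE terms are all illustrative.

```python
import torch

def hybrid_loss(sum_log_probs: torch.Tensor,
                r_fr: float, r_faith: float, r_sen: float,
                baseline: torch.Tensor,
                loss_mle: torch.Tensor,
                w=(1.0, 1.0, 1.0), lam: float = 0.2) -> torch.Tensor:
    """REINFORCE-with-baseline combined with MLE (illustrative only).

    sum_log_probs: sum of token log-probabilities of the sampled headline.
    baseline:      estimated mean reward (the linear-layer baseline R̄_t).
    w:             assumed weights of the three rewards; not given in the paper.
    lam:           RL/MLE trade-off (λ = 0.2 was chosen on validation).
    """
    reward = w[0] * r_fr + w[1] * r_faith + w[2] * r_sen
    loss_rl = -(reward - baseline) * sum_log_probs   # policy-gradient term
    return lam * loss_rl + (1.0 - lam) * loss_mle
```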
Experiment

In this section, we describe experiments conducted to evaluate HonestBait. We first describe the experimental dataset, and then report the results of human evaluation, automatic metrics, a case study, and hyperparameter analyses to further demonstrate the advantages of the proposed model.

PANCO Dataset

We collected Paired News with Content (PANCO), a subset of a fake news classification competition held by WSDM. The competition involved a textual entailment task in which two news headlines were given as input: the task was to predict the relationship between the headlines. Each sample in the original dataset included a fake news headline and a second headline that was either agreed (two fake stories describing the same event), unrelated (two stories describing different events), or disagreed (two stories describing the same event, one of which was fake and the other verified). We selected the disagreed pairs, each containing a fake headline and its corresponding verified news headline, and augmented the dataset in the following way: (1) we used each title as a query submitted to Google Search to determine the source of each news story, and crawled the news content from sources which matched the title; (2) five annotators labeled the FR type of each headline, with the final label decided by majority vote.

The proposed dataset consists of a total of 7,930 paired samples, each containing a fake news headline and the corresponding verified news headline along with their content and FR types. To better understand the dataset, we provide a preview sample in Fig. 3 in the appendix. The main novelty of PANCO is the collection of pairs (describing the same event) of fake and verified news with headlines and their content. In addition, we provide the FR type labels for both verified and fake news as additional text features for further study.

Baseline and Settings

We compared the proposed model with the following strong baselines for headline generation. Ptr-G (See et al., 2017): an LSTM-based pointer generator network with attention and a copy mechanism. Clickbait (Xu et al., 2019): uses a CNN-based sensationalism scorer to automatically balance the MLE and reward losses, the score also being used as a reward to generate more sensational headlines. ROUGE: uses the same architecture as Clickbait but with the ROUGE score as the reward. BERTSum (Liu, 2019): utilizes BERT's architecture to encode source text and perform extractive summarization. T5 (Raffel et al., 2020): a large Transformer-based model; we use T5 with PEGASUS (Zhang et al., 2020) pretraining to strengthen the baseline. ProphetNet (Qi et al., 2020, 2021): a Transformer-based model that uses future n-gram prediction as self-supervision.

For the human evaluation and the case study, we also include Gold, which represents human-written verified headlines, as a strong baseline. The experimental settings are as follows. We first pretrained all baselines on the LCSTS dataset (Hu et al., 2015) for 480,000 steps. LCSTS is a large-scale Chinese summarization dataset containing 2,400,591 samples of paired short texts and summaries. We used the pretrained weights to fine-tune all baselines on the PANCO training set for another 20,000 steps.

Figure 3: A preview sample from the PANCO dataset, comprising paired real and fake news headlines, the news content, and FR type labels for both real and fake headlines. Verified news headline: "The truth of using one drop of blood to test cancer." Fake news headline: "Testing cancer using only one drop of blood! This is amazing."
News content "The woman version of Jobs" Elizabeth Holmes became popular by proposing a revolutionary technique: using a single drop of blood to test cancer.But not for long: her lies were revealed, and she fell from favor.An expert said that liquid biopsies in clinics cannot yet be consisted the gold standard, and cannot completely replace tissue biopsy. Ptr We saved the checkpoints for all baselines every 2,000 steps, and compared them by selecting the best one on the validation set.The hyperparameters of HonestBait were also based on the validation results: λ = 0.2 and α = 0.4.Table 2: Automatic metrics of proposed model against baselines.R n is the n-gram ROUGE score, R L is the ROUGE-L score, BS is the BERT score, and FR is the ratio of the generated headlines using FR. Human Evaluation We first conducted a human evaluation to evaluate the attractiveness, faithfulness, and fluency of the generated headlines.We randomly selected 100 samples from the PANCO test data, and asked five native speakers to select headlines in response to the following questions: (1) which headline makes you want to read further?(2) which headline is more faithful to the content?(3) which headline is more fluent? The workers were given two generated titles and the story content, and were asked to select first title, second title, or tie in response to the questions. Table 3 reports the pairwise comparison results as percentages.Each number in the table is the competing model compared to the proposed Hon-estBait, following Zhao et al. (2020).For example, the output of Ptr-G is 12.50%/45.50%/42.00%better/same/worse than HonestBait in terms of attractiveness, resulting in 12.50% − 42.00% = −29.50% in the table.Results show that for both attractiveness and faithfulness, HonestBait outperforms all baselines by a large margin.We believe this is due to the use of forward referencing and the faithfulness check.Compared to the pure clickdriven attractiveness-optimized Clickbait (Xu et al., 2019), HonestBait outperforms by directly learning writing skills to avoid other impact factors of attractiveness.In addition, boosting only attractiveness makes Clickbait relatively unfaithful (-22.33%).In terms of fluency, only ProphetNet and human-written headlines outperform our model.As we did nothing specifically to improve fluency such as ProphetNet's n-stream attention, this result indicates that HonestBait maintains reasonable fluency while increasing attractiveness and faithfulness.Note that compared to human-generated real headlines, HonestBait generates more attractive headlines (+11.25%) with only a modest drop in faithfulness (-1.00%).These results show the effectiveness of HonestBait for rewriting real news headlines to promote stories, as it maintains high faithfulness while being more attractive. 
Automatic Metrics

We used three automatic metrics for evaluation: ROUGE-n (Lin, 2004), ROUGE-L, and the BERT score (Zhang et al., 2020). Although automatic metrics have been shown to be unreliable for text generation in general (Sulem et al., 2018; Callison-Burch et al., 2006; Schluter, 2017; Wang et al., 2018), we still provide them here for reference. The results in Table 2 show the good abstractive ability of HonestBait, with the highest R_L score of 40.42. Among the baselines, ProphetNet is the strongest, with the highest R_1 and BERT scores, perhaps due to its n-stream self-attention mechanism. However, the extractive summarization model BERTSum performs worst here, as extracting a sentence from the article to serve as its headline is not common practice in general. In the last column of Table 2, we further use the FR predictor to detect which FR technique(s) the generated headlines use, and report the percentage of generated headlines that use FRs. The result shows that 80.42% of the headlines generated by HonestBait exploit FRs to make headlines more attractive, the highest among all models, indicating that HonestBait indeed learns to utilize FR techniques.

Ablation Study

To further investigate our framework, we conducted an ablation study. We compared each setting with the full framework using the evaluation protocol from § 5.3 by pairwise comparison, along with the automatic metrics for completeness. The results are shown in Table 4. Clearly, there is a significant drop in attractiveness when we remove the sensationalism scorer (−19.50%) or the FR type reward (−16.00%), which indicates that even with the sensationalism scorer, attractiveness still decreases without the help of the FR reward (see the setting w/o FR). That is, the FR reward indeed helps the model learn attractive writing styles. In addition, removing the faithfulness scorer results in the largest decrease in faithfulness (−11.50%). This also shows that our faithfulness scorer prevents deviations in the generated headlines. Interestingly, removing the sensationalism scorer increases the ROUGE score, perhaps because the sensationalism scorer helps to generate more diverse and interesting headlines, which can hurt metrics based on word-level overlap. We also observe that removing the faithfulness scorer reduces the ROUGE score, which shows that the faithfulness scorer helps to produce headlines with more fidelity, thus increasing the word-level overlap between the generated headlines and the ground truth. As automatic metrics are not the most important indicator of generation quality, we keep the sensationalism scorer for its improvements in attractiveness and fluency, even though removing it leads to a higher ROUGE score.

Case Study

Table 1 shows an example of the headlines generated by different models. Ptr-G, Clickbait, and ROUGE extract the name "Jobs" from the article (highlighted in yellow), demonstrating the copy mechanism's ability to alleviate the generation of unknown tokens. However, as headlines, these texts are less satisfying in that they are not understandable. BERTSum and T5 make mistakes by generating open questions without answering them, which may pique user interest but is not faithful enough for verified news headlines. Worse, T5 focuses on the wrong point, borrowed from other articles, as this article is not about cancer prevention, which could be harmful (highlighted in pink).
In contrast, HonestBait generates interrogative sentences to attract readers, but with an explicit clarification of the fake information, and stays aligned with the content (highlighted in green).

Conclusion

We present HonestBait, a novel framework for generating faithful but interesting headlines from a new aspect: forward references. Moreover, we construct PANCO, a novel dataset that includes the titles and content of pairs of fake and verified news, along with their forward reference types, for further research. Our user study shows that verified news headlines are relatively boring, and that forward references are used in most headlines liked by readers. Experimental results show that HonestBait outperforms all baselines in both automatic and human evaluations, demonstrating its effectiveness in generating attractive but faithful headlines. We expect HonestBait to help rewrite monotonous real news headlines to increase their exposure and thus help combat fake news.

Limitations

Although HonestBait shows promising results for generating attractive but faithful headlines, there are still some limitations: (1) HonestBait is a monolingual model that only supports Chinese. It requires three pretrained scorers. Also, as FR labels are particularly difficult to obtain, it is not easy to implement in other languages. (2) Running the whole framework with a batch size of 16 takes around 22 GB of GPU memory, mostly because we must load all pretrained models into the GPU. This can be alleviated by using distilled pretrained models. (3) On average, HonestBait generates more faithful headlines than the other baselines, but it still occasionally produces false information or unwanted results. This work is only for academic purposes and is not ready for production.

Ethics Statement

Given that our dataset is in Chinese and requires a profound understanding of forward referencing for annotation and evaluation, we carefully selected annotators from our lab who specialize in NLP-related research and possess knowledge of linguistics. To ensure fairness, we paid all annotators $6.66 per hour, which is 10% higher than the minimum hourly wage in Taiwan.

During the data annotation process, we introduced the concept of forward referencing to the annotators, along with relevant examples. Only annotators who achieved an accuracy rate of over 80% were eligible to perform the actual annotation task. It is important to note that we solely asked annotators to label the type of forward reference, which is well defined, and not to assess the accuracy or truthfulness of the news articles. With five annotators who successfully passed the pretest, combined with the relatively objective nature of labeling forward reference types, we believe any potential bias during the data annotation process is minimal.

For the evaluation phase, an additional five annotators were tasked with determining the preferable title among two options, based on attractiveness, faithfulness, and fluency. These annotators are different from those who labeled the data, to ensure a blind test. Although this task involves a greater level of subjectivity, we provide average statistics based on the assessments of the five annotators. Additionally, we maintained a blind test by recruiting separate evaluators and randomly shuffling the order of the two titles for each trial. This evaluation protocol aligns with standard practices in the research community, and we believe it effectively minimizes potential biases.
It is also important to note that we are not literally learning to mimic fake news by taking fake news headlines as the ground-truth reference. Instead, we seek to learn the writing techniques that are often used in fake news to attract readers. As we are aware of the risk of producing misinformation, we again highlight the importance of the faithfulness check. HonestBait was designed only to assist journalists, as a reference for writing faithful headlines that users prefer for verified news. Even though we propose a faithfulness scorer to increase fidelity, its nature, like that of other attractive headline generation systems, still carries the risk that HonestBait could be abused by malicious users to generate sensational headlines for fake news. Additionally, HonestBait may misjudge offensive or unethical headlines as headlines that users would prefer. Our goal is to fight fire with fire by leveraging fake news as learning material to combat misinformation, encouraging users to read verified news. We call on users not to abuse HonestBait to produce false information.
For the human annotation and evaluation, we recruited five graduate students from Taiwan and paid them $10 per hour.

Figure 1: Reading preferences w.r.t. real news and fake news, including ties. The sample size is 8,424 / 6,497 for Chinese / English headlines.

Figure 4: Examples of different types of forward references. Words highlighted in orange are the main characteristics.

Table 1: Generated examples from different models. For brevity, we show part of the article and the translated results.

Table 3: Pairwise comparison in terms of attractiveness (ATRC), faithfulness (FAITH), and fluency (FLCY), shown as percentages. The larger the negative value, the more HonestBait outperforms.
The Influence of Cooling Rate on the Damping Characteristics of the ZnAl4Cu1 Alloy

The paper presents the results of damping coefficient tests on the ZnAl4Cu1 alloy (ZL5). The damping coefficient was calculated on the basis of specimen measurements obtained with the signal echo method. The method consists in passing an ultrasonic wave through the tested material. The ultrasonic wave from a transmitting and receiving head passes through a specimen, bounces off its bottom surface and comes back to the measuring head in the form of a signal echo. The difference in signal intensity between the first and the second echo, in relation to the distance travelled by the ultrasound wave, is a measure of the material's damping characteristics. The specimens were cast into three molds made of different materials, i.e. green sand, plaster and metal. The thermophysical properties of these materials differ, affecting the rate of heat absorption from the cast. Three series of specimens were thus obtained, each with a different cooling rate. The specimens were then subjected to ultrasound and microscopic tests to assess the alloy structure. The internal alloy structure affects its damping properties to a great extent.

INTRODUCTION

Zinc-aluminum alloys, as well as ternary zinc-aluminum-copper alloys, can be used in less loaded machine parts and structural components. Due to their relatively low melting temperature, the energy consumption necessary to produce components from those alloys is reduced, while they still maintain good corrosion resistance and high damping properties. Zinc alloys have the ability to dampen vibration, placing them in the group of HIDAMETS (High-Damping Metals), together with cast iron and bronzes [1,2]. A parameter indicating the damping properties of a specific material is the damping coefficient α. The coefficient's value depends on the type and composition of the alloy. In general, the damping properties of a specific alloy are mostly determined by its internal structure [3,4]. The value of the damping coefficient of an alloy with a specified chemical composition is constant. It can be changed, however, in the process of alloy modification, a process which usually affects the volume and number of precipitates of a specific compound, or the size of the grains. The inoculant's impact on the alloy structure is reflected in the change of the damping coefficient value of a specific alloy [5-7]. The cooling rate, apart from the modification process, can also affect the macro- and microstructure of the alloy by changing the shape and grain size of the phases [8-10]. The microstructure of an alloy influences its properties. The grain size and structure of the alloy components affect the ability of the alloy to dampen vibrations. A coarse-grained structure with large, branched dendrites also tends to exhibit better corrosion resistance than finer grain structures. This is most often caused by corrosion centers at the grain boundaries: the more boundaries, the more places where a significant difference in corrosion potential can occur [11-13]. Zinc-aluminum based alloys, due to their lower component and production costs, often replace bronze in wear-resistant components. They work well for less loaded components operating at lower temperatures. The tribological properties of zinc alloys decrease at temperatures above 100°C. The addition of copper increases the hardness, strength and wear resistance of Zn-Al alloys up to a content of 2 wt.%.
Changes in the size of the constituents of the alloy structure also affect wear resistance. Zinc-aluminum alloys with a finer structure display higher wear resistance and hardness [4,14,15]. This paper describes an attempt to measure the damping coefficient of the ZnAl4Cu1 (ZL5) alloy solidified at three different cooling rates. In order to obtain three different cooling rates, the tested alloy was cast into molds made of various materials characterized by different thermophysical properties. The damping coefficient indicates the material's ability to absorb and scatter vibration. In perfect materials, the vibrating wave passes through the material without any loss of energy. In castings, which are relatively heterogeneous in terms of their internal structure, the vibrating wave becomes attenuated. It is partially absorbed by the material and converted into other types of energy, mainly thermal, but primarily the energy of vibrations is dispersed on the constituents of the alloy structure. The paper describes a test in which an ultrasonic wave was used to determine the damping properties of the ZL5 alloy. The principle of ultrasonic wave measurement is based on the physical properties of the materials being tested and also on the properties of the ultrasound waves themselves. Ultrasonic waves propagate very well in solids such as metals. Examining an ultrasonic wave which has passed through the material, or has been reflected from the specimen, provides information regarding any irregularities in material continuity. At the beginning of the use of ultrasound in foundry engineering, ultrasonic waves were only used to locate defects in castings. With the progress of technology, they were found to be useful in measurements of material thickness, in determining microstructure, and in examining the ability of a specific material to dampen vibrations. Thanks to the use of an ultrasonic wave, tests of damping properties can be performed in a simple and fast manner. Transmitter-receiver heads are characterized by small diameters, facilitating the testing of small specimens. In order to eliminate signal loss, the surface of a tested specimen should be smooth and even. At the point of contact between the transmitter-receiver head and the specimen, a coupling liquid needs to be used to transmit the transverse and longitudinal ultrasonic waves into the test material. The correct selection of the wavelength is another important factor. If the wavelength emitted by the measuring instrument is not matched to the diameter of the average grain, structural noise may occur, especially in the case of large grains. Structural noise is often a source of interference in damping measurements. To prevent these phenomena, it is recommended to use ultrasonic waves at least six times longer than the average grain size of the tested material [16,17]. The aim of the research is to determine the influence of the cooling rate of the ZL5 alloy on its microstructure and vibration damping properties. The literature typically reports the damping properties of zinc alloys cast into a metal mold, and occasionally cast into one type of mold and inoculated. There are no comparisons of damping-property results for a specific alloy depending on the mold material. The mold material determines the cooling rate of the alloy, and thus affects its properties. The damping properties depend on the alloy microstructure.
The microstructure can be shaped by modification or by changing the cooling rate. In the case under consideration, it is proposed to determine the damping properties of the alloy for three different cooling rates. Alloys of this type are often cast into permanent molds, but also into sand molds. The use of a plaster mold in this case is intended to extend the results with a slow cooling rate. The type of mold material determines the cooling rate of the casting. The article shows the relationship between the cooling rate and the damping properties of the alloy, which may be technologically useful.

EXPERIMENTAL PROCEDURE

The ZL5 alloy, of the composition shown in Table 1, was melted in a resistance crucible furnace in a graphite crucible. The furnace was heated to a temperature of 500°C. The batch was 3 kg of ZL5 alloy. After melting, the alloy was poured into three different types of molds (Fig. 1):
• a classical dry green sand mold,
• a plaster mold made of casting jewelry gypsum,
• a metal mold made of common steel, preheated to a temperature of 100°C.

The molds had a cylindrical cavity with a diameter of 40 mm and a height of 100 mm. Two specimens were obtained from each type of mold. During pouring and solidification of the cast, temperature measurements were taken. Data collected in this manner was subsequently used to determine cooling curves and cooling rates. The temperature was measured with a type K thermocouple placed in the cast axis, at the mid-height of the cast. After cooling, specimens with a diameter of 40 mm and a height of 30 mm were cut out from the section of the castings beneath the thermocouple. The specimens were ground with 1000 grit abrasive paper and then tested to determine the damping coefficient. Ultrasound tests were performed with the transmitter-receiver head from the Krautkramer 2000 ultrasound testing kit. To improve contact between the transmitter-receiver head and the specimen surface, paraffin oil was used (Fig. 2). A longitudinal ultrasonic wave with a frequency of 1 MHz was applied, and the specimens were tested using the echo method. A signal emitted from the transmitter-receiver head passed through the specimen, bounced off its bottom and came back to the measuring head. As a result of signal dispersion and absorption by the constituents of the alloy structure, it returned in the form of a weakened echo (Fig. 3). For each of the two specimens of a series, 10 measurements were performed, which was necessary due to the heterogeneous structure of a material such as a casting. The specimens were examined at a temperature of 22°C. After the collection of all the data, the average damping coefficient α was calculated for each specimen (Eq. (1)). After the ultrasound tests, the specimens were polished and etched in Palmerton's reagent [18] for 20 s. The etching revealed the alloy structure, which was observed via bright-field microscopy.

RESULTS OF THE INVESTIGATION

The data collected during solidification in the molds made of different materials is presented in the form of cooling curves in Figure 4. The cooling rate of the casting for each specimen has also been determined. The data was selected in the range from the moment of pouring the mold to the moment of the start of the solidification process. In the metal mold heated to a temperature of 100°C, the alloy cooled down at a rate of approx. 9.3 K/s. In the case of the green sand mold with a quartz sand matrix, dried before casting, the cooling rate was approx. 1 K/s.
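Eq. (1) itself does not survive in this version of the text. The measurement described above, two successive back-wall echoes separated by twice the specimen thickness, corresponds to the standard echo-method attenuation relation; the sketch below assumes that this is the form of Eq. (1), and all amplitude values in it are illustrative.

```python
import math

def damping_coefficient(a1: float, a2: float, thickness_m: float) -> float:
    """Attenuation between two successive back-wall echoes (echo method).

    Standard relation, assumed here to correspond to Eq. (1):
        alpha = (20 / (2 * d)) * log10(A1 / A2)   [dB/m]
    where d is the specimen thickness, since the wave travels the path 2d
    between the first and the second echo.
    """
    return 20.0 * math.log10(a1 / a2) / (2.0 * thickness_m)

# Ten readings were averaged per specimen; with the 30 mm specimens used here,
# amplitude ratios of roughly 2.6:1 would reproduce values near 140 dB/m.
readings = [(1.00, 0.38), (1.00, 0.39)]   # illustrative echo amplitudes only
alpha_avg = sum(damping_coefficient(a1, a2, 0.030) for a1, a2 in readings) / len(readings)
print(f"average damping coefficient: {alpha_avg:.1f} dB/m")
```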
The lowest cooling rate was observed in the casting gypsum mold: approx. 0.37 K/s. The cooling rate is clearly visible from the slope of the cooling curves: the greater the slope and the shorter the time to reach a given temperature, the higher the cooling rate. This is clearly seen in Figure 4 when the curve for the samples cooling the fastest (metal mold) is compared with the curve for the samples cooling the slowest (plaster mold). In order to illustrate the shape of the curve cooling at high speed in the metal mold, an additional graph is provided in Figure 4. The calculated damping coefficient α values are shown in Figure 5 and in Table 2. The measured results indicate that the best damping properties were achieved for the specimens cast in the gypsum mold, which solidified under the slowest heat removal conditions. The lowest value of the damping coefficient α was obtained for measurements performed on specimens cast in the metal mold, i.e. where the cooling rate was the highest. The difference between the best and worst damping results is 9.17 dB/m. A hardness test was also performed on the tested samples. Hardness was measured by means of the Brinell method, with an indenter diameter of 2.5 mm and an applied force of 147 N. The hardness measurement results are presented in Table 2. As can be seen, there is no direct correlation between the cooling rate, the damping coefficient, and the hardness. Only for the highest cooling rate do we observe the lowest value of the damping coefficient and the highest hardness. This is undoubtedly related to the presence of the finest alloy microstructure (Fig. 6). Microscopic examination yielded micrographs of the alloy for the different cooling rates. The microstructures are shown in Figures 6-8 at magnifications of 50× and 200×. The figures show the bright dendrites of the η phase (solid solution) against the background of the eutectic phase (α + η), visible by its lamellar grain pattern. The components of the microstructure of the ZnAl4Cu1 alloy poured into a dried green sand mold, at a magnification of 200×, are shown in Figure 9. As can be seen in the micrographs, the cooling rate affects all the components of the microstructure at the different magnifications. For the highest cooling rate of 9.3 K/s, obtained in the metal mold, we observe the smallest dendrites of the η solution and also the greatest fragmentation of the eutectic. With the reduction of the cooling rate to 0.37 K/s, obtained in the gypsum mold, we observe the largest dimensions of the solution dendrites and the eutectic. For the cooling rate of 1 K/s, obtained in the dried sand mold, the microstructure components are of intermediate size. The cooling rate is determined by the type of mold material, which can also be clearly seen in the graph of the cooling curves (Fig. 4). The cooling rate directly affects the shape and size of the separate phases. This translates directly into the ability of the alloy to damp vibrations, as presented in the comparison of the cooling rates and the values of the vibration damping coefficient in Table 2.

CONCLUSIONS

The ZnAl4Cu1 alloy cast into molds made of green sand, casting gypsum and steel solidified at different cooling rates. This can clearly be seen in the distribution of the cooling curves and is also evident from the calculated cooling rates. The low cooling rate of 0.37 K/s, obtained in the gypsum mold, is reflected in the damping coefficient, which reaches the highest value of 147.50 dB/m.
In the gypsum mold we also observe the most extensive η phase dendrites, and the hardness reaches an average value of 97 HB. The highest cooling rate, 9.3 K/s, was obtained for the specimens solidifying in the metal mold, and this high cooling rate influences the microstructure. As can be seen in the micrographs (Figs. 6-8), the finest microstructure occurs in the sample from the metal mold (Fig. 6). For the specimens solidifying in the metal mold, the damping coefficient is α = 138.33 dB/m, the lowest value of all the samples in this series of experiments. This means that the ZL5 alloy cooled at a high rate has the lowest damping properties. However, the highest hardness, amounting to 104 HB, is obtained here. The specimens cast into the green sand mold solidified at a cooling rate of 1 K/s; the value of their damping coefficient, 141.67 dB/m, lies between the aforementioned results for the plaster and steel molds. By controlling the cooling rate, we are able to influence the structure of the alloy, which in turn affects its ability to dampen mechanical vibration. The results obtained for the damping coefficient α indicate that, for the tested ZL5 zinc alloy, the ability to dampen vibrations increases as the cooling rate decreases.
An Improved Collaborative Filtering Recommendation Algorithm for Big Data

With the increase in the volume, velocity, and variety of big data, the traditional collaborative filtering recommendation algorithm, which recommends items based on the ratings of like-minded users, becomes more and more inefficient. In this paper, two variants of an algorithm for a collaborative filtering recommendation system are proposed. The first uses an improved k-means clustering technique, while the second uses the improved k-means clustering technique coupled with Principal Component Analysis as a dimensionality reduction method to enhance recommendation accuracy for big data. The experimental results show that the proposed algorithms have better recommendation performance than the traditional collaborative filtering recommendation algorithm.

Introduction

With the explosive increase in available data on the web and the rapid advances of information technology, big data has become a hot research topic in the field of data mining. The term is commonly used to describe the exponential growth and availability of structured and unstructured data. Nowadays, many governmental and industrial communities are interested in the high potential of this innovative technology. However, it is very difficult for such communities to find relevant content, and recommender systems have appeared to solve this problem. A recommender system is defined as a decision-making strategy for users on complex information platforms [1], one which can effectively recommend the required information to end users. Various techniques for developing recommender systems have been proposed, using either content-based filtering, collaborative filtering or hybrid methods [2-5]. In particular, the collaborative filtering recommendation algorithm (CFRA) is popular and has been used by many providers and consumers of big data, such as eBay, Amazon and Facebook. Recently, many studies have reported that applying k-means as the clustering technique in collaborative recommender systems can significantly enhance the performance of the traditional CFRA [6]. Moreover, it has been proved that using Principal Component Analysis (PCA) as a dimensionality reduction method can significantly improve clustering techniques [7]; it is therefore worthwhile to reduce the dimensionality before formally conducting the clustering task. Hence, in this paper, we propose two variants of an algorithm for an effective collaborative filtering recommendation system. The first uses an improved k-means clustering technique, while the second couples the improved k-means clustering technique with PCA as a dimensionality reduction method to enhance recommendation accuracy for big data. The experimental results show that the proposed algorithms have better recommendation performance than the traditional collaborative filtering recommendation algorithm. The rest of this paper is organized as follows: Sect. 2 discusses related work. Sect. 3 presents the collaborative filtering recommendation algorithm. Sect. 4 explains the proposed approach in detail. Sect. 5 describes the experimental results. Finally, Sect. 6 concludes this study and outlines plans for future work.
Related Work

In recent years, the philosophy of big data has attracted great attention from several official organizations, including governments, universities, and industry, and recommender systems have been introduced to help them find what they need via a mechanism that makes predictions based on different criteria. One recommender strategy that can provide several kinds of recommendation is the open-source project Apache Mahout [8]. It primarily provides free, scalable implementations of machine learning methods [9,10]. Another free and open-source scalable recommender system library is MyMediaLite [11], which addresses both common rating prediction and item prediction from positive-only feedback. The rating prediction can be on a scale of 1 to 5 stars, while the item prediction from positive-only implicit feedback can be based on purchase actions or clicks. In [12], the authors propose a keyword-aware service recommendation method, named KASR, to indicate users' preferences and generate appropriate recommendations on MapReduce [13] for big data applications. In [14], Lee et al. propose an adaptive recommendation algorithm, ACFSC, that is focused on scalable clustering to solve the scalability problem by composing neighborhoods with reduced time complexity. They also address the sparsity problem by making items' and users' feature vectors learn incrementally. CSRS [15] is a customized service recommendation system for big data. It uses the MapReduce framework and focuses on a service recommendation method to create proper recommendations based on users' preferences. In [16], Zarzour et al. propose a new collaborative filtering recommendation algorithm based on dimensionality reduction and clustering techniques. They use the k-means clustering algorithm and Singular Value Decomposition (SVD) to cluster similar users and reduce the dimensionality, respectively. In [17], the authors use the k-means algorithm to cluster users according to their interests, and then a voting algorithm to generate predictions in recommender systems.

Collaborative Filtering Recommendation Algorithm

In the field of recommender systems, the collaborative filtering recommendation algorithm (CFRA) is the most successful recommendation method. The idea behind CFRA is to provide recommendations or predictions for an active user by first looking for users who share the same rating patterns and then using the ratings from those like-minded users to calculate a prediction. In other words, CFRA can suggest new similar items, or predict the interest of a certain item for an active user, based on the user's previous likings and the preferences of other similar users. More technically, it uses a user-item rating matrix, which contains the users' preferences for items, to match users with relevant peers by employing a similarity function between their profiles, and then makes recommendations or predicts the ratings of selected items [18,19]. To compute the similarity between users or items, several similarity measures exist. One of the most popular is the Pearson Correlation Coefficient (PCC), defined as

$$sim(u, v) = \frac{\sum_{i \in I_{uv}} (r_{u,i} - \bar{r}_u)(r_{v,i} - \bar{r}_v)}{\sqrt{\sum_{i \in I_{uv}} (r_{u,i} - \bar{r}_u)^2}\,\sqrt{\sum_{i \in I_{uv}} (r_{v,i} - \bar{r}_v)^2}}, \tag{1}$$

where $I_{uv}$ is the set of items rated by both users $u$ and $v$, $r_{u,i}$ is user $u$'s rating of item $i$, and $\bar{r}_u$ is user $u$'s mean rating. Once the similarity is computed, the N nearest users are selected as a group of similar users, called the neighborhood, and the predicted ratings of unrated items can then be computed.
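As a concrete illustration of Eqs. (1) and (2) (reconstructed here in their standard forms), the following NumPy sketch computes the user-user PCC on a small rating matrix and predicts one rating; the matrix, the neighbor set and all names are illustrative, not the authors' implementation.

```python
import numpy as np

def pearson_sim(ratings, u, v):
    """PCC over items co-rated by users u and v (Eq. (1)); 0 means unrated."""
    mask = (ratings[u] > 0) & (ratings[v] > 0)
    if mask.sum() < 2:
        return 0.0
    ru, rv = ratings[u][mask], ratings[v][mask]
    du, dv = ru - ru.mean(), rv - rv.mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    return float((du * dv).sum() / denom) if denom else 0.0

def predict(ratings, a, i, neighbors):
    """Mean-centered weighted prediction of user a's rating of item i (Eq. (2))."""
    r_bar_a = ratings[a][ratings[a] > 0].mean()
    num = den = 0.0
    for u in neighbors:
        if ratings[u, i] > 0:
            s = pearson_sim(ratings, a, u)
            num += s * (ratings[u, i] - ratings[u][ratings[u] > 0].mean())
            den += abs(s)
    return r_bar_a + num / den if den else r_bar_a

# Tiny illustrative user-item matrix M[m, n]; rows are users, columns items.
R = np.array([[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 0, 5], [1, 0, 4, 4]], dtype=float)
print(predict(R, a=0, i=2, neighbors=[1, 2, 3]))
```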
The recommendation formula is

$$P_{a,i} = \bar{r}_a + \frac{\sum_{u \in N(a)} sim(a, u)\,(r_{u,i} - \bar{r}_u)}{\sum_{u \in N(a)} |sim(a, u)|}, \tag{2}$$

where $N(a)$ is the neighborhood of the active user $a$. The main steps of the collaborative filtering recommendation algorithm (CFRA) are as follows:

Step 1: Input the matrix M[m, n] of user-item rating data, the active user, and K;
Step 2: Calculate the similarity between users using the Pearson Correlation Coefficient (PCC) and generate the similarity matrix S[m, m];
Step 3: Calculate the similarity between the active user and the other users;
Step 4: Select the first n most similar users of the active user;
Step 5: Calculate the prediction values of the active user using formula (2);
Step 6: Choose the top N items as recommendations;
Step 7: Output the recommendations.

K-means-Based Collaborative Filtering Algorithm

In this paper, two variants of an algorithm for a collaborative filtering recommendation system are proposed. The first uses the k-means clustering technique directly, while the second uses the k-means clustering technique after applying the PCA method. PCA aims at reducing the dimensionality of big data by extracting the most important information from the data. It can make big data mining more useful while yielding similar results thanks to the reduction of dimensionality [20].

K-means Algorithm

In data mining, k-means is considered one of the most widely used clustering methods [21]; it automatically generates a set of clusters from a collection of data in a simple way. The main aim of k-means is to make the similarity between points of the same cluster high, while keeping the similarity between clusters low. The steps of the algorithm are as follows:

Step 1: Input the dataset, the number of clusters, and K;
Step 2: Randomly select the initial clustering centers, which are the initial values of the K clusters;
Step 3: Calculate the distances between centers and objects, then assign each object to the nearest cluster;
Step 4: For each cluster, calculate the average as the new partition center;
Step 5: Use the new partition centers to redistribute points into new clusters;
Step 6: Repeat Steps 4 and 5 until the algorithm converges to a stable partition;
Step 7: Output the K clusters.

CFRA-Km: A Collaborative Filtering Recommendation Algorithm Based on K-means Clustering

The general k-means algorithm is now specialized to take the recommendation requirements into consideration, as well as the prediction of unknown ratings for a given active user. The specific steps are as follows (a sketch of the core clustering step is given after this list):

Step 1: Input the matrix M[m, n] of user-item rating data, the active user, and K;
Step 2: Calculate the similarity between users using the Pearson Correlation Coefficient (PCC) and generate the similarity matrix S[m, m];
Step 3: Use the matrix S[m, m] as the dataset and randomly select the initial clustering centers, which are the initial values of the K clusters;
Step 4: Calculate the distances between centers and objects, then assign each object to the nearest cluster;
Step 5: For each cluster, calculate the average as the new partition center;
Step 6: Use the new partition centers to redistribute points into new clusters;
Step 7: Repeat Steps 5 and 6 until the algorithm converges to a stable partition;
Step 8: Calculate the similarity between the active user and the clusters;
Step 9: Select the first n most similar clusters of the active user;
Step 10: Calculate the prediction values of the active user for every cluster using formula (2);
Step 11: Choose the top N items as recommendations;
Step 12: Output the recommendations.
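A compact sketch of the CFRA-Km idea follows: rows of the user-user similarity matrix are clustered with k-means, and the active user is then matched to the nearest cluster centers. The use of scikit-learn and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(sim_matrix: np.ndarray, k: int = 20, seed: int = 0):
    """Cluster users by their similarity profiles (Steps 3-7 of CFRA-Km)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    labels = km.fit_predict(sim_matrix)
    return labels, km.cluster_centers_

def nearest_clusters(sim_matrix, active_user, centers, n=3):
    """Rank clusters by distance between the active user's profile and the
    cluster centers (Steps 8-9), returning the n closest cluster indices."""
    d = np.linalg.norm(centers - sim_matrix[active_user], axis=1)
    return np.argsort(d)[:n]

# sim_matrix would be the PCC matrix S[m, m] from the previous sketch;
# predictions within the selected clusters then follow formula (2).
```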
Reducing the Dimension by PCA

One purpose of PCA is to analyze big data, eliminating noise and finding patterns, so as to reduce the dimensions of the data without losing relevant information. To do this, it converts a collection of observations of possibly correlated variables into a collection of values of principal components through a linear (orthogonal) transformation. In general, the number of principal components obtained is less than or equal to the number of original variables. PCA is therefore used as a statistical method not only to reduce the dimension of the user-user similarity matrix but also to limit the loss of information, by employing the eigenvalue decomposition of the data covariance matrix to obtain the principal components of the dataset together with their weights. The general steps of PCA are as follows:

Step 1: Input the dataset;
Step 2: Normalize the data;
Step 3: Compute the covariance matrix;
Step 4: Compute the eigenvectors of the covariance matrix;
Step 5: By matrix multiplication, translate the data into the space of the principal components.

CFRA-Km-PCA: A Collaborative Filtering Recommendation Algorithm Based on K-means Clustering and PCA

The first version of our k-means clustering-based collaborative filtering recommendation algorithm does not consider the effect of dimensionality reduction, which may significantly influence the prediction results. Thus, PCA is applied before conducting the k-means clustering and performing the prediction step, to reduce the dimension of the dataset and improve the prediction performance. In other words, the collaborative filtering recommendation algorithm based on k-means clustering and PCA, called CFRA-Km-PCA, combines the advantages of the PCA method with those of the k-means clustering technique. The specific steps of CFRA-Km-PCA are as follows (a sketch of the PCA-plus-clustering stage follows this list):

Step 1: Input the user-item rating matrix M[m, n], the active user, and K;
Step 2: Compute the similarity between users with PCC and generate the similarity matrix S[m, m];
Step 3: Normalize the data in the obtained S[m, m];
Step 4: Compute the covariance matrix;
Step 5: Compute the eigenvectors of the covariance matrix;
Step 6: By matrix multiplication, translate the data into the space of the principal components;
Step 7: Use the obtained principal-components matrix as the dataset and randomly select K initial cluster centers;
Step 8: Compute the distances between centers and objects, then assign each object to the nearest cluster;
Step 9: For each cluster, compute the average as the new partition center;
Step 10: Use the new partition centers to redistribute points into new clusters;
Step 11: Repeat Steps 9 and 10 until the algorithm converges to a stable partition;
Step 12: Compute the similarity between the active user and the clusters;
Step 13: Select the first n clusters most similar to the active user;
Step 14: Compute the active user's prediction values within those clusters using formula (2);
Step 15: Choose the top N items as recommendations;
Step 16: Output the recommendations.

Experimentation Results and Evaluation

To evaluate the performance of the k-means clustering-based collaborative filtering recommendation algorithm, with and without PCA, against the traditional collaborative filtering recommendation algorithm, experiments were conducted on real big data.
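The normalization, projection, and clustering stage of CFRA-Km-PCA can be sketched as follows; retaining 90% of the variance is an illustrative choice, as the paper does not state how many components were kept:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_after_pca(S, K=8, var_kept=0.9):
    """CFRA-Km-PCA preprocessing sketch: normalize the user-user
    similarity matrix, project it onto the principal components that
    keep `var_kept` of the variance, then run k-means in that space."""
    Z = (S - S.mean(axis=0)) / (S.std(axis=0) + 1e-12)   # Step 3: normalize
    pca = PCA(n_components=var_kept)                     # Steps 4-6
    P = pca.fit_transform(Z)
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(P)  # Steps 7-11
    return P, km.labels_
```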
The experimental dataset was obtained from Netflix [22]; it contains over 17,770 movies rated by approximately 480,000 users, with over 100 million ratings ranging from 1 to 5 stars. A random sample was drawn, 80% of which was used for training, while the remaining data were used to test the performance of the considered algorithms. Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are the most widely used metrics in the performance evaluation of recommender systems, so we used them to evaluate the recommendation performance of the CFRA, CFRA-Km, and CFRA-Km-PCA algorithms. The formulas of RMSE and MAE over a test set $T$ of (user, item) pairs are, respectively:

$$\mathrm{RMSE}=\sqrt{\frac{1}{|T|}\sum_{(u,i)\in T}\left(r_{u,i}-\hat r_{u,i}\right)^2},\qquad \mathrm{MAE}=\frac{1}{|T|}\sum_{(u,i)\in T}\left|r_{u,i}-\hat r_{u,i}\right|$$

where $r_{u,i}$ is the true rating and $\hat r_{u,i}$ the predicted one (a short sketch of both metrics follows the conclusion below). Figure 1 shows the experimental results in terms of the RMSE metric for the proposed algorithms. As the graph shows, the RMSE of the proposed CFRA-Km and CFRA-Km-PCA is lower than that of the CFRA algorithm over the whole neighborhood range; more precisely, CFRA-Km-PCA achieves better results than both other algorithms. Figure 2 shows the experimental results in terms of the MAE metric for the three algorithms. Likewise, the MAE of the proposed CFRA-Km and CFRA-Km-PCA is lower than that of CFRA over the whole neighborhood range, and CFRA-Km-PCA achieves the best accuracy of the three. From Figs. 1 and 2, we can conclude that the proposed algorithms, CFRA-Km and CFRA-Km-PCA, perform better than the traditional CFRA in terms of RMSE and MAE. We can also conclude that combining the PCA method with the k-means clustering technique significantly improves the recommendation performance, which indicates that CFRA-Km-PCA is the better algorithm for recommender systems in the big data context.

Conclusion and Future Work

In this paper, we have presented two improved collaborative filtering algorithms intended to enhance prediction accuracy in the big data context. The first algorithm uses only the k-means clustering technique, while the second combines the advantages of the k-means clustering technique and the PCA method. PCA was adopted to reduce the dimensions before formally conducting the clustering task, which significantly improved the performance of the k-means clustering-based collaborative filtering recommendation algorithm. The recommendation algorithms were evaluated in terms of the RMSE and MAE metrics, and the experimental results showed that CFRA-Km-PCA achieved better results than the two other algorithms, CFRA and CFRA-Km. In the future, we will apply our algorithms to other datasets and study dimensionality reduction coupled with other clustering techniques to further improve recommendation precision.
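For completeness, the two evaluation metrics defined above can be computed as follows (a minimal sketch; the array-based interface is an assumption):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over the held-out test ratings."""
    e = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(e**2))

def mae(y_true, y_pred):
    """Mean absolute error over the held-out test ratings."""
    e = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.abs(e))
```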
Perception of Biocontrol Potential of Bacillus inaquosorum KR2-7 against Tomato Fusarium Wilt through Merging Genome Mining with Chemical Analysis

Simple Summary: Bacillus is a bacterial genus that is widely used as a promising alternative to chemical pesticides owing to its protective activity toward economically important plant pathogens. Fusarium wilt of tomato is a serious fungal disease limiting tomato production worldwide. Recently, the newly isolated B. inaquosorum strain KR2-7 considerably suppressed Fusarium wilt of tomato plants. The present study was performed to perceive the potential direct and indirect biocontrol mechanisms implemented by KR2-7 against this disease through genome and chemical analysis. The potential direct biocontrol mechanisms of KR2-7 were determined through the identification of genes involved in the synthesis of antibiotically active compounds suppressing tomato Fusarium wilt. Furthermore, the indirect mechanisms of this bacterium were perceived through recognizing genes that contribute to resource acquisition or the modulation of plant hormone levels. This is the first study aimed at the modes of action of B. inaquosorum against Fusarium wilt of tomato, and the results strongly indicate that strain KR2-7 could be a good candidate for microbial biopesticide formulations for the biological control of plant diseases and plant growth promotion.

Abstract: Tomato Fusarium wilt, caused by Fusarium oxysporum f. sp. lycopersici (Fol), is a destructive disease that threatens the agricultural production of tomatoes. In the present study, the biocontrol potential of strain KR2-7 against Fol was investigated through integrated genome mining and chemical analysis. Strain KR2-7 was identified as B. inaquosorum based on phylogenetic analysis. Through genome mining of strain KR2-7, we identified nine antifungal and antibacterial compound biosynthetic gene clusters (BGCs), including those for fengycin, surfactin, bacillomycin F, bacillaene, macrolactin, the sporulation killing factor (skf), subtilosin A, bacilysin, and bacillibactin. The corresponding compounds were confirmed through MALDI-TOF-MS chemical analysis. The genes/gene clusters involved in plant colonization, plant growth promotion, and induced systemic resistance were also identified in the KR2-7 genome, and their related secondary metabolites were detected. In light of these results, the biocontrol potential of strain KR2-7 against tomato Fusarium wilt was identified. This study highlights the potential to use strain KR2-7 as a plant-growth-promotion agent.

Introduction

Tomato Fusarium wilt, caused by Fusarium oxysporum f. sp. lycopersici (Fol), is one of the most destructive diseases, causing considerable losses in the production of both field and greenhouse tomatoes worldwide [1]. Since Fusarium wilt is a difficult disease to control [2][3][4], control strategies including physical and cultural methods, chemical fungicide treatment, and the cultivation of resistant tomato cultivars [5] have achieved limited efficacy [6]. In addition, the excessive use of agrochemicals has imposed serious negative impacts on the environment, causing the pollution of soil and groundwater reservoirs, an accumulation of chemical residues in the food chain, the emergence of pesticide-resistant pathogens, and health hazards [7]. As a result, biocontrol microbes have been suggested as a promising alternative to agrochemicals in plant disease control.
Numerous biocontrol microbes, especially Bacillus strains, have been commercially developed as biopesticides and biofertilizers worldwide [7]. Biocontrol microbes protect crops from the invasion of phytopathogens via (1) direct modes of action, e.g., antibiosis and the production of antimicrobial secondary metabolites [8,9]; and (2) indirect modes of action, including induced systemic resistance (ISR) and competition for nutrients and space [10,11]. The investigation of biocontrol microbes through conventional genetic and biochemical approaches could not unveil the full potential of these microbes due to the absence of appropriate natural triggers or stress signals under laboratory conditions [12]. With the development of high-throughput DNA sequencing technologies and genome mining, along with MS-based analytical methods (e.g., GC/LC-MS, LC-ESI-MS, and MALDI-TOF-MS), more potential biocontrol microbes can be revealed. For instance, the Bacillus amyloliquefaciens FZB42 genome contains nine giant gene clusters synthesizing secondary metabolites that are involved in the suppression of soil-borne plant pathogens, and several genes/gene clusters implicated in swarming motility, plant colonization, biofilm formation, and the synthesis of plant growth-promoting volatile compounds and hormones [13]. A wide range of extracellular proteins and a phytase were detected in the FZB42 secretome through two-dimensional electrophoresis, MALDI-TOF-MS, and proteomics approaches, indicating that this strain can grow on the plant surface and supply phosphorus to the plant under phosphorus starvation. Additionally, four members of the macrolactin family were identified in an FZB42 culture filtrate by combining mass spectrometric and ultraviolet-visible data, which agree perfectly with the overall structure of the macrolactin gene cluster found in the FZB42 genome [13]. Recently, genome analysis of the plant-protecting bacterium B. velezensis 9D-6 demonstrated that this strain can synthesize 13 secondary metabolites, among which surfactin B and surfactin C were detected by LC-MS/MS as antimicrobial compounds against Clavibacter michiganensis [14]. Furthermore, genome mining of B. inaquosorum strain HU Biol-II revealed that this bacterial genome contains eight bioactive metabolite clusters, and the production of seven metabolites was confirmed through HPLC-MS/MS [15]. In our previous study, the B. inaquosorum strain KR2-7 was isolated from the rhizosphere soil of tomato (Solanum lycopersicum) and was introduced as a highly effective biocontrol agent against Fol, with a biocontrol efficiency of 80% under greenhouse conditions [16]. To better understand the biocontrol mechanisms of strain KR2-7 against Fol, whole-genome sequencing was conducted to identify putative gene clusters for secondary metabolite biosynthesis and to characterize genes/gene clusters involved in plant colonization, plant growth promotion, and induced systemic resistance (ISR). Moreover, secondary metabolites and other compounds related to the identified BGCs and genes/gene clusters were detected using MALDI-TOF-MS analysis to confirm the results of the genome mining.

Strains and Culture Conditions

The fungal pathogen Fol strain Fo-To-S-V-1 used in this study was obtained from the culture collection of the Iranian Research Institute of Plant Protection.
The fungus was maintained on a potato dextrose agar (PDA, Merck, Germany) slant at 4 °C and was sub-cultured onto a fresh PDA plate at 27 °C for 7 days for further tests. Strain KR2-7 was maintained on a nutrient agar (NA, Merck, Germany; 0.3% beef extract, 0.5% peptone, and 1.5% agar) plate with periodic transfer to fresh medium. For long-term storage, it was kept at −80 °C in lysogeny broth (LB, Merck, Germany) with 20% glycerol (v/v).

Dual Culture Assay

In order to investigate the antagonism efficiency of strain KR2-7 against various tomato pathogens, five destructive fungal pathogens were selected: Alternaria alternata f. sp. lycopersici, Athelia rolfsii, Botrytis cinerea, Rhizoctonia solani, and Verticillium albo-atrum. The antifungal activity of strain KR2-7 against each pathogen was evaluated through a dual culture assay with three replications. In the dual culture assay, strain KR2-7 was simultaneously cultured 3 cm apart from a 5-mm plug of the pathogen on a 9-cm PDA plate. The control plate was inoculated only with the pathogen. Plates were incubated at 27 °C. Fungal growth was checked daily by measuring the diameter of the colony for a period of three days. The percentage of fungal growth inhibition (PFGI) was calculated by formula (1), developed by Skidmore and Dickinson [17], where R1 is the maximum radius of the growing fungal colony in the control plate and R2 is the radius of the fungal colony grown in the presence of strain KR2-7:

$$\mathrm{PFGI}=\frac{R_1-R_2}{R_1}\times 100 \quad (1)$$

MALDI-TOF-MS Analysis of KR2-7 Secondary Metabolites

The secondary metabolite analysis was performed on the whole-cell surface extract of the bacterium obtained during the dual culture of KR2-7 and Fol. The bacterial surface extract was prepared according to the methodology described by Vater et al. [18], with the dual culture done on potato dextrose agar (PDA) instead of Landy agar. Strain KR2-7 was streaked on one side of the plate, a 5-mm plug of Fol was placed on the opposite side simultaneously, and the plate was incubated at 27 °C. After 24 h, two loops of bacterial cells from the bacterium-fungus interface in the inhibition zone were suspended in 500 µL of 70% acetonitrile with 0.1% trifluoroacetic acid for 2 min. The suspension was gently vortexed to produce a homogenized suspension. The bacterial cells were pelleted by centrifugation at 5000 rpm for 10 min. The cell-free supernatant was transferred to a new microcentrifuge tube and stored at 4 °C for further analysis. One microliter of the supernatant was spotted onto the target of the mass spectrometer with an equal volume of α-cyano-4-hydroxycinnamic acid (CHCA) matrix and air-dried. The sample mass fingerprints were obtained using an ultrafleXtreme MALDI-TOF/TOF-MS (Bruker, Billerica, MA, USA) within a mass range of 100-3000 Da. The MALDI-TOF-MS analysis was performed at the School of Life Sciences, Chinese University of Hong Kong (CUHK), Hong Kong. The whole-cell surface extract of strain KR2-7 grown on potato dextrose agar was used as a control.

Genome Sequencing, Assembly, and Annotation

The genomic DNA of strain KR2-7 was extracted using a commercial DNA extraction kit (Thermo Fisher Scientific, Waltham, MA, USA). Whole-genome sequencing was performed using the Illumina HiSeq 4000 and PacBio RSII platforms (BGI, Shenzhen, China). Quality control of the raw sequences was performed with FastQC v0.11.9, and the de novo assembly was done using SPAdes v3.14.1.
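As a quick numerical check of formula (1) from the dual culture assay above, with hypothetical colony radii (illustrative values, not measurements from this study):

```latex
% Hypothetical example: control colony radius R1 = 40 mm and
% co-culture colony radius R2 = 14 mm (illustrative values only).
\mathrm{PFGI} = \frac{R_1 - R_2}{R_1} \times 100
             = \frac{40 - 14}{40} \times 100
             = 65\%
```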
The genome was annotated using the NCBI Prokaryotic Genome Automatic Annotation Pipeline (PGAAP) and the Bacterial Annotation System (BASys) web server. The proteome of KR2-7 was subjected to BLASTP against the Clusters of Orthologous Groups (COGs) database at an E-value < 1 × 10⁻⁵ to assign COG categories [19].

Genome Phylogeny

In this study, 32 Bacillus strains belonging to various species were selected among those recorded in the NCBI GenBank database. For all the selected strains, the nucleotide and corresponding amino acid sequences were retrieved from GenBank. Whole-genome alignments were performed using REALPHY (http://realphy.unibas.ch (accessed on 22 December 2021); [20]), and the phylogenetic tree was constructed with MEGA v. 7 [21] by the maximum likelihood method [22], with evolutionary distances computed using the general time-reversible model [23]. Branch validity was evaluated by a bootstrap test with 1000 replications. The average nucleotide identity (ANI) values of the selected Bacillus strains were calculated using the EzBioCloud server (http://www.ezbiocloud.net/tools/ani (accessed on 21 August 2021); [24]). According to the algorithm developed by Goris et al. [25], a 95-96% cut-off value was used for the species boundary [26]. The web-based DSMZ service (http://ggdc.dsmz.de (accessed on 7 January 2022); [27]), with a 70% species and sub-species cut-off, was used to estimate the in silico genome-to-genome distance values for the selected strains (a small sketch of this threshold logic is given below).

Pathway Analysis

The annotated genome was analyzed using KEGG (Kyoto Encyclopedia of Genes and Genomes) to determine the existing pathways, which were then manually validated by matching the assigned gene functions to the corresponding KEGG pathways.

Genome-Wide Identification of Secondary Metabolite Biosynthesis Gene Clusters

The antibiotics and secondary metabolite analysis shell (antiSMASH) is a comprehensive resource that allows the automatic genome-wide identification and analysis of secondary metabolite biosynthesis gene clusters in bacterial and fungal genomes [28,29]. Thereby, the KR2-7 genome was submitted to the antiSMASH web server (https://antismash.secondarymetabolites.org (accessed on 22 December 2021)) to detect putative BGCs for secondary metabolites. Each identified BGC in the KR2-7 genome was aligned against the corresponding BGC in B. subtilis strain 168 and B. amyloliquefaciens strain FZB42 using Geneious Prime v.2021.2.2 to determine the BGC similarity between KR2-7, 168, and FZB42.

General Genomic Features of Strain KR2-7

The assembled genome of B. inaquosorum KR2-7 contained 4 contigs, with an N50 of 2,144,057 bp and 700× sequence coverage. The KR2-7 genome was 4,248,657 bp long, with a G+C content of 43.1% and 4265 predicted genes consisting of 4017 protein-coding genes, 50 rRNA genes, and 83 tRNA genes. Interestingly, strain KR2-7 possesses a larger number of genes contributing to amino acid transport and metabolism (322 genes), carbohydrate transport and metabolism (278 genes), inorganic ion transport and metabolism (200 genes), and secondary metabolite biosynthesis, transport, and catabolism (76 genes) than the reputable biocontrol agent B. velezensis strain FZB42 (Figure S1; Table S1). Therefore, the genome content of KR2-7 indicates that the strain has considerable potential as a biocontrol agent. The genome sequence of B. inaquosorum KR2-7 was deposited in NCBI GenBank under accession number QZDE00000000.2.
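The species-assignment thresholds cited above can be summarized as a simple rule; this helper is purely illustrative and not part of any of the cited tools:

```python
def same_species(ani_percent, ddh_percent):
    """Two genomes are assigned to the same species when the ANI is at
    or above the ~95-96% boundary of Goris et al. and the in silico
    DNA-DNA hybridization estimate is at or above the 70% GGDC cut-off.
    Illustrative helper; names and interface are assumptions."""
    return ani_percent >= 95.0 and ddh_percent >= 70.0

# Example from the results: KR2-7 vs. strain KCTC 13429, ANI = 99.26%,
# with a dDDH estimate above 70%, would classify as the same species.
print(same_species(99.26, 95.0))  # True (the 95.0 dDDH is hypothetical)
```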
Genome Phylogeny

The genomes of 31 Bacillus strains were selected for alignment with the KR2-7 genome and phylogenomic analysis. The selected strains and their corresponding genome sequence accession numbers are presented in Table 1. The selected Bacillus strains were accurately distributed on the branches of the maximum likelihood phylogenomic tree (Figure 1). Moreover, closely related Bacillus species such as B. amyloliquefaciens and B. velezensis were placed on the same branch (Figure 1). Genome-based phylogeny approaches have well recognized B. methylotrophicus, B. amyloliquefaciens subsp. plantarum, and B. oryzicola as heterotypic synonyms of B. velezensis [30]. Recently, three subspecies of Bacillus subtilis, namely B. subtilis subsp. inaquosorum, B. subtilis subsp. spizizenii, and B. subtilis subsp. stercoris, were promoted to species status through comparative genomics; each encompasses unique bioactive secondary metabolite genes which cause unique phenotypes [31]. According to the REALPHY results, strain KR2-7 was identified as B. inaquosorum, owing to being placed within the B. subtilis branch close to B. subtilis subsp. inaquosorum strains KCTC 13429 and DE111 in the phylogenomic tree (Figure 1). Notably, the results of the ANI and GGDC analyses were consistent with the REALPHY results, as the KR2-7 genome displayed the highest ANI value (99.26%) and the lowest GGDC distance (0.0075) with respect to the genome of strain KCTC 13429 (Table 1). Interestingly, the phylogeny analysis of several B. amyloliquefaciens strains based on the core genome was likewise consistent with the ANI and GGDC values [32]. Altogether, the aforesaid genome-based phylogeny approaches identified strain KR2-7 as Bacillus inaquosorum.

Secondary Metabolite Biosynthetic Gene Clusters in the KR2-7 Genome

Genome mining of strain KR2-7 revealed that more than 700 kb (i.e., nearly 17% of the genome) is devoted to 13 putative BGCs. Of the 13 BGCs found, nine were annotated: one polyketide synthase (PKS) cluster for macrolactin; five non-ribosomal peptide synthetase (NRPS) clusters for bacillibactin, bacillomycin F, bacilysin, fengycin, and surfactin; one hybrid PKS-NRPS cluster for bacillaene; one thiopeptide cluster for subtilosin A; and one head-to-tail cyclized peptide cluster for the sporulation killing factor. The nine annotated BGCs encode secondary metabolites which contribute to plant growth promotion through fungal/bacterial pathogen suppression, ISR, nutrient uptake, and plant colonization (Table 2) [33][34][35][36]. The distribution of the identified BGCs within the KR2-7 genome underlines its vigorous potential for plant disease biocontrol applications [16]. The genes coding for secondary metabolites in KR2-7 differed from those in B. velezensis FZB42, while showing more similarity to those in B. subtilis 168 (Table 2). Interestingly, the BGC of bacillomycin F in KR2-7 is absent in B. subtilis 168 and B. velezensis FZB42 (Table 2), as this gene cluster is conserved in B. inaquosorum [31]. Moreover, the KR2-7 genome contains four unannotated BGCs (data not shown) which showed little similarity to compounds listed in the MIBiG database.

Antifungal Secondary Metabolite Production in Strain KR2-7

The KR2-7 genome mining showed that this strain harbors three BGCs with antifungal function, coding for fengycin, surfactin, and bacillomycin F (a variant of iturin), all belonging to the Bacillus cyclic lipopeptides (CLPs).
Bacillus CLPs exhibit powerful fungitoxic properties by interfering with cell membrane integrity, permeabilizing the cell membrane, and perturbing the membrane osmotic balance through the formation of ion-conducting pores [37]. Strain KR2-7 not only suppressed the Fusarium wilt of tomato caused by Fusarium oxysporum f. sp. lycopersici [16] but also showed broad-spectrum antifungal activity towards various phytopathogenic fungi, including Alternaria alternata f. sp. lycopersici, Athelia rolfsii, Botrytis cinerea, Rhizoctonia solani, and Verticillium albo-atrum (Figure 2). Fengycin (plipastatin), a powerful fungitoxic compound, especially against filamentous fungi [37], is synthesized by NRPS and encoded in KR2-7 by a 39,302 bp gene cluster with five genes (ppsA-E), which showed 92% and 72.05% similarity to the fengycin gene clusters of B. subtilis 168 and B. amyloliquefaciens FZB42, respectively (Table 2). The first three genes (ppsABC) each encode two amino acid modules, the fourth gene (ppsD) encodes three amino acid modules, and the last gene (ppsE) encodes one amino acid module (Figure 3). Ions with m/z values of 1471.8, 1485.7, 1487.9, 1499.9, 1501.9, 1513.8, 1515.9, 1527.8, 1529.9, and 1543.8 were observed in the whole-cell surface extract of KR2-7 grown on a dual culture plate (hereafter, the dual culture cell extract) and were assigned to C15 to C18 fengycin homologues, while in the whole-cell surface extract of KR2-7 grown on the control plate (hereafter, the control cell extract) only four of the aforesaid peaks (m/z 1501.9, 1515.9, 1529.9, and 1543.8) were detected (Table 3, Figure S2). This result indicates that strain KR2-7 secreted various fengycin homologues to inhibit the growth of Fol. More strikingly, a 37,074 bp gene cluster encoding bacillomycin F was identified immediately downstream of the fengycin gene cluster of KR2-7 (Figure 4). Bacillomycin F is one of seven main variants within the iturin family [40], encoded by a gene cluster consisting of four genes designated ituD, ituA, ituB, and ituC. The gene cluster codes for a cyclic heptapeptide in which the first three amino acids are shared among iturin family members, whereas the remaining four amino acids are conserved in B. inaquosorum [31]. Furthermore, iturins are characterized by a heptapeptide of α-amino acids attached to a β-amino fatty acid chain with a length of 14 to 17 carbons [37]. They possess potent antifungal activity against a wide variety of fungi and yeasts, but limited antibacterial and no antiviral action [41][42][43]. These molecules also show strong haemolytic activity, which limits their clinical use [44]. The antifungal mechanism of iturins begins with their interaction with the target cell membrane and osmotic perturbation of the membrane, owing to the formation of ion-conducting pores. The resulting change in membrane permeability leads to the release of biomolecules, such as proteins, nucleotides, and lipids, from the cell, ultimately causing cell death [44,45]. In the dual culture cell extract of KR2-7, six mass peaks assigned to C16, C18, and C19 forms of iturin were observed, while they were absent in the control cell extract (Table 4, Figure S3). This result indicates that strain KR2-7 produced different variants of iturins to limit the growth of Fol hyphae [18]. Similar to fengycin, surfactin is synthesized by NRPS and encoded by a srf gene cluster that spans 26,073 bp in the KR2-7 genome.
The gene cluster harbors four genes (srfAA-AD) and showed 92.20% and 74.65% similarity to those of B. subtilis 168 and B. amyloliquefaciens FZB42, respectively. The product of the srf gene cluster is a linear array of seven modules, six of which are encoded by the srfAA and srfAB genes, with the last module encoded by the srfAC gene (Figure 5). The fourth gene (srfAD) encodes a thioesterase/acyltransferase (Te/At domain) which initiates surfactin biosynthesis [37]. In addition, the sfp gene encodes an enzyme (phosphopantetheinyl transferase) essential for the non-ribosomal synthesis of lipopeptides and the synthesis of polyketides. The regulatory gene yczE, encoding an integral membrane protein, was also detected within the KR2-7 genome (Figure 5). Surfactin enables bacterial cells to interact with plant cells as a bacterial elicitor stimulating ISR [37], especially through the activation of jasmonate- and salicylic acid-dependent signaling pathways [46]. Several studies have indicated the ISR-elicitor role of surfactin against phytopathogens in various host plants, e.g., tomato [47], wheat [48], citrus fruit [49], lettuce [50], and grapevine [51]. Comparing the MALDI-TOF mass spectra of KR2-7 grown on the PDA control and in dual culture revealed that surfactin contributed to the suppression of Fol, as eight mass peaks assigned to C13, C14, and C15 surfactin homologues were detected in the dual culture cell extract, while only four of them were observed in the control cell extract (Table 5, Figure S3). The bacterium thus produced more surfactin to suppress Fol. Additionally, C14 and C15 surfactins tend to stimulate stronger ISR than those with shorter chain lengths [47]. Moreover, the suppression of taxonomically diverse fungal pathogens, including Fusarium oxysporum, F. moniliforme, F. solani, F. verticillioides, Magnaporthe grisea, Saccharicola bicolor, Cochliobolus hawaiiensis, and Alternaria alternata, by the surfactin family demonstrates that surfactins are strong fungitoxic compounds [52][53][54][55].

Antibacterial Secondary Metabolite Production in Strain KR2-7

The KR2-7 genome contained six BGCs coding for antibacterial compounds: bacillaene, macrolactin, the sporulation killing factor (skf), subtilosin A, bacilysin, and surfactin. Several studies on surfactin and its isoforms have proved that these metabolites play a major role in combating bacterial plant diseases, such as fruit blotch caused by Acidovorax citrulli in melon [56], tomato wilt caused by Ralstonia solanacearum [57], and root infection by Pseudomonas syringae in Arabidopsis [58]. Moreover, surfactin produced by B. subtilis R14 exhibited pronounced antagonistic efficacy against several multidrug-resistant bacterial strains of Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus, and Enterococcus faecalis [59]. Bacillaene is a polyketide known as a selective bacteriostatic agent that inhibits prokaryotic, but not eukaryotic, growth by disrupting protein synthesis [60]. Its antimicrobial efficacy against various bacteria (Myxococcus xanthus and Staphylococcus aureus) and fungi (Fusarium spp.) has been reported [60][61][62]. In the KR2-7 genome, bacillaene is synthesized by a PKS/NRPS hybrid pathway and encoded by a giant pks gene cluster (76.355 kbp) containing 16 genes (pksA-S and acpK), showing 89.63% and 75.25% similarity to those of B. subtilis 168 and B. velezensis FZB42, respectively (Table 2, Figure 6B).
Another polyketide, macrolactin, is encoded by a 54.225 kbp gene cluster in strain KR2-7 that showed 74.12% similarity to the mln cluster of B. velezensis FZB42 (Table 2, Figure 6A). Macrolactins are a large class of macrolide antibiotics that inhibit the growth of several bacteria, including Ralstonia solanacearum, Staphylococcus aureus, and Burkholderia cepacia [63,64]. Bacilysin (also known as tetaine) is a dipeptide suppressing a wide variety of destructive phytopathogenic bacteria, e.g., Erwinia amylovora, Xanthomonas oryzae pv. oryzae, X. oryzae pv. oryzicola, and Clavibacter michiganense subsp. sepedonicum [65][66][67]. This bactericidal property is due to the inhibition of glucosamine-6-phosphate synthase by the anticapsin moiety of bacilysin; its inhibition represses the biosynthesis of peptidoglycans, the essential constituents of the bacterial cell wall [68,69]. In the KR2-7 genome, bacilysin is encoded by a 7128 bp bac gene cluster consisting of seven genes (bacA-E, ywfAG) and displays high gene similarity to that of B. subtilis 168 (Table 2, Figure 7). This metabolite and its derivatives were detected neither in the KR2-7 control cell extract nor in the dual culture cell extract, likely due to the culture conditions or the assay method. Furthermore, the KR2-7 genome encompassed two distinct gene clusters encoding bacteriocins: subtilosin A and the sporulation killing factor (SKF). Subtilosin A is a macrocyclic anionic antimicrobial peptide originally obtained from the wild-type strain B. subtilis 168 [70] but also produced by B. amyloliquefaciens and B. atrophaeus [71,72]. This bacteriocin displays a bactericidal effect on a broad spectrum of bacteria, including Gram-positive and Gram-negative bacteria and both aerobes and anaerobes [73], possibly through an interaction with membrane-associated receptors, or binding to the outer cell membrane, leading to membrane permeabilization [73][74][75]. Subtilosin A is ribosomally synthesized by an alb gene cluster containing eight genes (albA-G, sboA) spanning 6.8 kbp in the KR2-7 genome (Figure 8). The sboA gene encodes presubtilosin, and the albA-G genes encode proteins that function in presubtilosin processing and subtilosin export [76]. The mass peaks corresponding to subtilosin A and its homologues appeared neither in the KR2-7 control cell extract nor in the dual culture cell extract; these peaks might become detectable by altering the culture conditions and/or the evaluation procedure. The KR2-7 genome also harbored a 5976 bp skf gene cluster encompassing skfABCEFGH, involved in the production and release of killing factors during sporulation (Figure 9). During the early stages of sporulation, sporulating cells of B. subtilis exude extracellular killing factors to kill the non-sporulating sister cells that have not developed immunity to these toxins. As a result, nutrients from the dead cells are released and then used by the sporulating cells to resume their growth. This phenomenon is termed "cannibalism" and causes a delay in sporulation [77,78]. The SKF bacteriocin produced by the sporulating cells can also destroy other soil-inhabiting bacteria. Similarly, the expression of skf genes in B. subtilis inhibits the growth of X. oryzae pv. oryzae, the causative agent of rice bacterial blight [79].

Figure 9. The biosynthetic gene cluster of the sporulation killing factor antibacterial metabolite in strain KR2-7.
Plant Colonization by Strain KR2-7

The most crucial step for a PGPR (plant growth-promoting rhizobacteria) agent to survive, enhance plant growth, and suppress plant disease is the efficient colonization of plant tissues. The plant colonization process comprises two steps. In the first step, PGPR agents reach the surface of the plant tissue either by passive movement in the water flow or by flagellar movement. The second step is to establish the plant-bacterium interaction, which relies on bacterial biofilm formation [36,80]. The KR2-7 genome harbored the gene clusters for flagellar assembly (the flg, flh, and fli clusters) and bacterial chemotaxis (the che cluster), together with other genes known to be necessary for swarming motility, including hag, two stator elements (motAB), and the regulatory genes swrAA, swrAB, swrB, and swrC (Table 6). In the efficient colonization step, the PGPR agent forms a bacterial biofilm that not only strengthens the plant-bacterium interaction but also protects the plant root system as a bio-barrier against pathogen attacks [80]. The main component of bacterial biofilm is the extracellular polymeric substances (EPS), whose chemical composition includes proteins, neutral polysaccharides, charged polymers, and amphiphilic molecules [80]. The eps cluster (epsC-O) encoding the biofilm exopolysaccharide, its regulatory genes sinR and abrB (repressors) and sinI (antirepressor), the yqxM-sipW-tasA gene cluster encoding the amyloid fiber (the TasA protein of the biofilm), and pgcA encoding phosphoglucomutase were found in the KR2-7 genome (Table 6). Moreover, the involvement of surfactin in cell adhesion and biofilm formation, owing to its 3D topology and amphiphilic nature, has been illustrated [81,82]. Bais et al. [58] reported that abolishing surfactin gene expression in B. subtilis strain 6051 led to an inability to form a robust biofilm on the Arabidopsis root surface and reduced the suppression of disease caused by Pseudomonas syringae. Besides, a deficiency in surfactin production in B. subtilis strain UMAF6614 resulted in a biofilm formation defect on the melon phylloplane and partially reduced the suppression of bacterial soft rot, bacterial leaf spot, and cucurbit powdery mildew by the biocontrol strain [83].

Genes Involved in Bacterium-Plant Interactions

Quite apart from their antagonistic mechanisms, bacterial biocontrol strains are also involved in plant growth augmentation by making nutrients available to host plants, producing plant growth-promoting hormones, and inducing systemic resistance within the plant through the secretion of specific metabolites [96,97]. Similar to other biocontrol microorganisms, the KR2-7 genome contains genes/gene clusters related to plant growth promotion (Table 6). The KR2-7 genome contained the moaA-E genes encoding the molybdenum cofactor, which may be a relic of a nitrogen-fixing gene cluster or a cofactor for nitrogen assimilation [80]. Moreover, the genes for nitrate reduction (narG-J), nitrate transport (narK), a probable transcription regulator (arfM), a regulatory protein (fnr), an ammonium transporter (nrgA) and its regulator gene (nrgB), along with the nas gene cluster (nasA-F), were also identified in the KR2-7 genome. The nas gene cluster is involved in nitrite transport and reduction (Table 6).
Furthermore, ions with m/z values of 883.4 and 905.2 were detected in the KR2-7 dual culture cell extract and were identified as bacillibactin [M + H]+ and bacillibactin [M + Na]+ (Figure 10) by comparison with previously reported data [99,100]. Notably, the molecular ion peaks corresponding to bacillibactin were not observed in the control cell extract of KR2-7. Siderophores are low-molecular-weight molecules with a high affinity for ferric iron that solubilize iron from minerals and organic compounds under iron-limited conditions [101]. Siderophore-producing bacterial strains impact plant health by supplying iron to the host plants [102,103], depriving fungal pathogens of iron [104], and suppressing fungal phytopathogens, including F. oxysporum f. sp. capsici [105] and Phytophthora capsici [106]. Furthermore, it has been reported that siderophores mitigate heavy metal contamination of soil through the formation of stable complexes with toxic environmental metals such as Al, Cd, Cu, Ga, In, Pb, and Zn [101]. Volatile organic compounds (VOCs) produced by PGPR agents play a significant role in promoting plant growth through the regulation of the synthesis or metabolism of phytohormones [107], the induction of systemic disease resistance [108,109], and the control of plant pathogens [110]. 2,3-Butanediol and 3-hydroxy-2-butanone (acetoin) are the best-known growth-promoting VOCs produced by B. subtilis and B. amyloliquefaciens. The KR2-7 genome harbored the als gene cluster (alsR, alsS, alsD) along with the bdhA gene, which together are required for the biosynthesis of 2,3-butanediol from pyruvate. In this pathway, alsS encodes the acetolactate synthase enzyme, which catalyzes the condensation of two pyruvate molecules into acetolactate. Then, acetolactate decarboxylase, encoded by alsD, decarboxylates acetolactate into acetoin. alsR regulates these two steps. Finally, the (R,R)-butanediol dehydrogenase enzyme encoded by bdhA converts 3-hydroxy-2-butanone (acetoin) to 2,3-butanediol [111]. In addition, the KR2-7 genome contained the ilvH, ilvB, and ilvC genes and a leu gene cluster (leuABCD), which are required for the biosynthesis of the three branched-chain amino acids (BCAAs) leucine, isoleucine, and valine. Acetolactate is a central metabolite between 2,3-butanediol and BCAA biosynthesis and can be involved in both anabolism and catabolism via acetolactate decarboxylase. It has been reported that acetolactate decarboxylase is an enzyme with a dual role that can direct acetolactate flux to catabolism in favour of valine and leucine biosynthesis, or can catalyze the second step of the 2,3-butanediol anabolic pathway [112]. Bacillus spp. can enhance plant growth through the synthesis of plant growth-promoting hormones, such as the auxin indole-3-acetic acid (IAA) and gibberellic acid. The KR2-7 genome may also encompass genes/gene clusters responsible for the biosynthesis of indole acetic acid, phytase, and trehalose (Table 6). Moreover, a large variety of PGPRs produce polyamines, such as putrescine, spermine, spermidine, and cadaverine, which are known to be involved in promoting plant growth and improving abiotic stress tolerance in plants [113]. The genes coding for arginine decarboxylase (SpeA), agmatinase (SpeB), and spermidine synthase (SpeE), which direct polyamine biosynthesis, were also found in the KR2-7 genome (Table 6).
Discussion

Previously, the Bacillus subtilis species complex comprised four close subspecies, i.e., subspecies subtilis, spizizenii, inaquosorum, and stercoris, which were differentiated through phylogenetic analysis of multiple protein-coding genes and genome-based comparative analysis [114,115]. B. subtilis subsp. inaquosorum was deemed a distinctive taxon encompassing strains KCTC 13429 and NRRL B-14697 [116]. Recent phylogenomic studies clearly distinguished subspecies inaquosorum from subspecies spizizenii, as the estimated ANI between them was smaller than the ANI defined for species delineation (95%) [117]. In addition to the low ANI value (<95%), the BGC of subtilin is exclusively present in the genomes of subspecies spizizenii and was not characterized in the subspecies inaquosorum genome [115]. Accordingly, B. inaquosorum KR2-7 was clearly differentiated from B. subtilis subsp. spizizenii W23 because of the low ANI value between them (94.18%) and the lack of the subtilin gene cluster in the genome of strain KR2-7. In addition, it has been reported that B. inaquosorum is the only species to produce bacillomycin F; this was confirmed by detecting a unique MALDI-TOF-MS biomarker at m/z 1120 in the MALDI-TOF-MS spectra of B. inaquosorum that is not produced by other species [114]. Since this unique biomarker (m/z value 1120.6) was observed in the MALDI-TOF-MS spectra of strain KR2-7, it can be concluded that this strain is a B. inaquosorum. Recently, the ability of B. inaquosorum strain HU Biol-II to produce bacillomycin F was confirmed through HPLC-MS/MS [15]. Most recently, subspecies spizizenii, inaquosorum, and stercoris were promoted to species status through a comparative genome study [118]. This study determined that each subspecies has unique secondary metabolite genes encoding unique phenotypes, so that each subspecies could be promoted to a species. According to the aforesaid results, strain KR2-7 was identified as B. inaquosorum. The genome-driven data highlighted the plant-beneficial functions of strain KR2-7. This strain can efficiently colonize the plant root surface, relying on its swarming motility and biofilm formation abilities. Efficient root colonization by biocontrol bacteria is necessary for suppressing phytopathogens, and biofilm formation is an essential prerequisite for persistent root colonization [119,120]. A biofilm-deficient mutant of B. pumilus HR10 produced weakened biofilms with reduced contents of extracellular polysaccharides and proteins, and thereby could not efficiently control pine seedling damping-off disease [121]. Hence, the suppression of tomato Fusarium wilt by strain KR2-7 [16] may be attributable to efficient colonization of the tomato root by this strain. In addition to efficient root colonization, strain KR2-7 is able to directly suppress soil-dwelling phytopathogens by producing eight antimicrobial secondary metabolites, i.e., fengycin, surfactin, bacillomycin F, macrolactin, bacillaene, bacilysin, subtilosin A, and the sporulation killing factor. Combining the data obtained via MALDI-TOF-MS with our previous observations [16] confirmed that strain KR2-7 produced at least four bioactive metabolites (fengycin, surfactin, macrolactin, and bacillaene) to directly protect the tomato plant from the invasion and penetration of Fol.
The cyclic lipodecapeptide fengycin exhibits strong fungitoxic properties by inhibiting phospholipase A2 and aromatase functions [122], disrupting biological membrane integrity [123], deforming and permeabilizing hyphae [124,125], and inducing ISR [126]. In this context, the strong antifungal activity of B. inaquosorum strain HU Biol-II against a diverse group of fungi was largely attributed to the fengycin produced by this strain; interestingly, the ppsA-E gene cluster in KR2-7 was 97.47% similar to the fengycin gene cluster in strain HU Biol-II [15]. Fengycin produced by B. subtilis SQR9 and B. amyloliquefaciens NJN-6 significantly inhibited the growth of F. oxysporum [127,128]. Moreover, fengycin BS155, isolated from B. subtilis BS155, destroyed Magnaporthe grisea by damaging the plasma membrane and cell wall, disrupting the mitochondrial membrane potential (MMP), condensing chromatin, and inducing reactive oxygen species (ROS) [129]. In addition to fengycin, the contribution of other secondary metabolites to the biocontrol of various pathogens has been reported. The supernatant of B. subtilis GLB191, containing surfactin and fengycin, strongly controlled grapevine downy mildew caused by Plasmopara viticola by means of direct antagonistic activity and the stimulation of plant defence [51]. Furthermore, the strong antagonistic effect of B. velezensis strains Y6 and F7 against Ralstonia solanacearum and F. oxysporum was attributed to the production of fengycin, iturin, and surfactin, among which iturin played the key role in the suppression of F. oxysporum [130]. The biocontrol mechanism of B. amyloliquefaciens DH-4 against Penicillium digitatum, the causal agent of citrus green mold, was the secretion of a cocktail of antimicrobial compounds consisting of macrolactin, bacillaene, iturins, fengycin, and surfactin [100]. Additionally, strain KR2-7 can secrete compounds such as IAA, phytase, and trehalose for root uptake and can rebalance hormones in the host plant to boost growth and the stress response. Phytate (inositol hexa- and penta-phosphates) is the predominant form of soil organic phosphorus, which is unavailable for plant uptake due to the rapid immobilization of phosphorus and the lack of adequate phytase levels in plants [131]. Phytase is a phosphatase enzyme responsible for the transformation of organic phytate into inorganic phosphate, which is accessible to plant roots. Accordingly, phytase-producing Bacillus strains can effectively enhance plant growth by liberating reactive phosphorus from phytate and making this element available for plant uptake. In the presence of phytate, a comparison of the culture filtrate of B. amyloliquefaciens strain FZB45 with that of a phytase-deficient mutant provided evidence that the phytase activity of strain FZB45 enhanced the growth of corn seedlings [132]. The bacterization of Brassica juncea with Bacillus sp. PB-13 considerably boosted phosphorus content and growth parameters in 30-day-old seedlings [133]. More recently, soil inoculation with Bacillus strain SD01N-014 resulted in enhanced soil phosphorus content and promoted maize seedling growth [134]. Accordingly, extracellular phytase activity of strain KR2-7, mediated by the phy gene, can be expected. In addition, the presence of genes involved in the biosynthesis of IAA and trehalose in the KR2-7 genome (Table 6) is an indication of this strain's potential to mitigate toxic salt stress in plants.
Inoculation of tomato plants subjected to salt stress with the OxtreS (trehalose-overexpressing) mutant of Pseudomonas sp. UW4 considerably boosted the dry weight, root and shoot length, and chlorophyll content of the tomato plants [135]. Moreover, canola seedlings treated with an IAA-overexpressing transformant of UW4 developed longer primary roots with an increased number of root hairs compared with seedlings treated with wild-type UW4 [136]. The growth promotion of root hairs by IAA improves the assimilation of water and nutrients from the soil, which in turn raises plant biomass [136]. Similarly, Japanese cypress seedlings inoculated with B. velezensis CE 100 showed significant increases in growth parameters and biomass due to the production of indole-3-acetic acid (IAA) by the CE 100 strain [137].

Conclusions

According to the genome-driven data, along with the chemical analysis results, strain KR2-7 most likely exploits four possible modes of action to control tomato Fusarium wilt, as shown in Figure 11: (1) inhibition of pathogen growth through the diffusion of antifungal and antibacterial secondary metabolites and biofilm formation; (2) stimulation of ISR in tomato via the production of surfactin and volatile organic compounds; (3) promotion of plant health and growth by producing plant growth-promoting hormones and polyamines, supplying iron to the tomato, depriving the pathogen of iron, and relieving heavy metal stress in the soil as a result of the activity of the siderophore bacillibactin; and (4) efficient colonization of plant roots. The described modes of action are strongly based on the identified gene clusters encoding secondary metabolites and the characterized genes/gene clusters involved in plant colonization, plant growth promotion, and ISR. Future studies using integrated omics approaches and the mutagenesis of strain KR2-7 are required to confirm the aforesaid possible modes of action of strain KR2-7 and the exact functions of the putative genes and gene clusters in the suppression of the fungal pathogen Fol.

Figure 11. Schematic presentation of the putative biocontrol mechanism of strain KR2-7 against Fol. (A) An untreated tomato plant in which Fol (yellow 16-point star) penetrated the root tissue, then colonized and blocked the vascular system, preventing water and nutrients from being transferred to the plant organs. This causes yellowing beginning with the bottom leaves, followed by wilting, browning, and defoliation; growth is typically stunted, and little or no fruit develops. (B) Strain KR2-7 (blue rod) reaches the tomato root and colonizes the root surface through its motility and biofilm formation. As a result of root colonization, strain KR2-7 diffuses a wide variety of antifungal and antibacterial secondary metabolites to establish a protective zone (green dashed semicircle) in the tomato rhizosphere. Strain KR2-7 directly limits the invasion of the fungal pathogen Fol through the diffused antifungal secondary metabolites and also controls bacterial pathogens of tomato by means of the produced antibacterial secondary metabolites. Meanwhile, volatile organic compounds and surfactin stimulate tomato systemic resistance to provide ISR-mediated protection (yellow dashed arrow) against phytopathogens. Moreover, tomato growth is enhanced by growth-promoting hormones, polyamines, and the siderophore bacillibactin.
Effect of Inductively Coupled Plasma Etching Parameters on n-Al0.5Ga0.5N Ohmic Contact

Shanshan Yang, Meixin Feng, Yuzhen Liu, Wenjun Xiong, Biao Deng, Yingnan Huang, Chuanjie Li, Qiming Xu, Yanwei Shen, Qian Sun, and Hui Yang

Abstract: High-Al-content n-AlGaN ohmic contact is very important for deep ultraviolet optoelectronic devices. However, it often suffers from the etching damage formed during inductively coupled plasma (ICP) etching. In this paper, the effects of ICP etching parameters on n-Al0.5Ga0.5N ohmic contact, including RF power, ICP power, and etching gas, were systematically investigated and analyzed by X-ray photoelectron spectroscopy and the circular transmission line model. Finally, an n-Al0.5Ga0.5N ohmic contact was achieved with a low specific contact resistivity of 8.7×10⁻⁴ Ω·cm², and AlGaN-based UVC light-emitting diodes (LEDs) showed a low operating voltage of only 5.6 V at an injection current density of 16 A/cm².

Index Terms: n-AlGaN, ohmic contact, ICP etching, XPS.

I. INTRODUCTION

AlGaN materials are very suitable for the fabrication of ultraviolet (UV) optoelectronic devices, such as UV LEDs, which can be widely used in sterilization, water purification, UV curing, specialty lighting, bio-phototherapy, etc. [1], [2], [3], [4]. However, in UV optoelectronic devices, as the Al molar fraction in the AlGaN material increases, the energy bandgap increases, which makes a low-resistivity ohmic contact more and more difficult to obtain [5], [6], [7]. On the other hand, to fabricate UV LEDs or laser diodes, ICP dry etching is often utilized to expose the buried n-AlGaN layer for the fabrication of the ohmic contact [4], [8], [9], [10], [11], [12]. Therefore, it is crucial to study the effect of etching parameters on n-AlGaN ohmic contact in order to achieve a low specific contact resistivity for the fabrication of high-performance UV optoelectronic devices [13], [14], [15], [16].
During the dry etching process, the distribution and state of surface elements in the AlGaN material change, leading to the generation of defects such as vacancies and dangling bonds [17], [18], [19]. For n-type GaN, the nitrogen vacancy induced by dry etching often acts as a donor, and the Fermi level in n-GaN then moves upward after plasma treatment, which facilitates the formation of an n-type ohmic contact [20], [21], [22], [23]. However, the situation is different for n-type AlGaN with high Al content, where the etching damage tends to form deep-level defects, making ohmic contact very difficult and hence affecting the electrical characteristics of the device [24], [25], [26], [27]. Previous studies reported that plasma pretreatment of n-AlGaN before metal contact was used to change the material quality and morphology of the etched surface. However, few efforts have been made to systematically study the effects of plasma etching parameters on the ohmic contact of n-AlGaN with high aluminum content [8], [28], [29], [30].

This paper methodically investigates the impact of ICP etching parameters on high-Al-content n-AlGaN metal contact. CTLM and XPS were used to characterize the specific contact resistivity, the Ga 3d core level, and the surface stoichiometry of the n-AlGaN samples. As a result, an optimal ICP etching condition is given that yields a low specific contact resistivity for n-Al0.5Ga0.5N. Furthermore, this optimal ICP etching condition was utilized in practical AlGaN-based UVC LEDs.

II. EXPERIMENTAL METHODS

Fig. 1 illustrates a schematic diagram of the n-Al0.5Ga0.5N samples used in this study. The epitaxial layers were grown by metal-organic chemical vapor deposition (MOCVD) and consisted of an AlN/AlGaN multilayer buffer, a 2-μm-thick unintentionally doped AlGaN layer, and a 2-μm-thick Si-doped n-Al0.5Ga0.5N layer with a Si doping concentration of 1×10¹⁹ cm⁻³. Hall measurement showed that the effective electron concentration of the n-Al0.5Ga0.5N layer was 1×10¹⁹ cm⁻³. The manufactured n-AlGaN wafer was cut into eight individual samples to study the effect of ICP etching parameters on n-Al0.5Ga0.5N ohmic contact, as shown in Table I.

Table I. Various ICP etching conditions for the eight n-AlGaN samples.

Prior to metal deposition, the samples were treated by ICP dry etching in an Oxford Plasmalab system at 20 °C for 4 min with a chamber pressure of 10 mTorr; the detailed ICP etching parameters are shown in Table I. After plasma treatment, all samples were immersed in diluted HCl solution (HCl:H2O = 1:2) for 10 min and then rinsed with deionized water for 5 min. After that, circular transmission line model (CTLM) patterns were transferred to the sample surface by photolithography, followed as soon as possible (within 5 min) by the sputtering of a Cr/Ti/Al/Ti/Ni/Au (40/30/100/70/60/80 nm) contact metal stack by magnetron sputtering and the lift-off process. After rapid thermal annealing in N2 for 2 min at 950 °C, the samples were subjected to current-voltage (I-V) measurements using a four-point probe technique.
CTLM patterns were used to measure the specific contact resistivity of the samples. For the CTLM patterns, the inner radius (r) was 150 μm, and the spacing (R) between the inner and outer radii varied from 10 to 80 μm; the extraction of the specific contact resistivity from these patterns is sketched below. I-V tests were performed with a Keithley 2400 sourcemeter, and the surface stoichiometry of the dry-etched samples was quantitatively characterized by XPS with an Al Kα X-ray source energy of 1486.6 eV (ULVAC-PHI 5000 VersaProbe II). The peak energy positions of the XPS spectra were calibrated to the C 1s peak at a binding energy of 284.8 eV, and the emission angle was set to 45°. The XPS spectra were processed with Shirley-type background deduction and fitted with a combination of Gaussian and Lorentzian line shapes (the specific contact resistivity and surface stoichiometry of the eight samples are summarized in Table II). The atomic force microscopy (AFM) image was measured with a Veeco Dimension 3100.

III. RESULTS AND DISCUSSION

RF power plays an important role in ICP etching. As the RF power increases, the plasma energy rises and the physical attack of the plasma on the etched surface is enhanced, so the etching rate increases. Therefore, we first studied the impact of RF power on n-Al0.5Ga0.5N ohmic contact by comparing samples 2, 4, and 6. As shown in Fig. 2(a), sample 2 exhibits a non-ohmic contact, while samples 4 and 6 demonstrate ohmic contact behavior. To reveal the underlying mechanism, XPS analysis was performed on these samples prior to contact metal deposition. As shown in Fig. 2(b), when the RF power was 100 W, the Ga 3d peak energy of the n-AlGaN sample was 19.44 eV. When the RF power was decreased to 50 W, the Ga 3d peak energy increased to 20.18 eV, indicating that sample 4 has a higher Fermi level than sample 2 and hence a lower barrier height at the sample surface, which helps current conduction through tunneling and forms a low-resistivity ohmic contact. It also implies that fewer compensating defect centers were introduced, which is further supported by the surface stoichiometric ratios obtained from XPS. As shown in Table II, the nitrogen content increased from 45.3% to 54.3%, indicating fewer nitrogen vacancies, which usually act as deep-level defects that compensate the surface electron concentration. These results are also consistent with the specific contact resistivity (ρc). Compared with sample 2, sample 4 showed ohmic contact behavior with a specific contact resistivity of 8.7×10⁻⁴ Ω·cm² obtained from the CTLM model. When the RF power was further reduced to 20 W, the Ga 3d peak energy of the n-AlGaN sample was 19.88 eV, indicating that the Fermi level of sample 6 moves downward compared with sample 4, which increases the barrier and inhibits current conduction. This points to more nitrogen vacancies and a higher specific contact resistivity for sample 6, as confirmed by Table II: compared with sample 4, the nitrogen content of sample 6 decreased to 52.2%, and its specific contact resistivity increased to 9.4×10⁻⁴ Ω·cm². In ICP etching, the molecules are accelerated by the RF source to bombard the sample; the RF power directly affects the plasma density and the energy available to break the Ga-N and Al-N bonds at the surface, which drives the AlGaN etching but also forms etching damage such as nitrogen vacancies (V_N) [31], [32]. An RF power that is too high or too low is thus unsuitable for forming a low-resistivity ohmic contact.
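As context for how ρc is obtained from CTLM data: in the standard CTLM analysis, the total resistance between the inner dot and the outer pad grows with ln(R/r) plus transfer-length terms, and fitting the measured resistances against gap spacing yields the sheet resistance R_sh and transfer length L_T, from which ρc = R_sh·L_T². The sketch below is a minimal illustration of this standard procedure, not the authors' actual analysis script; the example resistance values are made up.

```python
import numpy as np

# Minimal CTLM analysis sketch (standard method, not the authors' script).
# R_total(d) = (R_sh / (2*pi)) * [ln((r+d)/r) + L_T*(1/r + 1/(r+d))]
# Fit measured resistances vs gap d to get sheet resistance R_sh and
# transfer length L_T, then rho_c = R_sh * L_T**2.

r = 150e-4                                       # inner radius, cm (150 um)
gaps = np.array([10, 20, 40, 60, 80]) * 1e-4     # gap spacings, cm
R_meas = np.array([4.1, 7.4, 13.2, 18.1, 22.4])  # ohms (made-up example data)

# Linearize: R_T = c1*x1 + c2*x2 with c1 = R_sh/(2*pi) and c2 = c1*L_T
x1 = np.log((r + gaps) / r)
x2 = 1.0 / r + 1.0 / (r + gaps)
A = np.column_stack([x1, x2])
(c1, c2), *_ = np.linalg.lstsq(A, R_meas, rcond=None)

R_sh = 2 * np.pi * c1        # sheet resistance, ohm/sq
L_T = c2 / c1                # transfer length, cm
rho_c = R_sh * L_T**2        # specific contact resistivity, ohm*cm^2
print(f"R_sh = {R_sh:.1f} ohm/sq, L_T = {L_T*1e4:.2f} um, rho_c = {rho_c:.2e} ohm*cm^2")
```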
The etching gas also plays a role in ICP etching [33]. Therefore, we studied its impact on n-AlGaN ohmic contact by comparing samples 3 and 4. As shown in Fig. 3(a), sample 3 exhibits non-ohmic contact, while sample 4 demonstrates ohmic contact behavior, which is mainly attributed to the nitrogen vacancies formed by the dry etching. As shown in Table II, sample 4 shows a higher nitrogen content than sample 3, indicating fewer nitrogen vacancies to compensate the surface electron concentration and hence a lower barrier height, which is further confirmed by the Ga 3d core levels. As shown in Fig. 3(b), sample 4 shows a higher binding energy than sample 3, indicating that the energy difference between the surface Fermi level and the valence band edge is larger and the surface barrier height is smaller, which favors the formation of ohmic contact. In ICP etching, the inductively coupled glow discharge converts the etching gas mainly into active free radicals, metastable molecules, atoms, etc., which interact with the AlGaN surface physically or chemically. Since physical etching easily generates nitrogen vacancies, because the nitrogen atom is lighter than Al or Ga atoms, chemical etching is preferable. Compared with BCl₃, Cl₂ contributes a larger share of chemical etching, so pure Cl₂ etching gas is more favorable for n-AlGaN ohmic contact.

ICP power is another key parameter in ICP etching, and we studied its effect on n-AlGaN ohmic contact by comparing samples 6 and 7. As shown in Fig. 4(a), both samples exhibit ohmic contact behavior. However, sample 6 had a larger current and smaller resistance under the same voltage, corresponding to a lower specific contact resistivity and a higher nitrogen content, as shown in Table II. Fig. 4(b) shows the XPS spectra of the Ga 3d core levels for samples 6 and 7. The Ga 3d binding energy of sample 6 was 0.84 eV larger than that of sample 7, indicating fewer surface nitrogen vacancies and a lower barrier height, which contributes to the lower specific contact resistivity of sample 6. In ICP etching, under the action of a high-frequency electric field, the ICP power excites the plasma, generating the electrons and ions that etch the sample surface. A high ICP power can reduce the nitrogen vacancy concentration and is therefore preferable for n-AlGaN ohmic contact.

Based on the above analysis, sample 4 represents the optimal ICP etching condition, with the lowest specific contact resistivity of 8.7×10⁻⁴ Ω·cm², which is even lower than that of the unetched n-AlGaN sample 8, as shown in Fig. 5(a) and Table II.
Fig. 5(b) shows the XPS spectra of the Ga 3d core levels for samples 4 and 8. The Ga 3d binding energy of sample 4 was 0.76 eV higher than that of sample 8, indicating a higher surface Fermi level; Table II correspondingly shows a lower nitrogen content for sample 4 than for the unetched sample 8. This illustrates that after ICP etching the surface electron concentration increased and the Fermi level moved upward toward the conduction band, so a low-resistivity ohmic contact was formed on the high-Al-content AlGaN sample. It should be noted that the etching gases used here are Cl₂ and BCl₃, without any oxygen, so oxygen has almost no impact on the etching process, including the etching rate. The oxygen detected in the XPS measurements is mainly due to surface contamination while transferring the samples out of the ICP equipment. As shown in Table II, the oxygen content of all measured samples was about 10% and showed no clear correlation with the measured specific contact resistivity. We therefore conclude that the oxygen content in these samples has little effect on the measured results.

To verify the above experimental results, we applied the optimal ICP etching condition of sample 4 to a practical AlGaN-based UVC LED device. The ICP and RF powers were 300 and 50 W, respectively, and the etching gas was pure Cl₂. The detailed epitaxial structure, fabrication process, and measurement method can be found in our previous work [34]. Fig. 6(a) shows the AFM image of the etched sample, with a surface roughness of 0.4 nm, almost equal to that of the as-grown sample. As shown in Fig. 6(d), for a UVC LED with a dimension of 305 × 508 μm², at an injection current of 20 mA, corresponding to a current density of 16 A/cm², the operation voltage was as low as 5.6 V and the differential series resistance was only 16.6 Ω, much lower than values reported in the literature [2], [35]. The output power was 4.8 mW, corresponding to a high wall-plug efficiency of 4.3%.

IV. CONCLUSION

In summary, we have systematically studied the effects of ICP etching parameters on n-Al0.5Ga0.5N ohmic contact by XPS and CTLM. The experimental results showed that an appropriate RF power of 50 W, pure Cl₂ etching gas, and a high ICP power of 300 W contribute to a low nitrogen vacancy concentration and a high surface Fermi level and Ga 3d core level, corresponding to a small specific contact resistivity of 8.7×10⁻⁴ Ω·cm² for n-Al0.5Ga0.5N. Additionally, this optimized ICP etching condition was applied to a practical AlGaN-based UVC LED, which showed a low operation voltage of only 5.6 V at an injection current density of 16 A/cm².

Fig. 5. (a) The measured resistance as a function of ln(R/r) for samples 4 and 8. (b) XPS spectra of Ga 3d core levels for samples 4 and 8.

Fig. 6. (a) AFM image of the sample surface after ICP etching. (b) Luminescence photograph of the 305 × 508 μm² UVC LED device. (c) Electroluminescence spectrum of the as-fabricated AlGaN-based UVC LED. (d) Light output power, voltage, and wall-plug efficiency of the UVC LED at various injection currents.
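As a quick sanity check on the device figures reported above, the wall-plug efficiency follows directly from the quoted operating point; the short sketch below is ours, not from the paper, and simply reproduces the 4.3% value.

```python
# Wall-plug efficiency check from the quoted operating point (our sketch).
I = 20e-3        # injection current, A
V = 5.6          # operation voltage, V
P_out = 4.8e-3   # optical output power, W

wpe = P_out / (I * V)      # WPE = optical power / electrical input power
print(f"WPE = {wpe:.1%}")  # -> WPE = 4.3%
```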
3,521
2024-10-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Two-Step Phase Retrieval Algorithm Using Single-Intensity Measurement

Aiming at the problem that the single-intensity phase retrieval method has poor reconstruction quality and a low probability of successful recovery, an improved method is proposed in this paper. Our method divides the phase retrieval into two steps: first, the GS algorithm is used to recover the amplitude in the spatial domain from the single recorded Fourier spectrum, and then the classical GS algorithm using two intensity measurements (one recorded and the other estimated from the first step) is used to recover the phase. Finally, the effectiveness of the proposed method is verified by numerical experiments.

Introduction

Most of the information in an optical field is contained in the phase, such as the depth and shape of the object. The oscillation frequency of the optical field is as high as 10¹⁵ Hz, so existing detection devices can only record intensity directly [1]. Phase retrieval methods recover the lost phase from recorded intensity measurements with some prior knowledge, which plays an important role in the imaging field. The phase retrieval problem arises in numerous areas, such as crystallography [2-4], optics [5-9], astronomical imaging [10], microscopy [11, 12], biomedicine [13], and holographic imaging [14, 15].

In 1972, Gerchberg and Saxton first proposed an alternating-projection-based phase retrieval algorithm, the Gerchberg-Saxton (GS) algorithm [16]. The main idea of the GS algorithm is to use the intensities in the spatial domain and the Fourier domain to recover the phase of the optical field. Subsequently, Fienup proved that the GS algorithm has obvious error-decreasing properties, and the Error Reduction (ER) algorithm and Hybrid Input-Output (HIO) algorithm were proposed [17]. At present, the ER and HIO algorithms are considered the most effective methods in the field of phase retrieval [18]. Since the above algorithms are only suitable for systems described by a unitary (e.g., Fourier) transform, they cannot handle arbitrary linear transformation systems. Therefore, Yang and Gu proposed the amplitude-phase retrieval theory for arbitrary linear transformation systems, namely the Yang-Gu (Y-G) algorithm [19]. In 2015, Guo et al. optimized the iterative algorithm and proposed two improved GS iterative phase retrieval algorithms: the spatial phase perturbation GS algorithm and the combined GS-HIO algorithm. For both improved algorithms, the sum-squared error rapidly drops to an acceptable value, and the lost phase can be successfully recovered in the spatial and Fourier domains, which means that both algorithms can jump out of local minima and converge to the global minimum [9].
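To ground the discussion, here is a minimal numpy sketch of the classical GS iteration referenced above (our illustration, not the paper's code): enforce the measured spatial amplitude, Fourier-transform, enforce the measured Fourier amplitude, and transform back.

```python
import numpy as np

def gs_two_intensity(a_spatial, a_fourier, n_iter=200, seed=0):
    """Classical GS: recover phase from spatial- and Fourier-domain amplitudes.

    a_spatial: |f(x, y)| measured in the spatial domain
    a_fourier: |F(u, v)| measured in the Fourier domain
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, a_spatial.shape)  # random initial phase
    g = a_spatial * np.exp(1j * phase)
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = a_fourier * np.exp(1j * np.angle(G))   # keep phase, enforce |F|
        g = np.fft.ifft2(G)
        g = a_spatial * np.exp(1j * np.angle(g))   # keep phase, enforce |f|
    return np.angle(g)  # estimated spatial phase
```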
The GS algorithm was originally proposed for reconstructing phase given two intensity measurements, one recorded in the spatial domain and one in the Fourier domain. Unfortunately, such two intensity measurements at two different planes cannot be obtained in some cases. Therefore, a GS algorithm using a single intensity measurement combined with some prior knowledge was proposed to recover the phase. However, the single-intensity phase retrieval method has the drawbacks of poor reconstruction quality and a low probability of successful recovery. This paper proposes an improved method based on a single intensity measurement: the Two-Step Phase Retrieval (TSPR) from single-intensity measurement algorithm. The main idea of the TSPR algorithm is to first recover the amplitude (i.e., the square root of the intensity) in the spatial domain using the Single-Intensity Phase Retrieval (SIPR) algorithm with some prior knowledge, and then recover the phase in the spatial domain using the Two-Intensity Phase Retrieval (TIPR) algorithm. Experimental results demonstrate that our proposed TSPR algorithm can effectively enhance the quality of the recovered phase and the probability of successful recovery.

Basic Principles

The complex amplitude $f(x, y)$ is

$$f(x, y) = |f(x, y)| \exp[i\varphi(x, y)].$$

Its Fourier transform is

$$F(u, v) = \mathcal{F}\{f(x, y)\} = |F(u, v)| \exp[i\psi(u, v)],$$

where $f(x, y)$ and $F(u, v)$ denote the complex amplitude in the spatial domain and the Fourier domain, respectively; $|f(x, y)|$ and $|F(u, v)|$ denote the corresponding amplitudes; and $\varphi(x, y)$ and $\psi(u, v)$ denote the corresponding phases. Phase retrieval refers to recovering the lost phase $\varphi(x, y)$ in the spatial domain from the two amplitudes $|f(x, y)|$ and $|F(u, v)|$.

The flow chart of the classical GS algorithm is shown in Figure 1. This algorithm was the first proposed and is easily used to solve the phase retrieval problem with two intensity measurements. $\exp[i\varphi_0(x, y)]$ denotes the initialized phase distribution. The algorithm iterates repeatedly between the spatial domain and the Fourier domain until the error satisfies the termination condition [20].

Two-Step Phase Retrieval Using Single-Intensity Measurement

The TIPR algorithm uses the two intensity measurements recorded in the spatial domain and the Fourier domain, respectively, to recover the amplitude and phase in the spatial domain with high quality. Although the TIPR algorithm can achieve good results, in some cases it is difficult to obtain intensity measurements in both the spatial and Fourier domains simultaneously. Therefore, the SIPR algorithm was proposed; it can recover the phase from a single intensity measurement in the Fourier domain with some prior knowledge. However, the SIPR method has a low probability of successful recovery and poor quality. As shown in Figure 2, the purpose of the experiment is to test the SIPR method's ability to recover the amplitude and phase. The coded aperture is generated by a uniform distribution, and its sampling rate is used as a variable parameter, increased gradually from 0.1 to 0.7 with a step of 0.05. The experiment runs 500 times independently to compute the probability of successful recovery; a reconstruction is deemed successful if the SNR of the reconstructed phase is greater than 25 dB. The probability of successful recovery versus sampling rate for both amplitude and phase is shown in Figure 2.
As depicted in Figure 2, the SIPR method can achieve a high probability of successful recovery of the amplitude, but the probability of successful recovery of the phase is very low and unstable. Hence, our TSPR method is proposed to solve the phase retrieval problem from a single intensity measurement. First, the SIPR method is used to recover the amplitude in the spatial domain from the intensity in the Fourier domain and the support set in the spatial domain. Then, the TIPR method using two intensity measurements (one recorded in the Fourier domain and the other estimated in the spatial domain from the first step) is used to recover the phase in the spatial domain.

The flow chart of the TSPR method is shown in Figure 3, which includes the following steps: (1) initialize the amplitude $A_1$ and phase $\varphi_0(x, y)$ in the spatial domain, i.e., use an all-one amplitude and a randomized phase; (2) perform a Fourier transform on the complex amplitude and replace the transformed amplitude in the Fourier domain with the recorded Fourier amplitude $A_2$; (3) perform an inverse Fourier transform on the synthesized complex amplitude and multiply it with the support in the spatial domain; (4) repeat steps (2)-(3) until the amplitude converges in the spatial domain; (5) use the classical TIPR algorithm with the recorded Fourier amplitude $A_2$ and the amplitude $|g(x, y)|$ estimated in the first step to iteratively recover the phase in the spatial domain. A compact sketch of this two-step flow is given at the end of this subsection.

The complex amplitude at initialization is

$$g_0(x, y) = A_1 \exp[i\varphi_0(x, y)].$$

The optical setup of the imaging process of the TSPR method is shown in Figure 4. The coded aperture and the CCD are placed on the front and back focal planes of the lens, respectively. First, the complex optical field $f(x, y)$ is filtered through the coded aperture to obtain a complex optical field $g(x, y)$, which is then imaged by the Fourier lens; finally, the intensity is recorded on the CCD plane:

$$I(u, v) = |\mathcal{F}\{g(x, y)\}|^2.$$

The main purpose of this paper is to recover the phase of the complex optical field $g(x, y)$ from the recorded intensity measurement $I(u, v)$.

Numerical Experiments

To verify the effectiveness and superiority of our proposed method, three experiments are presented in this section. The first is a single reconstruction experiment; the second tests the performance of the TSPR algorithm under different coded apertures; the final one compares the performance of different phase retrieval algorithms.

Experiment 1: Single Reconstruction Experiment. The purpose of this experiment is to verify the feasibility of our proposed TSPR method. Two grayscale images ("Lena" and "Cameraman", 256 × 256 pixels) are chosen. The two images are multiplied by the coded aperture (256 × 256 pixels) to obtain the amplitude and phase, respectively, and the complex amplitude is then synthesized from them. The coded aperture is a 0/1 random distribution. The sampling rate in this experiment is 0.4, and the phase retrieval results are shown in Figure 5.
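The two-step flow just described can be sketched compactly in numpy (our illustration; variable names and convergence settings are our choices): step one runs the support-constrained single-intensity GS iteration to estimate the spatial amplitude, and step two feeds that estimate into the two-intensity GS routine to recover the phase.

```python
import numpy as np

def sipr_amplitude(a_fourier, support, n_iter=300, seed=0):
    """Step 1 (SIPR): estimate the spatial amplitude from the Fourier
    amplitude and the spatial support (coded aperture)."""
    rng = np.random.default_rng(seed)
    # All-one amplitude with randomized phase, as in step (1) of the text
    g = np.exp(1j * rng.uniform(0, 2 * np.pi, a_fourier.shape))
    for _ in range(n_iter):
        G = a_fourier * np.exp(1j * np.angle(np.fft.fft2(g)))  # enforce |F|
        g = np.fft.ifft2(G) * support                          # enforce support
    return np.abs(g)

def tspr(a_fourier, support, n_iter_amp=300, n_iter_phase=300):
    """Two-Step Phase Retrieval: SIPR for amplitude, then TIPR for phase."""
    a_spatial = sipr_amplitude(a_fourier, support, n_iter_amp)
    # Step 2 (TIPR): classical two-intensity GS with the estimated amplitude,
    # reusing gs_two_intensity() from the sketch above.
    phase = gs_two_intensity(a_spatial, a_fourier, n_iter_phase)
    return a_spatial, phase
```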
The trend of the amplitude and phase with the number of iterations in our TSPR method is shown in Figure 6. In the first step (i.e., SIPR) of TSPR, the amplitude can be recovered well, but the recovered phase is poor: as the number of iterations increases, the SNR of the phase recovered by SIPR alone only reaches about 19 dB. However, in the second step (i.e., TIPR) of TSPR, the SNR of the recovered phase increases to as high as 45 dB. Compared with the SIPR method, the TSPR method can greatly improve the SNR of the reconstructed phase.

Experiment 2: Reconstruction Experiments under Different Coded Apertures. The coded aperture in Experiment 1 is a uniform random sampling pattern. The purpose of Experiment 2 is to test the performance of the TSPR method under different coded apertures. In Figure 7, three types of coded apertures are selected: Uniform Random (UR) sampling, Radial Line (RL) sampling, and Variable Density (VD) sampling. The sampling rate of the coded aperture is used as a variable parameter, set from 0.1 to 0.7 with a step of 0.05; with the same parameters, the experiment runs 500 times independently to compute the recovered Average Signal-to-Noise Ratio (ASNR) and the probability of successful recovery. The ASNR and probability of successful recovery versus sampling rate under different coded apertures are shown in Figures 8 and 9, respectively. As shown in Figure 8, as the sampling rate increases, the TSPR algorithm achieves a higher ASNR for the recovered phase than the SIPR algorithm under all three coded apertures. In Figure 9, the best choice of coded aperture is the uniform random sampling pattern, which achieves the optimal performance; the probability of successful recovery is also significantly improved and stable. The reason is that the support set of the uniform random sampling pattern is distributed relatively uniformly in the spatial domain, while the support sets of the other two coded apertures are more concentrated toward the center, so the phase cannot be retrieved as well.

Experiment 3: Reconstruction Experiment under Different Sampling Rates. The purpose of Experiment 3 is to compare the phase retrieval performance of our TSPR method with the other two algorithms (TIPR and SIPR) under different sampling rates. The coded aperture uses a uniform random sampling pattern with a sampling rate varying from 0.1 to 0.7 in steps of 0.05. With the same parameters, the experiments run 500 times independently. The ASNR and probability of successful recovery versus sampling rate for the different algorithms are presented in Figures 10 and 11, respectively.

As shown in Figures 10 and 11, the TIPR method performs best, the TSPR method second, and the SIPR method worst. The reason is that the TIPR method has the most known information, i.e., the two intensity measurements at two different planes. The difference between the TSPR method and the SIPR method is that the TSPR method draws on the idea of the TIPR method: first use the SIPR method to estimate the amplitude in the spatial domain, then use the TIPR method to recover the phase. In summary, the TSPR method is superior to the SIPR method.
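For reference, the SNR-based success criterion used throughout these experiments can be expressed in a few lines. The sketch below is our paraphrase of the standard definition (a reconstruction is counted as successful when the phase SNR exceeds 25 dB), not the authors' evaluation script.

```python
import numpy as np

def snr_db(x_true, x_est):
    """Reconstruction SNR in dB: signal power over error power."""
    err = x_true - x_est
    return 10 * np.log10(np.sum(np.abs(x_true) ** 2) / np.sum(np.abs(err) ** 2))

def success_rate(trials, threshold_db=25.0):
    """Fraction of (true, estimated) phase pairs whose SNR exceeds the threshold."""
    hits = [snr_db(t, e) > threshold_db for t, e in trials]
    return float(np.mean(hits))
```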
Conclusion

This paper proposes an improved TSPR algorithm to solve the problems of poor quality and low success rate of the SIPR method. The TSPR algorithm is completed in two steps: first, the SIPR algorithm is used to recover the amplitude in the spatial domain; then, the TIPR algorithm is used to recover the phase in the spatial domain. The effectiveness and superiority of the TSPR method are verified experimentally: the comparison results demonstrate that the proposed method can effectively improve the ASNR and the probability of successful recovery under the same parameters. The TSPR algorithm can recover the lost phase from the known amplitude in the Fourier domain and the support set in the spatial domain.

Figure 3: Flow diagram of the TSPR algorithm.
Figure 4: The optical setup of the imaging process.
Figure 5(d): the intensity recorded in the Fourier domain. Figures 5(e) and 5(f): the amplitude and phase recovered using the SIPR algorithm, with Signal-to-Noise Ratios (SNR) of 343.04 dB and 19.85 dB, respectively. Figures 5(g) and 5(h): the amplitude and phase recovered using the TSPR algorithm, with SNRs of 343.04 dB and 45.58 dB, respectively. Comparing Figures 5(f)-5(h), it is easy to find that the quality of the recovered phase is greatly improved by our TSPR method.
Figure 6: Amplitude and phase versus the number of iterations.
Figure 11: Probability of success versus sampling rate.
3,060.4
2018-01-01T00:00:00.000
[ "Physics", "Engineering" ]
A Large Neighborhood Search Algorithm with Simulated Annealing and Time Decomposition Strategy for the Aircraft Runway Scheduling Problem

The runway system is likely to be a bottleneck area for airport operations because it serves as the link between air routes and airport ground traffic. As a key problem of air traffic flow management, the aircraft runway scheduling problem (ARSP) is of great significance for improving runway utilization and reducing aircraft delays. This paper proposes a large neighborhood search algorithm combined with simulated annealing and a receding horizon control strategy (RHC-SALNS) to solve the ARSP. Within the simulated annealing framework, a large neighborhood search process is embedded, including the breaking, reorganization, and local search processes; the large neighborhood search process expands the range of neighborhood construction in the solution space. A receding horizon control strategy is used to divide the original problem into several subproblems to further improve solving efficiency. The proposed RHC-SALNS algorithm is evaluated on ARSP instances taken from actual operation data of Wuhan Tianhe Airport. The key parameters of the algorithm were determined by parametric sensitivity analysis. Moreover, the proposed RHC-SALNS is compared with existing algorithms with excellent performance in solving large-scale ARSP, showing that the proposed model and algorithm are correct and efficient, and that the algorithm achieves better optimization results on large-scale problems.

Introduction

Air traffic is becoming increasingly congested due to relatively fixed airspace capacity. Air traffic throughput is expected to be twice its 2019 level by 2040 [1], and aircraft delays are likely to increase further. Airports are the key nodes of the air route network, and improving airport operating efficiency would greatly reduce delays across the route network. Airport operations need to meet both aircraft demand and punctuality requirements. The runways are likely to become bottlenecks in airport operations, especially at large, busy airports in high demand [2]. To solve the runway congestion problem, traditional methods to increase runway capacity include airport renovation and expansion and increasing the number of runways. However, these methods require substantial human and material costs and take a long time to implement. Aircraft runway scheduling during the air traffic flow management (ATFM) process can rationalize the runway sequence of departure and arrival aircraft without increasing infrastructure cost, thus realizing efficient use of runway capacity and reducing the related delays.
The aircraft runway scheduling problem (ARSP) is a huge challenge for air traffic management. Runway sequencing is a tactical traffic management technique in which air traffic controllers assign runway use sequences for arrival and departing aircraft. In practice, air traffic controllers often make decisions based on the aircraft operations situation in order to resolve conflicts in runway use. While the controller's own decisions can improve runway utilization efficiency, the limitations of the controller's own control experience often result in unnecessary aircraft delays during peak hours. Therefore, for the existing airport system, using the limited capacity to optimize aircraft landing and takeoff can scientifically allocate the available runway resources, effectively reduce aircraft delays, and improve airport operational efficiency. In this paper, we design a large neighborhood search algorithm with simulated annealing and a time decomposition strategy to solve the ARSP. The effectiveness of the RHC-SALNS algorithm is tested on actual operation data of Wuhan Tianhe Airport by comparison with existing algorithms with excellent performance in solving large-scale ARSP.

Problem Description

The classical ARSP can be described as follows: a group of departing aircraft are ready to take off on the airport surface, and a group of arrival aircraft are ready to land. Each departing aircraft has an estimated take-off time window, and each arrival aircraft has an estimated landing time window. According to the aircraft wake turbulence types, runway resources, i.e., the take-off or landing times of the aircraft, are assigned to each aircraft subject to the runway safety separation requirements. A time deviation cost is incurred when an aircraft's assigned runway usage time deviates from its estimated runway usage time. The departure or landing times can be optimized by runway sequencing to reduce the overall time deviation cost [3]. The ARSP covers the airport terminal area; its structure is shown schematically in Figure 1, where arrival (departure) aircraft enter (leave) the terminal area from different arrival (departure) fixes according to the instrument approach (departure) routes.
The ARSP schematic is shown in Figure 2, where the serial numbers indicate the landing sequence of arrival aircraft or the departure sequence of departure aircraft. For arrival aircraft, the starting adjustment boundary of the landing sequence is generally the boundary of the airport terminal area, and the ending adjustment boundary is generally determined by a key point, such as a specified distance or time point before landing. For departure aircraft, the starting adjustment boundary of the departure sequence is the point at which the aircraft is parked on the apron, and the ending adjustment boundary is the point at which the aircraft is ready to join the departure queue before entering the runway entrance. The area between the starting and ending adjustment boundaries of the runway use sequence is called the aircraft runway scheduling area: the sequencing algorithm is applied between these two boundaries to schedule take-offs and landings based on information such as expected departure time, expected landing time, and aircraft type [4].
ARSP is the problem of determining the aircraft take-off or landing sequence by optimizing some specific performance objectives subject to various constraints [5]. ARSP is an NP-hard problem because of the need to consider the differences in the effects of aircraft type (takeoff or landing) and wake turbulence safety separation. At the same time, the scheduling results for ARSP need to meet timeliness requirements. ARSP, as a tactical traffic management problem, requires aircraft runway sequencing decisions to be given within a short time frame, and it has thus attracted extensive attention from scholars [6,7]. Aircraft runway scheduling timeliness mainly concerns the computation time of the algorithm: the scheduling performance must meet certain requirements so that the algorithm can give the runway scheduling results within a limited time. In the process of aircraft runway scheduling, in order to find the balance between scheduling efficiency and timeliness, it is often necessary to sacrifice some optimization indices to a certain extent so that the model solution time meets the timeliness requirements. Therefore, the main concern of this paper is how to improve the solution efficiency and meet the requirement of ARSP timeliness while ensuring the accuracy of the ARSP solution.
Literature Review

Relevant scholars have previously carried out research on the ARSP, and certain results have been achieved. The research on ARSP can be traced back to 1964, when Ratcliffe [8] elaborated the runway scheduling concept of first-come-first-served (FCFS), i.e., scheduling aircraft according to their expected landing and take-off times. Although FCFS can reduce aircraft delays while ensuring relative fairness, FCFS principles may lead to excessive delays for individual aircraft [9], which is not efficient for reducing aircraft delay losses, shortening the total aircraft operation time, and making full use of runway capacity. Therefore, most scholars regard the runway scheduling of arrival and departure aircraft as a resource allocation problem and use optimization theory to build a model to solve it. The optimization objects of the model are the aircraft within the planning time period; the constraints of the model can be categorized as runway safety separation constraints [10], landing and take-off time window constraints [11], runway capacity constraints [12], aircraft turnaround time constraints [13], and aircraft priority constraints [14]. The optimization indices of the model are generally aircraft delay time [15], total aircraft operation time [16], and aircraft emission indicators [17,18]. Different combinations of objectives produce different optimization results. Generally, ARSP can be formulated as similar problems in other research fields [19]. Carr [9] introduced traveling salesman problem (TSP) models and Bennell [5] introduced traveling repairman problem (TRP) models to build the ARSP model; both considered the wake turbulence separation between aircraft and the runway occupancy time as significant constraints. Beasley [20] and Bertsimas [21] proposed mixed integer programming (MIP) models, which treat the aircraft sequence and the assigned times as decision variables and use artificial variables to transform the model into a linear problem. Ceberio [22] introduced the idea of solving the permutation flow shop scheduling problem (PFSP) into ARSP, which deeply integrated ARSP research with practical applications. Balakrishnan [16] treats the sequence of aircraft runway usage as edges in a network and introduces a nodal network model into ARSP. Lieder [23] established a dynamic programming model for runway scheduling of arrival aircraft based on a runway state transfer function and state transfer costs for different aircraft wake turbulence categories. As the research has progressed, in order to make the ARSP model more applicable to complex operating environments, scholars have added the winter de-icing operation mode [7], the interactions between runways in complex runway configurations [24,25], and the influence of arrival and departure traffic distribution on controllers' strategies [26] to the ARSP model to improve its practicality.
With continuing research on ARSP, the models have gradually matured. The optimization models tend to be more complex, the number of model constraints is increasing, and the model objective functions are more refined, which places higher requirements on the solution algorithms for ARSP models. There are two main types of algorithms for solving ARSP models: exact algorithms and heuristic algorithms. Traditional exact algorithms include the simplex method, the dual simplex method, the branch-and-bound method, and commercial solvers such as CPLEX and Gurobi. They have been shown to consume a large amount of time in solving large-scale ARSP due to the large solution space [21]. Therefore, many scholars adopt simplified exact algorithms to reduce the size of the solution space and thus improve the operational efficiency of ARSP. Balakrishnan [16] considered the aircraft as nodes in a network and the aircraft runway usage sequences as edges, and used node attributes to determine constraints in order to reduce the size of the solution space. Sölveling [2] improved the branch-and-bound algorithm by defining pruning rules that can dynamically change the number of samples for estimating the upper and lower bounds during operation. De Maere [27] proved that scheduling performance is only related to the wake turbulence separation of different aircraft types, and hence proposed a pruning rule that keeps the original order of aircraft with the same wake turbulence separation; this pruning method reduces the size of the solution space and the computing time of the exact algorithm. Maximilian [7] combined constraint programming (CP) with a column generation algorithm to further reduce the solution space. Heuristic algorithms can usually obtain high-quality feasible solutions quickly: using them to solve large-scale ARSP can shorten the solution time and improve computing efficiency while ensuring solution accuracy. Heuristic algorithms for solving ARSP include genetic algorithms [28], simulated annealing algorithms [29], ant colony algorithms [30], etc. We were also motivated by the fact that meta-heuristic frameworks are very adaptable, enabling other meta-heuristic algorithms to be easily hybridized with them. Salehipour [31] combined a variable neighborhood search algorithm with a simulated annealing algorithm: the ARSP is initially solved using the genetic algorithm, and the aircraft sequence is fine-tuned under the constraints until the termination condition of the algorithm is met. Sabar [32] developed an iterated local search (ILS) algorithm, which defines perturbation operators to prevent the algorithm from getting trapped in local optima. In the same year, Shihao Wang [33] added a linear differential decreasing annealing strategy to the traditional simulated annealing particle swarm algorithm (SA-PSO) to address its slow convergence. In 2019, Liu Qi [34] proposed the STW-GA dynamic algorithm by combining the sliding time window algorithm with a dual-structured chromosome genetic algorithm, which ensures the fairness of aircraft scheduling while reducing the total delay time. Junfeng Zhang [35] developed a multi-objective imperial competition algorithm to solve the arrival aircraft scheduling problem. In 2021, Lily Wang [36] combined the sliding time window algorithm with particle swarm optimization to develop a combined TW-PSO
algorithm, which obtained better optimization results while reducing the number of iterations.

We also note that the time decomposition strategy can divide a large-scale problem into several subproblems to be solved, achieving the optimal solution [37]. Among such strategies, the receding horizon control (RHC) strategy has been shown to be very effective for large-scale optimization problems with complex constraints [13]. In the RHC framework, the original optimization problem is partitioned into several subproblems, thus increasing the solution rate. Hu [38] established a dynamic arrival aircraft scheduling model based on RHC; the model has good robustness, and the RHC strategy can achieve an optimal solution quickly. Making decisions with the RHC strategy requires looking forward over multiple time horizons: when optimizing the current time horizon, the aircraft information is optimally scheduled forward over multiple horizons, but only the results on the current horizon are implemented, and the same process is repeated on the next horizon. Clarke [39] combined the RHC strategy with predictive techniques to update each piece of aircraft operational information in real time, recalculating the uncertainty delays every 15 min. Kjenstad [13] applied the RHC strategy by dividing the time horizon into 10-to-15-min segments and adjusting the number of horizons to find a parameter combination with fast speed and high accuracy. The adaptability of the RHC framework also drew our attention: it can be hybridized with other heuristic strategies to improve the efficiency of problem solving. For example, Qiqian Zhang [40] combined the RHC strategy with a genetic algorithm (RHC-GA) to guarantee solution quality and improve solving speed.
The above-mentioned literature reflects the preliminary research conducted on ARSP models and solution algorithms, with substantial results. It should be noted that research on the ARSP model has gradually become sophisticated, so the model can be applied in an increasingly wide range of settings. However, the corresponding solution algorithms do not yet solve the model quickly, accurately, and efficiently: they cannot give optimal results within a short time frame because of the large solution space and excessive complexity. Although some of the aforementioned works address the algorithmic solution efficiency of ARSP, the overview shows that most studies enhance solution efficiency with a single heuristic strategy, while few investigate the advantages of frameworks that combine multiple heuristic strategies. Regarding the solution algorithm, we believe that the advantages of various heuristic strategies should be combined to meet the decision-making needs of airports, air traffic controllers, and airlines in practice. To this end, the main contributions of this paper are the following two points: (1) We improve the neighborhood construction method, which is one of the cores of the simulated annealing algorithm, and propose the large neighborhood search simulated annealing algorithm (SALNS). The breaking process, reorganization process, and local search process are introduced to improve the efficiency of the algorithm; combining large neighborhood search with simulated annealing fully exploits the advantages of both. (2) We combine the RHC framework with the SALNS algorithm and propose the RHC-SALNS algorithm to further improve the efficiency of solving ARSP models.

Organisation of the Paper

The remainder of this paper is organized as follows: Section 2 introduces an optimal scheduling model for ARSP. Section 3 explains the calculation process and critical steps of the algorithm. In Section 4, instances from Wuhan Tianhe Airport are used to demonstrate the effectiveness of our proposed algorithm. Section 5 presents the conclusions.

Mathematical Model

We investigate an ARSP model that considers both arrival and departure aircraft, aiming to improve performance by reducing weighted aircraft delays. It is important to note that airports with multiple-runway configurations are widespread in China, and the independent parallel operation mode is the development trend of Chinese airports. In this paper, we chose a runway configuration with independent parallel operation (04L|22R) at Wuhan Tianhe Airport for analysis. The assumptions for the ARSP are outlined below.

Assumption 1. Uncertainty factors of aircraft operation are not considered; the runway configuration and operating mode do not change during the ARSP planning time period.

Assumption 2. The operations of multiple runways are independent; aircraft takeoff and landing operations are not affected by aircraft on other runways.

Assumption 3. For each combination of runway and parking position in the airport ground area, there is a pre-determined transition path between them.
Notations

To ensure the lowest aircraft delays, the runway scheduling optimization model for arrival and departure aircraft is built based on the concepts proposed by Bertsimas [21] and Hancerliogullari [3]. F is the set of aircraft requiring scheduling during the planning time period, where F = A ∪ D ∪ AD; A is the set of arrival aircraft that land at the airport and stay until the end of the planning time period; AD is the set of arriving-departing aircraft that arrive at the airport and depart from it after a turnaround duration, i.e., aircraft with consecutive voyages at this airport; D is the set of departing aircraft that are parked at the airport at the beginning of the planning time period and depart later. To simplify the description, we split the set AD into two sets, where A* denotes the arrival legs in AD and D* denotes the departure legs in AD: AD = A* ∪ D*. For aircraft during the planning period, we sort their expected runway usage times in ascending order, i.e., for ∀i ∈ F, aircraft are sorted by comparing their E^a_i (for arrivals) and E^d_i (for departures). For example, if q, g ∈ D ∪ D* and E^d_q > E^d_g, then q > g. In this paper, the runway operation mode is independent parallel operation: the two runways can be treated as two independent single-runway systems, where take-off and landing operations on the two runways do not affect each other. Therefore, our ARSP model focuses on the aircraft runway assignment variables [41,42], as well as the landing time and take-off time assignment variables.

Additionally, many current studies ignore the process of aircraft turnaround operations, also referred to as aircraft ground handling. In this paper, we consider the minimum turnaround separations, defined as the time required to unload the aircraft after its arrival at the gate and to prepare it for departure again.

The relevant sets, parameters, and variables required for the model are shown in Table 1.

Sets:
- F: the set of aircraft requiring scheduling during the planning time period
- A: the set of arrival aircraft that land at the airport and stay until the end of the planning time period, A ⊆ F
- D: the set of departing aircraft that are parked at the airport at the beginning of the planning time period, D ⊆ F
- AD: the set of arriving-departing aircraft, AD = A* ∪ D*, AD ⊆ F
- N: the set of runways available for aircraft, N = {n1, n2}

Parameters:
- E^a_i: the estimated landing time of aircraft i, i ∈ A ∪ A*
- E^d_q: the estimated takeoff time of aircraft q, q ∈ D ∪ D*
- S_ij: the minimum runway usage separation time between aircraft i and j, i, j ∈ F
- η^a_i: maximum acceptable delay time of arrival aircraft i, i ∈ A ∪ A*
- η^d_q: maximum acceptable delay time of departing aircraft q, q ∈ D ∪ D*
- κ_i: runway occupancy time of aircraft i, i ∈ F
- ξ_iq: 1 if departing aircraft q and arrival aircraft i are a pair of connected aircraft, 0 otherwise, i ∈ A*, q ∈ D*
- χ_iq: the minimum connection time between the takeoff time of departing aircraft q and the landing time of arrival aircraft i, i ∈ A*, q ∈ D*
- M: an extremely large value, used to simplify the model

Variables:
- t_i: the allocated runway usage time of aircraft i, i ∈ F
- α_ij: 1 if aircraft i uses the runway before aircraft j, 0 otherwise, i, j ∈ F
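To keep the notation concrete, the sketch below renders the sets and parameters above as a small Python data model. The class and field names are ours, chosen to mirror the symbols in Table 1; they are not from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    ARRIVAL = "A"      # lands during the period (A or A*)
    DEPARTURE = "D"    # parked, departs later (D or D*)

@dataclass
class Aircraft:
    """One flight leg i in F, mirroring the notation in Table 1."""
    ident: int
    kind: Kind
    e_time: float        # E^a_i (arrival) or E^d_i (departure)
    eta: float           # maximum acceptable delay, eta^a_i / eta^d_q
    kappa: float         # runway occupancy time kappa_i
    connected_to: int | None = None  # xi_iq link: paired leg in AD, if any
    chi: float = 0.0     # minimum connection (turnaround) time chi_iq

# The decision variables of the model would then be t[i] (allocated runway
# usage time) and alpha[i][j] (1 if aircraft i uses the runway before j).
```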
Constraints

Safety Separation Constraints

For aircraft using the same runway, minimum separation requirements must be met to comply with the safety regulations of the Federal Aviation Administration (FAA) and the International Civil Aviation Organization (ICAO) [43,44], as shown in Equation (1). There is one and only one sequence for any two aircraft using the runway, as shown in Equation (2). Each aircraft can use only one runway, as shown in Equation (3).

Time Window Constraints

The landing time assigned to each arrival aircraft must be within the time window defined by the expected landing time and the maximum acceptable delay, as shown in Equation (4). The take-off time assigned to each departing aircraft must be within the time window defined by the expected take-off time and the maximum acceptable delay, as shown in Equation (5).

Runway Occupancy Time Constraints

Each runway can be occupied by only one aircraft at a time. For aircraft using the same runway consecutively, the runway use separation is the greater of S_ij and κ_i. For medium-sized aircraft, S_ij will generally be greater than κ_i.

Flight Turnaround Constraint

Let departing aircraft q and arrival aircraft i be a pair of connected aircraft, that is to say, after the turnaround process of arrival aircraft i, the same airframe is pushed back from the gate under the identification of departing aircraft q. Then the difference between the assigned take-off time of departing aircraft q and the assigned landing time of arrival aircraft i must be larger than the minimum connection time χ_iq.

Objectives

Minimize the weighted aircraft delays:

$$\min \sum_{i \in F} \mu_i \, \varphi_i \, |t_i - E_i|,$$

where E_i is the estimated runway usage time of aircraft i and φ_i is the weight of the aircraft delay. If the aircraft is sped up before its arrival, the actual runway time will be earlier than the estimated runway time. Here, we define such a negative delay as a "gray delay", meaning that the assigned time is ahead of the estimated time. Note that, in order to ensure flight punctuality as much as possible, negative delays in the presence of an early arrival are also counted as delays in terms of absolute value. Since advanced and delayed operations are not symmetrical in their effect on performance, the weights should be set according to the actual operation of the airport. In the case of this paper, based on an investigation of the actual operation of the airport, φ_i = 0.6 when a "gray delay" occurs and φ_i = 1 when a delay occurs. μ_i is the delay cost factor of the aircraft, where μ_i = G/P_i; P_i is the priority factor of aircraft i, and G is the maximum value in the priority table, as shown in Table 2. The three-dimensional priority table is designed to reflect the scheduling priority factors. Specifically, for each aircraft i ∈ F, the corresponding characteristic parameters of voyage consecutiveness, aircraft type, and arrival/departure status are denoted as O_i, R_i, and J_i, respectively. Aircraft scheduling should consider not only the current aircraft, but also whether it will affect the next departing aircraft with consecutive voyages at this airport.
The aircraft with consecutive voyages should be given a higher priority. Second, the type of aircraft reflects the number of seats to be accommodated, and empirically, a higher priority is generally given to larger aircraft types. It is worth noting that we consider the overall performance of airside traffic management and account for the arrival (departing) peaks of airport operations: during arrival (departing) peaks, higher priority is given to arrival (departing) aircraft. A detailed description of the priorities can be found in the literature [45]. The parameters are defined as follows: O_i distinguishes whether the voyage of aircraft i is consecutive or not; R_i distinguishes whether the type of aircraft i is super, heavy, medium, or light; and J_i distinguishes whether aircraft i is an arrival (departing) aircraft at airport arrival (departing) peaks or not. The priority factor P_i of aircraft i is then obtained from the three-dimensional priority table as P_i = P(O_i, R_i, J_i). Taking as an example a light aircraft with inconsecutive voyages that does not operate at peak hours, i.e., O_i = 2, R_i = 3, J_i = 2, the priority of the aircraft is P_i = P(2, 3, 2) = 31.5. Similarly, the priority of each aircraft can be obtained, as shown in Table 2. The delay-cost computation implied by these definitions is sketched below, after the method overview.

The Proposed Method

In this paper, a large neighborhood search algorithm with simulated annealing and a receding horizon control strategy (RHC-SALNS) is proposed for solving the ARSP. The simulated annealing algorithm simulates the physical process of high-temperature annealing of a solid material, in which the material is heated to a sufficiently high temperature and then cooled down gradually. During the warming process, the internal energy of the solid is large and the internal particles become disordered. As the temperature decreases, the internal energy decreases and the internal particles stabilize, reaching equilibrium at each temperature. The internal energy is reduced to a minimum when the temperature drops to a certain base-state temperature. The simulated annealing algorithm exploits the similarity between combinatorial optimization and the physical annealing process to find the global optimal solution in the solution space through a probabilistic jump property. In this paper, we use the simulated annealing framework and design a large neighborhood search method to replace the original neighborhood construction method: the breaking, reorganization, and local search processes are proposed to form a new solution.
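As a concrete reading of the objective and priority definitions above, the following sketch (ours, not the paper's code) evaluates the weighted delay cost of one aircraft: gray delays (early operations) are counted in absolute value with weight 0.6, ordinary delays with weight 1, and the cost factor is μ_i = G/P_i. The priority value P_i = 31.5 comes from the example in the text; the paper's Table 2 is not reproduced here, so the maximum priority G below is a placeholder.

```python
def delay_cost(t_assigned, t_estimated, priority, g_max):
    """Weighted delay of one aircraft: mu_i * phi_i * |t_i - E_i|.

    phi_i = 0.6 for a "gray delay" (assigned time ahead of the estimate),
    phi_i = 1.0 for an ordinary delay; mu_i = G / P_i.
    """
    deviation = t_assigned - t_estimated
    phi = 0.6 if deviation < 0 else 1.0
    mu = g_max / priority
    return mu * phi * abs(deviation)

# Example with the light, inconsecutive, off-peak aircraft from the text
# (P_i = 31.5); G_MAX is hypothetical since Table 2 is not in this excerpt.
G_MAX = 63.0
print(delay_cost(t_assigned=130.0, t_estimated=100.0, priority=31.5, g_max=G_MAX))
```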
Algorithm Design

In the RHC strategy framework, the original optimization problem is partitioned into several subproblems to improve solution efficiency. The RHC framework has been widely studied and is well established [37,38]; the RHC strategy is described in detail in Section 3.5. The SALNS algorithm is used to optimize the runway schedule of the aircraft in each time horizon. The flowchart of our proposed RHC-SALNS algorithm is shown in Figure 3. The simulated annealing (SA) algorithm [46] is a general optimization algorithm that often yields better solutions than other heuristics and has the advantage of being insensitive to the initial parameter settings. However, like other heuristics, simulated annealing is prone to becoming trapped in local optima. The SALNS algorithm uses the simulated annealing framework to set the algorithm parameters and generate the initial solution. When generating new solutions, it constructs neighborhood solutions with a large neighborhood search and accepts them according to the Metropolis criterion [46], based on the objective function value, until the stop condition is satisfied. The specific steps of the RHC-SALNS algorithm are as follows.

Step 1: Initialize the algorithm's initial temperature T = T_0, termination temperature T_L, cooling coefficient λ, number of internal cycles β, number of non-updating generations µ = 0, and maximum number of non-updating generations θ_notimp; encode and generate the initial solution X (see Sections 3.2 and 3.3 for details).

Step 2: Calculate the objective function value Φ(X) of the current initial solution X and set the optimal solution B = X, with optimal objective function value Φ(B).

Step 3: Reset the inner-loop counter σ = 0.

Step 4: Break the current solution (see Section 3.4.1 for details), reorganize it (see Section 3.4.2 for details), and apply local search (see Section 3.4.3 for details) to generate the neighborhood solution X'.

Step 5: Calculate the objective function value Φ(X') of the neighborhood solution X'; accept the neighborhood solution according to the Metropolis criterion; σ = σ + 1.

Step 6: If σ < β, return to Step 4.

Step 7: T = λT. If the solution at the current temperature is the same as the solution at the previous temperature, then µ = µ + 1.

Step 8: If T < T_L or µ = θ_notimp, stop the algorithm and output the optimal solution B and its objective function value Φ(B); otherwise, return to Step 3.
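A minimal Python sketch of this loop is given below. Here "neighbor" stands for the breaking, reorganization, and local search processes of Section 3.4 and "objective" for Φ; the default parameter values follow the settings reported later in the paper, and all names are assumptions for illustration.

    import math
    import random

    def salns(initial, objective, neighbor, T0=10_000.0, TL=0.1,
              lam=0.96, beta=200, theta_notimp=150):
        """Simulated annealing with a large-neighborhood move `neighbor`,
        assumed to perform breaking, reorganization and local search."""
        x = initial
        best, best_val = x, objective(x)
        T, stagnant = T0, 0
        while T > TL and stagnant < theta_notimp:
            improved_at_T = False
            for _ in range(beta):                    # inner cycles at this temperature
                y = neighbor(x)
                delta = objective(y) - objective(x)
                # Metropolis criterion: accept improvements, sometimes accept worse
                if delta < 0 or random.random() < math.exp(-delta / T):
                    x = y
                if objective(x) < best_val:
                    best, best_val = x, objective(x)
                    improved_at_T = True
            stagnant = 0 if improved_at_T else stagnant + 1
            T *= lam                                 # cooling
        return best, best_val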
Code

We use a real-number encoding method. In the planning period, there are c aircraft, numbered {1, 2, ..., c}, and s runways, numbered {c + 1, c + 2, ..., c + s}, where the aircraft are numbered and sequenced in the ascending order proposed in Section 2.1. Figure 4 shows the solution representation code of 10 aircraft on two runways, where 11 and 12 represent runways 04 and 22L, respectively. The code shown in Figure 4 can be converted into the aircraft sequence of runway 04: 1 → 2 → 4 → 6 → 8 → 5 → 7; the aircraft sequence of runway 22L is 3 → 10 → 9.

Initialisation

The algorithm needs to generate an initial aircraft runway sequence during initialization. Because the large neighborhood search method proposed in this paper is highly efficient, a first-come-first-served greedy random initialization is sufficient; it ensures the diversity of the initial solutions while keeping complexity low, which further improves the efficiency of the algorithm. The specific steps of the greedy random initialization, sketched in code below, are as follows.

Step 1: Randomly distribute the aircraft over all runways.

Step 2: Based on the runway allocation of Step 1, initialize each runway's aircraft sequence first-come-first-served: taking one runway as an example, select the aircraft with the smallest expected runway usage time as the first to use that runway.

Step 3: The aircraft closest to the last assigned aircraft in terms of runway usage time is scheduled as the next aircraft to use the runway. Repeat until all aircraft are scheduled.

Step 4: Based on the aircraft sequence generated above, assign the runway usage time of each aircraft according to the constraints in Section 2.2.
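The following sketch, with assumed data structures, mirrors these four steps: random runway allocation, first-come-first-served ordering per runway, and time assignment respecting the safety separation S_ij.

    import random

    def initial_solution(aircraft, runways, sep):
        """aircraft: dict id -> estimated runway usage time E_i (min);
        runways: list of runway ids; sep: function (i, j) -> safety separation S_ij.
        Returns runway id -> ordered list of (aircraft id, assigned time)."""
        sequences = {n: [] for n in runways}
        for i in aircraft:                           # Step 1: random runway allocation
            sequences[random.choice(runways)].append(i)
        solution = {}
        for n, ids in sequences.items():
            ids.sort(key=lambda i: aircraft[i])      # Steps 2-3: first-come-first-served
            times, last = [], None
            for i in ids:                            # Step 4: assign feasible times
                if last is None:
                    t = aircraft[i]
                else:
                    t = max(aircraft[i], times[-1] + sep(last, i))
                times.append(t)
                last = i
            solution[n] = list(zip(ids, times))
        return solution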
Algorithm 1 shows the pseudo-code of the initial solution generation method.

Algorithm 1. Initial solution generation method
1: Let |F| be the number of aircraft and |N| be the number of runways;
2: Let S_ij be the safety separation between aircraft i and aircraft j;
3: Set k ← 1; z ← 1;
4: Sort the aircraft by their estimated runway usage times E^a_i, E^d_q, i ∈ A ∪ A*, q ∈ D ∪ D*, and add them to the sequence SS;
5: while k ≤ |F| do
6: Randomly select a runway n from N and add SS(k) to RS(n); k = k + 1;
7: end while
8: while z ≤ |N| do
9: for each two consecutive aircraft a and b (b > a) belonging to RS(z) do
10: If E_b ≥ E_a + S_ab, then assign E_b to t_b; otherwise, assign E_a + S_ab to t_b;
11: end for
12: z = z + 1;
13: end while
14: Return the initial solution X

Large Neighborhood Search

The large neighborhood search consists of three main processes (breaking, reorganization, and local search), which are applied sequentially to the current solution when constructing a neighborhood solution. The breaking and reorganization processes can search across large structures of the solution space and thus have a higher probability of finding the optimal solution than traditional methods. In addition, the simple greedy random insertion used in the reorganization process keeps the complexity of the algorithm low, balancing the efficiency and effectiveness of the large neighborhood search. Starting from the initial solution of the greedy randomized algorithm, the large neighborhood search can effectively find an optimal or near-optimal solution. The large neighborhood search process is shown in Figure 5.
Breaking Process

The breaking process consists of four methods: adjacent breaking (BP_1), maximum saving breaking (BP_2), random breaking (BP_3), and single point breaking (BP_4). During the breaking process, BP_1, BP_2, BP_3, and BP_4 are executed sequentially, and the removed aircraft are stored in the Insert array for reorganization. The specific steps of each breaking method are as follows; a code sketch of the roulette-based variant follows this list.

BP_1 adjacent breaking.
Step 1: Select a random aircraft as the root.
Step 2: Calculate the maximum number of removed aircraft U_1 = ⌈P_1 · |F_n|⌉, where ⌈·⌉ denotes upward rounding and P_1 is the probability of adjacent breaking.
Step 3: Randomly select the number of removed aircraft Q_1, a random number in [0, U_1].
Step 4: Remove the root aircraft and the Q_1 − 1 aircraft closest to it from the original sequence, and add them to the Insert array.

BP_2 maximum saving breaking.
Step 1: Calculate the delay cost saving ∆_j for each aircraft j after it is removed from the aircraft queue.
Step 2: Calculate the maximum number of removed aircraft U_2 = ⌈P_2 · |F_n|⌉, where P_2 is the probability of maximum saving breaking.
Step 3: Randomly select the number of removed aircraft Q_2, a random number in [0, U_2].
Step 4: Select Q_2 aircraft by the roulette method, where the larger the ∆_j, the greater the chance of the aircraft being selected for removal. Then, remove the selected Q_2 aircraft to the Insert array.

BP_3 random breaking.
Step 1: Calculate the maximum number of removed aircraft U_3 = ⌈P_3 · |F_n|⌉, where P_3 is the probability of random breaking.
Step 2: Randomly select the number of removed aircraft Q_3, a random number in [0, U_3].
Step 3: Randomly select Q_3 aircraft, remove them all, and add them to the Insert array.

BP_4 single point breaking.
Step 1: Generate a random number θ ∈ [0, 1].
Step 2: If θ < P_4, randomly remove one aircraft from the aircraft sequence and add it to the Insert array; P_4 is the single point breaking probability.
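As an illustration, the sketch below implements BP_2, the roulette-based maximum saving breaking. The helper names and the delay_saving callback are assumptions; the other three operators differ only in how the removed aircraft are chosen.

    import math
    import random

    def roulette_select(candidates, weights, q):
        """Pick q distinct candidates with probability proportional to weight."""
        chosen = []
        pool = list(zip(candidates, weights))
        for _ in range(min(q, len(pool))):
            total = sum(w for _, w in pool)
            r, acc = random.uniform(0, total), 0.0
            for idx, (c, w) in enumerate(pool):
                acc += w
                if acc >= r:
                    chosen.append(pool.pop(idx)[0])
                    break
        return chosen

    def max_saving_breaking(sequence, delay_saving, p2):
        """BP2: remove up to ceil(p2*|F_n|) aircraft, biased toward large savings.
        delay_saving(j) is assumed to return Delta_j for aircraft j."""
        u2 = math.ceil(p2 * len(sequence))
        q2 = random.randint(0, u2)
        removed = roulette_select(sequence,
                                  [delay_saving(j) for j in sequence], q2)
        remaining = [j for j in sequence if j not in removed]
        return remaining, removed        # removed aircraft go to the Insert array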
The proposed breaking process is presented in Algorithm 2.

Algorithm 2. Breaking process
1: Let K be the number of breaking methods;
2: Set i ← 1;
3: RS ← The solution after the initial solution generation process;
4: while i ≤ K do
5: BR ← Perform the breaking process BP_i on the current solution;
6: Put the removed aircraft into the Insert array;
7: i = i + 1;
8: end while
9: Return BR and the Insert array

Reorganization Process

All removed aircraft are reinserted into the aircraft runway sequences during the reorganization process. The aircraft are taken in random order and greedily randomly inserted into the runway sequences. The specific reorganization steps are as follows.

Step 1: Randomly shuffle the aircraft to be inserted in the Insert array.

Step 2: If there is no aircraft left to insert, stop; otherwise, find the positions available for insertion on each runway according to the runway usage time constraints in Section 2.2.

Step 3: Calculate the cost increase for all positions available for insertion, randomly select the position with the smallest or second smallest cost, and return to Step 2.

The proposed reorganization process is presented in Algorithm 3.

Algorithm 3. Reorganization process
1: Let w be the number of removed aircraft;
2: BR ← The solution after the breaking process;
3: IA ← Randomly shuffle the aircraft in the Insert array;
4: while w ≠ 0 do
5: PI ← the positions on each runway available for aircraft IA(w) according to the safety separation and time window constraints in Section 2.2;
6: Cos_w ← Calculate the insertion cost of each insertable position and sort them in ascending order;
7: BR ← Randomly insert aircraft IA(w) at insertion point PI(Cos_w(1)) or PI(Cos_w(2));
8: w = w − 1;
9: end while
10: Return BR as V_0

Local Search Process

The local search process contains four steps, namely NS_1, NS_2, NS_3, and NS_4. NS_1 and NS_3 try to change the runway usage time of the aircraft; NS_2 and NS_4 try to change the runway of the aircraft. After the aircraft runway scheduling solution goes through the breaking and reorganization processes, the four local search steps are executed sequentially. If a better solution appears, the current step is repeated until no better solution is produced [32].

NS_1 performs a binomial swap within the aircraft queue of the same runway: randomly select a runway, select two aircraft in its queue, swap the sequence of aircraft located between the two aircraft under the constraints, check all possible swaps between these aircraft, generate a new aircraft runway queue, calculate the new objective function value, and keep the new solution if the objective function value decreases.

NS_2 performs a binomial swap between the aircraft sequences of different runways: randomly select a runway, select two aircraft in its queue, find the two aircraft closest in time in another runway's queue, swap the aircraft located between the two aircraft with those located between the corresponding aircraft in the other queue under the constraints, calculate the new objective function value, and keep the operation if the objective function value decreases.

NS_3 transforms the aircraft sequence of the same runway: randomly select a runway, check all aircraft on that runway and insert each before or after the aircraft with the closest runway usage time, generate a new aircraft runway queue, calculate the new objective function value, and keep the new solution if the objective function value decreases.

NS_4 transforms the aircraft sequences of different runways: randomly select a runway, check all aircraft on that runway and insert each before or after the aircraft on a different runway with the closest runway usage time, generate a new aircraft runway queue, calculate the new objective function value, and keep the new solution if the objective function value decreases.

The proposed local search process, sketched in code after the listing, is presented in Algorithm 4.

Algorithm 4. Local search process
1: Let K be the number of local search steps;
2: Set i ← 1;
3: V_0 ← The solution after the breaking and reorganization process;
4: while i ≤ K do
5: V ← Perform local search NS_i on V_0;
6: if Φ(V) < Φ(V_0) then
7: V_0 ← V; i = 1;
8: else
9: i = i + 1;
10: end if
11: end while
12: Return V_0
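The sketch below illustrates NS_1 under one plausible reading of the "binomial swap" (reversing the segment between two randomly chosen aircraft on one runway); the data layout and the objective callback are assumptions.

    import random

    def ns1_swap(solution, objective):
        """NS1 sketch: reverse the segment between two random aircraft on one
        runway; `solution` maps runway -> aircraft list, `objective` evaluates
        a full solution. The move is kept only if the objective improves."""
        runway = random.choice(list(solution))
        queue = solution[runway]
        if len(queue) < 2:
            return solution
        i, j = sorted(random.sample(range(len(queue)), 2))
        candidate = dict(solution)
        candidate[runway] = queue[:i] + list(reversed(queue[i:j + 1])) + queue[j + 1:]
        return candidate if objective(candidate) < objective(solution) else solution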
Time Decomposition Approach

During actual airport operations, air transport decision makers are primarily concerned with how efficiently the ARSP model can be solved, so the computation time of the algorithm is critical. Given the complexity of the problem, a hybrid algorithm combining heuristic methods and exact algorithms may have more potential than traditional heuristics or exact algorithms alone [47]. Therefore, we propose an algorithm combining the RHC strategy and SALNS to solve the ARSP model.

The receding horizon control strategy has been shown to be an effective optimization strategy for large-scale optimization problems with complex constraints [48]. In the RHC framework, the original optimization problem is partitioned into several subproblems. The RHC strategy looks forward C time horizons: when optimizing the current, kth time window, the scheduling is optimized over the next C time horizons, but only the results in the kth horizon are implemented, and the same process is repeated at the next optimization stage. We give the specific procedure of RHC-SALNS in the following.

Aircraft Status Classification

To determine which aircraft are involved in a given time horizon, we propose a classification rule for the aircraft status. For each aircraft, the status is assigned based on its earliest landing/take-off time and its scheduled landing/take-off time. The necessary parameters are as follows: H_SY is the length of an operating interval. C is the length of the receding horizon. T_0(k) denotes the beginning time of the receding horizon at the kth operating interval; the operating time interval is [T_0(k), T_0(k) + C · H_SY) when the optimization scheduling is at the kth stage. t_i(k) denotes the scheduled runway usage time of aircraft i in the [T_0(k), T_0(k) + C · H_SY) interval after the kth optimization scheduling stage.

Suppose that we optimize the kth horizon; each aircraft is classified and marked with a status according to the following rule (a code sketch of this rule follows at the end of this subsection):

Completed aircraft: t_i(k) < T_0(k). The aircraft has been optimized in a previous horizon and has no interaction with the aircraft that need to be optimized in the current horizon.

Ongoing aircraft: the aircraft has been optimized in a previous horizon but still interacts with aircraft that need to be optimized. Aircraft of this status also need to be optimized because their decision variables are not fixed.

Active aircraft: the aircraft falls within the current look-ahead window and is optimized at the current stage.

Planned aircraft: as the time window recedes backwards, the planned aircraft will go through the statuses from active to ongoing and then completed.

Receding Horizon Control Strategy

Before describing the proposed RHC-SALNS algorithm, some notation is introduced: Ω(k) is the set of aircraft participating in the optimization scheduling of the kth stage. Θ(k) is the set of aircraft that have completed scheduling after the kth optimization scheduling stage. Π(k) is the set of aircraft that have not completed scheduling after the kth optimization scheduling stage. Y(k) is the set of aircraft whose E^d_q(k) or E^a_i(k) falls in the interval [T_0(k), T_0(k) + C · H_SY), and Φ(k) is the weighted sum of the delay time of the aircraft that completed scheduling in the kth optimization scheduling stage. For each aircraft q ∈ D ∪ D*, E^d_q(k) denotes the estimated take-off time at the kth operating stage. For each aircraft i ∈ A ∪ A*, E^a_i(k) denotes the estimated landing time at the kth operating stage. The sets, parameters, state of each aircraft, and the position of the receding time horizons are shown in Figure 6.
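A compact sketch of this classification rule is given below; the "completed" boundary follows the text, while the remaining boundaries are assumptions consistent with the receding-horizon description.

    def classify(t_i, T0, H_SY, C):
        """Assign a status to an aircraft with scheduled runway time t_i at stage k.
        The 'completed' rule (t_i < T0) follows the text; the other thresholds
        are assumptions consistent with the description above."""
        if t_i < T0:
            return "completed"        # optimized earlier, no interaction left
        if t_i < T0 + H_SY:
            return "ongoing"          # optimized before, still interacting
        if t_i < T0 + C * H_SY:
            return "active"           # inside the current look-ahead window
        return "planned"              # will enter the window as it recedes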
Step 1: Generate the initial aircraft queue in ascending order of the values of E^a_i and E^d_i; set T_0(0) to the E^d_i of the first aircraft if the first aircraft i ∈ D ∪ D*, and to its E^a_i if the first aircraft i ∈ A ∪ A*. Let k = 0; set the value of C; initialize Ω(k), Θ(k), and Y(k).

Step 2: After the completion of the kth stage of aircraft scheduling, the aircraft with t_i(k) ≤ T_0(k) + H_SY are put into the set Θ(k), and Φ(k) is calculated with reference to Equation (10). Freeze the existing scheduling results of the aircraft with t_i(k) > T_0(k) + H_SY, put them into the set Π(k), and update the constraints with reference to Equation (11).

Step 3: The aircraft in Ω(k + 1) are optimally scheduled in the (k + 1)st stage using the SALNS algorithm, with reference to Equation (12).

Step 4: If all aircraft have completed scheduling, go to Step 5; otherwise, let k = k + 1 and return to Step 2.

Step 5: Output the scheduling results.

The proposed RHC-SALNS is presented in Algorithm 5; a compact sketch of this driver loop follows the listing.

Algorithm 5. RHC-SALNS
Require: S_ij matrix; T_0(0); for each aircraft q ∈ D ∪ D*, its E^d_q and maximum acceptable delay time η^d_q; for each aircraft i ∈ A ∪ A*, its E^a_i and maximum acceptable delay time η^a_i; N.
1: Generate the initial parameter values; set k ← 0;
2: while not all aircraft have completed scheduling do
3: Determine Ω(k);
4: Use the SALNS algorithm to schedule the runway operations of the aircraft in Ω(k);
5: Set Θ(k) ← the aircraft i chosen from Ω(k) for which t_i(k) ≤ T_0(k) + H_SY;
6: k = k + 1;
7: end while
8: Return the scheduling results
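The following driver loop is a compact sketch of Algorithm 5 under simplifying assumptions: salns_optimize stands for the SALNS scheduling of one window, and constraint handling is left to that callback.

    def rhc_salns(aircraft, salns_optimize, H_SY=15.0, C=2, T0=0.0):
        """Receding-horizon driver (a sketch of Algorithm 5).  `aircraft` maps
        id -> estimated runway time; `salns_optimize(window, fixed)` is assumed
        to return id -> assigned time for the aircraft of one look-ahead window."""
        fixed = {}                               # completed schedules
        pending = dict(aircraft)
        k = 0
        while pending:
            start = T0 + k * H_SY
            end = T0 + (k + 1) * H_SY
            # aircraft whose estimated time falls inside the look-ahead window
            window = {i: e for i, e in pending.items() if e < start + C * H_SY}
            if window:
                assigned = salns_optimize(window, fixed)
                for i, t in assigned.items():
                    if t <= end:                 # implement only the kth horizon
                        fixed[i] = t
                        del pending[i]
            k += 1
        return fixed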
Experimental Results and Comparisons

In this section, the actual operational data of Wuhan Tianhe Airport in 2020 are used to evaluate the performance of the proposed RHC-SALNS algorithm on the ARSP. For the key parameters in RHC-SALNS, we performed a parameter sensitivity analysis to determine the optimal combination of parameters. Additionally, we verified the effectiveness of the time decomposition strategy in improving efficiency. In further experiments, the computational results are compared with other algorithms (STW-GA, RHC-GA, TW-PSO, SA-PSO, and ILS).

Experimental Setup

This section discusses the instances used to evaluate the proposed RHC-SALNS and the parameter settings. To investigate the performance of RHC-SALNS, we run the proposed algorithm on ARSP scenarios of different sizes, with different planning durations and different numbers of aircraft. We use four instances of the ARSP at Wuhan Tianhe Airport for our case study. The main characteristics of these instances are shown in Table 3, and the percentages of wake turbulence categories and arrival/departure properties in the four instances are shown in Figure 7. The RHC-SALNS was coded in MATLAB R2016a and run on a PC with an Intel i7-9700KF (8C8T), 3.60 GHz, and 16.0 GB RAM.

SALNS Parameter Sensitivity Analysis

This section evaluates the proposed RHC-SALNS and its parameter settings. The proposed RHC-SALNS algorithm has several parameters that must be determined in advance, including the length of the time horizon and the number of receding horizons in the RHC framework, the number of internal cycles β in the simulated annealing framework, and the probabilities of the breaking process in the large neighborhood search algorithm. Note that in this section only the parameters of the neighborhood search component (the breaking process) and the simulated annealing framework are combined; the parameters of the RHC are discussed in Section 4.3. To determine suitable parameters, we set their values one by one, manually changing the value of one parameter while fixing the other parameters at their default values, as sketched below. The default values of the five parameters (β, P_1, P_2, P_3, P_4) are 500, 0.2, 0.5, 0.3, and 0.4, respectively. Once a better value is found for a parameter, it is applied in the subsequent sensitivity experiments, and finally the best values of all parameters are recorded. For this process, we randomly selected one instance, Instance 2, to perform the parameter sensitivity analysis. In addition to the above parameters, the other parameters throughout our experiments were set as follows: initial temperature T_0 = 10,000, termination temperature T_L = 0.1, cooling coefficient λ = 0.96, maximum number of non-updating generations θ_notimp = 150, H_SY = 15 min, and C = 2.
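The procedure can be sketched as a simple one-at-a-time sweep; the candidate grids below are illustrative, and run_algorithm is an assumed wrapper returning the objective value of one run.

    defaults = {"beta": 500, "P1": 0.2, "P2": 0.5, "P3": 0.3, "P4": 0.4}
    candidates = {"beta": [100, 200, 500, 1000],
                  "P2": [0.2, 0.4, 0.6, 0.8]}     # illustrative grids

    def one_at_a_time(run_algorithm, n_runs=30):
        """Vary one parameter while the others keep their default values;
        run_algorithm(params) is assumed to return one run's objective."""
        results = {}
        for name, grid in candidates.items():
            for value in grid:
                params = dict(defaults, **{name: value})
                results[(name, value)] = [run_algorithm(params)
                                          for _ in range(n_runs)]
        return results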
During the sensitivity analysis, an index (the weighted sum of aircraft delays) is recorded to assess performance. To calculate this index, we first performed more than 30 experiments with the default parameters, recording the minimum value of the objective function as the default optimal solution. We then conducted 30 independent experiments for each combination of parameters and represent the resulting data distributions as box plots [49]. Details of the weighted sums of delay time and the deviations over the 30 independent runs, with comparisons between different parameter settings, are given in Figures 8 and 9. To find the optimal combination of parameters for the algorithm, we used the Wilcoxon rank sum test [50] to analyse whether the computational results obtained with different parameters differ significantly (at the 5% significance level); Figures 8 and 9 show the p-values under the different parameters. It can be seen that for the breaking probabilities P_1, P_3, and P_4, there is no significant difference in the results under any combination of parameters. For P_2, we find that when P_2 ≥ 0.6, further increases in P_2 produce no significant difference in the results, so P_2 ≥ 0.6 should be chosen. Similarly, β ≥ 200 should be chosen for parameter β. The SALNS parameters used in the following calculations are set out in Table 4.
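With SciPy, the Wilcoxon rank sum test used above can be run as follows; the two samples below are illustrative stand-ins for the 30 objective values per parameter setting.

    from scipy.stats import ranksums

    # Two sets of objective values from independent runs (illustrative data;
    # in practice each list holds 30 values).
    baseline = [1021.4, 998.7, 1010.2]
    variant = [975.3, 981.9, 969.8]

    stat, p_value = ranksums(baseline, variant)
    if p_value < 0.05:                      # 5% significance level
        print(f"significant difference (p = {p_value:.3g})")
    else:
        print(f"no significant difference (p = {p_value:.3g})")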
Receding Horizon Control Analysis

It has been demonstrated that the performance of the adopted RHC strategy is closely related to its key parameters H_SY and C [13], which need to be analyzed for the problem at hand. Based on the conclusions of Section 4.2, we use the parameters of Table 4 for this analysis. We randomly selected Instance 2, Instance 3, and Instance 4 to analyze the experimental results. The length of the time horizon H_SY is set to 30 and 15 min, and the number of receding horizons C is set to 2 and 3; the aircraft within each time horizon are then scheduled optimally using the SALNS algorithm. We compare the optimization results and computation times for the different combinations of parameters in Table 5, and we have bolded the fastest solution time and the optimal objective function value for each instance in Table 6. We note that the larger the receding horizon H_SY or the number of receding horizons C, the longer the solution time of the algorithm; however, the results are not necessarily better. Although the choice of parameters influences the objective function values, the differences are small. For example, for Instance 4, the optimal value for H_SY = 30 min and C = 2 is only 1.15% larger than that for H_SY = 30 min and C = 3, but the computation time is reduced by 46.7%. Therefore, in practical applications, the decision maker can choose the combination of parameters suitable for the actual scheduling scenario by weighing the solution time against the solution quality. We also note that the RHC-SALNS algorithm produces significantly better solutions than the SALNS algorithm, improving the objective values by 14.8%, 12.7%, and 10.74% under the optimal parameter settings. This is due to the significantly faster computation of the RHC-SALNS algorithm, which can find better solutions within the specified time frame. Numerous experiments have shown that the runtime of the scheduling formulation can generally be kept within 6 s/aircraft [51]; therefore, the RHC-SALNS algorithm can meet the demands of aircraft operation scheduling decisions at the tactical level. Meanwhile, we add the computational results of the SALNS algorithm without the RHC strategy for comparison, as shown in Table 5. Taking the case of H_SY = 30 min and C = 2 for Instance 2 as an example, the numbers of aircraft with different statuses over the 12 horizons are counted as shown in Figure 10; similarly, the performance of the objective in each time horizon during the RHC-SALNS evolution over the 12 horizons is plotted in Figure 11. The values of the objective function in each planning period and the solution time for each instance are shown in Table 6.
From the statistics of aircraft status shown in Figure 10, we find that in the first receding horizon all aircraft in the planning period are active aircraft. However, as the horizon recedes backward, unplanned aircraft from previous periods accumulate in the planning time period. This is because, given the runway separation constraints between aircraft, a number of aircraft are delayed during peak hours and therefore cannot complete their scheduling within the planning time. Figure 11 shows that the SALNS algorithm converges in each planning period. At the beginning of the run, the objective function value changes significantly. For the 1st, 3rd, 6th, 7th, 9th, and 12th time horizons, the SALNS algorithm converges in about 200 generations, with improvements of 15.45-24.36%. For the 4th, 5th, 8th, 10th, and 11th time horizons, the SALNS algorithm converges within 250 generations, with improvements of 14-26%.
RHC-SALNS Compared to State-of-the-Art Methodologies

In this subsection, we compare our proposed RHC-SALNS algorithm with existing state-of-the-art algorithms on the four instances. The algorithms selected for the comparison experiments are the STW-GA [34], RHC-GA [40], TW-PSO [36], SA-PSO [33], and ILS [32] algorithms. These comparison algorithms are all currently available advanced algorithms that incorporate heuristic strategy frameworks and have been shown to improve the solution efficiency of the ARSP. A total of 30 independent experiments were conducted for each case, and the distribution of the resulting data is represented as box plots. The box plots of the weighted sums of delay time and the comparisons between the different algorithms are given in Figure 12. We now compare the performance of the six algorithms (RHC-SALNS, STW-GA, RHC-GA, TW-PSO, SA-PSO, and ILS) using the mean and standard deviation (STD) values in Table 7. For Instances 1-3, the mean and STD results obtained by the RHC-SALNS algorithm do not differ significantly from those of the other algorithms. However, for Instance 4, the mean values of the RHC-SALNS results are better than those of the other algorithms, by 0.77%, 0.18%, 0.19%, 1.5%, and 0.38%, respectively. Additionally, for Instance 4, the standard deviation of the RHC-SALNS results is considerably smaller than that of the other algorithms: it is reduced by 52.6%, 19.7%, 58.6%, 57.4%, and 13.9% compared with the other five algorithms. These results indicate that the RHC-SALNS algorithm performs increasingly well as the instance size increases and that the algorithm is general and effective. From Table 8, we can see that the execution times of the algorithms do not differ much for Instance 1 and Instance 2, but for Instance 3 and Instance 4 our proposed optimization algorithm is stable in terms of efficiency, and its computational speed remains high.
The results in Table 7 show that the large neighborhood search helps to improve the algorithm's performance. To verify this on a more formal basis, we again used the Wilcoxon rank sum test at the 5% significance level and plotted the p-value heat map shown in Figure 13. From Figure 13, we can see that for Instances 1-3 the RHC-SALNS algorithm is not significantly different from the other algorithms at the 5% significance level, and its computational results match those of the other proven advanced algorithms. However, for Instance 4, the RHC-SALNS algorithm is
significantly better than the other algorithms. The results show that the large neighborhood search process does help to obtain excellent results on large-scale instances. The breaking, reorganization, and local search processes in RHC-SALNS improve the computational performance of the framework and also make it more robust.

Analysis of Scheduling Results for Application

In this subsection, we analyse the usability of the algorithm in practice, exploring our proposed optimization framework in terms of the controllers' implementability and air traffic management rules, respectively.

Firstly, the case of H_SY = 15 min and C = 2 for Instance 2 was taken as an example, and we compared the results of our proposed RHC-SALNS algorithm with the FCFS algorithm [9]. One hour of aircraft scheduling results was randomly selected for visualization, as shown in Figure 14. We can see that the runway usage times of the aircraft optimized by the RHC-SALNS algorithm are more compact while still satisfying the safety separation requirement, and the maximum number of runway usage position exchanges is within 3.
For Instance 2, comparing the result of the RHC-SALNS algorithm with FCFS, the objective function value of the scheduling result is reduced by 27% and the maximum number of position exchanges is 3, complying with the control load requirement [16].

We further analyzed the performance of the proposed optimization framework in the application of air traffic management rule-making. The settings of the parameters η^a_i and η^d_q in the model will be discussed; these are important parameters for the optimal scheduling of aircraft operations by traffic management authorities in different regions of China. The setting of parameter η^a_i is closely related to the fuel volume of the arrival aircraft and must be determined for each aircraft's situation. For the η^d_q parameter, however, the setting is not the same across regions of China's traffic management system. In the following, we reset the η^d_q parameter to 5 min, 10 min, 15 min, 20 min, 30 min, 45 min, and 60 min, respectively, based on regional management experience. In the experiments, we investigated the actual operation of the airport and set the η^a_i parameter to a normal distribution with a mean value of 15 min and a variance of 5 min to simulate the different situations of each arrival aircraft, as sketched below. Under each η^d_q parameter setting, 100 Monte Carlo simulation experiments were conducted. The experimental results are shown in Table 8 and Figure 15.
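The experiment can be sketched as follows; run_scheduler is an assumed wrapper around one scheduling run, and, since the text specifies a variance of 5 min, the standard deviation is taken as the square root of 5.

    import numpy as np

    rng = np.random.default_rng(0)
    eta_d_settings = [5, 10, 15, 20, 30, 45, 60]      # minutes, as in the text

    def monte_carlo_delays(run_scheduler, n_arrivals, n_runs=100):
        """For each eta_d setting, draw arrival limits eta_a from N(15, var=5)
        and record the average departing-aircraft delay returned by the
        assumed wrapper run_scheduler(eta_a, eta_d)."""
        results = {}
        for eta_d in eta_d_settings:
            delays = []
            for _ in range(n_runs):
                eta_a = rng.normal(15.0, np.sqrt(5.0), size=n_arrivals)
                eta_a = np.clip(eta_a, 0.0, None)     # limits cannot be negative
                delays.append(run_scheduler(eta_a, eta_d))
            results[eta_d] = (np.mean(delays), np.std(delays))
        return results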
From Figure 15, we can see that the average departure delay of the aircraft is within 2 min regardless of the value of η^d_q, and that when η^d_q ≥ 15 min, the maximum departure delay of the aircraft is within 15 min. Additionally, as Table 8 shows, when η^d_q ≤ 10 min, the average delay time of the departing aircraft is greater than in the case of η^d_q ≥ 15 min. This is because limiting the maximum acceptable departure delay may reduce the possibilities for aircraft sequencing and thus increase the average delay time. Our proposed optimization framework can provide a theoretical basis for airport managers and air traffic controllers to develop air traffic management rules concerning the maximum acceptable delay time of departing aircraft due to runway constraints at their own airports under deterministic conditions. In real operations, however, many factors come into play, and multiple operational characteristics and constraints still need to be considered in combination.

In the following, we discuss scenarios for our algorithm in practical applications, reflecting the characteristics of real operations. Studies of the ARSP seek to determine the sequence of aircraft landings and take-offs that optimizes given objectives subject to a variety of constraints. While optimally sequencing landings and take-offs may increase runway capacity in theory, it may not always be possible to implement these solutions in practice. The challenge therefore lies in putting the theory into practice, which involves simultaneously handling safety, efficiency, robustness and competitiveness, and environmental issues [52]. We consider practical applications of the algorithm and validate our optimization framework in terms of efficiency.
Similarly, the case of H_SY = 15 min and C = 2 for Instance 2 was taken as an example. When an aircraft is allocated a departure time, it has to take off within a window of (-5, +10) minutes around that time; that is, the sequencing process has this maximum period, a separate 15 min window for each aircraft, in which to modify the sequence. In the following, we conduct 100 Monte Carlo aircraft scheduling experiments (in each experiment, we generate two additional possible flight ordering results). Throughout the experiments, each departing aircraft must comply with the window of (-5, +10) minutes around its allocated departure time.

Firstly, based on practical work experience, we input only the arrival aircraft parameters (E^a_i, η^a_i) into the algorithm, setting the η^a_i parameter to a normal distribution with a mean value of 15 min and a variance of 5 min to simulate the different situations of each arrival aircraft. After fixing the runway usage order of the incoming flights, we add the parameters of the departing flights and optimize them using the RHC-SALNS algorithm. Considering the individual 15 min window of each departing aircraft, we add a uniformly distributed perturbation of (-5, +10) minutes to the allocated departure time to simulate the actual departure time of the aircraft, and use the RHC-SALNS algorithm to assign the departure times of the departing aircraft twice. When allocating the departure times, we must also observe constraints 1-7 and run our proposed RHC-SALNS optimization framework to recalculate the departure times with the objective of minimizing the number of adjustments to the positions of the departing aircraft. In each Monte Carlo experiment, we added two different aircraft departure time perturbations to verify the efficiency of the algorithm, as sketched below. A schematic diagram of the process is shown in Figure 16.
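A sketch of the perturbation experiment is given below; reschedule is an assumed wrapper that re-runs RHC-SALNS on the perturbed departure times and returns its execution time.

    import numpy as np

    rng = np.random.default_rng(1)

    def perturb_and_reschedule(departure_times, reschedule, n_experiments=100):
        """Each allocated departure time is perturbed uniformly in (-5, +10)
        minutes; two perturbations per experiment, as in the text. The assumed
        `reschedule` wrapper re-runs RHC-SALNS and returns its execution time."""
        runtimes = []
        for _ in range(n_experiments):
            for _ in range(2):                   # two perturbations per experiment
                noise = rng.uniform(-5.0, 10.0, size=len(departure_times))
                actual = np.asarray(departure_times) + noise
                runtimes.append(reschedule(actual))
        return float(np.mean(runtimes))          # compare against ~1.5 s/aircraft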
The average execution time of the algorithm over the 100 Monte Carlo experiments and the value of the optimization objective function are shown in Table 9. As shown in Figure 17, we visualize two other possible sequencing results for one runway in one experiment after considering the 15 min time window. From Table 9, we can see that the average execution time of the algorithm, with the 15 min departure time window taken into account, is still maintained at around 1.5 s/aircraft, which meets the efficiency requirement of an aircraft scheduling algorithm. Moreover, our proposed algorithm can produce a variety of aircraft sequencing results under the 15 min time window. In actual operations, controllers can use the multiple sequencing results as references and apply them flexibly. At the same time, because the algorithm's execution time is short, the personnel concerned can re-enter the relevant parameters for optimal aircraft scheduling after a perturbation occurs, increasing the possibilities for aircraft sequencing. This experiment also indirectly verifies the efficiency of the scheduling algorithm, which can quickly provide a variety of possible sequencing solutions and backup plans for the air traffic controllers.

Conclusions

With the rapid growth of the global economy and the rapid development of the air transportation industry, aircraft demand at large airports will continue to increase with airport renovation and expansion, and it is therefore relevant to study large-scale aircraft sequencing algorithms. Aircraft scheduling also needs to consider the practicality of the algorithm: it can be applied not only in tactical traffic management, but also before the tactical phase, informing air traffic controllers at the early planning stage as they develop scheduling plans.
Therefore, we propose an algorithm incorporating multiple heuristic strategies for the aircraft runway scheduling problem. A large neighborhood search algorithm is embedded in the framework of the simulated annealing algorithm to broaden the scope over which the algorithm constructs neighborhoods in the solution space. The large neighborhood search comprises breaking, reorganization, and local search processes; starting from an initial solution, the solution is improved iteratively by alternating between these three stages. A receding horizon control strategy is used to partition the large-scale problem into several subproblems and thereby improve the efficiency of the solution. We analyze the algorithm parameters on typical examples and compare the performance of the proposed RHC-SALNS with other state-of-the-art algorithms. The experimental results show that the proposed RHC-SALNS algorithm produces good results compared with other algorithms using hybrid heuristics, and RHC-SALNS also outperforms the state-of-the-art methods on large-scale instances. In this paper, we apply an existing, mature combined arrival-departure aircraft scheduling model and study, at the theoretical level, an efficient algorithmic framework for solving the large-scale problem; the resulting sequencing algorithm can be applied effectively to tactical air traffic management. In practical applications, the algorithm can provide effective decision support for air traffic management because its execution time can reach 1 s/aircraft when solving the large-scale problem. In future work, we intend to apply it to more complex runway scheduling models and to test the proposed RHC-SALNS on other combinatorial optimization problems.
Table 1. Notations of the ARSP model.

F: The set of aircraft requiring scheduling during the planning time period.
N: The set of runways available for aircraft, where N = {n_1, n_2}.
A: The set of arrival aircraft that land at the airport and stay until the end of the planning time period, A ⊆ F.
D: The set of departing aircraft that are parked at the airport at the beginning of the planning time period, D ⊆ F.
AD: The set of arrival-departing aircraft, AD = A* ∪ D*, AD ⊆ F, where A* and D* are the arrival and departing members of connected aircraft pairs.
|F|: The number of aircraft.
|N|: The number of runways.
S_ij: The safety separation between aircraft i and aircraft j.
η^a_i: Maximum acceptable delay time of arrival aircraft i, i ∈ A ∪ A*.
η^d_q: Maximum acceptable delay time of departing aircraft q, q ∈ D ∪ D*.
κ_i: Runway occupancy time of aircraft i, i ∈ F.
ξ_iq: 1 if departing aircraft q and arrival aircraft i are a pair of connected aircraft, 0 otherwise, i ∈ A*, q ∈ D*.
χ_iq: The minimum connection time between the take-off time of departing aircraft q and the landing time of arrival aircraft i, i ∈ A*, q ∈ D*.
E^a_i: The estimated landing time of aircraft i, i ∈ A ∪ A*.
E^d_q: The estimated take-off time of aircraft q, q ∈ D ∪ D*.
M: An extremely large value, applied to simplify the model.
φ_i: Delay weight of arrival aircraft i.
t_i: The allocated runway usage time of aircraft i, i ∈ F.
Runway assignment indicator: 1 if aircraft i uses runway n, 0 otherwise, i ∈ F, n ∈ N.
α_ij: 1 if aircraft i uses the runway before aircraft j, 0 otherwise.
H_SY: The length of an operating interval.
C: The length of the receding horizon.
T_0(k): The beginning time of the receding horizon at the kth operating interval.
t_i(k): The scheduled runway usage time of aircraft i in the [T_0(k), T_0(k) + C · H_SY) interval after the kth optimization scheduling stage.
Ω(k): The set of aircraft participating in the optimization scheduling of the kth stage.
Θ(k): The set of aircraft that have completed scheduling after the kth optimization scheduling stage.
Π(k): The set of aircraft that have not completed scheduling after the kth optimization scheduling stage.
Y(k): The set of aircraft with E^d_q(k) or E^a_i(k) in the interval [T_0(k), T_0(k) + C · H_SY).
E^d_q(k): The estimated take-off time at the kth operating stage.
E^a_i(k): The estimated landing time at the kth operating stage.

Figure 4. Example of a solution representation code with ten aircraft and two runways.
Figure 5. The diagram of the large neighborhood search process with two runways.
Figure 6. The aircraft of different statuses under the receding time horizon frame.
Figure 7. Aircraft characteristics in four instances.
Figure 8. Boxplot of the values of the weighted sum of delay time over 30 independent runs between different parameter settings (top four) and p-value heat maps comparing the computed results (bottom four). Some of the data in the figures are presented in MATLAB scientific notation, e.g., "7.209e-05" means "7.209 × 10⁻⁵".
Figure 9. Box plots with respect to the deviation of the optimal solution for testing different values of parameters (a) and p-value heat maps (b). Some of the data in the figures are presented in MATLAB scientific notation, e.g., "3.334e-09" means "3.334 × 10⁻⁹".
Figure 10. Number of active aircraft and total aircraft involved in each horizon (H_SY = 30 min, C = 2). As an example, the number of aircraft with different statuses over 12 horizons is counted, as shown in Figure 10. Similarly, taking the same case as an example, the performance of objectives in each time horizon during the RHC-SALNS evolution over 12 horizons is plotted, as shown in Figure 11. The values of the objective function in each planning period and the solution time for each instance are also tabulated.

Figure 11. The performance of objectives in each schedule time horizon during the RHC-SALNS evolution.

Figure 12. Box plots with respect to the performance of objectives for different algorithms over 30 independent runs.

Figure 13. p-value heat maps comparing the algorithm computed results. Some of the data in the figures are presented using scientific notation in Matlab, e.g., "2.597e-08" means "2.597 × 10^−8".

The case of C = 2 for Instance 2 was taken as an example.

Figure 14. Comparison of runway usage time for different algorithms.

We further analyzed the performance of the proposed optimization framework in the application of air traffic management rule-making. The η^a_i parameters were set according to a normal distribution with a mean value of 15 min and a variance of 5 min to simulate different situations for each arrival aircraft. Under each η^d_q parameter setting, 100 Monte Carlo simulation experiments were conducted. The experimental results are shown in Table 8, as well as Figure 15.

Figure 15. The departing aircraft delays distribution under different η^d_q settings.

Figure 16. Schematic diagram of the steps to perform the experiment. The average execution time of the algorithm obtained for 100 Monte Carlo experiments and the value of the optimization objective function are shown in Table 9.

Figure 17. Visualization of possible optimization of aircraft scheduling results.

Table 1. Notations of the ARSP model.

Table 2. Three-dimensional priority considering three characteristic parameters.

6: if E_b ≥ E_a + S_ab, then assign E_b to t_b;
7: k = k + 1;
8: end while
9: while z < |N| do
10:   for each two aircraft a and b (b > a) belonging to RS(z)
11:   ...

NS4 means to transform the aircraft sequence of different runways: randomly select a runway, check all aircraft on that runway, and insert them before or after the aircraft on different runways with the closest runway usage time; generate a new aircraft runway queue, calculate a new objective function value, and keep the current solution if the objective function value decreases. The proposed local search process is presented in Algorithm 4; a sketch of the NS4 move follows below.
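The NS4 move just described can be sketched as follows; sol and objective are hypothetical interfaces (runway mapped to a time-ordered list of (aircraft, time) pairs, and the weighted-delay objective), not the authors' data structures.

import copy, random

def ns4_runway_swap(sol, objective, rng=None):
    """NS4 sketch: pick a runway, try moving each of its aircraft next to the
    closest-in-time aircraft on every other runway, keeping improving moves."""
    rng = rng or random.Random(0)
    r_from = rng.choice(list(sol))
    best, best_val = sol, objective(sol)
    for entry in list(sol[r_from]):
        for r_to in best:
            if r_to == r_from or entry not in best[r_from]:
                continue
            cand = copy.deepcopy(best)
            cand[r_from].remove(entry)
            times = [t for _, t in cand[r_to]]
            idx = min(range(len(times)),
                      key=lambda j: abs(times[j] - entry[1])) if times else 0
            cand[r_to].insert(idx, entry)  # insert beside the closest usage time
            val = objective(cand)
            if val < best_val:             # keep only if the objective decreases
                best, best_val = cand, val
    return best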
Algorithm 4. Local search process
1: Let K be the number of local search iterations;
2: Set i ← 1;
3: V_0 ← the solution after the breaking and reorganization process;

Table 5. Optimization results comparison, including the computational time, for different parameter (H_SY, C) settings and the associated final objective values.

Table 6. Optimization results comparison, including the computational time, for H_SY = 30 min, C = 2 and the associated final objective values.

Table 7. The computational results of RHC-SALNS compared to the state of the art.

Table 8. The average delay time for departing aircraft and STD under different η^d_q settings.

Table 9. The resulting average delay time of 100 Monte Carlo experiments for Instance 2.
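For intuition about the (H_SY, C) parameters compared in Tables 5 and 6, the receding-horizon loop can be sketched compactly; solve_stage is a hypothetical hook standing in for one SALNS optimization over the active aircraft set, and the commit rule is our simplified reading of the strategy.

def receding_horizon_schedule(aircraft, H_SY, C, solve_stage):
    """RHC sketch: at stage k, optimize the aircraft whose estimated times fall
    in [T0(k), T0(k) + C*H_SY); commit those scheduled inside the first
    interval and roll the horizon forward by H_SY. Assumes every aircraft is
    eventually scheduled within some first interval."""
    T0, committed = 0.0, []
    pending = sorted(aircraft, key=lambda a: a["estimated"])
    while pending:
        active = [a for a in pending if a["estimated"] < T0 + C * H_SY]  # Y(k)
        for a, t in solve_stage(active, T0, T0 + C * H_SY):  # schedule Omega(k)
            if t < T0 + H_SY:              # only the first interval is final
                committed.append((a, t))   # joins Theta(k)
                pending.remove(a)          # the rest remain in Pi(k)
        T0 += H_SY                         # roll the horizon to stage k + 1
    return committed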
Understanding the Effect of Side Reactions on the Recyclability of Furan–Maleimide Resins Based on Thermoreversible Diels–Alder Network

We studied the effect of side reactions on the reversibility of epoxy with thermoreversible Diels–Alder (DA) cycloadducts based on furan and maleimide chemistry. The most common side reaction is maleimide homopolymerization, which introduces irreversible crosslinking into the network, adversely affecting the recyclability. The main challenge is that the temperatures at which maleimide homopolymerization can occur are approximately the same as the temperatures at which retro-DA (rDA) reactions depolymerize the networks. Here we conducted detailed studies on three different strategies to minimize the effect of the side reaction. First, we controlled the ratio of maleimide to furan to reduce the concentration of maleimide groups, which diminishes the effects of the side reaction. Second, we applied a radical-reaction inhibitor. Inclusion of hydroquinone, a known free radical scavenger, is found to retard the onset of the side reaction in both the temperature sweep and isothermal measurements. Finally, we employed a new trismaleimide precursor that has a lower maleimide concentration and reduces the rate of the side reaction. Our results provide insights into how to minimize the formation of irreversible crosslinking by side reactions in reversible DA materials using maleimides, which is important for their application as novel self-healing, recyclable, and 3D-printable materials.

Introduction
Epoxies are an important class of thermoset polymers with high thermal stability, chemical resistance, and mechanical strength; applications of epoxies include structural materials, electronics, paints, and adhesives [1]. However, removal and recycling of conventional epoxy resin is challenging due to its irreversibly crosslinked nature. Irreversibly crosslinked epoxy polymers cannot be reshaped, reprocessed, or recycled due to their insoluble and non-meltable features. Implementation of a thermally reversible Diels-Alder (DA) crosslinked network in epoxy is one of the popular choices for the development of self-healing and recyclable epoxies [2-5]. At higher temperatures (>120 °C), the DA bonds break as a result of the retro Diels-Alder (rDA) reaction, which breaks down the 3D network structure into a flowable polymer melt. The forward Diels-Alder (fDA) reaction occurs at lower temperatures (<80 °C), which reforms the bonds and solidifies the epoxy again. Diels-Alder reactions between furan and maleimide have been widely applied in developing thermally recyclable epoxy due to the processing advantages of mild reaction conditions, few by-products, and catalyst-free requirements [6]. However, development of a truly recyclable epoxy employing the DA chemistry is challenging due to the irreversible crosslinking imparted by side reactions occurring at high temperatures (>110 °C). One of the most important and well-known side reactions is homopolymerization of maleimide moieties [7-10]. This has been evidenced from electron spin resonance (ESR) studies to occur without initiator due to free radical generation of the maleimide itself, either through thermal homolysis of the C=C double bond or, more likely, a donor-acceptor complex of maleimides [11,12]. Another possible side reaction can occur between maleimide and amine groups left in the system through a Michael addition reaction [13].
In fact, many of the early works with (bis)maleimide resins studied their reactions with aliphatic amines, which were often used as chain extenders to reduce the brittleness of the thermosets prepared with the neat (bis)maleimides [10,14]. The recent history and future directions of bismaleimide research are well covered in a review [15]. In the context of DA materials, a popular choice for adding furan moieties to a prepolymer backbone is through the epoxy-amine reaction. This is typically a high-conversion (>80%) reaction but can still be incomplete, leaving free amine in the system that could eventually react with the maleimide groups [13,16,17]. Although our focus is the maleimide homopolymerization, we expect that other irreversible crosslinks would impact the recyclability likewise. This puts an emphasis on checking for unreacted reactant and including purification steps in developing any reaction scheme. While the effect of these side reactions is often acknowledged as a limitation on the development of an epoxy capable of repeated recyclability, detailed studies of these side reactions are very limited. In this study, we first confirmed that the irreversible crosslinking in epoxy with DA adducts happening at high temperature is indeed due to the homopolymerization of maleimide. Then, we studied several strategies to reduce the effect of the side reaction. The first strategy was to vary the stoichiometric ratio of furan and maleimide to tune the crosslinking density of the reversible epoxy and to delay the onset of the side reaction. Second, a free radical inhibitor, hydroquinone, was employed to suppress the maleimide homopolymerization, which is known to be a free radical-initiated reaction [17-20]. Here, we provide a detailed study on the effects of hydroquinone and the side reaction through rheology of the reversible epoxy, which is missing in the literature. The third strategy was to replace a bismaleimide compound with a trismaleimide compound (shown in Scheme 1). In the stoichiometric case, the molar concentration of maleimide groups in the trismaleimide resin is 80% that of the bismaleimide one, reducing the reaction rate of maleimide homopolymerization. Another benefit of having an additional maleimide group on the molecule is that the larger trismaleimide molecules are less mobile. Consequently, the degree of homopolymerization would be further limited by steric hindrance. All of these strategies proposed to minimize the maleimide side reaction have significant effects on the rheological and mechanical properties of the reversible epoxy.
Scheme 1. Reaction schemes of (a) FA4-MPM and FA4-3M and (b) possible side reactions: (i) maleimide homopolymerization and (ii) Michael addition reaction between maleimide and unreacted amine.

Materials and Methods
Synthesis of DA Polymers. All compounds were purchased from Sigma-Aldrich and used as-is without further purification unless noted. For the furan-functionalization step to produce a four-arm furan prepolymer (FA4), a stoichiometric equivalent of furfuryl glycidyl ether (FGE) was reacted with Jeffamine ED-600 from Huntsman (1.17 g/1.00 g) at 80 °C for 24 h. Completion of the reaction was confirmed by 1H NMR. Precursor properties are given in Table S1. Various stoichiometric ratios of maleimide-to-furan (r = [M]/[F]) samples were prepared (0.4 to 1.0) using 0.627 g 1,1′-(methylene-4,1-phenylene)bismaleimide (MPM) to 1.00 g of FA4 prepolymer for the r = 1 sample. Maleimide and furan concentrations in samples with different ratios are given in Table S2, along with the precise ratios. Additional synthesis details can be found in our previous report [21]. Samples with stoichiometric ratio of 1.0 with hydroquinone were prepared by mixing in chloroform (mix ratio of 1.59:1.00:0.05:10 by weight of FA4:MPM:hydroquinone:chloroform). Hydroquinone and MPM were first dissolved in chloroform, followed by the addition of FA4. Solvent was removed using a rotary evaporator followed by vacuum drying at 80 °C for 24 h (~100 torr). The trismaleimide compound (3M) was synthesized by the procedure of Marref et al. [17]. Briefly, 7.29 g Jeffamine T-403 (Huntsman, Akron, OH, USA), 4.41 g of maleic anhydride (>99%, TCI, Portland, OR, USA), and 6.2 g anhydrous dimethylformamide were reacted at 115 °C for 10 min under N2 purge. Then, at 90 °C, 0.6 g triethylamine, 0.03 g of Ni(II) acetate (Acros Organics, NJ, USA), and 6.2 g of acetic anhydride were added and further reacted at 90 °C for 30 min. After cooling down, the mixture was added to 300 mL of chloroform in a separatory funnel, where the organic phase was washed two times with 0.06 M acetic acid, four times with chilled water, and two times with 0.06 M sodium hydroxide. The organic phase was dried with magnesium sulfate, concentrated with a rotary evaporator, and then further dried in vacuum at 50 °C. Typically, yields of 30-50% 3M were achieved.
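As a rough cross-check of the r = 1.0 formulation, the MPM loading can be back-calculated from the stated precursor masses; a minimal sketch, assuming molar masses of 154.2 g/mol for FGE and 358.4 g/mol for MPM (our values, only the latter approximately stated in the text) and full furan functionalization:

M_FGE = 154.2   # furfuryl glycidyl ether, g/mol (assumed)
M_MPM = 358.4   # bismaleimide MPM, g/mol (~358 g/mol per the text)

# FA4 synthesis used 1.17 g FGE per 1.00 g Jeffamine ED-600 -> 2.17 g FA4,
# so the furan content is (1.17 / M_FGE) mol per 2.17 g of FA4.
furan_per_g_FA4 = (1.17 / M_FGE) / 2.17        # ~3.5 mmol furan per g FA4

# r = [M]/[F] = 1 with two maleimide groups per MPM molecule:
g_MPM_per_g_FA4 = (furan_per_g_FA4 / 2) * M_MPM
print(round(g_MPM_per_g_FA4, 3))               # ~0.627, matching the recipe

The agreement with the stated 0.627 g MPM per 1.00 g FA4 suggests the recipe indeed targets equimolar furan and maleimide groups.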
Fourier transform infrared (FTIR) spectroscopy was performed to confirm the reactions associated with the synthesis, the side reaction, and the effect of inhibitors. FTIR spectra were obtained using a Nicolet iS50 FTIR. A DiaMaxATR high-throughput heated attachment was used for in-situ temperature experiments. Pressure was applied to the solid samples, and the temperature was set. An Omnic macro was written to take spectra every ~30 s for the first 15 min to observe the short-time effects. After 20 min, spectra were taken every 10 min to observe long-time effects. Spectra were taken at a resolution of 4 cm−1 with 16 scans per spectrum.

Oscillatory shear rheometry was conducted with a Rheometrics ARES-M instrument. The linear viscoelastic range at different temperatures was determined in the usual manner by strain sweep tests and observing the range of linear torque response. Unless specified otherwise, tests were conducted using 8.0 mm parallel plates at ramps of 2.0 °C/min and a frequency of 1.0 Hz. Auto-strain was also enabled with a minimum and maximum torque range of 50 to 250-300 µN-m, using up to 100% strain at liquefaction and 50% adjustment of the current strain. Since the polymers de-crosslink and soften at elevated temperatures, auto-compression was enabled at 0.0 ± 0.10 N.

Differential scanning calorimetry (DSC) was conducted using a TA Instruments DSC Q2000 instrument with a Refrigerated Cooling Accessory (RCS90) and nitrogen purge gas (50 mL/min). To improve bottom-of-pan contact, samples were flattened with a mortar and pestle. After testing, lids were removed to visually inspect the samples and their contact. Nominal testing parameters were 10 °C/min and 6-12 mg of sample in lidded Tzero aluminum hermetic pans. Temperature and enthalpic calibration were completed with a high-purity indium standard at the ramp rates used, and prior to testing samples on a given day, a cooler conditioning was performed along with heat flows zeroed at 0 and 150 °C.

Ultraviolet-visible spectrometry (UV-Vis) scans of the neat MPM and MPM-with-hydroquinone thin films were performed using a Thermo Scientific Evo260 UV-Visible spectrometer over the range 300 to 600 nm. A spin coater was used to form MPM thin films using a solution of 10 wt% MPM dissolved in chloroform. The thin films of MPM with hydroquinone were made with a solution of 10 wt% MPM and 1.0 wt% HQ in chloroform. To study the side reaction, the thin films were placed in an oven at 150 °C for different times up to 180 min. Spectra analysis followed the method of Okihara et al. [22].

Effect of Stoichiometric Ratios on the Side Reaction
The DA epoxy was prepared through the reaction steps in Scheme 1a (precursor properties and formulation concentrations are given in Tables S1 and S2). For the different FA4-MPM stoichiometric ratios, the initial maleimide concentration [M0] was found to range between 1.25 mol/L (for r = 0.4) and 2.51 mol/L (for r = 1.0). Figure 1a shows the rheology data and Figure 1b shows the DSC data for thermosets in which the FA4 and MPM ratio was varied. As temperature increased, softening of the polymer occurred first due to the glass transition, followed by flow behavior due to depolymerization by the rDA reactions. As the ratio of maleimide to furan decreased, the shear moduli dropped and the glass transition temperature (Tg) decreased because of reduced crosslinking density. Interestingly, on heating past 150 °C, a rapid increase in the shear moduli occurred. At these elevated temperatures (>150 °C), the thermomechanical behavior changed significantly, restricting further flowability of the polymers. If the polymer was cooled from high temperature (Figure 1a), the modulus remained glassy during the temperature ramp, indicating that permanent crosslinking had occurred.
The NMR data of the furan precursor confirmed there was no unreacted amine in the system (Figure S1), eliminating the possibility of a Michael addition reaction between maleimide and unreacted amine. Therefore, the irreversible crosslinking observed in this system may be attributed to the formation of succinimide chains from self-polymerization of maleimide [23]. This is also later confirmed from FTIR data. A prior DSC study [9] on neat MPM showed homopolymerization beginning just above the melting point of 162 °C [24] and peaking at ~230 °C in a heating ramp of 10 °C/min. Isothermal rheology on MPM showed [9] a significant viscosity increase after 8 min at 183 °C. In brief, prior DSC and rheology studies show that neat MPM homopolymerization is clearly observed at temperatures above its melting point (162 °C). In the DA epoxy, we find that maleimide homopolymerization can occur at lower temperatures because of the lack of crystallinity.

Figure 1a clearly shows the effect of varying the maleimide-to-furan ratio, r, on the rheology of the reversible epoxy and the onset of irreversible crosslinking by the side reaction. In the rheology, the Tg expectedly is higher for the higher stoichiometric samples, but the lower ratios are stiffer in the rubbery region (r = 1.0 vs. r = 0.6 and 0.8). The origins of this behavior could be attributed to the dynamic interplay between mobility and the extent of side reaction at higher temperatures. Lower ratios have higher mobility (lower Tde-gel), and because these samples were initially cooled from 140 °C (Figure S2), the extent of side reaction could be increased for the lower ratios. In the case of the lower maleimide-to-furan ratio (r = 0.4) sample, we see a delayed onset of the irreversible crosslinking. At this low ratio of maleimide to furan, it is reasonable to expect that the concentration of MPM is sufficiently low to diminish the effects of the side reaction of maleimide. For the same reason, we do not expect the side reaction in elastomeric materials prepared with little (maleimide) crosslinker to have a pronounced effect on their recyclability. For example, for rubber materials, which are generally prepared with minimal crosslinker (<5%) to strengthen but retain flexibility, we suspect the side reaction would be sluggish even at elevated temperatures (120 °C or more) because of the reduced mobility and low maleimide concentration. Rubbers can remain stiff at elevated temperatures because of their high molecular weights (50,000+ g/mol) and resulting chain entanglements. For instance, in one study using DA rubbers, the authors used between 0 and 8.1 wt% bismaleimide precursor as crosslinker for the furan-grafted rubbers, whose shear moduli remained around 0.01 MPa across 120-180 °C, implying that no significant maleimide self-reaction occurred [25].

Thermal analysis is sensitive and can also inform on the structure of DA adducts. Two isomers of the DA adduct exist ("endo-" and "exo-") of differing stability, leading to separate activation energies and temperatures at which de-bonding occurs [7,8,13,16,17,20,26,27]. The two high-temperature endothermic features seen in the DSC thermograms of Figure 1b correspond to the rDA reactions of the endo- and exo-stereoisomers, which parallel the rheology of Figure 1a. As the temperature is ramped from low to high, the FA4-MPM samples pass the glass transition, which increases in temperature with the stoichiometric ratio r (see Table 1 below). Subsequently, for all FA4-MPM samples regardless of r value, a shoulder in the thermograms appears at ~100 °C and a peak occurs at ~140 °C due to the rDA reactions of each stereoisomer. This temperature range (100 °C to 140 °C) corresponds to the rapid drop in modulus in Figure 1a as the network depolymerizes. In addition, the magnitude of the endo- and exo-peak areas correlates with the stoichiometric ratio r, since the larger the r, the greater the concentration of DA bonds for the rDA reaction to de-bond. This can be quantified by shifting and scaling the different thermograms (from Figure 1b to Figure 2a) so that the rDA response regions overlay over 50-150 °C. This scale factor is the relative heat capacity response due to the rDA reaction, and this ratio should be proportional to the concentration of DA groups, which, in turn, is expected to be proportional to the molar concentration of maleimide groups. A plot of the scale factor vs. the ratio of maleimide concentration to that of the FA4-MPM (r = 1.0) is shown in Figure 2c. The resulting linear relationship supports the abovementioned interpretation of the thermograms (i.e., rDA reactions occur predominantly at these temperatures, and not some other thermal event). Additionally, to compare the effect of the trismaleimide (3M) and bismaleimide (MPM) compounds, the FA4-3M thermogram was shifted by 11 °C and scaled to overlay the FA4-MPM(r) case, which shows excellent agreement. This 11 °C delay in the onset of the rDA reaction is likely the result of the differing flexibility of the MPM and 3M reactants. A more detailed analysis of this behavior would involve the variation of the rDA/fDA equilibrium coefficient [2]. For the further studies, all DA polymers were prepared with the equal molar stoichiometric ratio, i.e., r = 1.0.
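The overlay quantification described above amounts to a one-parameter least-squares fit over the rDA window; a minimal sketch, assuming the thermograms are sampled on a common temperature grid (array names are ours):

import numpy as np

def rda_scale_factor(heatflow, heatflow_ref, temps, lo=50.0, hi=150.0):
    """One-parameter least-squares scale s minimizing ||s*heatflow - ref||
    over the rDA window; s should track the relative DA-bond concentration.
    (The vertical/temperature shifts used in the paper are omitted here.)"""
    w = (temps >= lo) & (temps <= hi)
    x, y = heatflow[w], heatflow_ref[w]
    return float(np.dot(x, y) / np.dot(x, x))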
Table 1 shows a summary of softening points from the abovementioned DSC and rheology data. Figure S3 shows DSC and rheology data run in triplicate; for measurements from the same instrumentation and same technique, Tg values do not vary more than 2-3 °C. Variation as large as 10 °C or more of the Tg among the DSC and rheometry data, as shown in Table 1, is not unexpected. The glass transition is a "dynamic" transition and sensitive to measurement techniques and instrumentation. Hence, meaningful comparison of the Tg's of two polymers requires the same type of instrument (e.g., DSC) and experimental protocol. In a previous report in which we used a similar Jeffamine base material, we found Tg to vary by up to 20 °C [28]. Also listed in the table is a "flow" temperature, which corresponds to a complex viscosity of 10,000 Pa-s from the abovementioned rheology data. There have been a number of papers using the complex viscosity from rheometry to guide experiments with inkjet 3D printing of raw epoxy or DA resins [29-31]. Our laboratory standard of 10,000 Pa-s as the flow point is based on the maximum viscosity for the printing of similar polymeric resins.

To understand the effect of the side reaction, we pursued detailed FTIR studies. Figure 3 shows FTIR spectra of the FA4-MPM sample treated for different times at 150 °C, where the zero time was taken when the temperature stabilized at 150 °C. The absorption peak at 1775 cm−1 is indicative of DA adducts [32]. The furan and maleimide concentrations were monitored by observing the change in the peak at 1010 cm−1 in Figure 3a and the peak at 690 cm−1 in Figure 3b, respectively [33]. At 150 °C, the rDA reactions occurred rapidly to depolymerize the DA network and free up a portion of the furan and maleimide from the adducts. Interestingly, over time at 150 °C, the DA adduct peak continued to decrease and the furan peak increased, suggestive of ongoing reactions. Ultimately, the magnitudes of these peak areas plateaued, probably due to vitrification and the resulting frozen fDA/rDA equilibrium. In addition, the maleimide peak at 690 cm−1 increased initially due to the free maleimide resulting from the rDA reaction, but then continuously decreased. Continuous and rapid consumption of maleimide moieties over time implies the evolution of a side reaction at 150 °C.
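The band tracking in Figure 3 reduces to integrating baseline-corrected absorbance over fixed wavenumber windows at each time point; a minimal sketch (the window bounds are illustrative choices, not the authors' exact integration limits):

import numpy as np

def band_area(wavenumbers, absorbance, lo, hi):
    """Integrate absorbance over [lo, hi] cm^-1 after subtracting a straight
    baseline drawn between the window edges (a common simple correction);
    assumes wavenumbers are sorted in ascending order."""
    w = (wavenumbers >= lo) & (wavenumbers <= hi)
    x, y = wavenumbers[w], absorbance[w]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    return float(np.trapz(y - baseline, x))

# e.g., adduct ~1775 cm^-1, furan ~1010 cm^-1, maleimide ~690 cm^-1:
# maleimide_over_time = [band_area(wn, s, 680, 700) for s in spectra]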
To further confirm the nature of the side reaction, pure MPM powder was analyzed with FTIR after treatment at two different temperatures, 120 °C and 150 °C, for different times (as shown in Figure 4). The maleimide peak near 1140 cm−1 from Figure 3 can also be attributed to the C-N-C stretch, which shifts to 1180 cm−1 on forming the succinimide moieties generated by free radical-initiated homopolymerization of maleimide [34]. The spectra of the MPM sample exposed to 120 °C in Figure 4a showed no visible evolution of a succinimide peak. However, MPM samples treated at 150 °C showed gradual evolution of a broad peak at 1180 cm−1, indicating the formation of succinimide. Other spectral changes are observed, such as the merging of doublet peaks at 1145 cm−1; however, additional changes could also be attributed to phase change as the sample becomes glassier during homopolymerization.

Effect of a Free Radical Inhibitor on the Side Reaction
Maleimide homopolymerization is a radical-initiated polymerization and can be inhibited by using a radical scavenger such as hydroquinone [35]. Quinones are known for readily accepting or donating free radicals to form a more stable radical species [36]. To confirm hydroquinone's inhibiting ability, we first studied the effect of hydroquinone on neat MPM reactions. We used UV-Vis spectroscopy to track the progression of MPM homopolymerization with and without hydroquinone over time at 150 °C (Figure 5). The UV-Vis absorbance peak of maleimide groups without hydroquinone is in the range 290 to 310 nm depending on molecular details [22,37,38]. We found the maleimide peak for MPM to be slightly higher at 325 nm and to gradually decrease as maleimide groups were consumed by homopolymerization until the sample became glassy and the reactions became diffusion-limited. The addition of hydroquinone slowed down the homopolymerization, as shown in Figure 5b.

Figure 5. UV-Vis spectra of MPM thin films, without and with hydroquinone, held at 150 °C over time to track the progression of MPM homopolymerization. Spectra are divided by a constant to make the peak height at early time equal to one, and a constant is added so spectra are equal at 380 nm, following the method of Okihara et al. [22]. Spectra were taken at 6, 10, 15, 20, 30, 40, 60, 120, and 180 min.
We further studied the effect of hydroquinone on maleimide homopolymerization in the reversible FA4-MPM polymer with DA adducts. Absorbance peak areas assigned to the maleimide, furan, and DA cycloadduct in the FTIR spectra were tracked over time when the sample was subjected to 150 °C. Peak areas were then normalized to values between 0 and 1 for the maleimide, furan, and adduct peaks using

A_n = (A − A_min) / (A_max − A_min),

where A_n is the normalized peak area absorbance, A is the peak area absorbance at a certain time, A_min is the minimum peak area absorbance observed during the studied time frame, and A_max is the maximum peak area absorbance observed. Figure 6a,b show that the presence of hydroquinone does not change the rate of adduct consumption and furan production. Within the first 20 to 30 min, the rapid decrease of cycloadducts, or rapid increase of free furans and maleimides, occurs because the rDA reaction predominates over fDA at these temperatures, but by tracking the long-time behavior of the maleimides, hydroquinone's inhibiting ability is confirmed. The concentration of maleimide did not visibly change after ~30 min in samples containing hydroquinone, whereas samples without hydroquinone had a decreasing concentration. As expected, these results demonstrate the ability of hydroquinone to reduce the rate of maleimide homopolymerization in reversible furan-maleimide systems.
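In code form, the normalization above is a one-liner over each peak-area time series (array name hypothetical):

import numpy as np

def minmax_normalize(areas):
    """A_n = (A - A_min) / (A_max - A_min): map a peak-area series to [0, 1]."""
    a = np.asarray(areas, dtype=float)
    return (a - a.min()) / (a.max() - a.min())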
We further studied reactions of an epoxy polymer with DA adducts, with and without hydroquinone, at different temperatures. Figure 7 shows the normalized peak areas of the DA cycloadduct and MPM for the FA4-MPM samples with and without hydroquinone treated at 140, 150, and 170 °C. We did not see any specific effect of hydroquinone on cycloadduct conversion (Figure 7a). Interestingly, the addition of hydroquinone slowed down the consumption of maleimide due to the side reaction at both 140 °C and 150 °C, as expected; however, it had the reverse effect on maleimide homopolymerization at 170 °C (Figure 7b). Although beyond the scope of this paper, the anomalies in trend observed at 170 °C could be due to other reactions happening between the precursors of DA adducts and hydroquinone. However, a potential explanation is that the radicals on hydroquinone become active again, because quinones with their resonance structures only stabilize radicals, not eliminate them.

Effect of a Different Maleimide Precursor on the Side Reaction
Another strategy used to minimize the side reaction was to employ a new maleimide precursor. We replaced the bismaleimide compound (MPM) with a trismaleimide compound (3M); the chemical structures are shown in Scheme 1a. MPM has two maleimide functional groups per molecule of MW 358 g/mol, while 3M has three maleimide functional groups per molecule of MW 726 g/mol, which reduces the maleimide concentration [M] by about 20% (Table S2). The reduced [M] is expected to reduce the rate of homopolymerization. In addition, the greater size of 3M molecules is anticipated to reduce the molecular mobility, which would also adversely affect the reaction rate. As designed, the use of 3M significantly delays the onset of the side reaction at 150 °C (Figure 8a). In contrast to FA4-MPM in Figure 7b, the free maleimide concentration increases along with the furan concentration at the beginning due to rDA reactions, but there is no noticeable decrease even after three hours. In samples without (Figure 8a) and with hydroquinone (Figure 8b), the adduct concentration decreased during heating due to the predominance of rDA over fDA reactions at 150 °C. For the time scale studied, comparison of the maleimide concentration between samples without and with the addition of hydroquinone presented no significant effect of hydroquinone on the side reaction.
Hydroquinone is a known free radical inhibitor and is commonly included in these polymeric furan-maleimide chemistries to prevent maleimide homopolymerization; however, its effect on rheology, to the best of our knowledge, has not been reported [17-20,39]. In Figure 9, the samples with hydroquinone demonstrate comparatively less stiffness overall compared with the samples without. While this could be in part due to hydroquinone and chloroform plasticization of the FA4-MPM and FA4-3M networks, with the support of Figure 9b, a better explanation is that hydroquinone is serving its intended purpose of stabilizing free radicals of the maleimide and delaying the onset of irreversible crosslinking.

Briefly, the tests conducted at 150 °C were taken straight from room temperature, but the 120 °C tests were first jumped to 140 °C to de-gel the samples by rDA, adhere the samples to the plates, and then measure the properties as the temperature stabilized at 120 °C. Ensuring that samples adhere and are flush between the plates takes about 1-2 min. At 120 °C, besides just being at a lower temperature with slower free radical generation, the crosslinking due to equilibrium rDA/fDA reactions likely reduces the mobility, compared with 150 °C where many free radicals can be generated and move around freely in the molten state. Yet, at 120 °C, the power law slopes change noticeably for all the samples past 10 to 100 min. Since the activation energy barrier for rDA reactions must be higher than that of the fDA reactions, any increase in temperature from there will accelerate the frequency of rDA more than that of fDA, reducing DA crosslinks (i.e., the equilibrium shifts in the rDA direction) [40]. Therefore, this change in power law slopes could not be attributed to a relative increase of the fDA reaction at these temperatures.

The FA4-3M is softer compared with FA4-MPM even after the homopolymerization, since the maleimide units exist on a flexible polymer backbone. It can be argued that the observed material behavior of FA4-MPM and FA4-3M is the result of a blend network, with both reversible and permanent bonds contributing to the network stiffness. By examining the structures of MPM and 3M, one could also reason that MPM will serve as a stiffer backbone due to its phenyl rings. This would help explain why so little MPM must react to irreversibly stiffen the network (compare 10 min in Figures 5 and 9b). Our rheology data confirm that 3M, compared with MPM, reduces the extent of network stiffening by the side reaction. This could be due to several reasons: trismaleimide (1) is heavier, so diffusion is slower; (2) has fewer maleimide groups per volume; (3) has different reactivity; and (4) has less adduct bond stiffness when a succinimide unit or chain forms.

In Figure 10, the temperature ranges differ for thermal events among the DA polymers and the samples with hydroquinone on the second heat. Expectedly, since the FA4-3M samples have maleimides functionalized onto a longer polymer backbone (Jeffamine T-403), the glass transition temperature is lower compared to that of the FA4-MPM samples. Second, the rDA heat flows past 90 °C occur at higher temperatures for the FA4-3M compared to the FA4-MPM. This suggests the FA4-3M DA adducts have a higher activation energy barrier for rDA reactions, which could be because of a few reasons. Specifically, 3M adducts are more flexible, and MPM has its maleimide groups neighboring phenyl rings that can pull additional electron density; thus, the thermodynamic stability between DA adducts formed by MPM and 3M should vary [3].
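The competing temperature dependences invoked above can be made explicit with Arrhenius forms for the two rate constants; a generic sketch, not fitted to this system:

\[ k_{\mathrm{fDA}} = A_f \, e^{-E_{a,f}/RT}, \qquad k_{\mathrm{rDA}} = A_r \, e^{-E_{a,r}/RT} \]
\[ \frac{d}{dT}\,\ln\frac{k_{\mathrm{rDA}}}{k_{\mathrm{fDA}}} \;=\; \frac{E_{a,r}-E_{a,f}}{RT^{2}} \;>\; 0 \quad \text{since } E_{a,r} > E_{a,f}, \]

so any temperature increase accelerates de-bonding relative to re-bonding and shifts the network toward the depolymerized state.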
Conclusions
Diels-Alder polymers formed via furan-maleimide chemistry are among the most common dissociative covalent adaptable networks studied for their thermoreversibility; general reasons include their ease of synthesis and recyclability without byproducts (e.g., water), expensive catalysts, or additional stimuli (e.g., pH) [41]. This chemistry shows significant promise for producing a range of useful sustainable thermoplastics and thermosets because of reports on their adhesive, self-healing, and 3D-printing qualities [3,42]. An important question probed through this work by overall reaction kinetics and rheological analyses was how long such DA materials can remain recyclable at elevated temperatures. Retro-DA occurs above 110 °C to return the polymer network into a (re-)processable melt state. However, at that high temperature, there is a possibility of a side reaction forming irreversible crosslinks through maleimide homopolymerization. Applications of thermoset polymers typically require robust mechanical properties and thermal stability at operating conditions. To fulfill these requirements on top of the reversibility of thermosets, reversible (or recycling) processes at high temperature (e.g., >160 °C) with reduced side reaction are very important. The small-molecule bismaleimide compound MPM studied here, which is among the most commonly used maleimides [2,19,23,26,35,39,43], suffered from rapid network stiffening and lack of recyclability due to side reactions. The onset of this network stiffening was observed at 160 °C in a temperature sweep and at 10 min as a function of duration at 150 °C. A number of mitigation strategies against maleimide homopolymerization were explored. Reducing the side reaction by changing the feed ratio of reactants is one possibility. However, this is not a linear relationship, probably because of the network mobility increase associated with smaller ratios and lower crosslink density [20,39]. We also provided a detailed study on how adding a radical-reaction inhibitor, hydroquinone, could retard the side reaction. Further, in addition to having greater functionality with a larger molecular weight than the bismaleimide, the new maleimide precursor has slower diffusion and lower concentration in the system, both of which would slow the onset of the side reaction and reduce its consequent network stiffening. Complete prevention of the maleimide homopolymerization may not be feasible, especially in systems in which the (re-)processing temperatures exceed 150 °C, such as in rubber systems utilizing the DA chemistry [25,44]. Nevertheless, these strategies are all potential ways to minimize the impact of the maleimide side reaction, and they provide insight into how to extend the service life of such materials over repeated processing cycles.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/polym15051106/s1. Table S1 provides a property table with molecular weight, specific volume, and molar volume of different precursors. Table S2 includes the molarity of furan and maleimide moieties in the DA polymers. Figure S1 provides 1H-NMR spectra and assignments of FGE, Jeffamine ED-600, and FA4. Figure S2 shows the complex viscoelastic modulus and tan(δ) for samples corresponding to Figures 1a and 9a. Figure S3 is DSC and rheology data in triplicate demonstrating reproducibility.
Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
NMN Deamidase Delays Wallerian Degeneration and Rescues Axonal Defects Caused by NMNAT2 Deficiency In Vivo

Axons require the axonal NAD-synthesizing enzyme NMNAT2 to survive. Injury or genetically induced depletion of NMNAT2 triggers axonal degeneration or defective axon growth. We have previously proposed that axonal NMNAT2 primarily promotes axon survival by maintaining low levels of its substrate NMN rather than generating NAD; however, this is still debated. NMN deamidase, a bacterial enzyme, shares NMN-consuming activity with NMNAT2, but not NAD-synthesizing activity, and it delays axon degeneration in primary neuronal cultures. Here we show that NMN deamidase can also delay axon degeneration in zebrafish larvae and in transgenic mice. Like overexpressed NMNATs, NMN deamidase reduces NMN accumulation in injured mouse sciatic nerves and preserves some axons for up to three weeks, even when expressed at a low level. Remarkably, NMN deamidase also rescues axonal outgrowth and perinatal lethality in a dose-dependent manner in mice lacking NMNAT2. These data further support a pro-degenerative effect of accumulating NMN in axons in vivo. The NMN deamidase mouse will be an important tool to further probe the mechanisms underlying Wallerian degeneration and its prevention.

INTRODUCTION
Axon degeneration is a widely recognized hallmark of many neurodegenerative disorders and axonopathies, including peripheral neuropathies, Parkinson's disease, multiple sclerosis, and others [1,2]. Therefore, understanding the molecular mechanisms causing axon destruction will have significant therapeutic implications. Wallerian degeneration, the degeneration of the distal axon stump following injury [3], shares both morphological and mechanistic features with the axon pathology in several neurodegenerative disorders [2]. Recent studies revealed a crucial role for the endogenous mammalian nicotinamide mononucleotide adenylyltransferase (NMNAT) isoform NMNAT2 in axon survival [4]. NMNAT2 is actively transported along the axon, but, due to a short half-life [4,5], its levels in transected axons decline prior to any visible sign of fragmentation, suggesting it may be a trigger for axon degeneration [4]. Any of the three natural NMNAT isoforms or the slow Wallerian degeneration protein (WLD S) (an aberrant fusion protein with NMNAT activity [6]) can robustly delay Wallerian degeneration when present at sufficient levels in axons [2]. This is likely achieved by maintaining axonal NMNAT enzymatic activity after loss of endogenous NMNAT2 through increased levels and/or greater relative stability of the introduced proteins [2,4]. The findings that specific depletion of NMNAT2 in neuronal primary culture is sufficient to initiate WLD S-sensitive degeneration [4] and that mice lacking NMNAT2, which develop severe axonal defects and die at birth, can be rescued by WLD S expression [7] are also consistent with this model. More recently, other endogenous regulators of Wallerian degeneration have emerged, but NMNAT2 loss appears to be a critical, early event in a conserved axon degeneration pathway upstream of other core regulators, including the pro-degenerative sterile alpha and TIR motif containing 1 (SARM1) protein [2,8-12]. However, exactly how maintaining NMNAT activity promotes axon survival has still not been fully resolved [8].
We have proposed a model, based on pharmacological and genetic evidence, where accumulation of the NMNAT substrate, nicotinamide mononucleotide (NMN), an expected consequence of NMNAT2 depletion, promotes Wallerian degeneration [11,13]. A cornerstone of the model was the finding that expression of the E. coli enzyme NMN deamidase, for which the only known activity is conversion of NMN to its deamidated form, nicotinic acid mononucleotide (NaMN) [14], confers neurite protection in primary neuronal cultures comparable to that produced by WLD S or stable NMNATs [13]. Crucially, both WLD S/NMNATs and NMN deamidase are able to scavenge NMN, but, unlike WLD S/NMNATs, NMN deamidase lacks intrinsic NAD synthesis activity. Here we have used zebrafish larvae and a transgenic mouse line to test whether E. coli NMN deamidase can delay axon degeneration in vivo as it does in primary neuronal cultures. We have also tested whether it can protect axons from insults that do not involve physical injury, including whether it can rescue axonal defects in mice lacking NMNAT2. In all cases, we find that NMN deamidase expression confers strong protection, consistent with a pro-degenerative role for NMN accumulation in axons in vivo.

NMN Deamidase Blocks Wallerian Degeneration in Zebrafish Larvae
Many events occurring in Wallerian degeneration are evolutionarily conserved [15-19]. We previously found that the NAMPT inhibitor FK866 delays Wallerian degeneration in zebrafish larvae, likely through inhibition of NMN accumulation and a consequent rise in Ca2+ [11,13]. Therefore, we first asked whether expression of NMN deamidase can protect axons in vivo using this relatively simple vertebrate model system. NMN deamidase was transiently expressed (along with DsRed) in trigeminal and Rohon-Beard somatosensory neurons (Figure 1A). Axons were cut 48-54 hr post-fertilization using two-photon laser axotomy. Control larvae displayed a normal rate of axon degeneration after injury (expressed as lag phase [18]), usually completed within 2 hr under these experimental conditions (Figures 1B and 1C; Movie S1). In contrast, NMN deamidase blocked injury-induced axon degeneration up to at least 12 hr (Figures 1B and 1C; Movie S2), similar to the previously reported effects of WLD S [18]. A few axons (n = 3) were followed up to 24 hr and were found to remain intact (Movie S2). Axon regrowth from the site of injury confirmed successful axon transection and that the zebrafish larvae were viable for the duration of the experiment. Interestingly, quick axonal regeneration following laser axotomy in NMN deamidase-expressing neurons occurred despite the presence of the preserved distal stump (Movie S2), contrasting with what is seen in nerves in Wld S mice [20,21]. It may be that there are fewer physical restrictions influencing regrowth in the zebrafish larvae, but, as previously described [18], the regenerating axons still avoided the distal stump, suggesting the regeneration process may still be affected. These data indicate that NMN deamidase can confer strong in vivo protection against Wallerian degeneration in a vertebrate system.

Generation and Biochemical Characterization of an NMN Deamidase Transgenic Mouse
Next, we generated a transgenic mouse expressing E. coli NMN deamidase, fused to EGFP, under the control of the β-actin promoter (see the Experimental Procedures). We obtained four positive founder mice that showed no overt phenotype up to at least 1 year (mice were not aged further).
All four founders were fertile, but only one (14209) transmitted the NMN deamidase transgene to fertile offspring. Although we were able to detect the presence of transgene mRNA by RT-PCR (Figure 2A), EGFP-tagged NMN deamidase was undetectable using fluorescence microscopy, immunofluorescence, or immunoblotting for EGFP (data not shown), suggesting extremely low expression of the protein. However, NMN deamidase activity was detectable in brain extracts. Interestingly, mean enzyme activity in founder 14209 and its hemizygous offspring (0.007 ± 0.001 mU/mg) was one to two orders of magnitude lower than in the other founders, which were unable to efficiently transmit the transgene (0.069 and 0.612 mU/mg protein in founders 14207 and 14208, respectively). Enzyme activity was essentially undetectable in wild-type mice (≤0.002 mU/mg). Notably, brain levels of NMN negatively correlated with NMN deamidase activity in the founders (Figure 2C), whereas NaMN and the corresponding dinucleotide NaAD, which were only reliably detectable in transgenic mice, both positively correlated with activity (Figures 2C and 2D). These are predicted consequences of NMN deamidase activity on the NAD biosynthetic pathway (Figure 2B), and they match previously reported nucleotide determinations in cultured dorsal root ganglion (DRG) neurons exogenously expressing NMN deamidase [22]. NAD levels did not appear to correlate as strongly with NMN deamidase activity (Figure 2D), perhaps indicating that the generation of NAD from NaAD (via NAD synthase) can balance any loss of production from NMN (via NMNAT) (Figure 2B). ATP, ADP, and AMP levels appeared largely unaffected by NMN deamidase expression (Figure 2E). NMN Deamidase Expression Confers Morphological and Functional Preservation of Transected Axons We previously reported that NMN deamidase expression confers protection against injury-induced axon degeneration in primary neuronal cultures similar to WLD^S [11,13]. Here, using four complementary methods, we assessed whether this protective effect was reproducible in vivo by comparing rates of Wallerian degeneration in wild-type mice with those in hemizygous (NMNd^hemi) or homozygous (NMNd^homo) transgenic mice. First, immunoblotting showed preservation of full-length neurofilament heavy chain (NF-H) in transected sciatic nerves from NMNd^hemi and NMNd^homo mice for up to 2 or 3 weeks, respectively (Figures 3A and 3B; Figure S1C). Second, light microscopy revealed morphological preservation of significant numbers of transected myelinated sciatic nerve axons for at least 2 weeks after lesion in NMNd^hemi mice and for greater than 3 weeks in NMNd^homo mice (Figures 3C and 3D; Figure S1D). Third, structural continuity of YFP-labeled axons in lesioned sciatic nerves from NMNd^hemi mice additionally hemizygous for the YFP-H transgene was also preserved for up to 2 weeks (Figure 3E). Finally, we used electromyography to assess functionality of severed axons, and we found that conduction velocity was fully preserved in NMNd^homo mice at 7 days after sciatic nerve lesion (Figure 3F). Crucially, structural preservation of axons after sciatic nerve lesion was also seen in independent NMN deamidase founder mice (Figures S1A, S1B, and S1E), thereby ruling out the possibility that the protective phenotype in the founder 14209-derived line is the result of disruption of another gene at the site of transgene integration. While the majority of axons in transected nerves from NMNd^hemi and NMNd^homo mice were strongly protected, a small proportion appeared to degenerate rapidly.
This likely reflects variation in NMN deamidase expression between different axons, although we cannot rule out that this subpopulation was instead dying via an alternative mechanism. Interestingly, variable protection of individual fibers is also seen in transected Wld^S nerves [6]. Together, these data show that the NMN scavenger enzyme NMN deamidase delays Wallerian degeneration in mice. The level of protection approaches that seen in Wld^S mice [6], despite NMN deamidase being unable to synthesize NAD directly. Crucially, where NMNd^hemi and NMNd^homo mice were compared directly, we saw greater and more prolonged protection of axons in the NMNd^homo mice (Figures 3A-3D; Figure S1C). Enzymatic activity and deamidated nucleotide levels were higher in NMNd^homo mice than in NMNd^hemi mice (data not shown), in keeping with an expected higher expression level of the bacterial enzyme, indicating a dose-dependent protective effect for NMN deamidase similar to that seen for WLD^S [6,23]. NMN Deamidase Prevents NMN Accumulation in Injured Sciatic Nerves We have previously shown that NMN accumulates in transected wild-type sciatic nerves prior to degeneration of their axons, reasoning that this is likely due to rapid loss of axonal NMNAT2 and a subsequent failure to convert NMN to NAD [13]. In transected nerves from NMNd^hemi mice, we instead observed a reduction in the rate of accumulation of NMN, consistent with the activity of the bacterial enzyme, coupled to a compensatory increase in NaMN (Figures 4A and 4B). Accumulation of NaMN in the transected nerves from NMNd^hemi mice, like NMN accumulation in wild-type nerves, is an anticipated consequence of an early loss of NMNAT2, since conversion of NaMN to NaAD, like the conversion of NMN to NAD, requires NMNAT activity (Figure 2B). Similarly, modestly declining levels of NAD in both the wild-type and NMNd^hemi transected nerves, and of NaAD in the NMNd^hemi nerves (Figures 4C and 4D), are also realistic outcomes of loss of axonal NMNAT2 prior to frank degeneration of axons. This metabolic profiling of pyridine mononucleotides and dinucleotides in transected sciatic nerves is consistent with the hypothesis that NMN deamidase delays Wallerian degeneration by preventing NMN accumulation. NMN Deamidase Protection Is Neuron Specific and Effective against Insults that Do Not Involve Physical Injury Exogenous expression of NMN deamidase in primary superior cervical ganglion (SCG) neurons is sufficient to confer a slow Wallerian degeneration phenotype [13]. To confirm that the axon-protective effect of transgene-expressed NMN deamidase is also neuron specific, we cultured SCGs and DRGs from NMNd^hemi and NMNd^homo mice and assessed protection of cut neurites. We found that SCG and DRG neurites were both strongly protected against injury-induced degeneration in the transgenic cultures and that this protection was dose dependent (Figures 5A and 5B; Figures S3A and S3B). Protection appeared stronger in DRG neurites than in SCG neurites. Interestingly, the exogenous addition of NMN to transected NMNd^homo SCG neurites had a limited toxic effect on the protective phenotype, reflecting the ability of NMN deamidase to scavenge exogenous NMN (Figures 5A and 5B). This contrasts with the toxic effect of NMN on transected neurites protected with FK866, where NMN synthesis is inhibited but where a scavenging system for the exogenously added NMN is not present [13].
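The nerve metabolite profiles above can be rationalized with a deliberately minimal kinetic sketch of the pathway logic this section describes: after transection, NMNAT2 decays, NAMPT keeps producing NMN, and NMN deamidase (when present) diverts NMN to NaMN. All rate constants and units below are hypothetical placeholders chosen only to illustrate the qualitative behavior, not measured values.

```python
import math

def simulate(hours=24.0, dt=0.01, deamidase_rate=0.0):
    """Toy Euler integration of NMN dynamics in a transected axon.

    Assumptions (all hypothetical): constant NAMPT-driven NMN synthesis,
    first-order NMN consumption by NMNAT2 (whose level decays with a short
    half-life after injury), and optional first-order consumption by NMN
    deamidase. Units are arbitrary."""
    nampt_flux = 1.0        # NMN produced per hour
    nmnat2_halflife = 0.7   # hours; "short half-life" per the text
    k_nmnat2 = 2.0          # NMN consumed per unit NMNAT2 per hour
    nmn, nmnat2, namn = 0.5, 1.0, 0.0
    t = 0.0
    while t < hours:
        dnmn = nampt_flux - k_nmnat2 * nmnat2 * nmn - deamidase_rate * nmn
        namn += deamidase_rate * nmn * dt          # NMN -> NaMN
        nmn = max(0.0, nmn + dnmn * dt)
        nmnat2 *= math.exp(-math.log(2) / nmnat2_halflife * dt)
        t += dt
    return nmn, namn

print("no deamidase:   NMN=%.2f NaMN=%.2f" % simulate())
print("with deamidase: NMN=%.2f NaMN=%.2f" % simulate(deamidase_rate=2.0))
```

With the deamidase term switched off, NMN accumulates steadily once NMNAT2 has decayed; with it switched on, NMN plateaus while NaMN accumulates instead, mirroring the nucleotide shifts reported here for NMNd^hemi nerves.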
We next tested the relative resistance of NMNd^homo SCG neurons to two different pro-degenerative insults, vincristine toxicity and trophic factor deprivation, both of which trigger WLD^S-sensitive axon degeneration without physical injury [24-26]. In both cases, we observed substantial neurite protection in the NMNd^homo cultures relative to controls (Figures 5C-5F). These data confirm that NMN deamidase functions within neurons and that it inhibits an axon degeneration pathway that can be activated by several types of insult, including those not involving physical injury. Figure 3 legend (panels B-F): (B) Quantification of NF-H band intensity after normalization to histone 3 (H3) in wild-type, NMNd^hemi, and NMNd^homo mice. Less nerve lysate from uncut nerve was used per lane to avoid overloading, and progressively higher amounts were loaded for cut nerves (as degeneration increased) to better visualize the intact NF-H and the smear of degradation products. We corrected for this by normalizing to the loading control band and expressing values as a percentage of the normalized value in uncut nerve (mean ± SD; n = 3-4; two-way ANOVA followed by Bonferroni post hoc test, **p < 0.01, ***p < 0.001, and ****p < 0.0001; NS, non-significant). (C and D) Light microscopy images of sciatic nerves from wild-type, NMNd^hemi, and NMNd^homo mice at the indicated time points after cut (C) and quantification of the percentage of intact axons (D) (mean ± SD; n = 3; two-way ANOVA followed by Bonferroni post hoc test, ****p < 0.0001). (E) Fluorescent images of sciatic nerves from YFP and YFP/NMNd^hemi mice at the indicated time points after cut. Axon continuity is better preserved in the NMN deamidase-expressing axons. (F) Conduction velocities measured in uncut nerves or 7 days after cut via stimulation of the sciatic nerve to evoke electromyographic activity (mean ± SEM; n = 5-6; one-way ANOVA, ***p < 0.001). See also Figure S1. NMN Deamidase Rescues Axonal Defects in NMNAT2-Deficient Mice Mice homozygous for the Nmnat2^gtE gene trap allele lack detectable NMNAT2 expression and die at birth, with widespread axon truncation caused by an underlying axon outgrowth defect [7]. WLD^S expression or an absence of SARM1 can rescue the axon extension defect, allowing mice to survive into adulthood, indicating that the underlying defect shares aspects of its mechanism with Wallerian degeneration [7,10]. Consistent with an involvement of NMN in promoting axon degeneration [13], we previously found that blocking NMN synthesis in Nmnat2^gtE/gtE DRG cultures with the NAMPT inhibitor FK866 partially rescues neurite extension [10]. However, technical limitations meant more pronounced or prolonged rescue could not be achieved. We therefore assessed whether NMN deamidase expression in NMNAT2-deficient mice can restore axon extension and promote their survival. We first assessed rescue of the axon defect in newborn Nmnat2^gtE/gtE;NMNd^hemi pups (lacking NMNAT2 but expressing NMN deamidase). Although Nmnat2^gtE/gtE;NMNd^hemi pups still died during the first post-natal day, they lacked the characteristic hunched posture of Nmnat2^gtE/gtE pups that die at birth (Figure 6A). Consistent with this rescue of gross morphology, we found that distal phrenic nerve branches, which are absent from diaphragms of embryonic day (E)18.5 Nmnat2^gtE/gtE embryos, were present in Nmnat2^gtE/gtE;NMNd^hemi embryos (Figure 6C). In addition, neurite outgrowth in E18.5 DRG and SCG explant cultures was significantly rescued (Figures 6D and 6E).
Nmnat2^gtE/gtE;NMNd^hemi DRG neurite outgrowth actually matched that of wild-type and NMNd^hemi controls. In contrast, rescue of Nmnat2^gtE/gtE;NMNd^hemi SCG neurite outgrowth was substantial, but less complete (including no rescue of a small subpopulation of neurites). Interestingly, this is consistent with the marginally weaker protection against Wallerian degeneration in SCG cultures compared to DRG cultures, and it may reflect different levels of transgene expression in different neuronal types. Crucially, the Nmnat2^gtE allele remained effectively silenced when NMN deamidase was expressed (Figure 6B), excluding the possibility that changes in Nmnat2 silencing are responsible for the observed rescue. To assess whether higher doses of NMN deamidase can confer improved rescue, we inter-crossed Nmnat2^+/gtE;NMNd^hemi mice. These crosses produced a number of mice lacking NMNAT2 and positive for the NMN deamidase transgene that survived beyond weaning up to at least 2-3 months of age (Figure 6F). Our current genotyping methods cannot distinguish NMNd^hemi mice from NMNd^homo mice with complete certainty, but the fact that no Nmnat2^gtE/gtE;NMNd^hemi pups from Nmnat2^+/gtE × Nmnat2^+/gtE;NMNd^hemi crosses survived past the first post-natal day (Figure 6F) strongly suggests the viable NMNAT2-deficient/NMN deamidase-positive mice from the Nmnat2^+/gtE;NMNd^hemi inter-crosses were homozygous for the NMN deamidase transgene (Nmnat2^gtE/gtE;NMNd^homo mice). Interestingly, this implies dose-dependent rescue, similar to that conferred by WLD^S in the same model [7], matching the dose-dependent protection of axotomized axons (see above). The ability of NMN deamidase expression to rescue axon defects in Nmnat2^gtE/gtE mice and promote their survival in a dose-dependent manner provides strong support for a key involvement of NMN accumulation in the axon truncation phenotype caused by NMNAT2 deficiency. DISCUSSION Here we have demonstrated that NMN deamidase is able to delay Wallerian degeneration in vivo, thereby extending previous findings obtained in primary neuronal cultures [11,13,22]. It can do this in two disparate vertebrate species and seemingly also when expressed at very low levels. Neurites in primary cultures of neurons from NMN deamidase transgenic mice also show strong protection following vincristine treatment and nerve growth factor (NGF) withdrawal, both of which are insults that do not involve physical injury. Strikingly, NMN deamidase expression can also rescue the severe axon outgrowth defect in mice lacking NMNAT2, which has an underlying degenerative basis [7,10], to the extent that NMNAT2-deficient mice, rather than dying at birth, can instead survive well past weaning age if they express enough of the enzyme. Notably, all of these outcomes, including the dose dependency of the protective phenotype, mirror those previously seen with WLD^S, thereby supporting the idea that NMN deamidase influences a core step in an evolutionarily conserved WLD^S-sensitive axon degeneration pathway that is activated in a variety of situations. When considered in isolation, all the data presented here are consistent with a model in which preventing NMN accumulation in axons after induced or constitutive depletion of NMNAT2 protects them from degeneration.
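The inference that the surviving rescued mice were transgene-homozygous rests on simple Mendelian arithmetic, assuming the transgene segregates as a single autosomal insertion (as the hemizygous/homozygous terminology implies). A sketch of the expected genotype frequencies:

```python
from fractions import Fraction
from itertools import product

def intercross(alleles):
    """Genotype frequencies among offspring of a monohybrid intercross."""
    counts = {}
    for g1, g2 in product(alleles, repeat=2):
        geno = "".join(sorted((g1, g2)))
        counts[geno] = counts.get(geno, Fraction(0)) + Fraction(1, 4)
    return counts

nmnat2 = intercross(("+", "g"))      # g = gtE gene-trap allele
transgene = intercross(("T", "-"))   # T = NMN deamidase transgene

# Nmnat2+/gtE;NMNd(hemi) x Nmnat2+/gtE;NMNd(hemi), independent loci:
p = nmnat2["gg"] * transgene["TT"]
print("Expected Nmnat2(gtE/gtE);NMNd(homo) offspring:", p)  # 1/16
```

From the Nmnat2^+/gtE × Nmnat2^+/gtE;NMNd^hemi cross, by contrast, no offspring can be homozygous for the transgene at all, which matches the reported absence of survivors from those litters; in the double-hemizygous inter-crosses the fully rescued class is expected in only about 6% (1/16) of pups.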
We originally proposed this model in response to two observations that contradicted the view that maintaining NAD levels is instead critical for axon survival. [Displaced Figure 5 legend fragment: quantification (including panel F) was calculated from three fields per sample in three independent experiments (mean ± SD; two-way ANOVA followed by Bonferroni post hoc test, *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001; NS, non-significant). See also Figure S3.] The first was the ability of the NAMPT inhibitor FK866 to delay Wallerian degeneration despite a simultaneous decline in NAD levels, while the second was our previous finding that exogenous expression of NMN deamidase in primary neuronal cultures can phenocopy WLD^S, despite there being no clear route for generating NAD from its product, NaMN, once NMNAT2 depletion has occurred [13]. We argued that the simplest way to link these observations to the strong protective effect of NMNATs (which do maintain NAD levels) was through a shared ability to keep NMN levels low. However, a recent study has questioned this model, reporting situations where genetic and pharmacological manipulation of the NAD biosynthetic pathway in primary DRG cultures raises NMN either in uninjured neurites that remain unaffected or in injured neurites that show delayed Wallerian degeneration [22]. While these findings seem inconsistent with the NMN accumulation model, it is notable that all instances where raised NMN correlates with neurite preservation also involve substantial increases in NAD. An increase in NAD is also not universally sufficient to protect injured axons [22,27], so it was postulated that changes in other, as yet unknown, metabolites cause axon degeneration [22]. However, the common protective mechanism of FK866, bacterial NMN deamidase, and mammalian NMNAT is more likely to be their known, shared influence on NMN than similar effects on another metabolite that has not yet been identified or shown to be similarly affected by all three manipulations. Likewise, it can be argued that a direct involvement of changes to endogenous levels of NaMN or NaAD in the protective mechanism is unlikely. Given that our new in vivo data further support a pro-degenerative role for NMN, we suggest that the best fit for all currently available data is a model involving both NMN and NAD. For example, the effects of accumulated NMN may be evident only in the context of physiological or declining NAD levels, as occurs in transected axons in the first few hours after injury [13,22]. Given their structural relatedness, high NAD could inhibit the pro-degenerative function of NMN directly. Alternatively, accumulated NMN could activate early steps in the degeneration pathway, irrespective of NAD concentration, but raising NAD could compensate for later SARM1-dependent NAD depletion [22,28]. Crucially, this would reconcile the NMN accumulation model [10,11,13] with a SARM1-dependent NAD depletion model [22,28], and it provides a simple explanation for FK866-mediated protection of cut axons. Of note, although the protective effect of FK866 has been reported in several studies, the degree of protection achieved does vary [11,13,22,27,29]. Different sources of FK866 could account for this variability; however, in our hands, FK866 is one of the most effective pharmacological tools for delaying Wallerian degeneration, consistently protecting injured DRG neurites for up to 48 hr [13]. NMN deamidase transgenic mice have other potential uses.
The ability of NMN deamidase to rescue axon defects in NMNAT2-deficient mice suggests that these mice could be a useful tool to study the putative detrimental effects of NMN accumulation in models of neurodegenerative diseases. Crucially, comparisons between the abilities of NMN deamidase and WLD^S to ameliorate symptoms in any given model should help to resolve whether the primary underlying defect is axon degeneration or a failure to produce NAD, particularly in the nucleus where WLD^S is most abundant [2]. Outside of the axon degeneration field, NMN deamidase transgenic mice could be informative with respect to general NAD metabolism. In keeping with the reported low expression in the nervous system of NAD synthase, which converts NaAD to NAD [30], these mice have abnormally high steady-state levels of both NaMN and NaAD. It will, therefore, be interesting to establish to what extent normal regulatory feedback loops in the NAD biosynthetic pathway [31] are altered in NMNd^hemi and NMNd^homo mice. Importantly, such changes might explain why the transgenic founders with higher enzyme activity failed to transmit the transgene. In conclusion, our data strongly support an in vivo role for NMN accumulation in triggering axon degeneration both after injury and when NMNAT2 is constitutively depleted, with axon protection by WLD^S/NMNATs and NMN deamidase in both situations at least partially relying on their ability to limit NMN accumulation. Animal Procedures Animal work was carried out in accordance with the Animals (Scientific Procedures) Act, 1986, under Project Licenses PPL 70/7620 and PPL 40/3482 following the appropriate ethical review processes at the University of Nottingham and the Babraham Institute. Design and generation of a transgene construct for expression of E. coli NMN deamidase fused to EGFP from a β-actin promoter and procedures for PCR-genotyping founder mice and their transgene-positive offspring are detailed in the Supplemental Experimental Procedures. Sciatic nerve lesions and the analysis of YFP-labeled nerves were performed as described previously [13,32] (see also the Supplemental Experimental Procedures). Zebrafish experiments were performed as described in [11] (see also the Supplemental Experimental Procedures). Figure 6 legend (panels C-F): (C) βIII-tubulin immunostaining (left-hand images) revealing the presence of phrenic nerve terminal branches in a Nmnat2^gtE/gtE;NMNd^hemi diaphragm. These are invariably absent from Nmnat2^gtE/gtE diaphragms. Boxed regions are magnified (right). Acetylcholine receptor (AChR) clusters are labeled by counter-staining with bungarotoxin-TRITC. Innervation of AChR clusters is only evident in the Nmnat2^gtE/gtE;NMNd^hemi diaphragm (images representative of n = 3 Nmnat2^gtE/gtE and n = 5 Nmnat2^gtE/gtE;NMNd^hemi diaphragms). Innervation in Nmnat2^gtE/gtE;NMNd^hemi diaphragms is superficially similar to that in wild-type and NMNd^hemi controls (data not shown). (D and E) Radial neurite outgrowth from DRG explants (D) and from SCG explants (E). Ganglia were taken from E18.5 embryos of the genotypes listed and outgrowth followed over 7 days in culture (mean radial extension in mm ± SEM; average of two ganglia per embryo for n = 3-5 embryos per genotype; two-way repeated-measures ANOVA with Dunnett's multiple comparisons post hoc tests of wild-type versus other groups, *p < 0.05 and ***p < 0.001).
Representative images of neurite outgrowth at 7 days for Nmnat2^gtE/gtE and Nmnat2^gtE/gtE;NMNd^hemi DRG (D) and SCG (E) cultures are shown to the right of each graph in each panel. Nmnat2^gtE/gtE;NMNd^hemi DRG and SCG neurite outgrowth (from ganglia positioned on the left) extends beyond the right-hand edge of the images. Two populations of neurites can be differentiated in Nmnat2^gtE/gtE and Nmnat2^gtE/gtE;NMNd^hemi SCG cultures (E); most Nmnat2^gtE/gtE neurites show severely retarded outgrowth (mass, solid line), with a subpopulation extending further (dashed line); most Nmnat2^gtE/gtE;NMNd^hemi neurites instead show near-normal outgrowth (mass, solid line), while a subpopulation shows severely limited outgrowth (dashed line). (F) Viability past post-natal day 1 (P1) for offspring from Nmnat2^+/gtE × Nmnat2^+/gtE;NMNd^hemi and Nmnat2^+/gtE;NMNd^hemi × Nmnat2^+/gtE;NMNd^hemi matings. Statistical Analysis Data are expressed as mean ± SEM or SD (as stated). Statistical analysis was performed in GraphPad software using ANOVA or Student's t test, with p values less than 0.05 being considered significant for any set of data. All other methods are described in the Supplemental Experimental Procedures.
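As a compact illustration of the quantification scheme described in the Figure 3 legend and the Statistical Analysis section above: band intensities are first normalized to the H3 loading control, expressed as a percentage of the uncut-nerve value, and multiple comparisons are then Bonferroni-corrected. The numbers below are invented placeholders, and scipy stands in for the GraphPad software named in the text.

```python
from scipy import stats

def percent_of_uncut(nf_h, h3, nf_h_uncut, h3_uncut):
    """Normalize an NF-H band to its H3 loading control and express it as
    a percentage of the normalized uncut-nerve value (per Figure 3 legend)."""
    return 100.0 * (nf_h / h3) / (nf_h_uncut / h3_uncut)

print(f"example band: {percent_of_uncut(1.2, 0.9, 2.0, 0.8):.1f}% of uncut")

# Hypothetical normalized values (% of uncut) at one post-lesion time point.
wild_type = [4.0, 6.5, 3.2, 5.1]
nmnd_homo = [71.0, 64.5, 80.2]

t_stat, p = stats.ttest_ind(wild_type, nmnd_homo)
n_comparisons = 3  # e.g., three post-lesion time points compared
p_bonferroni = min(1.0, p * n_comparisons)
print(f"raw p = {p:.2g}; Bonferroni-corrected p = {p_bonferroni:.2g}")
```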
Roles of Ubiquitination and Deubiquitination in Regulating Dendritic Cell Maturation and Function Dendritic cells (DCs) are specialized antigen-presenting cells that play a key role in immune homeostasis and the adaptive immune response. DC-induced immune tolerance or activation is strictly dependent on the distinct maturation stages and migration ability of DCs. Ubiquitination is a reversible protein post-translational modification process that has emerged as a crucial mechanism that regulates DC maturation and function. Recent studies have shown that ubiquitin enzymes, including E3 ubiquitin ligases and deubiquitinases (DUBs), are pivotal regulators of DC-mediated immune function and serve as potential targets for DC-based immunotherapy of immune-related disorders (e.g., autoimmune disease, infections, and tumors). In this review, we summarize the recent progress regarding the molecular mechanisms and function of ubiquitination in DC-mediated immune homeostasis and immune response. INTRODUCTION Ubiquitination is a post-translational mechanism of protein modification that plays a crucial role in diverse biological processes by regulating the degradation and activity of substrate proteins (1,2). The covalent conjugation of ubiquitin to lysine (K) residues of substrate proteins is mediated by an enzymatic reaction cascade. This process is catalyzed by the sequential activity of ubiquitin-activating (E1), ubiquitin-conjugating (E2), and ubiquitin-ligating (E3) enzymes (3). The substrate specificity of ubiquitination is primarily determined by E3 ubiquitin ligases, which recognize substrate proteins and catalyze the conjugation of ubiquitin to target proteins (3). The E3 ubiquitin ligases are a large, diverse group of proteins, characterized by one of several defining motifs. These have been historically grouped into two classes: the RING (really interesting new gene)-type E3 ubiquitin ligases and the HECT (homologous to the E6AP carboxyl terminus)-type E3 ubiquitin ligases (4). During the formation of polyubiquitin chains, the carboxyl-terminal glycine residue of ubiquitin is typically attached to an internal K residue or the amino-terminal methionine (M1) of another ubiquitin, which can give rise to eight types of polyubiquitin chain: K6, K11, K27, K29, K33, K48, K63, and M1 chains (5,6). The type of polyubiquitin chain can influence the fate of the substrate, and different chain types exert distinct biological functions. For example, K48- and K11-linked polyubiquitin chains are usually involved in directing proteins for proteasome-dependent degradation, whereas K63-linked polyubiquitin chains have nonproteolytic functions, including the regulation of protein kinase activation and DNA damage repair (7). Ubiquitination is a reversible process that can be reversed by deubiquitination, which is performed by DUBs (8). DUBs can cleave conjugated ubiquitin chains from ubiquitinated substrates for signal transduction (8). Ubiquitination and deubiquitination play an essential role in the regulation of different aspects of immune function. The role of ubiquitination in lymphocyte biology has been reviewed elsewhere (6, 9-11). Therefore, this review will focus on dendritic cells (DCs). DCs are specialized antigen-presenting cells that initiate and shape both the innate and adaptive immune responses (12). DCs arise from a hematopoietic lineage and consist of developmentally and functionally distinct subsets.
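Before turning to DC subsets, it may help to pin down the linkage nomenclature used in this introduction as a small lookup table. The mapping below encodes only the annotations the review itself states (K48/K11 directing degradation; K63 serving non-proteolytic signaling); the remaining linkages are deliberately left unannotated, and the sketch is illustrative rather than exhaustive.

```python
# Eight possible ubiquitin-ubiquitin linkages: seven internal lysines plus
# the amino-terminal methionine (M1, "linear" chains).
LINKAGES = ["K6", "K11", "K27", "K29", "K33", "K48", "K63", "M1"]

# Functional annotations stated in the text; others left as unannotated.
CHAIN_FATE = {
    "K48": "proteasome-dependent degradation",
    "K11": "proteasome-dependent degradation",
    "K63": "non-proteolytic signaling (kinase activation, DNA damage repair)",
}

for linkage in LINKAGES:
    print(f"{linkage}: {CHAIN_FATE.get(linkage, 'not annotated here')}")
```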
Under steady-state conditions, DCs are customarily divided into conventional DCs (cDCs) and plasmacytoid DCs (pDCs). Moreover, cDC subsets comprise type 1 cDCs (cDC1s; CD8α+/CD103+ DCs) and type 2 cDCs (cDC2s; CD11b+ DCs), which are specialized at priming CD8+ and CD4+ T cells, respectively (13). Under inflammatory conditions, circulating Ly6C^hi monocytes directly differentiate into monocyte-derived DCs (MoDCs), which are induced by the inflammatory microenvironment and possess pro-inflammatory activity (13,14). Recent studies have demonstrated that ubiquitination regulates DC-mediated immune homeostasis and adaptive immune responses. In this review, we describe the recent progress regarding the crucial role of ubiquitination in DC maturation and function (Table 1). MOLECULAR MECHANISMS OF UBIQUITINATION-MEDIATED DC MATURATION Regulation of MHCII and Costimulatory Molecules by Ubiquitination Under physiological conditions, DCs maintain an immature or steady state to induce immune tolerance and maintain immune homeostasis (46). Upon toll-like receptor (TLR) stimulation, DCs undergo a transition from an immature to a mature state, accompanied by markedly upregulated membrane molecules: MHCII and the costimulatory molecules CD80, CD86, and CD40 (13). Ubiquitination is an important mechanism that controls the surface expression and transport of MHCII by ubiquitin-dependent protein degradation in the lysosome (15,16). The E3 ubiquitin ligase MARCH1, a member of the membrane-associated RING-CH (MARCH) family, ubiquitinates the β subunit of the MHCII molecule via its cytoplasmic lysine (17). Apart from MHCII, MARCH1 also ubiquitinates CD86 in DCs, inducing its intracellular degradation via the transmembrane domains (18,19). Moreover, the transmembrane domains of MARCH1 interact with but do not ubiquitinate CD83. CD83 competes with CD86 for interaction with MARCH1, thereby inhibiting CD86 ubiquitination and promoting the expression of CD86 on the surface of DCs (20). Overall, MARCH1 negatively regulates DC maturation by inducing the ubiquitin-dependent degradation of MHCII and CD86, and its expression is downregulated during DC maturation. In addition, ubiquitination can also affect DC maturation by regulating the level of MHCII transcription. FIGURE 1 | Ubiquitination regulates DC activation and maturation via NF-κB signaling. Ubiquitination regulates both canonical and non-canonical NF-κB signaling in DCs. DCs are activated by Toll-like receptor ligands and inflammatory factors, such as GM-CSF, during an infection or inflammation. Activated DCs produce various cytokines, such as the proinflammatory cytokines IL-6, IL-12, and IL-23, which, in turn, regulate the differentiation of T cells. The deubiquitinase OTUB1 promotes canonical NF-κB activity by cleaving K48-linked polyubiquitination of UBC13. Rhbdd3 negatively regulates DC activation by recruiting A20 and facilitates A20-mediated deubiquitination of NEMO. CYLD inhibits activation of canonical NF-κB signaling by removing K63-linked polyubiquitin chains from NEMO. c-Cbl inhibits NF-κB signaling by stabilizing p105 and accumulating p50 via its RING domain, thus attenuating the recruitment of stimulatory NF-κB heterodimers and suppressing the activation of DCs. MKRN2 and PDLIM2 synergistically promote polyubiquitination and degradation of p65, thereby suppressing NF-κB-dependent activation of DCs.
Trabid deubiquitinates and stabilizes Jmjd2d, a histone demethylase that removes the transcriptionally repressive histone modifications H3K9me2 and H3K9me3 from the Il12 and Il23 promoters to promote the recruitment of c-Rel, thereby facilitating the production of those cytokines in activated DCs. CRL4^DCAF2 negatively regulates IL-23 production in DCs by controlling NF-κB-inducing kinase (NIK) stability and noncanonical NF-κB activation. Regulation of NF-κB Activation by Ubiquitination The canonical NF-κB signaling pathway is a key mediator of TLR-stimulated DC activation and functional maturation (47). Accumulating evidence suggests that ubiquitination plays a crucial role in the regulation of NF-κB signaling in both DC tolerance and activation. Several E3 ubiquitin ligases and DUBs in DCs regulate NF-κB activation by ubiquitination or deubiquitination of the NF-κB essential modulator NEMO (also known as IKKγ) and the NF-κB subunits p65 and c-Rel (Figure 1). In general, TLR stimulation leads to the recruitment of the adaptor protein MyD88 and the kinases IRAK1 and IRAK4. The IRAKs then dissociate from the receptor complex and interact with the E3 ubiquitin ligase TRAF6, which results in the K63 polyubiquitination of IRAK1/4 and of TRAF6 itself in cooperation with the E2 enzyme Ubc13 (48). As for TNFR stimulation, LUBAC (linear ubiquitin chain assembly complex) is required for full activation of NF-κB by the MyD88-dependent pathway. The LUBAC complex extends K63-linked polyubiquitin chains with M1-linked polyubiquitin chains, resulting in recruitment and activation of the IKK complex through its adaptor NEMO, thereby liberating NF-κB transcription factors. The ubiquitin-editing protein A20 contains a deubiquitinase ovarian tumor (OTU) domain, which cleaves the K63-linked ubiquitin chains of NEMO, thereby inhibiting IKK and NF-κB activation (30,51). It has been proposed that the rhomboid family protease Rhbdd3 recruits A20 by binding K27-linked polyubiquitin on Lys302 of NEMO via its ubiquitin-binding association (UBA) domain and facilitates A20-mediated deubiquitination of NEMO (31). An A20 or Rhbdd3 deficiency causes aberrant DC activation and increased production of the inflammatory cytokines IL-12 and IL-6 in response to TLR ligands (31,32). However, emerging evidence shows that the deubiquitinase activity of A20 is dispensable for NF-κB signaling (52). A20 negatively regulates TNF-induced NF-κB activation by binding and stabilizing linear ubiquitin chains (49,50). Therefore, the precise molecular mechanism by which A20 regulates TLR- or TNF-induced, NF-κB-dependent DC activation deserves further investigation. The nuclear E3 ubiquitin ligase PDLIM2 negatively regulates DC activation by promoting polyubiquitination and proteasome-dependent degradation of the p65 subunit of NF-κB through its LIM (abnormal cell lineage 11-islet 1-mechanosensory abnormal 3) domain (34). The chaperone protein HSP70 is required for the transport of the NF-κB-PDLIM2 complex to the proteasome (35). A deletion of PDLIM2 in DCs enhances the LPS-induced production of the proinflammatory cytokines IL-6 and IL-12p40 (34). Interestingly, another study has reported that the E3 ubiquitin ligase MKRN2 and PDLIM2 may synergistically promote polyubiquitination and p65 degradation, thereby suppressing NF-κB-dependent DC activation (34,36).
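Because several regulators with opposite signs converge on the same NF-κB nodes in this section and the Figure 1 legend, a minimal edge list helps keep the wiring straight. Each entry below restates a relationship given in the text (regulator, target, mechanism, net effect on NF-κB-dependent DC activation); the data structure itself is only an organizational device, not part of the original review.

```python
from collections import namedtuple

Edge = namedtuple("Edge", "regulator target mechanism effect")

NFKB_REGULATORS = [
    Edge("A20", "NEMO", "removes K63-linked chains (OTU domain)", "inhibits"),
    Edge("Rhbdd3", "NEMO", "recruits A20 via K27-ubiquitin binding", "inhibits"),
    Edge("CYLD", "NEMO", "removes K63-linked chains", "inhibits"),
    Edge("c-Cbl", "p105", "stabilizes p105, accumulates p50", "inhibits"),
    Edge("PDLIM2/MKRN2", "p65", "polyubiquitination and degradation", "inhibits"),
    Edge("OTUB1", "UBC13", "cleaves K48 chains, stabilizes UBC13", "promotes"),
    Edge("Trabid", "Jmjd2d", "deubiquitinates/stabilizes the demethylase", "promotes"),
]

for e in NFKB_REGULATORS:
    print(f"{e.regulator:>13} -> {e.target:<7} [{e.effect}] {e.mechanism}")
```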
The E3 ubiquitin ligases c-Cbl and Cbl-b are two members of a highly evolutionarily conserved family of Cbl proteins characterized as negative regulators of DC activation in vitro. c-Cbl inhibits the NF-κB pathway by stabilizing p105 and accumulating p50 via its RING domain, thereby attenuating the recruitment of stimulatory NF-κB heterodimers and suppressing expression of the proinflammatory cytokine IL-12 (40). Similarly, bone marrow-derived dendritic cells (BMDCs) from Cbl-b-deficient mice exhibit increased cytokine production (IL-1α, IL-6, and TNF-α) upon TLR stimulation. However, the molecular mechanisms by which Cbl-b is involved in DC activation remain largely unclear (41). Jin et al. (37) reported that the deubiquitinase Trabid plays an essential role in mediating epigenetic regulation of the TLR-induced expression of IL-12 and IL-23 in DCs. Another NF-κB family member, c-Rel, plays a key role in the TLR-induced expression of IL-12 and IL-23 (53,54). Moreover, Trabid promotes the recruitment of c-Rel to the IL-12a, IL-12b, and IL-23a gene promoters by deubiquitinating and stabilizing the histone demethylase Jmjd2d, which facilitates the production of these cytokines in activated DCs. Deubiquitination can also augment NF-κB-dependent DC activation by stabilizing an E2-conjugating enzyme. A recent study has demonstrated that OTUB1 promotes NF-κB activity in DCs by cleaving K48-linked polyubiquitination and increasing the stability of the E2-conjugating enzyme UBC13 (38). UBC13 induces the activation of TGFβ-activated kinase 1 (TAK1) by ubiquitinating IL-1 receptor-associated kinase 1 (IRAK1) and TNF receptor-associated factor 6 (TRAF6), which is essential for NF-κB activity (55). In addition, a conditional deletion of OTUB1 in DCs impairs the TLR-induced activation of DCs and the production of IL-12, IL-6, and TNF (38). Together, these data suggest a pivotal role for OTUB1 in driving NF-κB-dependent cytokine production by DCs. Another deubiquitinase, CYLD, is also an important regulator of NF-κB activity. Full-length CYLD inhibits activation of NF-κB signaling by removing K63-linked polyubiquitin chains from the signaling molecules NEMO, TRAF2, and RIP1 (receptor-interacting protein 1) (56). However, the naturally occurring short splice variant of CYLD (sCYLD), which lacks the TRAF2- and NEMO-binding sites, results in increased NF-κB activity (56). A conditional deletion of sCYLD in DCs augments the activation of CD8α+ DCs and the production of TNF, IL-10, and IL-12 in Listeria-infected mice. This suggests that this regulation depends on sCYLD-mediated deubiquitination; however, the precise targets remain unclear (39). Ubiquitination has also been found to play an important role in regulating the non-canonical NF-κB pathway involved in cytokine production by DCs. A recent study has revealed that the E3 ubiquitin ligase CRL4^DCAF2 is involved in GM-CSF-induced expression of the inflammatory cytokine IL-23. CRL4^DCAF2 negatively regulates IL-23 production in DCs by controlling NF-κB-inducing kinase (NIK) stability and noncanonical NF-κB activation. Previous studies suggest that cellular inhibitor of apoptosis 1 and 2 (cIAP1/2) activation is mediated through K63 ubiquitination by TRAF2 and promotes NIK degradation (57). Interestingly, CRL4^DCAF2 directly promotes polyubiquitination and subsequent degradation of NIK independently of the regulation of cIAP activity. The DC-conditional deletion of DCAF2 enhances IL-23 production in DCs and promotes the development of psoriasis (43).
Collectively, CRL4^DCAF2 negatively regulates the production of the inflammatory cytokine IL-23 in DCs by ubiquitinating NIK, implying a crucial role for CRL4^DCAF2 in non-canonical NF-κB-dependent cytokine production in DCs (Figure 1). Immune Homeostasis It has been widely accepted that DCs are inducers of central and peripheral tolerance, which plays a crucial role in maintaining immune homeostasis and blocking autoimmune responses (58,59). DC-induced immune tolerance primarily depends on signaling pathway-mediated maturation, cytokine production, and immunomodulatory molecule expression of DCs. Ubiquitination plays a key role in the regulation of DC-induced immune tolerance. The E3 ubiquitin ligase CRL4^DCAF2 is critical for maintaining T cell homeostasis under physiological conditions. CRL4^DCAF2 ablation in DCs impairs T cell homeostasis and causes spontaneous autoimmunity in aging mice, which is characterized by an increased frequency of Th17 effector T cells in the inguinal lymph nodes and elevated anti-nuclear antibodies in the serum (43). In addition, the deubiquitinase A20 has also been well characterized for this function. As discussed above, A20 inhibits the MyD88-NF-κB signaling axis in DCs to prevent spontaneous DC activation and activated T cell expansion (33). While an A20 deficiency in DCs has little effect on T cell development, mice with a conditional A20 deficiency in DCs spontaneously develop systemic autoimmunity, including lymphocyte-dependent colitis, seronegative ankylosing arthritis, and conditions stereotypical of human inflammatory bowel disease (IBD) (33). In addition, DCs lacking A20 exhibit increased costimulatory molecule expression and induce the generation of self-reactive Th1 and Th17 cells in vivo, accompanied by a high level of IL-6 secretion (33). Another study showed that the expression of A20 in DCs both preserves T cell homeostasis and directly inhibits B cell activation in vitro (30). These findings indicate that DCs require A20 to preserve immune homeostasis and suggest that A20 may function as a crucial checkpoint in the development of systemic autoimmunity, which provides a potential target for the therapeutic intervention of autoimmune diseases. Rhbdd3, an upstream regulator of A20, and the A20-binding inhibitor of NF-κB 1 (ABIN-1) are also important for DC-induced immune tolerance. Rhbdd3 recruits A20 to promote NEMO deubiquitination and thereby negatively regulates DC activation. Rhbdd3 knockout mice spontaneously develop a systemic autoimmune phenotype, which is characterized by increased serum concentrations of immunoglobulins and antibodies and a reduced frequency of splenic Tregs, implying that Rhbdd3 is involved in controlling immune homeostasis (31). Similarly, as a ubiquitin-binding protein, ABIN-1 expression in DCs is necessary to preserve immune homeostasis (60). DCs with a specific deletion of ABIN-1 exhibit excessive activation of NF-κB and MAPK signaling and produce more inflammatory cytokines (TNF, IL-6, IL-12, and IL-23) in vitro. Consequently, DC-conditional ABIN-1-deficient mice develop splenomegaly and lymphadenopathy by three to four months of age, characterized by an accumulation of myeloid cells and activated T lymphocytes (60). While a deficiency of Cbl-b alone has no obvious effects on DC development and function (41), it has been reported that mice with a double ablation of c-Cbl and Cbl-b in DCs develop severe liver inflammation (42).
Cbl mutant mice display significantly increased cDC1 numbers in the peripheral lymphoid organs and liver, which exhibit a hyperactivated phenotype and mediate the systemic activation of T and B cells. Further studies have shown that Cbls promote FMS-like tyrosine kinase 3 (FLT3) signaling via the ubiquitination of FLT3. This research suggests that both c-Cbl- and Cbl-b-mediated ubiquitination are key for DC subset homeostasis and immune quiescence under steady-state conditions, implying that c-Cbl and Cbl-b may have the potential to be therapeutic targets for the treatment of DC-mediated liver inflammation (42). Antigen Presentation The MHCII molecule is crucial for the antigen presentation function of DCs, displaying antigenic peptides to CD4 T cells and thereby facilitating their activation, proliferation, and differentiation (61). MARCH1-mediated ubiquitination-dependent degradation of this molecule is an important mechanism that controls the surface expression of MHCII in DCs (15,16). In agreement with this finding, MARCH1 expression is inversely related to the level of surface MHCII expression during DC maturation (21). Thus, MARCH1 may negatively regulate the antigen presentation capability of DCs by reducing the surface expression of MHCII under steady-state conditions (17). However, DCs deficient in MHCII ubiquitination do not present more antigen, but instead exhibit a low antigen-presenting ability in vivo (22). Moreover, MHCII knock-in mice whose MHCII cannot be ubiquitinated show DC dysfunction similar to that of MARCH1 knockout mice (22). Therefore, MARCH1 both negatively regulates the surface expression of MHCII and maintains DC function in the steady state via MHCII ubiquitination (15,16,22). Tregs are potent immune regulatory cells that upregulate MARCH1 expression in DCs via IL-10. This upregulation increases MHCII ubiquitination and reduces the surface expression of MHCII, which consequently results in impaired antigen presentation by DCs (20,23,24). WWP2 has also been shown to ubiquitinate and deplete the surface expression of mature MHCII, thereby suppressing DC-mediated T cell activation during Salmonella infection (28). CD4 T Cell Activation In addition to affecting antigen presentation, MHCII ubiquitination can also regulate DC-mediated T cell activation and development. However, the function of MHCII ubiquitination in DC-mediated T cell activation appears to be highly complex. One study found that increased antigen presentation did not result in enhanced CD4 T cell activation (25). Moreover, MARCH1-deficient DCs have a reduced, rather than increased, ability to activate naïve CD4+ T cells despite their higher level of surface MHCII expression (25). This finding may be due to an excess accumulation of MHCII, which results in proteotoxicity and homeostatic disruption of the lipid raft and tetraspanin web in DCs (26). However, this has not been studied with mature naïve T cells, only for developing CD4+ thymocytes (26). In addition, MHCII ubiquitination is important for the development of thymic Tregs, but does not seem to affect the development of conventional T cells (62,63). Ubiquitination also participates in DC-mediated T cell activation by promoting MHCII gene transcription. The E3 ubiquitin ligase Hrd1 increases MHCII expression through the ubiquitin-dependent degradation of BLIMP1. In addition, Hrd1 knockout mice exhibit impaired DC-mediated priming of CD4+ T cells and an attenuated autoimmune response (29).
As previously discussed, MARCH1 can regulate DC phenotypic maturation via ubiquitination-dependent degradation of the costimulatory molecule CD86, thereby controlling DC-mediated T cell activation (18,20). CD86 ubiquitination is an important mechanism controlling the level of CD86 expression in DCs (19). DCs with a ubiquitination-resistant mutant CD86 have greater T cell-activating abilities than DCs expressing wild-type CD86 in vitro (18). However, the role of CD86 ubiquitination in DC-mediated T cell activation in vivo remains unknown. CD4 T Cell Differentiation In addition to serving as APCs, DCs are also key regulators of CD4+ T cell differentiation by secreting different cytokines (64). Ubiquitination regulates cytokine production by DCs in response to different types of pathogens, which, in turn, guides the differentiation of CD4+ T cells into effector T cells, including the Th1 and Th17 subsets of helper T cells. It has been established that IL-12 and IL-23 overproduction is involved in the pathogenesis of autoimmune diseases through controlling the differentiation of naïve CD4+ T cells toward the Th1 and Th17 lineages, respectively (65,66). The deubiquitinase Trabid in DCs promotes the generation of Th1 and Th17 cells and the pathogenesis of experimental autoimmune encephalomyelitis (EAE) through the epigenetic regulation of IL-12 and IL-23 expression (37). The culture medium from LPS-stimulated Trabid-deficient BMDCs decreases the differentiation of CD4+ T cells into Th1 and Th17 cells in vitro due to reduced IL-12 and IL-23 production (37). Furthermore, mice with a conditional deletion of Trabid in DCs are resistant to the induction of EAE and display a significantly lower frequency of Th1 and Th17 cells in vivo. However, T cells with a conditional Trabid deficiency have no defects in the in vivo production of Th1 and Th17 cells or in in vitro T cell responses. These data demonstrate that Trabid mediates the production of IL-12 and IL-23 in DCs, thereby promoting Th1 and Th17 differentiation (37). In contrast to the positive role of Trabid in Th17 differentiation, the E3 ubiquitin ligase CRL4^DCAF2 in DCs negatively regulates Th17 differentiation in vivo by limiting the production of IL-23 (43). Mechanistically, as discussed above, CRL4^DCAF2 negatively regulates noncanonical NF-κB activation by inducing NIK ubiquitination and degradation, thereby inhibiting IL-23 production in DCs. Mice with a conditional DC deficiency of CRL4^DCAF2 display substantially enhanced sensitivity to the induction of EAE and have an increased frequency of Th17 inflammatory effector cells in the peripheral lymphoid organs (43). A20 is another Th17-regulatory factor, identified as a deubiquitinase of NEMO, which regulates TLR- and TNF-triggered NF-κB activation (67). The DC-specific deletion of A20 promotes Th17 cell production both in vivo and in vitro through the secretion of high amounts of IL-23 (30). Intestinal DCs from DC-conditional A20 knockout mice have an enhanced and biased ability to promote inflammatory Th1 and Th17 cell differentiation, which is associated with lymphocyte-dependent colitis (68). In addition, Rhbdd3 recruits A20 to inhibit IL-6 production in DCs, and thus suppresses the generation of Th17 cells. Rhbdd3-deficient mice display increased susceptibility to Th17 cell-mediated colitis (31).
UBIQUITINATION IN DC-MEDIATED CROSS-PRIMING OF THE CD8 T CELL RESPONSE Antigen cross-presentation is crucial for an effective CD8+ T cell response in both intracellular bacterial infections and cancer (69). DCs have a superior antigen cross-presentation ability and cross-prime CD8+ T cells both in vivo and in vitro (70,71). Moreover, a recent study has revealed ubiquitination to be an important mechanism regulating DC cross-presentation (44). The deubiquitinating enzyme UCH-L1 has been shown to promote DC cross-presentation both in vitro and in vivo. In addition, UCH-L1 knockout mice infected with Listeria monocytogenes exhibit impaired DC-mediated cross-priming of the CD8+ T cell response. Mechanistically, UCH-L1 promotes DC cross-presentation not by favoring antigen uptake or phagosome maturation, but by facilitating the recycling of MHC class I (MHC I) molecules. Reduced UCH-L1 expression in DCs impairs the colocalization of intracellular MHC I with late endosomal/lysosomal compartments, which is required for antigen cross-presentation. These data reveal that UCH-L1 expression in DCs plays a critical role in modulating DC-mediated cross-priming of the CD8+ T cell response, which facilitates the exploration of potential targets for therapeutic intervention against various infections and cancers. However, the substrate and E3 ubiquitin ligase for UCH-L1 in DCs remain to be defined. UBIQUITINATION IN DC MIGRATION The migratory ability of DCs is key for the initiation of protective pro-inflammatory and tolerogenic immune responses (72). Apart from glycolytic metabolism and epigenetic pathways (73,74), ubiquitination is also involved in the regulation of the migration of different DC subsets in health and disease (27,45). The Cullin RING ligase complex CRL5^ASB2α, which is highly expressed in immature DCs and downregulated following DC maturation, promotes DC migration by facilitating cell spreading and the formation of adhesive structures in immature DCs. Further research reveals that CRL5^ASB2α triggers polyubiquitylation and drives proteasome-mediated degradation of the actin-binding filamin proteins (FLNs), which are major organizers of the actin cytoskeleton (45). Interestingly, MARCH1-mediated MHCII ubiquitination is essential for the antigen presentation function of DCs but is also required for the migration of CD206-expressing monocyte-derived DCs (CD206+ MoDCs) to skin-draining lymph nodes (sdLNs) (27). The expression of IFN regulatory factor (IRF) 4 and C-C motif chemokine receptor 7 (CCR7) plays a pivotal role in controlling DC migration (72). CD206+ MoDCs from MARCH1 knockout mice exhibit MHCII overexpression and decreased IRF4 and CCR7 expression, and have a reduced migratory ability from the skin to sdLNs. Moreover, GM-CSF could restore CD206+ MoDC migration by promoting IRF4 expression both in vitro and in vivo. Collectively, these data suggest that the downregulation of MHCII by ubiquitination is crucial for the migration of CD206+ MoDCs to sdLNs in health and disease. While enhanced DC migration during the early stages of the immune response is essential for the rapid induction of the immune response to eliminate invading pathogens, the timely termination of DC trafficking at the late stage of the inflammatory response is required to prevent unwanted inflammation (74,75).
While current observations have primarily focused on the positive regulation of DC migration by ubiquitination, whether ubiquitination or deubiquitination can also function as a negative mediator that suppresses DC migration requires further investigation. CONCLUSION AND FUTURE PERSPECTIVES In this review, ubiquitination was shown to play a critical role in DC maturation and function. To date, multiple E3 ubiquitin ligases and DUBs have been identified as key regulators of different physiological aspects of DCs, ranging from DC maturation to DC-mediated immune homeostasis and adaptive immune responses. These ubiquitin enzymes also play a key role in pathological processes mediated by DCs, indicating that they can be potential therapeutic targets for DC-based treatment of autoimmune diseases and cancer. The development of E3 ubiquitin ligase and DUB inhibitors with high selectivity should be considered a promising immunotherapy approach. Bortezomib (or PS-341, trade name Velcade) was the first approved drug targeting the ubiquitin-proteasome system (UPS), for the treatment of multiple myeloma (76). Several derivatives of bortezomib are at various stages of clinical trials for the treatment of tumors (77). However, therapeutic approaches that selectively target the DC ubiquitin-proteasome system in cancer and other diseases need to be further developed. It has been well established that ubiquitin enzymes are key players in the ubiquitination process; however, a major challenge in future studies will be to elucidate their underlying molecular mechanisms. For example, how each type of ubiquitin chain is generated, hydrolyzed, and recognized in DCs remains poorly understood. Whether acetylation and phosphorylation of ubiquitin itself are involved in these regulatory processes also remains unknown. Identification of the substrates of these enzymes is another challenge. Moreover, different DC subsets have both shared and unique functions; however, it remains largely unknown whether and how ubiquitin enzymes regulate the function of different DC subsets. AUTHOR CONTRIBUTIONS BZ and LZ drafted the manuscript. LX and YX discussed and revised the manuscript. QY and KR designed the study and revised the manuscript. All authors contributed to the article and approved the submitted version.
Carbon Consumption Patterns of Microbial Communities Associated with Peltigera Lichens from a Chilean Temperate Forest Lichens are a symbiotic association between a fungus and a green alga or a cyanobacterium, or both. They can grow in practically any terrestrial environment and play crucial roles in ecosystems, such as assisting in soil formation and degrading soil organic matter. In their thalli, they can host a wide diversity of non-photoautotrophic microorganisms, including bacteria, which perform important functions and are considered key components of the lichens. In this work, using the BioLog® EcoPlate system, we studied the consumption kinetics of different carbon-sources by microbial communities associated with the thallus and the substrate of Peltigera lichens growing in a Chilean temperate rain forest dominated by Nothofagus pumilio. Based on the similarity of the consumption of 31 carbon-sources, three groups were formed. Among them, one group clustered the microbial metabolic profiles of almost all the substrates from one of the sampling sites, which exhibited the highest levels of consumption of the carbon-sources, and another group gathered the microbial metabolic profiles from the lichen thalli with the most abundant mycobiont haplotypes. These results suggest that the lichen thallus has a higher impact on the metabolism of its microbiome than on the microbial community of its substrate, with the latter being more diverse in terms of the metabolized sources and with an activity level that is probably related to the availability of soil nutrients. However, although significant differences were detected in the microbial consumption of several carbon-sources when comparing the lichen thallus and the underlying substrate, D-mannitol, L-asparagine, and L-serine were intensively metabolized by both communities, suggesting that they share some microbial groups. Likewise, some communities showed high consumption of 2-hydroxybenzoic acid, D-galacturonic acid, and itaconic acid; these communities could serve as suitable sources of microorganisms to be used as bioresources of novel bioactive compounds with biotechnological applications. Introduction Lichens have been classically defined as mutualistic symbiotic associations where a fungus (mycobiont) provides a suitable habitat for an extracellular photobiont, either a green alga (chlorobiont) or a cyanobacterium (cyanobiont), or both. The photobiont fixes carbon through photosynthesis and, in the case of a cyanobacterium, also contributes to the fixation of nitrogen [1]. Approximately one-fifth of all known fungi have been described as obligate lichen-forming species [2], reflecting the evolutionary success of this symbiotic association. Lichens exist as discrete thalli characterized by a poikilohydric lifestyle, allowing them to grow in virtually any terrestrial environment. In this work, we hypothesized that the mycobiont shapes the pattern of the community associated with its thallus, but has less impact on the microbial community of its underlying substrate. For this, we analyzed the pattern of consumption of different carbon-sources as a measure of metabolic activity of microbial communities associated with both the thallus and the substrate of Peltigera lichens, in order to identify the mainly used carbon-sources that contribute to defining the metabolic ability of each community and the differences between them.
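The BioLog® EcoPlate readout behind this approach is a 96-well absorbance time course (31 carbon-sources plus a water blank, in triplicate, with color development read at OD590). A common way to summarize one read is the average well color development (AWCD), i.e., the mean blank-corrected absorbance across substrate wells; AWCD is a standard EcoPlate metric but is not named in this paper, so treat the sketch below as one plausible processing step rather than the authors' exact pipeline.

```python
def awcd(od_substrates, od_blank):
    """Average well color development: mean blank-corrected OD590 across
    the 31 substrate wells (negative corrected values clipped to zero)."""
    corrected = [max(0.0, od - od_blank) for od in od_substrates]
    return sum(corrected) / len(corrected)

# Hypothetical OD590 values for one replicate at a single time point.
od_substrates = [0.12, 0.45, 0.08, 0.91] + [0.30] * 27  # 31 wells
od_blank = 0.07                                          # water control
print(f"AWCD = {awcd(od_substrates, od_blank):.3f}")
```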
Results and Discussion Previously, 50 lichen samples from two forested sites in the Coyhaique National Reserve in southern Chile were identified through sequences of cyanobacterial small sub-unit (SSU) rDNA and fungal large sub-unit (LSU) rDNA that were phylogenetically related to sequences previously reported for Nostoc and Peltigera. In this way, we defined five cyanobiont (C01, C03, C10, C12, and C14) and six mycobiont (M1, M2, M4, M5, M6, and M8) haplotypes [42]. In our previous phylogenetic analysis [24], most of our mycobionts formed defined and well-supported monophyletic groups with some of the Peltigera species downloaded from the database: M1 was closely related to P. ponojensis, M2 to P. extenuata, M4 to P. rufescens, and M6 to P. frigida. However, some of our sequences were closely related to more than one species, forming defined and well-supported monophyletic lineages. Thus, we used the name of the most emblematic species in the group to name the lineage: M5 was part of the P. canina lineage, and M8 of the P. hymenina lineage. According to updated analyses of the species of these lineages, M5 and M8 most likely correspond to P. 'fuscopraetextata' and P. truculenta, respectively [10,14]. Although our study is confined to a particular geographic area, it is possible to observe some specificity patterns in the mycobiont-cyanobiont associations. In general, mycobionts are more specialized than cyanobionts, whereas cyanobionts frequently associate with several Peltigera species [10,12,14,15]. This is the case for C03, which is associated with four mycobionts in a reduced area, although the abundance of its partners is low, so the sampled forest is probably not the optimal environment for this cyanobacterium. Conversely, those lichens paired with the successful cyanobiont C01, such as M5 (most probably P. 'fuscopraetextata') and M6 (P. frigida), were the most abundant. P. 'fuscopraetextata' has asexual propagules (phyllidia) allowing the vertical transmission of the photobiont and thus high levels of specificity, which is in accordance with the results of Magain et al. [10]. In addition, these authors reported that P. frigida showed intermediate levels of specialization; although we found that this mycobiont paired only with the cyanobiont C01, when we extend the analysis to other sites, we actually observe that its specificity is not as high (e.g., in the Magallanes region, this mycobiont is paired with cyanobionts C02 and C14 [24]). On the other hand, M8 (most probably P. truculenta), from the Polydactylon section, was associated with specific cyanobionts (C10 and C12, plus C11 if the broader sampling sites in Zúñiga et al. [24] are considered), which were not paired with any other mycobiont [24]. This was also observed across the class Lecanoromycetes, where specific monophyletic groups of lichens are specialized on specific groups of photobionts [44,45]. More specifically, Peltigera species from section Polydactylon also show high specialization [12,15] at narrow phylogenetic or geographic scales. Nevertheless, it has been shown previously [14] that most South American Peltigera species from this section are more generalist than other species, which would be advantageous for colonizing new geographical areas or habitats. To carry out the metabolic profiling, composite samples from the lichen thallus and the associated substrate were prepared, grouping them by site (S1 and S2), mycobiont type (M), and cyanobiont type (C).
This arrangement produced 11 samples from lichen thalli and 11 samples from the underlying substrates [42]. The kinetics of the consumption of carbon-sources of all samples were adjusted to the modified Gompertz equation, determining that at 48 h all the communities were in the exponential phase of color development (Appendix A). Thus, the communities analyzed in this study grow faster than those evaluated in other studies, where the exponential phase was reached only at 72 h [34,46]. This difference should not be due to the temperature of the test, since the temperatures used were similar to those found in natural conditions (28 °C and 25 °C). Therefore, it is likely that this difference in the growth rate is due to natural differences at the sampling sites, since it has been shown that environmental conditions can shape the metabolic structures of lichen bacterial microbiomes and bacterial communities of soils [34,40,46,47]. Subsequently, the consumption data of the 31 carbon-sources in EcoPlates™ by the lichen and substrate microbiotas registered during the exponential phase (48-72 h) were used to carry out a clustering analysis to group bacterial communities with similar carbon-source utilization patterns (Figure 2). The results revealed that the communities were distributed in three groups (Groups 1-3). Group 1 includes the microbial metabolic profiles of almost all the substrates from site S2 (except S2M8C12-S, which was clustered in Group 3). The consumption of carbon-sources indicates that significantly higher metabolic activity was detected in these substrate microbiotas, which utilized mainly the following sources: D-mannitol and N-acetyl-D-glucosamine (carbohydrates); D-galactonic acid γ-lactone, D-galacturonic acid, and itaconic acid (carboxylic acids); L-arginine, L-asparagine, and L-serine (amino acids); and phenylethylamine. Group 2 gathered the microbial metabolic profiles associated with the lichen thalli with mycobiont haplotypes M5 and M6, the most abundant ones in both sites. The patterns grouped in this cluster were characterized by a generally lower consumption of some carbon-sources when compared to those from the substrates clustered in Group 1, but they used mainly D-mannitol and the amino acids L-asparagine and L-serine at a similar level. Group 3 was the most heterogeneous one and included the metabolic profiles of five microbial communities obtained from lichen thalli (S2M4C03-L, S2M1C03-L, S1M8C10-L, S2M2C03-L, and S2M8C12-L) and five obtained from the substrates (S1M8C10-S, S1M5C01-S, S1M6C01-S, S2M8C12-S, and S1M5C14-S). This group exhibited the lowest levels of carbon-source consumption.
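A grouping of this kind can be reproduced with standard hierarchical clustering. A minimal sketch in Python, assuming a samples × 31 matrix of mean 48-72 h absorbances and the Ward/Euclidean settings reported in the methods below; the sample subset and values are placeholders, not the study's data:

```python
# Sketch: clustering communities by carbon-source utilization profiles with
# Ward's method and Euclidean distance (the settings used in this study).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
samples = ["S1M5C01-L", "S1M5C01-S", "S2M5C01-L", "S2M5C01-S"]  # illustrative subset
profiles = rng.uniform(0.0, 2.0, size=(len(samples), 31))       # placeholder absorbances

Z = linkage(profiles, method="ward", metric="euclidean")
groups = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 groups
for name, g in zip(samples, groups):
    print(name, "-> Group", g)
```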
The carbon-source consumption pattern of the lichen substrate microbial communities from site S2 (Group 1) appears more functionally diverse and metabolically active compared to that of the communities of the lichen thalli and substrates from site S1. This is in accordance with the features of site S2, which is closer than site S1 to wetlands, meaning that the communities obtained from substrates of site S2 were presumably exposed to higher levels of organic matter [42,48]. On the other hand, two of the lichen samples whose mycobiont haplotypes are present in both sites (M5 and M6) grouped in the same cluster (Group 2), which suggests that, of the mycobiont and the site, the former has the greater influence on the microbial community of the thallus. Although soil factors certainly affect soil microbial communities [49], these factors could become less important in the case of soils influenced by lichens [50]. However, lichen samples with the mycobiont haplotype M8 (also present in both sites) did not group within this cluster, but clustered together with their substrates in Group 3. This mycobiont haplotype is the only one belonging to the section Polydactylon, whilst the other five are part of the section Peltigera [24]. Therefore, this metabolic differentiation could be a result of the distinct phylogenetic histories of each mycobiont section [17].
However, this metabolic differentiation could also be a consequence of the incidence of cyanobiont haplotypes C10 and C12, which show phylogenetic specificity with the mycobiont haplotype M8 [11]. The influence of photobionts on bacterial communities closely related to lichen thalli has been suggested before, as the photosynthetic and nitrogen-fixing capabilities of the photobionts could influence the availability of soil nutrients [40,42,50]. The carbon-sources that were intensively metabolized by both types of communities (i.e., from lichens and substrates) were D-mannitol, L-asparagine, and L-serine. Mannitol is the most abundant polyol in nature; it is produced by bacteria, yeasts, fungi, algae, lichens, and many plants, and is used as a carbon and energy source [51]. In addition, this carbohydrate is the most widely distributed polyol in fungi, being found in spores, fruiting bodies, and mycelia [52], which suggests that it could be very abundant in the soil of temperate rainy forests and therefore an important carbon-source for microbial communities from lichens and substrates. On the other hand, cyanobacteria and certain species of bacteria associated with lichens are able to liberate free amino acids [53]. Amino acids thus become readily available to microbial communities during their growth, so a high utilization rate of amino acids in these communities is expected. Furthermore, considering that the metabolic abilities of microbial communities likely reflect the abundance and bioavailability of carbon compounds in the soil organic matter [54,55], the high use of amino acids suggests that they constitute an important energy source for soil microbial communities. Serine and asparagine are among the most abundant amino acids in the soil (5% of the total free amino acid content) [56].
On degradation to pyruvate and oxaloacetate, respectively, they become central in bacterial primary metabolism, and it is thus consistent that these carbon-sources were the most consumed by both microbiotas. Among the least consumed carbon-sources were 2-hydroxybenzoic acid, L-phenylalanine, and α-cyclodextrin. Aromatic compounds are stable in soils and are harder to degrade than many other organic compounds, since microorganisms require elaborate degradation strategies to overcome the high chemical stability of the aromatic ring [57]. However, it is interesting to highlight the high consumption of 2-hydroxybenzoic acid shown by the sample S2M5C03-S, since this acid enters the environment through its wide use as an intermediate in pharmaceutical manufacturing [58,59]. Since this aromatic organic compound is very toxic, further studies of its biodegradation by the soil microbial community underlying that lichen are needed to search for bacteria with potential applications in the bioremediation of environments contaminated with this pollutant. On the other hand, some nutrients were more extensively utilized by the communities from the substrates than by those from the lichens, among them N-acetylglucosamine, D-galacturonic acid, itaconic acid, and phenylethylamine. The preferences of these Group 1 substrate communities suggest that these nutrients are available in the studied forest soils and that the corresponding microbial guilds have been enriched. N-acetylglucosamine is a building block of the bacterial peptidoglycan cell wall and is a monomeric unit in many naturally occurring polymers, such as chitin in the cell walls of many fungi and the exoskeleton or cuticle of arthropods; it is thus very abundant in most ecosystems. In addition, it plays an important role in supplying carbon and energy to bacteria by entering the glycolytic pathway after it is converted into fructose-6-phosphate [60,61]. D-galacturonic acid is one of the major constituents of plant cell wall polysaccharides, so it represents an important carbon-source for microorganisms living on decaying plant material, as found in the soil of temperate rain forests of southern Chile. In bacteria, this carboxylic acid is degraded in a five-step pathway resulting in the formation of pyruvate and glyceraldehyde-3-phosphate [62]. Itaconic acid, an unsaturated dicarboxylic acid, and some alkylated derivatives are synthesized by some fungi and secreted in significant amounts to the environment [63]. Once secreted, this nutrient becomes available as a carbon and energy source to substrate-associated microbial communities. Finally, phenylethylamine is a microbial decarboxylation product of phenylalanine and can be found in fungi, bacteria [64], and many algae [65]. The physiological role of amine synthesis seems to be related to defence mechanisms used by bacteria to withstand acidic environments [66]. Two of the four abovementioned metabolites from Group 1 warrant particular attention, since it would be important to isolate microorganisms capable of metabolizing them in future work. First, we highlight D-galacturonic acid, since it is the most abundant component of pectin, an abundant polysaccharide in plant cell walls. Pectin-rich residues, which are side products in sugar beet processing and in fruit juice production, are currently used mainly as animal feed, and it would be desirable to find new ways to convert this raw material into products of higher value [62].
Successful attempts have been described to ferment pectin-rich biomass to ethanol using genetically modified bacteria [67,68]. Therefore, exploring the microbial communities of specific lichen substrates could help find microorganisms that participate in such fermentations. Second, we draw attention to itaconic acid because it is of growing interest for the chemical industry as a renewable organic acid with the potential to replace crude oil-based products (e.g., acrylic acid) [69]. While the anabolic pathway of this carboxylic acid is well understood, the catabolic pathway requires further research in order to engineer a production host with a disabled degradation pathway and thus increase its production potential [69]. Subsequently, we analysed samples M5C01 and M6C01 in more detail, since they were the most abundant symbiotic combinations at the two study sites, representing 95% and 68% of the samples collected in sites S1 and S2, respectively. Figure 3 shows the carbon-source consumption patterns of the microbial communities associated with M5C01 in site S1 (Figure 3A) and in site S2 (Figure 3B), which represent 43% and 44% of the samples collected at these sites, respectively. On the other hand, Figure 4 shows the carbon-source consumption patterns of the microbial communities associated with M6C01 in site S1 (Figure 4A) and in site S2 (Figure 4B), which represent 52% and 24% of the samples collected at these sites, respectively. The consumption profiles show that both substrate communities exhibited higher activity at site S2 (Figures 3B and 4B) compared with those at site S1 (Figures 3A and 4A), which is consistent with the results previously reported by Leiva et al. [42], and suggests that edaphic factors have an effect on the level of metabolic activity of the soil microbial communities.
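One rough way to quantify "higher activity" of this kind is to average the blank-corrected absorbances across all 31 carbon-sources for each community. A minimal sketch with invented values (not the study's data):

```python
# Sketch: per-community mean consumption over all 31 carbon-sources as a
# crude summary of overall metabolic activity. Values are placeholders.
import numpy as np

data = {
    "S1M5C01-S": np.random.default_rng(1).uniform(0.0, 1.0, 31),
    "S2M5C01-S": np.random.default_rng(2).uniform(0.5, 1.8, 31),
}
for sample, values in data.items():
    print(f"{sample}: mean consumption = {values.mean():.2f}")
```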
On the other hand, when the microbial communities associated with the lichens are compared, the profiles are very similar depending on the mycobiont haplotype present in the thalli (M5, Figure 3; and M6, Figure 4) but independent of the site where the lichens grow. These results provide additional evidence that the lichen influences the metabolic pattern of the microbial community associated with its thallus, but has less impact on the microbial community of the underlying substrate. Finally, when we statistically compared the consumption of each carbon-source by the microbiotas of these two lichen pairs (M5C01 and M6C01), we verified that 12 out of the 31 carbon-sources were consumed equally (D-cellobiose, i-erythritol, D-mannitol, γ-hydroxybutyric acid, α-ketobutyric acid, D-malic acid, 2-hydroxybenzoic acid, L-phenylalanine, L-serine, L-threonine, Tween 80, and α-cyclodextrin) (Table 1). Few studies distinguish microbial communities associated with the thallus versus those associated with the substrates where lichens grow [41,42,70], but they all agree that bacteria associated with the lichen thalli could be recruited, at least in part, from the substrates where lichens grow. Table 1. Statistical comparison of carbon consumption by lichen (-L) and substrate (-S) microbiotas of M5C01 and M6C01 from both sampling sites (S1 and S2). Average absorbance and standard deviation are shown. In rows, the same capital letter indicates no significant difference according to the Games-Howell post hoc test (p < 0.05). Columns: S1M5C01-L, S1M5C01-S, S1M6C01-L, S1M6C01-S, S2M5C01-L, S2M5C01-S, S2M6C01-L, S2M6C01-S (table values not reproduced here). In conclusion, the lichen thallus provides an adequate microhabitat that selects certain bacterial lineages [26], probably through the production of metabolites [42,71,72], which could select specific lineages in order to carry out defined functions as part of a multispecies symbiosis [25].
Several studies have characterized the biologically active metabolites produced by lichens (e.g., antibiotic, antiviral, anti-inflammatory, analgesic, cytotoxic, etc.) [71,73-76]; however, fewer studies have isolated microorganisms associated with lichens as bioresources of novel bioactive compounds with biotechnological applications [77-81]. Here we propose that metabolic profiling could be used as a preliminary approach to select suitable samples from which to isolate microorganisms with specific metabolic features. Study Sites and Sampling Fifty Peltigera-thallus fragments (approximately 15 cm² each) and their associated substrate (i.e., soil) (approximately 15 cm³ each) were collected from two plots 300 m away from each other (sites S1 and S2; approx. 2 ha each) in a fragmented second-growth forest of Nothofagus pumilio [48] in the Coyhaique National Reserve (Aysén Region, Chile; 45°31′42.96″ S, 72°1′51.95″ W). These two sites are close to pine tree plantations, but site S2 is closer than site S1 to open spaces (i.e., rocky hillsides and a mallín, a kind of wetland) [42]. All samples were collected at least 1 m from the next closest thallus in order to minimize resampling of the same genetic individual. The samples were placed in paper bags and transported in cooled containers. In the laboratory, the lichen samples were stored in paper bags at room temperature, while the substrate samples (removed with a sterile brush and spatula) were sieved and stored in plastic tubes at 4 °C. Lichen samples included in this study were previously identified by Zúñiga et al. [24]. The mycobiont and cyanobiont haplotypes were defined by analyzing the fungal LSU rDNA and cyanobacterial SSU rDNA regions, amplified with primers LIC24R and LR7 [17], and PCR1 and PCR18 [82], respectively. Phylogenetic analyses were performed with two sets of sequences; one set consisted of one representative of each mycobiont haplotype and 67 Peltigera sequences downloaded from GenBank, and the other set consisted of one representative of each cyanobacterial haplotype and 49 Nostoc sequences downloaded from GenBank. In addition, updated BLAST search analyses were performed in order to define the identity of mycobionts related to more than one reference species. Composite samples from both the lichen thallus and the associated substrate were prepared by grouping equal masses of thallus or substrate from individual samples according to site (S1 and S2), mycobiont type (M) and cyanobiont type (C). Thus, the nomenclature for each composite sample indicates the site (S1 or S2), then the mycobiont (M1, M2, M4, M5, M6, or M8), then the cyanobiont (C01, C03, C10, C12, or C14), and whether it comes from the lichen thallus (-L) or from the substrate (-S). Microbial suspensions were obtained from 100 mg of thalli per composite sample, homogenized in 15 mL of PBS (137 mM NaCl, 2.7 mM KCl, 8 mM Na₂HPO₄, 2 mM KH₂PO₄) and shaken overnight at 150 rpm and 25 °C. In the same way, microbial suspensions were obtained from 2 g of substrate (i.e., soil) per composite sample, shaken at 150 rpm and 25 °C for 1 h in 20 mL of PBS. The EcoPlate™ plates were inoculated with 100 µL of the previously filtered microbial suspensions from thalli or substrates, and subsequently incubated at 25 °C for 1 week in a humid chamber. Carbon-source utilization was monitored every 24 h for 7 days at 590 nm in an Epoch microplate reader (BioTek, Winooski, VT, USA).
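For bookkeeping, the composite-sample nomenclature just described can be parsed programmatically. A minimal sketch (a hypothetical helper, not part of the study; the regular expression simply encodes the site, mycobiont, cyanobiont, and origin codes listed above):

```python
# Sketch: decoding composite-sample IDs such as "S2M8C12-S".
import re

PATTERN = re.compile(r"^(S[12])(M[124568])(C\d{2})-([LS])$")

def parse_sample_id(sample_id: str) -> dict:
    m = PATTERN.match(sample_id)
    if m is None:
        raise ValueError(f"unrecognized sample id: {sample_id}")
    site, myco, cyano, source = m.groups()
    return {
        "site": site,
        "mycobiont": myco,
        "cyanobiont": cyano,
        "source": "lichen thallus" if source == "L" else "substrate",
    }

print(parse_sample_id("S2M8C12-S"))
# {'site': 'S2', 'mycobiont': 'M8', 'cyanobiont': 'C12', 'source': 'substrate'}
```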
Precision and uncertainty of the measurements of absorbance were calculated using a randomly chosen sample from each of the two batches of samples (i.e., thalli and substrates) at each timepoint. Reproducibility (precision) of the analytical method was determined by estimating the relative standard deviation (RSD) of triplicate readings using the formula RSD = (s/x) × 100, where s is the standard deviation of the data set and x is the average of the data set. In addition, the experimental uncertainty was estimated using the confidence interval (CI; 95% level of confidence) of triplicate readings using the formula CI = x ± t·s/√n, where x is the average of the data set, t is the Student's t value for the desired level of confidence, s is the standard deviation of the data set, and n is the number of measurements (Table A1). Data were processed by subtracting the absorbance value at time zero, to minimize the interference of the sample colour [83], and the absorbance value from the control (water). In the data analysis, absorbance values of 0.1 or higher were considered positive, and absorbance values higher than 2 were normalized to 2, according to the detection limit of the reader. To determine the incubation time at which all the communities were in the exponential phase of each curve and when the maximum absorbance was reached, data were adjusted using the modified Gompertz equation [84] in OriginPro software v8.07 (OriginLab Corporation, Northampton, MA, USA) (Figure A1). For the following analyses, the average of the data recorded at 48 h and 72 h for each carbon-source was used, since both plate reading times correspond to the exponential phase of the carbon-consumption kinetics. Kinetic parameters (λ, µm, and A) obtained from the modified Gompertz fitting were compared between each lichen and the corresponding substrate with one-way ANOVA comparisons in SPSS Statistics v17.0 (SPSS Inc., Chicago, IL, USA), since the data showed a normal distribution according to the Jarque-Bera test and homoscedasticity of variances according to Levene's test. Clustering of average carbon consumption data was performed in Past software v3.20 (University of Oslo, Oslo, Norway) [86], under Ward's method with Euclidean distance. The resultant tree was exported, formatted in newick format and then imported into the iTOL v4.2.3 platform [87], where consumption data were added as a shape plot formatted dataset. Since the data have a normal distribution according to the Jarque-Bera test (except those of α-D-lactose consumption, which had to be transformed by cubic root to achieve a normal distribution) but the assumption of homoscedasticity of variances was not fulfilled according to Levene's test, Welch's t-test and the Games-Howell post hoc test were used to evaluate the differences in the consumption of each carbon-source between the M5C01 and M6C01 samples, considering both lichen and substrate samples, using SPSS Statistics v17.0 (SPSS Inc., Chicago, IL, USA). Table A1. Experimental uncertainty and precision of the measurements of absorbance used for analyzing the carbon-source utilization patterns of the microbial communities associated with lichen thalli and substrates. Test samples were randomly chosen from each sample batch at each timepoint.
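The RSD and CI formulas above translate directly into code. A minimal sketch using SciPy's Student's t quantile and illustrative (invented) triplicate readings:

```python
# Sketch: RSD = (s/x) * 100 and CI = x +/- t*s/sqrt(n) for triplicate readings.
import numpy as np
from scipy import stats

triplicate = np.array([0.82, 0.85, 0.80])              # placeholder A590 readings
x_bar = triplicate.mean()
s = triplicate.std(ddof=1)                             # sample standard deviation
n = len(triplicate)

rsd = s / x_bar * 100                                  # relative standard deviation, %
t_crit = stats.t.ppf(0.975, df=n - 1)                  # two-sided, 95% confidence
ci_half = t_crit * s / np.sqrt(n)                      # half-width of the CI
print(f"RSD = {rsd:.1f}%, 95% CI = {x_bar:.2f} +/- {ci_half:.2f}")
```

Welch's t-test is available in the same library as scipy.stats.ttest_ind(a, b, equal_var=False); a Games-Howell post hoc test is not in SciPy but is provided by, for example, the pingouin package.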
Figure A1. Colour development curves on the EcoPlate (Biolog) plates over time, adjusted according to the modified Gompertz model. λ, latency time (lag phase); µm, maximum colour development rate; A, maximum absorbance. (a) S1M5C01, (b) S1M5C14, (c) S1M6C01, (d) S1M8C10, (e) S2M1C03, (f) S2M2C03, (g) S2M4C03, (h) S2M5C01, (i) S2M5C03, (j) S2M6C01, (k) S2M8C12. The nomenclature for each composite sample indicates the site (S1 or S2), then the mycobiont (M1, M2, M4, M5, M6, or M8), then the cyanobiont (C01, C03, C10, C12, or C14) and whether it originates from the lichen thallus (-L) or from the substrate (-S). The significantly different values (p ≤ 0.05) for each of the parameters are indicated by an asterisk (*) when comparing the kinetics of the thalli (■) and the substrates (•).
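The modified Gompertz fits shown in Figure A1 can be reproduced with standard curve-fitting tools. A minimal sketch, assuming the common Zwietering parameterization of the modified Gompertz model and using illustrative (not measured) absorbance values:

```python
# Sketch: fitting an EcoPlate colour-development curve with the modified
# Gompertz model, y(t) = A * exp(-exp(mu_m * e / A * (lam - t) + 1)).
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_m, lam):
    """A = maximum absorbance, mu_m = maximum colour-development rate,
    lam = latency time (lag phase)."""
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

t = np.array([0, 24, 48, 72, 96, 120, 144, 168], dtype=float)    # hours
od = np.array([0.00, 0.05, 0.55, 1.20, 1.55, 1.70, 1.75, 1.76])  # blank-corrected A590

params, _ = curve_fit(gompertz, t, od, p0=[od.max(), 0.02, 12.0])
A_fit, mu_fit, lam_fit = params
print(f"A = {A_fit:.2f}, mu_m = {mu_fit:.3f} abs/h, lambda = {lam_fit:.1f} h")
```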
Structure Determination and Functional Analysis of a Chromate Reductase from Gluconacetobacter hansenii Environmental protection through biological mechanisms that aid in the reductive immobilization of toxic metals (e.g., chromate and uranyl) has been identified to involve specific NADH-dependent flavoproteins that promote cell viability. To understand the enzyme mechanisms responsible for metal reduction, the enzyme kinetics of a putative chromate reductase from Gluconacetobacter hansenii (Gh-ChrR) was measured and the crystal structure of the protein determined at 2.25 Å resolution. Gh-ChrR catalyzes the NADH-dependent reduction of chromate, ferricyanide, and uranyl anions under aerobic conditions. Kinetic measurements indicate that NADH acts as a substrate inhibitor; catalysis requires chromate binding prior to NADH association. The crystal structure of Gh-ChrR shows the protein is a homotetramer with one bound flavin mononucleotide (FMN) per subunit. A bound anion is visualized proximal to the FMN at the interface between adjacent subunits within a cationic pocket, which is positioned at an optimal distance for hydride transfer. Site-directed substitutions of residues proposed to be involved in both NADH and metal anion binding (N85A or R101A) result in 90-95% reductions in enzyme efficiencies for NADH-dependent chromate reduction. In comparison, site-directed substitution of a residue (S118A) participating in the coordination of FMN in the active site results in only modest (50%) reductions in catalytic efficiencies, consistent with the presence of a multitude of side chains that position the FMN in the active site. The proposed proximity relationships between the metal anion binding site and enzyme cofactors are discussed in terms of rational design principles for the use of enzymes in chromate and uranyl bioremediation. Introduction Contamination of groundwater, soils and sediments by long-lived soluble radionuclide wastes (e.g., uranium (U(VI))) or toxic redox-sensitive metals (e.g., chromate (Cr(VI))) from the legacy of nuclear weapons development is a significant environmental problem [1]. Unfortunately, limited technologies exist to efficiently decrease the concentrations of these contaminants. An envisioned low-cost solution uses microbes to change the redox status of contaminants from soluble (e.g., U(VI)) to insoluble species (e.g., U(IV)). Dissimilatory metal-reducing bacteria are good bioremediation candidates given their ability to reduce iron, sulfate, chromate, or uranyl ions as a form of anaerobic respiration [2,3]. It has been suggested that the mechanism used by these bioremediation candidates involves electron transfer reactions mediated by cytochromes located at the outer membrane or within extracellular polymeric substances (e.g., nanowires) [4,5]. An understanding of these mechanisms has been facilitated by prior structural measurements of metal reductases (i.e., MtrC and MtrF) in Shewanella oneidensis MR-1, a subsurface bacterium capable of anaerobic respiration using extracellular metal oxides (e.g., Fe(III) or U(VI)) as terminal electron acceptors [6,7]. However, while these and other dissimilatory metal-reducing bacteria have been shown to decrease U(VI) concentrations below the Environmental Protection Agency's maximum contaminant levels (MCLs) (0.13 µM or 30 µg/L, http://water.epa.gov), relatively slow growth rates and an inability to catalyze metal reduction under aerobic conditions limit the potential of dissimilatory metal-reducing bacteria for bioremediation.
In comparison, intracellular NAD(P)H-dependent FMN reductases, enzymes distributed in all bacterial species, reduce chromate or uranyl ions under both anaerobic [8,9] and aerobic conditions [10]. These flavin-containing proteins, which include YieF (renamed ChrR) [11] and NfsA isolated from Escherichia coli and ChrR from Pseudomonas putida [10,12,13], have a broad substrate specificity permitting the NAD(P)H-dependent reduction of quinones, prodrugs, chromate (Cr(VI)), and uranyl (U(VI)) ions [11,12]. In the reduction of Cr(VI) to Cr(III), ChrR avoids the generation of highly toxic Cr(V), which induces oxidative stress through the production of reactive oxygen species (ROS) [14,15]. To understand the mechanism by which intracellular NAD(P)H-dependent FMN reductases bind and efficiently reduce toxic environmental contaminants, such as CrO₄²⁻ and UO₂(CO₃)₃⁴⁻, we have cloned, expressed, purified, and functionally characterized a putative chromate reductase (Gh-ChrR) from the recently sequenced genome of Gluconacetobacter hansenii [16]. Gh-ChrR belongs to the superfamily of NAD(P)H-dependent FMN reductases that catalyze the metabolic detoxification of quinones and their derivatives to hydroquinones, using NAD(P)H as the electron donor. This family of enzymes protects cells against quinone-induced oxidative stress, cytotoxicity, and mutagenicity in both prokaryotic and eukaryotic organisms. It has been suggested that the biological role of NAD(P)H-dependent FMN reductases is to prevent futile redox cycling involving univalent reduction of diverse classes of compounds and to quench ROS [11,12,15]. Indeed, the overproduction of these enzymes in bacteria greatly mitigates the toxicity of pollutants such as chromate and uranyl, enhancing the ability of these bacteria to survive in environments contaminated with these compounds [11,12]. Gh-ChrR has 57% amino acid sequence identity to P. putida ChrR, which has previously been shown to reduce chromate and uranyl [11,17]. To help understand the mechanistic requirements associated with binding and reducing toxic heavy metals, the crystal structure of Gh-ChrR was solved at 2.25 Å resolution. The structure shows that the FMN cofactor is located near subunit interfaces in a pocket containing a cationic site appropriate for binding anions (e.g., UO₂(CO₃)₃⁴⁻ or CrO₄²⁻) at an optimal distance for hydride transfer. Consistent with kinetic measurements, the proposed chromate binding site is near the putative NADH binding cleft. Gh-ChrR is a Flavoprotein Recombinant Gh-ChrR was purified from E. coli following protein overexpression (Figure S1). The purified protein had a bright yellow color, and the absorbance spectrum contained two characteristic peaks at 373 and 455 nm that indicate the presence of flavins (Figure S2). The ratio of absorbance at 267 nm to 373 nm is 2.7, suggesting that the prosthetic flavin molecule in Gh-ChrR is FMN [18,19]. Purified Gh-ChrR contains an equimolar stoichiometry of FMN (ε₃₇₃ = 11,300 M⁻¹ cm⁻¹) per monomer of Gh-ChrR (ε₂₈₀ = 12,950 M⁻¹ cm⁻¹). NADH-dependent Metal Reduction As expected from the sequence homology between Gh-ChrR and other members of the FMN reductase family (Pfam ID: PF03358) (Figure S3), Gh-ChrR functions as an NAD(P)H-dependent metal reductase (Figures S4, S5). Both NADH and NADPH support maximal chromate reduction by Gh-ChrR, although NADH has a higher kcat/Km than NADPH (Figure S4). This result is consistent with prior measurements where
E. coli ChrR showed an eight-fold preference for NADH over NADPH [13]. Enzyme activity is dependent on the addition of both the NADH cofactor and a metal anion (e.g., chromate, ferricyanide, or uranyl) (Figure S5). Metal-dependent increases in NADH oxidation rates obey simple Michaelis-Menten kinetics (Figure S5; Table S1), permitting a simple characterization of apparent kinetic parameters linked to function. Upon oxidation of NADH, added metal is reduced to form Cr(III) or U(IV) (Figure S6). The apparent Km for uranyl is below 100 nM, which is substantially lower than previously identified for E. coli and P. putida ChrR [11,17]. The enzyme efficiency for uranyl (kcat/Km > 7.0×10⁴ M⁻¹ s⁻¹) is greater than for either chromate (1.0×10³ M⁻¹ s⁻¹) or ferricyanide (1.6×10³ M⁻¹ s⁻¹). These favorable kinetic properties indicate that this enzyme may be able to efficiently decrease uranyl concentrations below the MCL. Substrate Inhibition Mechanism Initial-velocity measurements with chromate as the substrate and NADH as the electron donor were carried out at a fixed enzyme concentration. Consistent with a mechanism involving substrate inhibition by NADH, there were substantial reductions in initial enzyme velocities upon increasing NADH concentrations at fixed chromate concentrations (Figure 1A). Other mechanisms, such as those involving a bi-bi ping pong reaction mechanism where increasing concentrations of NADH result in enhancements in enzyme velocity, are not consistent with the experimental data [20]. A highly characteristic relationship that is indicative of substrate inhibition is apparent when the kinetic data are plotted in the form of a double reciprocal plot comparing initial velocities relative to variable chromate concentrations at a series of fixed NADH concentrations (where NADH is the inhibitory substrate). Variable NADH concentrations only affect the slope (i.e., Slope1/A, Figure 1B), where replots of these data permit determination of additional kinetic constants (see legend to Figure 1 and supplementary data). Consistent with a mechanism of an ordered bireactant system involving substrate inhibition, a complex double reciprocal plot for fixed concentrations of CrO₄²⁻ is observed (Figure 1C), where individual curves bend upwards at high concentrations of NADH. Collectively, these results indicate that NADH binding blocks CrO₄²⁻ association, forming a dead-end complex (Figure 1D); such substrate inhibition is widespread in enzymology, occurring in approximately 20% of all enzymes, where mechanisms involving substrate inhibition can serve a regulatory role [21,22]. Structure of Gh-ChrR The crystal structure of Gh-ChrR was elucidated to a resolution of 2.25 Å (Table 1). The crystallographic asymmetric unit contains four monomers, each with a single bound FMN. The tetrameric structure of Gh-ChrR is consistent with the result of size exclusion chromatography (~80 kDa), as the mass of the monomeric unit is 21.3 kDa (193 native residues plus a six-residue C-terminal polyhistidine tag) (Figure S7). A tetrameric oligomerization state was also recently reported for E. coli ChrR [23], a protein with 61% sequence identity to Gh-ChrR. For each monomer in the asymmetric unit, electron density is missing or uninterpretable for 5-6 residues at the N-terminus and 7-9 residues at the C-terminus.
Aside from the residues near the termini, there are no significant conformational differences between the four monomers, as the α-carbons of residues P6-T186 superimpose on each other with an RMSD ranging from 0.35 to 0.38 Å (UCSF Chimera) [24]. Figure 2A is a cartoon representation of the backbone fold for one of the four essentially identical monomers in the asymmetric unit with the elements of secondary structure labeled. Each monomer contains two 3₁₀-helices (I45-F48, V153-K156; labeled η), six α-helices (F21-I32, Q53-E58, A62-T73, G90-R101, A125-L138, V167-T186) and five β-strands (L7-L13, I37-P40, A76-T81, P111-S118, A148-I150). The β-strands are organized into one parallel β-sheet, β2:β1:β3:β4:β5, flanked by helices α1 and α5 on one face and the remaining helices on the opposite face. The five longest helices are aligned in two groups, (1) α1 and α3 and (2) α4, α5, and α6, such that the helices are approximately parallel within each group and orthogonal between groups. Such a triple-layered, α/β/α structure resembles the fold of the flavodoxin superfamily of proteins [18,25] and is identical to the fold observed in the crystal structure recently reported for E. coli ChrR (PDB entry: 3SVL) [23], a structure that superimposes onto Gh-ChrR with a backbone RMSD of 0.9 Å. As shown in Figure 2B, the homotetramer is assembled as two sets of identical dimers (cyan/yellow and blue/purple) that are aligned side-by-side at an approximately 60° angle along the parallel plane of α4 and α5 of each dimer. The monomer-monomer interface of each dimer is similar to that observed in other flavodoxin-like dimers, such as P. aeruginosa T1501 [26] or Saccharomyces cerevisiae Ycp4 [27]; the homotetramer structure is similar to the assembly observed in some other FMN reductases, including ChrR from E. coli [23], EmoB from Mesorhizobium sp. BNC1 [28], ArsH from Shigella flexneri [18] and Sinorhizobium meliloti [25]. In Gh-ChrR this monomer-monomer interface is composed primarily of α5, α4, and the loop between β3 and α4, and to a lesser degree of α2, η1, and the loop between β1 and α1. In turn, the dimer-dimer interface is composed primarily of α5 and the loop between α5 and β5. The accessible surface area of the Gh-ChrR tetramer is 25,360 Ų and the buried surface area is 13,620 Ų (53.9%), or 3,405 Ų per monomer. This is substantially greater than the mean buried surface area for the dimer-dimer interface, 2,640 Ų (18%) or 1,320 Ų per monomer. (From the Figure 1 legend: At low NADH concentrations it is possible to fit the data with a straight line. However, at high NADH concentrations, individual curves bend upwards. Values for KmA, KmB, Kia and Ki were calculated from axis intercepts and slopes in panels B and C (see Table S2) [20]. D. Cleland notation depicting the catalytic mechanism of Gh-ChrR, showing substrate inhibition by NADH binding to FMN-E to form a dead-end complex, FMN-E-NADH, that competes with metal complex formation, Mox-FMNH₂-E-NADH.) FMN Binding Site As shown in Figures 2A and 2B, Gh-ChrR crystallized with one molecule of FMN associated with each protein monomer. FMN binds in a pocket on the surface of Gh-ChrR near the dimer interface. The FMN binding pocket is more clearly illustrated in Figure 2C, which highlights the electrostatic potential at the solvent-accessible surface of Gh-ChrR.
The negatively charged ribityl phosphate group of FMN is deeply buried in a positively charged region (blue) composed of residues in the loop between β1 and α1 (S15-N22) and a positive electrostatic dipole from the N-terminus of the capped α-helix (α1), similar to that previously reported [29]. On the other hand, the aromatic isoalloxazine ring sits in a more hydrophobic region (white) of the binding pocket. Details of the protein-FMN contacts responsible for stabilizing the complex are shown schematically in two dimensions in Figure 3: 12 hydrogen bonds and five hydrophobic contacts. Except for two hydrophobic contacts (Y51' and R101'), the FMN-protein interactions at the dimer interface are with one monomeric unit. As a consequence, the active site of Gh-ChrR is open and solvent accessible, a feature observed at the active site of oxidoreductases that facilitates promiscuous exchange of substrates [30-32]. Flavodoxins are commonly identified in genomes by primary amino acid sequence analysis and a fingerprint FMN-binding motif, T/SXTGXT, responsible for binding to the ribityl phosphate group [33,34]. In Gh-ChrR the equivalent sequence for this motif is G¹⁴SLRKASFN²². The sequence for this region in Gh-ChrR is similar to some other NAD(P)H-dependent FMN reductases, including E. coli ChrR [23] (PDB entry: 3SVL) and two flavoproteins (PDB entries: 1NNI and 2VZY) shown to form tetramers [28,35] (Figure S3). Within this FMN-binding sequence, the side chains of the Gh-ChrR residues that make specific contacts with the ribityl phosphate group, S15, R17, S20, and N22, are conserved among the aligned sequences. It is worth noting that the amino acid sequence of the region responsible for binding to the isoalloxazine ring in Gh-ChrR, P⁸²EYNY⁸⁶, is also conserved (Figure S3). Putative NADH Binding Site While bound FMN is observed in the crystal structure of Gh-ChrR (Figures 2 and 3), NADH, an essential electron transfer component of the reductive reactions catalyzed by NAD(P)H-dependent FMN reductases, is absent. Efforts to co-crystallize Gh-ChrR with NADH were unsuccessful. However, it is possible to predict the location of the NADH binding site on the FMN-Gh-ChrR structure by superposing it on the structure of a homologous NAD(P)H-dependent FMN reductase, EmoB from Mesorhizobium BNC1 complexed with FMN and NADH [28]. Both Gh-ChrR and EmoB form homotetramers that have similar structures for the individual monomeric subunits (RMSD = 2.6 Å with 161 aligned Cα atoms, Figure S8). In EmoB, the nicotinamide ring of NADH sits above the bound FMN and stacks against the isoalloxazine ring of FMN. Only two residues in EmoB were observed to interact with NADH, K81 and G112 [28]. In the superimposition with Gh-ChrR (Figure 4A and Figure S8), only one of the two equivalent residues, N85, is in a position to contact NADH, as G109 is too distant. The importance of N85 was confirmed by an N85A site-directed substitution (Table 2), resulting in an apparent Km value 3-fold larger than that for wild-type Gh-ChrR, consistent with a reduction in binding affinity. The aromatic ring of F137 is 2.82 Å from C4N of the NADH nicotinamide ring, suggesting a possible hydrophobic interaction. Other Gh-ChrR residues that could potentially interact with NADH are N53, D54, and E57 at the adenosine part of NADH and P119 and T154 at the diphosphate part of NADH.
Collectively, the superposition of structures suggests that residues N53, D54, E57, S100, R101 and F137 from one monomer, and residues N85, P119, and T154 from the other monomer of the dimer, may interact with NADH (Figure 4A), and further suggests that the active site of Gh-ChrR has ample room for NADH to enter. Putative Metal Anion Binding Site Based on the substrate inhibition mechanism (Figure 1), metal is reduced only if it binds to Gh-ChrR before NADH. If NADH binds to Gh-ChrR before the metal, a dead-end product forms that blocks metal binding. Unless NADH induces significant structural changes to Gh-ChrR upon binding, this sequence suggests that the substrate (metal) binding site is near the tightly bound FMN molecule. Attempts to co-crystallize Gh-ChrR with either chromate or uranyl were unsuccessful, as were attempts to form complexes by soaking Gh-ChrR crystals with chromate or uranyl. However, spherical electron density was observed on the si-face of the FMN isoalloxazine ring (Figure 3A), in a position similar to that observed for FMN in BluB from S. meliloti [32]. In BluB this electron density was modeled as molecular oxygen, but for Gh-ChrR it is best fit with a Cl⁻ ion because of its spherical rather than ellipsoidal shape. The distance from the plane of the isoalloxazine ring is similar in Gh-ChrR and BluB, 3.7 and 3.5 Å, respectively. This ligand-isoalloxazine ring distance is likewise similar in some other NAD(P)H-dependent FMN reductases with bound ligands, such as CrS from Thermus scotoductus SA-01 (PDB entry: 3HF3) [36] and WrbA from E. coli (PDB entry: 3BK6) [37], where the ligand-FMN distance is 3.38 and 3.40 Å, respectively. It is not immediately apparent what forces hold the heteroatom in place, because the nearest positively charged counterion, the side-chain guanidinium group of R101, is 4.57 Å away (Figure 4B). By contrast, in the crystal structure of T. scotoductus SA-01 CrS (PDB entry: 3HF3) [36], the sulfide ion on the si-face of the FMN isoalloxazine ring is held in place by the side chains of two histidine residues less than 3 Å away. Regardless of the forces holding a negatively charged Cl⁻ ion in place in Gh-ChrR, the side chain of R101 is a good candidate to assist the binding of an anion (Figure 4B). The importance of R101 in metal binding and NADH interaction was confirmed by kinetic studies on a Gh-ChrR construct containing an R101A substitution. The enzyme efficiency (apparent kcat/Km value) of R101A for chromate was 25-fold less than that for wild-type Gh-ChrR, and the apparent Km value was 8-fold greater than that for wild-type Gh-ChrR (Table 2). Negatively charged species analogous to Cl⁻, including chromate, ferricyanide and uranyl ions, may be recruited to the catalytic center about R101 at a favorable distance (~3.5 Å) for hydride transfer from FMNH₂ to the metal [38]. Discussion This study demonstrates that recombinant Gh-ChrR has the ability to reduce metal oxides (chromate, ferricyanide, uranyl) (Figure S5). Of particular interest, Gh-ChrR binds and reduces uranyl with a higher affinity (apparent Km < 100 nM; Table S1) than any other enzyme in this class [11,13,17]. (From the Figure 4 legend: A. NADH was modeled into the Gh-ChrR structure by superimposing it with the NADH-containing structure of EmoB (PDB entry: 2VZJ, Figure S7). The nicotinamide ring of NADH (green stick model) is stacked on top of the isoalloxazine ring of FMN (yellow stick model), and the adenosine part of NADH points to the ribityl group of FMN.
The black arrow indicates the distance from C4N of NADH to the si-face of the FMN isoalloxazine ring. Residues N53, D54, E57, S100, R101 and F137 from chain A (cyan) and residues N85, P119, and T154 from chain C (gold) interact with NADH. B. The putative active site of Gh-ChrR shown with bound FMN (yellow stick model) and a chloride ion (green sphere). The black arrow indicates the distance from the Cl⁻ to the si-face of the FMN isoalloxazine ring. Key residue R101, holding the chloride ion in place, is shown in a stick model. Critical residues for hydride transfer, N85 and Y86 from chain A (cyan) and S118 from chain C (gold), are shown in a stick model. The green dashed lines indicate the distance (~3 Å) between the amide nitrogen of N85/Y86 and O4, and the distance (~3 Å) between the hydroxyl OG of S118 and O2.) The mechanistic basis for high-affinity binding and reduction of bound metal oxides can be understood from the 2.25 Å crystal structure of Gh-ChrR (Figures 2, 3 and 4). Proximal to the FMN binding pocket near the subunit interface, a cationic cleft is observed that has optimal geometrical properties to bind NADH and either chromate or the physiologically relevant UO₂(CO₃)₃⁴⁻ anion present at contamination sites (Rifle, CO) [39,40], permitting efficient enzyme cycling (Figure 1). As observed in the structures of other flavodoxins [18,25,26,28], the cofactor FMN in Gh-ChrR is non-covalently bound near the dimer interface and held in place primarily via contacts with α5, α4, and the loop between β3 and α4. Enzyme kinetic measurements suggest that chromate and NADH bind sequentially to Gh-ChrR at different sites, consistent with the presence of a positively charged groove in the catalytic pocket near the FMN (Figures 3 and 4). In the crystal structure, a negatively charged chloride ion is observed bound in this pocket above the si-face of the isoalloxazine ring of FMN, where the negatively charged chromate (CrO₄²⁻), ferricyanide (Fe(CN)₆³⁻), or uranyl (UO₂(CO₃)₃⁴⁻) species may bind in a similar manner. If NADH binds to this metal binding site first, a dead-end product results and metal reduction is inhibited (Figure 1). If chromate, ferricyanide, or uranyl binds to this metal binding site first, NADH can move into its proper binding site in the positively charged groove at an optimal distance (~3.7 Å) for hydride transfer. Note that the analysis of the crystal structure of Gh-ChrR in the absence of substrates suggests that the metal and NADH binding sites overlap: NADH binds on top of the ribityl group and the isoalloxazine ring of FMN, and the metal binds on top of the isoalloxazine ring of FMN (Figure 4). Binding of the metal anion may induce some structural rearrangement in the active site of Gh-ChrR to remove this overlap and allow both species to bind simultaneously. After the chromate (Cr(VI)) or uranyl (U(VI)) ions are reduced to less soluble chromium(III) and uranium(IV) species, respectively, they are released from the catalytic center of the enzyme into the solvent. The open, solvent-accessible nature of the catalytic pocket in Gh-ChrR may facilitate binding and reduction of a broad spectrum of substrates. There is no exchangeable proton, at least in the crystal structure of Gh-ChrR with oxidized FMN, near enough to stabilize the negative charge at N1 in the semiquinone form of FMN.
The closest group to N1 that may serve as a general acid/base catalyst for protonation/deprotonation is the hydroxyl group of S118. The importance of S118 in catalysis was corroborated by kinetic studies on individual Gh-ChrR constructs containing an S118A substitution. The chromate reduction assay revealed that the catalytic efficiency (apparent kcat/Km value) of S118A was reduced by 50% compared to wild-type Gh-ChrR (Table 2). While the mechanism of two-electron reduction of U(VI) to U(IV) is straightforward, the odd-electron reduction of Fe(III) to Fe(II) and Cr(VI) to Cr(III) is more complicated and likely involves transfer of electron(s) to molecular oxygen and the generation of ROS [11]. This was confirmed by experiments designed to measure ROS generation, which showed that chromate and ferricyanide reduction produced 5-6 times more ROS than uranyl reduction (Figure S9). While extracellular electron transport for metal reduction or detoxification has been shown to be effective under field conditions [41,42], metal reduction by cytosolic enzymes may provide an alternative reduction pathway. Based on our kinetic measurements, Gh-ChrR reduces highly soluble chromate, ferricyanide, and uranyl oxides to a less soluble reduced state using NADH as the electron donor. Optimal uranyl reduction is observed using a carbonate buffer that approximates subsurface conditions, which are dominated by negatively charged aqueous complexes of U(VI) such as UO₂(CO₃)₃⁴⁻ [39,43]. This suggests that Gh-ChrR may be a useful enzyme for uranium bioremediation in aquifers. Many toxic metals and radionuclides can be precipitated and immobilized naturally by bacterial bioreduction. This phenomenon has been widely investigated as a promising, inexpensive approach for bioremediation of radionuclide and heavy metal contaminants [1,44]. Dissimilatory sulfate-reducing bacteria and dissimilatory iron-reducing bacteria have received intense attention, as their extracellularly located respiratory chain can catalyze the desired reactions. However, respiration involving extracellular reactions is subject to inhibition by nitrate and oxygen, which commonly occur at contaminated sites, requiring terminal electron-accepting processes (TEAPs) to remove these constituents before metal-reducing TEAPs can be initiated. In addition, reduced species formed in the extracellular environment may be reoxidized [45-47]. Alternative approaches involving metal precipitation inside cells are promising and could be used to supplement extracellular processes or possibly as the primary enzymatic reductive process [48]. From this perspective, the ability of intracellular enzymes such as Gh-ChrR to catalyze the reduction of uranyl under aerobic conditions may lead to novel strategies for bioremediation of U(VI) in groundwater. Although G. hansenii is not a bacterium usually found in the subsurface environment, recombinant Gh-ChrR is capable of reducing chromate and uranium under aerobic conditions in the micromolar range and therefore may be a useful protein for bioremediation bioengineering (e.g., immobilized on bacteriophage or nanoparticle surfaces). Chemicals and Reagents Ni-NTA affinity resin was purchased from Qiagen Inc. (Valencia, CA), isopropyl β-D-1-thiogalactopyranoside (IPTG) and LB medium from Fisher Scientific (Wilmington, MA), and the crystallization screening kits from Hampton Research (Aliso Viejo, CA) and Emerald BioSystems (Bainbridge Island, WA).
Uranyl (U(VI)) acetate dihydrate was purchased from Fluka (now Sigma-Aldrich Fine Chemicals, St. Louis, MO). All other chemicals were purchased from Sigma-Aldrich Fine Chemicals (St. Louis, MO).
Protein Expression and Purification
The DNA sequence of the Gh-ChrR gene from Gluconacetobacter hansenii ATCC 23769 (ZP_06834583), together with three site-directed mutants (S118A, N85A and R101A), was codon optimized for expression in E. coli, synthesized, and inserted into the expression vector pJexpress411 (DNA 2.0 Inc., Menlo Park, CA, USA) such that a 6-histidine tag was present at the C-terminus of each gene product. The recombinant plasmid was then transformed into the E. coli expression host BL21(DE3). A single colony from a selection plate was inoculated into 20 mL of LB medium containing 40 µg/mL kanamycin. Following overnight incubation at 37 °C, this culture was transferred into 1 L of LB medium containing 40 µg/mL kanamycin and further incubated at 37 °C until an OD600 of 0.8-1.0 was reached. Protein expression was then induced by the addition of IPTG to the medium (0.02 mM final concentration). The temperature was immediately lowered to 14 °C, and 16 h later the cells were harvested by centrifugation and frozen at -80 °C. To purify Gh-ChrR, thawed cells were first resuspended in lysis buffer (50 mM K2HPO4-NaH2PO4, 300 mM NaCl, pH 8.0), sonicated (Branson Ultrasonic, Danbury, CT) for 1 min three times on ice, and centrifuged at 10,000 × g at 4 °C for 20 min to remove cellular debris. The supernatant was collected and incubated with Ni-NTA resin at 4 °C for 1 h. The mixture was then loaded into an empty column and washed sequentially with lysis buffer containing increasing concentrations of imidazole. Gh-ChrR was eluted using lysis buffer containing 200 mM imidazole. This eluent was concentrated to ~1 mL (Millipore Amicon Centriprep) prior to loading onto a 1 mL HiTrap Q ion exchange column (GE Healthcare, Piscataway, NJ) connected to an ÄKTA explorer FPLC system (GE Healthcare, Piscataway, NJ) for further purification. Using a 0 to 1 M NaCl linear gradient, a major band containing Gh-ChrR eluted at NaCl concentrations between 0.15 and 0.2 M. Purified Gh-ChrR, which was yellow in color, was concentrated for structural and functional analyses. SDS-PAGE analysis of the final product showed the sample to be >95% pure (Figure S1).
Reductase Activity Assays
Measurements of absorbance changes for NADH (ε340 = 6220 M⁻¹ cm⁻¹) accurately capture the metal-dependent reductase activity of the chromate reductase enzyme ChrR, as previously validated by Puzon and coworkers [8]. Interference from the absorbance of chromate is minimal: although CrO₄²⁻ has an absorbance shoulder at 340 nm (900 M⁻¹ cm⁻¹), Cr(VI) and Cr(III) exhibit an apparent isosbestic point at 340 nm [49]. As a result, measurements of NADH oxidation are routinely used to measure the function of chromate reductases, both in the absence of chromate, as originally referenced [18,35], and in the presence of chromate [8,15]. All measurements are consistent with standard assays using 1,5-diphenylcarbazide [50], which were used in a limited number of measurements to validate the observed results. Likewise, neither U(IV) nor U(VI) has significant absorption at 340 nm [51,52], allowing NADH oxidation rates to be used to assess enzyme activity.
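As a hedged aside (this relationship is implied by the assay description rather than printed in the text), the measured A340 slopes convert to reaction rates via the Beer-Lambert law; note that the effective path length l in a microplate well depends on the fill volume, so the nominal 1 cm below is an assumption:

```latex
v \;=\; \frac{-\,\mathrm{d}A_{340}/\mathrm{d}t}{\varepsilon_{340}\, l},
\qquad
\varepsilon_{340} = 6220~\mathrm{M^{-1}\,cm^{-1}},
\quad
l \approx 1~\mathrm{cm}
```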
The ability of Gh-ChrR to reduce Cr(VI), Fe(III), and U(VI) was assayed in 96-well microplates by measuring NADH consumption via the absorbance at 340 nm (A340, ε = 6220 M⁻¹ cm⁻¹) [8,15]. Initial velocity measurements for the reduction of Cr(VI) and Fe(III) were carried out at 37 °C using various concentrations of potassium chromate and potassium ferricyanide in 100 µL of assay buffer (50 mM Tris-HCl, 100 mM NaCl, pH 7.4) containing 100 µM NADH and 5 µM Gh-ChrR. For the reduction of U(VI), the assay was similar to that for Cr(VI) and Fe(III), except that the assay buffer contained 100 mM NaHCO3-Na2CO3, 50 mM NaCl, pH 8.3. This assay buffer helps uranyl form stable, negatively charged coordination species, as previously reported [39,53]. The kinetic experiments were all conducted by adding the metal anions before the NADH. All kinetic data were measured on a SpectraMax 384Plus microplate reader (Molecular Devices, Sunnyvale, CA). Values for the apparent Km and Vmax were calculated by fitting the data to the Michaelis-Menten equation using KaleidaGraph version 4.0 (Synergy Software, Reading, PA) (Table S1). All measurements were conducted in triplicate under aerobic conditions.
Substrate Inhibition Studies
Chromate is reduced by Gh-ChrR in an NADH- and chromate-dependent manner. Increased concentrations of NADH lead to a reduction in the velocity of Gh-ChrR, suggesting that NADH may bind to the enzyme to form a dead-end complex (Figure 1). Consistent with this observation, prior data demonstrated inhibition of metal reductase activity at elevated NADPH concentrations for ChrR from Thermus scotoductus SA-01 [54]. That report is in turn consistent with data for a similar NADPH-dependent quinone oxidoreductase from E. coli that also shows substrate inhibition [55]. To test whether NADH acts as a substrate inhibitor [21], a mechanism involving substrate inhibition in an ordered bireactant model was investigated [20]; the corresponding rate equations are sketched after this paragraph. In this model, v is the initial velocity and Vmax is the maximum velocity. When the noninhibitory substrate (i.e., CrO₄²⁻, or A) is varied, double-reciprocal plots (i.e., 1/v versus 1/[A]) yield a family of lines with varying slopes (Slope1/A) at fixed concentrations of the inhibitory substrate (i.e., NADH, or B); upon extrapolation to infinite B, there is a common intersection on the y-axis that corresponds to 1/Vmax (Figure 1A). The relationship between the slopes (Slope1/A) in Figure 1A and the fixed NADH concentrations provides additional kinetic information (Figure 1B): in the replot of Slope1/A against [NADH], the slope of the linear portion equals KmA/(Vmax Ki), with the y- and x-axis intercepts as given in the sketch below. Additional information is available from a consideration of the double-reciprocal plot of velocity versus the concentration of the inhibitory substrate (i.e., [NADH], or [B]) at fixed concentrations of the noninhibitory substrate (i.e., [CrO₄²⁻], or [A]). This plot (Figure 1C) yields a family of curves that show significant curvature at high concentrations of NADH (i.e., low values of 1/[NADH]). A linear relationship is only observed at low concentrations of NADH (i.e., high values of 1/[NADH]).
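The rate equations referenced above are reconstructed below from the standard ordered-bireactant substrate-inhibition treatment (e.g., Segel's), in which the inhibitory substrate B (NADH) forms a dead-end complex. This is a hedged reconstruction consistent with the plot behavior described in the text, not a transcription of the published equations:

```latex
% Velocity equation (substrate inhibition by B via a dead-end complex):
v = \frac{V_{max}[A][B]}
         {K_{ia}K_{mB} + K_{mB}[A] + K_{mA}[B]\left(1+\frac{[B]}{K_i}\right) + [A][B]}

% Slope of the 1/v vs. 1/[A] lines at fixed [B] (Figure 1A):
\mathrm{Slope}_{1/A} = \frac{K_{mA}}{V_{max}}\left(1+\frac{[B]}{K_i}\right)
                     + \frac{K_{ia}K_{mB}}{V_{max}[B]}

% Linear (high-[B]) portion of the slope replot (Figure 1B):
% slope = K_{mA}/(V_{max}K_i), y-intercept = K_{mA}/V_{max}, x-intercept = -K_i

% The extrapolated 1/v vs. 1/[B] lines (Figure 1C) intersect to the left of
% the y-axis at the common point:
x = -\frac{K_{mA}}{K_{ia}K_{mB}}, \qquad
y = \frac{1}{V_{max}}\left(1-\frac{K_{mA}}{K_{ia}}\right)
```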
Extrapolation of the linear portions of these curves yields a family of lines that intersect at a common point (x, y) to the left of the y-axis; the coordinates of this point are given in the sketch above. Simultaneous solution of these relationships permits calculation of the kinetic parameters KmA, KmB, Kia and Ki (Table S2).
Cr(III) and U(IV) Determination
The Cr(III) product was measured by the absorbance at 580 nm, as previously reported [8,13]. The determination of U(IV) levels was based on the chemical reduction of Fe³⁺ to Fe²⁺ by U(IV) and the subsequent reaction of Fe²⁺ with 1,10-phenanthroline to produce a red/orange color with an absorbance at 510 nm [11]. The latter assay involved the addition of 100 µL of a colorimetric solution consisting of FeCl3 (1 mM, pH 2.0), 1,10-phenanthroline (10 mM), and sodium acetate (1 M, pH 4.0) in a 5:1:1 volumetric ratio to each reaction well. The relative U(IV) concentration was determined using a standard curve prepared with different concentrations of FeSO4.
Size Exclusion Chromatography
The oligomeric state of Gh-ChrR in solution was determined from the retention time of freshly prepared Gh-ChrR (taken from the HiTrap Q column and buffer exchanged into 50 mM Tris-HCl, 100 mM NaCl, pH 7.4). A Superdex 75 10/30 column (GE Healthcare, Piscataway, NJ) was pre-equilibrated with the elution buffer (50 mM Tris-HCl, 100 mM NaCl, pH 7.4) and calibrated using three molecular mass standards (ribonuclease A, 13.7 kDa; ovalbumin, 44 kDa; conalbumin, 75 kDa; GE Healthcare, Piscataway, NJ) at a flow rate of 0.3 mL/min.
Crystallization and Structure Determination
The Gh-ChrR solution used for the kinetic experiments was buffer exchanged into 20 mM Tris-HCl, 150 mM NaCl, pH 7.4 and concentrated to ~9 mg/mL for the crystallization trials. The hanging-drop vapor diffusion method was used for crystallization at room temperature by mixing 2 µL of protein solution with 2 µL of precipitant and equilibrating against 500 µL of precipitant. Crystals suitable for X-ray data collection were obtained with a precipitant containing 0.5% PEG 4000, 10% isopropanol, and 0.1 M HEPES (pH 7.5). These crystals were transferred stepwise into cryoprotectant solutions with increasing concentrations of glycerol (up to 30%) prior to flash freezing in liquid nitrogen. X-ray data collection was performed with an ADSC Q315 CCD detector at beamline X29A of the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory. Diffraction data were processed using DENZO, and integrated intensities were scaled using SCALEPACK from the HKL-2000 program package [56]. The structure of Gh-ChrR was phased by molecular replacement using Phaser from the CCP4 suite [57], with the crystal structure of an FMN reductase from Pseudomonas aeruginosa PAO1 (PDB entry: 1RTT) [26] as the search model. The side chains of residues not conserved between the model and Gh-ChrR were truncated using the CHAINSAW program. Molecular replacement yielded an initial structure with an Rwork of 0.431 and an Rfree of 0.442. One round of rigid-body refinement and restrained refinement using the REFMAC program from the CCP4 suite reduced the Rwork and Rfree to 0.328 and 0.375, respectively, indicating that molecular replacement was successful.
The missing side chains were manually rebuilt using the Crystallographic Object-Oriented Toolkit (Coot) [58], followed by numerous iterative rounds of restrained refinement using REFMAC. Improvement of structure quality was monitored by the decrease in Rwork and Rfree after each round of REFMAC refinement. The final Gh-ChrR model yielded an Rwork of 0.193 and an Rfree of 0.238, and the stereochemistry of the final structure was assessed by MOLPROBITY [59]. Detailed data collection and structure refinement statistics for Gh-ChrR are listed in Table 1.
Structure Analysis and Modeling
Except where specifically noted, all figures of protein structures were generated using PyMOL [60]. The Gh-ChrR structure was superposed with the EDTA monooxygenase B (EmoB)-NADH complex (PDB entry: 2VZJ) using the UCSF Chimera MatchMaker program by aligning the Cα atoms of Gh-ChrR (PDB entry: 3S2Y) with those of EmoB [24]. The lowest-RMSD (root mean square deviation) solution was selected for further analyses.
ROS Measurement
ROS generation during the reduction of chromate, ferricyanide and uranyl by Gh-ChrR was measured using the oxidation-sensitive fluorescent probe 5-(and-6)-carboxy-2',7'-dichlorodihydrofluorescein diacetate (Invitrogen; Grand Island, NY) following the standard protocol [61]. Briefly, the probe was dissolved in DMSO to a stock concentration of 200 µg/mL, and 10 µL of this solution was added to 100 µL of reaction buffer containing 5 µM enzyme, 500 µM substrate, and 100 µM NADH. After incubation at 37 °C for 30 min, the steady-state fluorescence was measured using a 96-well SpectraMax Gemini XS reader (Molecular Devices, Sunnyvale, CA), with an excitation wavelength of 488 nm and an emission wavelength of 535 nm. The mean fluorescence intensity of eight reaction trials, with or without Gh-ChrR (5 µM), was determined. Background fluorescence was measured in the absence of either metals or NADH and was generally less than 500 arbitrary fluorescence units (AFU).
Figure S1. SDS-PAGE analysis of the HiTrap Q fractions containing recombinant Gh-ChrR. Lane 1: molecular weight markers (labeled in kDa on the left). Lanes 2-9: 2 µL aliquots from sequential fractions off the HiTrap Q ion exchange column. The major band identified with an arrow is Gh-ChrR (monomeric molecular mass = 21.3 kDa). Fraction 5, free of visible impurities on the gel, was used to crystallize Gh-ChrR and for the enzyme kinetic studies. (TIF)
Figure S2. Influence of NADH on the UV/vis absorbance spectrum of Gh-ChrR. Spectra of a freshly purified solution of Gh-ChrR (15 µM protein, 50 mM Tris-HCl, 100 mM NaCl, pH 7.4) recorded before (solid line) and after (dashed line) the addition of excess NADH (100 µM). The spectral changes could be observed visually, with the originally yellow sample turning clear upon the addition of NADH, indicating a transition from an oxidized to a reduced state. (TIF)
Figure S3. Sequence alignment of Gh-ChrR with other NAD(P)H-dependent FMN reductases (NDFR). The amino acid sequences of the following FMN reductases were aligned using ClustalW2 (http://www.ebi.ac.uk/Tools/msa/clustalw2/): Gh_NDFR: G. hansenii Gh-ChrR [16]; Ec_NDFR: E. coli IAI39 YieF [63]; Pa_NDFR: P. aeruginosa PAO1 NDFR [26] (PDB entry: 1RTT); Bc_NDFR: B. subtilis str. 168 NDFR [35] (PDB entry: 1NNI); Edb_NDFR: EDTA-degrading bacterium BNC1 EmoB [28] (PDB entry: 2VZJ); Cpa_NDFR: Candidatus Protochlamydia amoebophila UWE25 NDFR [64]; and Rp_NDFR: Rhodopseudomonas palustris NDFR [65].
The secondary structure elements observed in the crystal structure of Gh-ChrR are indicated above the alignment (3₁₀ helices are indicated as η). Identical and conserved residues are highlighted in red and yellow, respectively.
Figure S5. Reduction of chromate, ferricyanide, and uranyl by Gh-ChrR. NADH-dependent reduction rates and associated nonlinear least-squares fits (solid lines) for Gh-ChrR (5 µM) in the presence of the indicated concentrations of the metal oxides chromate (A), ferricyanide (B), and uranyl (C). Measurements were made by following NADH consumption and represent the average of triplicate experiments. The kinetic parameters obtained from nonlinear least-squares fits of these data to the Michaelis-Menten equation are listed in Table S1. (TIF)
Figure S6. Increase in the levels of Cr(III) and U(IV) following the reduction of Cr(VI) and U(VI), respectively, by Gh-ChrR. A. The chromate reduction product Cr(III) was monitored by the increase in absorbance at 580 nm observed upon incubating 125 µM Cr(VI) and 100 µM NADH with (open circles) and without (open triangles) Gh-ChrR. B. The uranyl reduction product uraninite (U(IV)) was monitored by the increase in absorbance at 510 nm observed upon incubating 125 µM U(VI) and 100 µM NADH with (white column) and without (grey column) Gh-ChrR. Shown are the averages of three independent measurements recorded 20 minutes after the addition of the protein. (TIF)
Figure S7. The calibration curve used to calculate the native molecular weight of Gh-ChrR. Gh-ChrR eluted with a retention time of ~60 min (solid square), a value that corresponds to an estimated native molecular weight of a tetramer, ~80 kDa. (TIF)
Figure S8. Superposition of Gh-ChrR with EmoB (PDB ID 2VZJ) dimers. Superposition of one pair of dimers in the tetramer complexes formed by Gh-ChrR (magenta) and EmoB (blue). The view in B is a 90° rotation toward the reader about the x-axis. NADH and FMN are highlighted in stick representation, with the carbon atoms of NADH and FMN colored green in EmoB and yellow in Gh-ChrR, and with all nitrogen atoms colored blue, oxygen atoms colored red, and phosphorus atoms colored orange. (TIF)
Figure S9. Reactive oxygen species (ROS) generated during the reduction of chromate, ferricyanide, and uranyl by Gh-ChrR. ROS production was monitored using fluorescent probes in a reaction mixture containing 500 µM metal substrate and 100 µM NADH in the presence (grey) and absence (white) of 5 µM Gh-ChrR in buffer containing 50 mM Tris-HCl, 100 mM NaCl, pH 7.4. The experiments were performed in triplicate at 37 °C, with the measurement error shown. (TIF)
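For readers who want to reproduce a Figure S7-style calibration, the sketch below fits log10(molecular mass) against retention time for the three standards named in the Methods and interpolates a value for Gh-ChrR. The retention times are hypothetical placeholders (the text reports only that Gh-ChrR eluted at roughly 60 min); only the standard masses come from the paper:

```python
import numpy as np

# Standards from the Methods (kDa). Retention times are invented for
# illustration; larger proteins elute earlier on a size-exclusion column.
standards_kda = np.array([75.0, 44.0, 13.7])  # conalbumin, ovalbumin, RNase A
retention_min = np.array([61.0, 65.0, 73.0])  # assumed values, not measured

# SEC calibration is conventionally linear in log10(mass) vs. elution time.
slope, intercept = np.polyfit(retention_min, np.log10(standards_kda), 1)

def native_mass_kda(t_min: float) -> float:
    """Estimate native molecular mass (kDa) from a retention time (min)."""
    return 10.0 ** (slope * t_min + intercept)

# The paper reports elution at ~60 min, interpreted as a ~80 kDa tetramer
# of 21.3 kDa subunits; the assumed times above yield a similar estimate.
print(f"Estimated native mass at 60 min: {native_mass_kda(60.0):.0f} kDa")
```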
Isolation, Synthesis and Structures of Ginsenoside Derivatives and Their Anti-Tumor Bioactivity
Protopanaxatriol saponins obtained with AB-8 macroporous resin consisted mainly of ginsenosides Rg1 and Re. A novel mono-ester of ginsenoside Rh1 (ginsenoside ORh1) was synthesized through enzymatic hydrolysis followed by modification with octanoyl chloride. A 53% yield was obtained by a facile synthetic method. The structures were identified on the basis of 1D-NMR and 2D-NMR, as well as ESI-TOF-MS mass spectrometric analyses. The isolated and synthetic compounds were applied in an anti-tumor bioassay, in which ginsenoside ORh1 showed moderate effects on murine H22 hepatoma cells.
Introduction
Panax ginseng, which belongs to the Araliaceae family, has been used as a traditional medicine for thousands of years and is now a popular natural medicine used worldwide [1,2]. Ginsenosides [3,4] have been regarded as the principal components responsible for the pharmacological activities of ginseng, including anti-inflammatory activity, increased free radical scavenging, and weight reduction [5-9]. Recent studies pointed out that some rare ginsenosides, such as Rh1 and M1, have strong anticancer activity both in vivo and in vitro [10,11]. Pharmaceutical studies [12] have shown that ginseng saponins are metabolized mainly by the bacteria of the small intestine and thus converted into their final forms: Re→Rg1→F1 or Rh1→M4; Rb1→Rd→F2→M1; Rb2→M6→M2→M1; Rc→M7→M3→M1. Previous studies also showed that ginsenoside M1 is further esterified with fatty acids and thus can be sustained longer in the body, a result suggesting that the fatty acid ester of M1 might be the real anti-tumor active species in vivo [13]. Our laboratory previously reported the synthesis of three novel mono-esters of ginsenoside M1 (DM1, SM1, PM1) and their bioactivity [14]. Building on that study, we established a method for synthesizing ORh1 and evaluated its bioactivity. Ginsenoside Rh1 was obtained through enzymatic hydrolysis, and the fatty acid ester was introduced with octanoyl chloride. The structure of ginsenoside ORh1 was identified through 1H- and 13C-NMR and ESI-TOF-MS spectroscopic analyses. The pure ginsenoside ORh1 was used to determine the anticancer bioactivity against murine H22 hepatoma cells; the results showed that ginsenoside ORh1 had moderate effects on H22 cells. This is the first time that ginsenoside ORh1 has been synthesized and reported. In this paper, we present the synthesis method and the bioactivity evaluation of the new ginsenoside fatty acid ester ORh1.
Results and Discussion
Characterization of compounds 1-4
Compounds 1 and 2 were characterized as ginsenosides Rg1 and Re, whose structures were identified by MS and NMR data analysis (not shown). After an enzymatic hydrolysis process, we obtained ginsenoside Rh1 (compound 3). Ginsenoside Rh1 was then modified with octanoyl chloride to form the ginsenoside fatty acid ester (compound 4). The structures of compounds 1-4 are shown in Figure 1. The HPLC analysis of compounds 3 and 4 is shown in Figure 2. From the HPLC analysis, the polarity of compound 4 was lower than that of Rh1 (compound 3). From the 13C-NMR data comparison, we found no significant chemical shift changes for the main skeleton, but a downfield shift of C-6' of the 6-O-Glc (δ 63.5 to δ 64.8, see Table 1) was observed, indicating that the fatty acid ester substituent was attached at that position of the 6-O-Glc.
This assignment was verified by HMBC (see Figure 3), which showed a cross-peak between H-6' and the carboxyl carbon.
Bioactivity evaluation
The purified and synthetic compounds were tested in an anti-tumor bioassay. Compounds 3 and 4 showed moderate cytotoxic effects against murine H22 hepatoma cells. The results are shown in Table 3 and Figure 4. We tested the samples at four concentrations (10 μM, 20 μM, 40 μM, 80 μM). The highest inhibitory rate of ORh1 was 87.16%, at the concentration of 80 μM. The IC50 of ORh1 was 42.44 μM.
Conclusions
The anti-tumor effects of some ginsenosides are known. For instance, Rg3 has been shown to possess anti-tumor properties and to have an effect on drug-resistant cultured cancer cells [15,16]. Rh2 can reduce the proliferation of a variety of cultured cancer cells and can influence apoptosis [17-19]. Most of the ginsenosides with significant anti-tumor activity belong to the protopanaxadiol-type saponins. In our study, we investigated the protopanaxatriol-type ginsenoside Rh1. The protopanaxatriol-type saponins, including Rg1 and Re, are finally transformed into ginsenoside Rh1 after intestinal metabolism. We obtained ginsenoside Rh1 by means of enzymolysis, and further chemical modification led to the synthesis of the Rh1 mono-fatty acid ester. The bioactivity evaluation of the fatty acid ester (ORh1) showed that this kind of compound is more active against tumor cells. We concluded that the membrane transport of small molecules depends on their lipophilicity, and the fatty acid ester (ORh1), with its lower polarity, meets this requirement. In this study, we synthesized ORh1 with a fatty acid acyl chloride, and the synthetic product had the desired lower polarity. ORh1 showed greater activity and higher anti-tumor efficiency than ginsenoside Rh1 on murine H22 hepatoma cells, and as time increased (beyond 24 h), the anti-tumor action of Rh1 was almost eliminated, whereas ORh1 still displayed considerable anti-tumor action.
General
The 1H- and 13C-NMR spectra were measured on a Bruker AV 400 NMR spectrometer in CDCl3, using TMS as an internal standard. Chemical shifts (δ) are expressed in parts per million (ppm). The HR-ESI-TOF mass spectra were obtained on an MDS SCIEX API QSTAR-MS instrument. Column chromatography was carried out with silica gel 60M (200-300 mesh) and AB-8 macroporous resin, and HPLC was carried out on an LC-2010 system (Shimadzu).
Chemicals and reagents
Octanoyl chloride was purchased from ABCR GmbH & Co. KG. Enzyme (snailase) was purchased from BioDee BioTech Corporation Ltd. (code: S0100, Beijing, China). Other chemicals and reagents were purchased from the Chinese Chemical Group (Beijing, China).
Extraction and isolation
An extract of leaves of Panax ginseng (100 g) was dissolved in a sufficient amount of distilled water and filtered, and the filtrate was then adsorbed onto an AB-8 macroporous resin column (100 × 10 cm) for 8 h. Gradient elution with 25%, 30%, and 80% ethanol was used to elute the column. The dry eluate was obtained from the 30% ethanol fraction and weighed (45 g). This fraction consisted of the protopanaxatriol-type saponins ginsenosides Rg1 and Re.
Enzymatic reaction
A saponin fraction (1 g) obtained from the former procedure was weighed out and dissolved in distilled water (400 mL), enzyme (110 mg) was added, and the mixture was incubated at 37 °C for 24 h while the pH was maintained at 4.5.
Then the enzyme reaction mixture was subjected to silica gel column chromatography, eluting with CHCl3-MeOH (9:1), to afford purified ginsenoside Rh1 (3, 320 mg) as a white amorphous powder, mp 190-192 °C; ESI-MS [+]: m/z = 639 [M+H]+; for the 13C-NMR data, see Table 1 and Table 2.
In vitro anti-tumor assays
Direct cytotoxicity assays against tumor cell lines were carried out at the Cell Culture Laboratory, Pharmaceutical College, Jilin University, using murine H22 hepatoma cells. A blank control was used in this study. Generally, cells at 5 × 10^5/mL (190 μL) were placed in a 96-well plate and treated with the obtained compound (10 μL); the control cells received culture medium instead. The plate was incubated at 37 °C for 24 h. Then MTT (20 μL at a concentration of 5 mg/mL) was added to each well, and incubation at 37 °C was continued for 4 h. After that, the supernatant was removed and DMSO (150 μL) was added; the samples were agitated, and the absorbance was read at 490 nm.
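To make the IC50 determination concrete, the sketch below fits a simple two-parameter Hill model to MTT inhibition data. The inhibition values are hypothetical placeholders chosen only to resemble the reported numbers (87.16% inhibition at 80 μM, IC50 near 42 μM); they are not the paper's raw data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical inhibition rates (%) from an MTT assay, computed as
# (1 - A490_treated / A490_control) * 100 at each dose.
dose_um = np.array([10.0, 20.0, 40.0, 80.0])
inhibition = np.array([12.0, 28.0, 47.0, 87.2])

def hill(c, ic50, n):
    """Two-parameter Hill model: 0% at zero dose, 100% at saturation."""
    return 100.0 * c**n / (ic50**n + c**n)

(ic50, n), _ = curve_fit(hill, dose_um, inhibition, p0=[40.0, 1.0])
print(f"IC50 = {ic50:.1f} uM, Hill coefficient = {n:.2f}")
```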
Incorporating the Internet of Things (IoT) Learning Module into the Smart Building Course
The utilization of Internet of Things (IoT) devices is increasing very rapidly, creating a demand for skilled professionals who can navigate the complexities of this technology. Preparing undergraduates to become contributors in this emerging field is a crucial educational challenge. The curriculum needs to be updated to offer students both theoretical learning and practical application so they can benefit from it. This paper introduces a comprehensive IoT learning module to final-year civil engineering students in an elective course, Smart Building. The proposed learning module includes an introduction to IoT concepts and applications, IoT devices, communication protocols, cloud platforms, user interfaces, and IoT risk management and security, all of which can be easily integrated into existing courses. Results from the module are presented using direct and indirect assessments, including assignments, hands-on practicals, examinations, and surveys. These assessments are designed to evaluate the students' understanding and the challenges they encountered in their IoT learning process. The findings indicate that a significant majority, surpassing 90% of the students, demonstrated understanding of at least three essential components of the fundamental IoT architecture, despite their limited background knowledge at the beginning of the course. Additionally, this learning module can serve as a valuable resource for other educators who intend to deliver IoT-related courses.
INTRODUCTION
The Internet of Things (IoT) can transform almost everything to make our lives easier, more distinct, and more pleasant. IoT provides innovative solutions to various business, government, and industry challenges and issues (Laghari et al. 2022). There are already more than 10 billion IoT devices currently in use, and 127 new ones connect to the internet every second. By 2025, new connections are predicted to expand 20-fold. The growth of the IoT has created new opportunities for skilled engineers and professionals. Inmarsat Research found that 47% of businesses had to outsource their IoT-based projects due to a lack of internal expertise (Inmarsat Research 2022). Career opportunities in the IoT industry include the development of IoT products (hardware design, testing, and integration), embedded systems, and cybersecurity. Preparing undergraduates to become contributors in this emerging field is a crucial educational challenge. The curriculum needs to be updated to offer students both theoretical learning and practical application so they can benefit from the era of IoT (Abichandani et al. 2022). Some questions that arise when educating students about IoT technologies are: (a) which IoT topics should be covered in a course curriculum? (b) what are the most important IoT skills and knowledge students need to learn? (c) what is the most effective method for encouraging students to learn IoT? Many institutions are developing or delivering IoT courses, but their capacity is limited by a lack of financial resources, complexity, expertise, ethics, risk, and data security concerns. According to a report, there were not enough IoT trainers or experts in Malaysia to deliver and conduct specialized IoT courses (Leong & Letchumanan 2019).
IoT is a relatively new subject that continues to grow at a rapid rate. Students must be prepared with the necessary tools and skills to keep up with the incredibly rapid pace of this sector. The market is currently booming with new products, technologies, and standards, and the knowledge students acquire during their studies risks being obsolete by the time they graduate. Therefore, a successful IoT curriculum must consider sustainable technical content and teaching paradigms. A few studies on IoT curricula for technical and non-technical backgrounds have been proposed in the published literature. According to Silvis-Cividjian (2019), future IoT experts must achieve a balance between technical knowledge and user-related issues, including ethics, security, and privacy. Electrical and computer engineering students are introduced to an industry-relevant IoT course by Seng, Wei, and Narciso (2020); the authors concluded that by incorporating more hands-on laboratories in close collaboration with industry, students will have the opportunity to further refine their skills in creating IoT-centric solutions. Bajracharya, Gondi, and Hua (2021) recommend that IoT courses be designed and taught differently for non-technical students than for engineering, engineering technology, and computer science majors. Ronoh et al. (2021) review suitable learning methodologies for teaching IoT, including problem-based learning (PBL), flipped laboratories, cooperative learning, and other mixed methods. Ahmed et al. (2022) suggest adding IoT learning modules, which involve device programming, system design, and cloud development, to existing computer science courses so that students can learn about smart IoT technologies. Yuan et al. (2023) propose teaching materials for senior undergraduate students and master's students on IoT operating systems. It is important to note that the landscape of IoT education is dynamic, and research in this field is ongoing. Hence, there is a need for standardized curricula and learning resources for IoT at different educational levels. This paper describes initiatives to address these IoT educational issues and challenges by introducing IoT topics in the Smart Building course. This is an elective course that is being offered for the first time to final-year students pursuing a Bachelor of Engineering degree at Universiti Kebangsaan Malaysia's (UKM) Department of Civil Engineering in the Faculty of Engineering and Built Environment. The introduction of IoT in this course can leverage every aspect of building management. As an example, IoT technologies help automate traditional building management systems (Kim et al. 2022). Smart buildings also use IoT technologies to connect sensors, lights, and meters to collect and analyze data. The buildings then utilize these data to improve a variety of aspects, including infrastructure, public utilities, and services. In this course, over four weeks, students engage in a series of hands-on activities that introduce them to all the fundamental aspects of IoT. Indirect and direct assessments such as teaching surveys, assignments, and examinations were executed to determine the performance of the students and the difficulties they encountered.
The remainder of this paper is arranged as follows: Section 2 discusses an overview of IoT technology and its role in smart buildings, including the evolution of IoT in education. Section 3 focuses on incorporating the IoT into the curriculum and providing guidance for other educators who intend to deliver IoT-related courses. The next section describes the teaching and learning IoT approaches for the Smart Building course. The results are reported using direct and indirect assessments. The paper concludes with some suggestions for improving future offerings.
RELATED WORKS
INTERNET OF THINGS (IOT)
The Internet of Things (IoT) is a network of devices, things, and/or software that are all connected to each other. This network collects, analyzes, and sends information over the internet and cloud technology without the need for human interaction (Lombardi, Pascale, & Santaniello 2021). The deployment of IoT has already benefited a variety of sectors. Smart buildings and home automation allow devices and systems to be controlled automatically. This has improved security and energy efficiency while offering comfort to building occupants (Karimi et al. 2021). IoT technology can also assist in the early detection of structural cracks, leading to timely repairs and potentially preventing catastrophic failures (Abdul Razak et al. 2022). During the COVID-19 pandemic, IoT technology was widely used in the healthcare sector (Dey & Chatterjee 2022; Velliyangiri et al. 2022). The IoT can reduce unneeded physical contact through remote monitoring and early diagnosis, creating an additional support system for doctors and patients. The IoT also has various advantages in the transportation industry, such as reducing the frequency of accidents, optimizing road traffic, and lowering fossil fuel usage and pollution (Kumar, Tiwari & Zymbler 2019; Malik et al. 2021). The IoT has changed people's lives, making them more adaptive and intelligent. Adding value to IoT with rapidly evolving technologies such as artificial intelligence (AI), digital twins, virtual reality (VR), blockchain, machine learning (ML), and others will provide a competitive advantage (Hansen & Bøgh 2021; Atlam et al. 2020; Fuller et al. 2020). The blended IoT has the potential to transform the way organizations, industries, and economies operate. For example, pairing IoT with ML and AI technologies enables the automatic identification of patterns and anomalies in the data generated by sensors, such as humidity and temperature, air quality, pressure, and sound. In the insurance sector, IoT-blockchain technology allows the implementation of smart contracts and the management of fraud and claims. The convergence of IoT with VR and digital twins can train personnel through interactive operational process demonstrations in preventive maintenance operations. These advancements also enable technical personnel to conduct maintenance duties immediately, under remote supervision.
The IoT ecosystem is based on four key elements: devices (for data collection), connectivity (for data transmission), analytics and storage (for information processing), and the user interface (Lombardi, Pascale, and Santaniello 2021). Devices are the first layer of the IoT ecosystem, and they act as the backbone of the network. Sensors in IoT devices measure and collect the required data from their surroundings, capturing real-time environmental changes. The sensors transmit raw data to the cloud using IoT gateways, which are major elements of IoT communications. A gateway performs like a network router: it sends data between IoT devices and the cloud. It is also used to manage communication between IoT protocols and networks, providing additional security to the devices. IoT networks communicate signals and data with other devices, gateways, and services running in the cloud. The primary networks that can facilitate IoT applications across various sectors are short-range networks and low-power wide-area networks (LPWAN). Bluetooth Low Energy (BLE), Near Field Communication (NFC), and Zigbee are examples of short-range IoT networks (Bayılmış et al. 2022). LPWAN enables IoT communication between devices over distances of 3 to 20 km. Long Term Evolution for Machines (LTE-M), 5G, Narrowband IoT (NB-IoT), and LoRa are examples of LPWAN. The cloud, which is known as the IoT network's central processing unit, manages, stores, and makes decisions on data. After the data are transferred to the cloud, they must be processed; massive amounts of data are processed in milliseconds. The results are then used for real-time analytics, enabling fast decision-making on the collected data and signals. Edge or fog computing is an extension of cloud networks and represents an alternative cloud option. When there is a need for substantial on-premises data processing and storage, edge computing is the preferred option. On the other hand, the cloud could be the favored choice as a high-performance facility with massive scalability and reduced operational expenses. The processed information is easily accessible and managed through the IoT user interface (UI). The IoT system can be accessed by the user either through the devices themselves or remotely via smartphones, tablets, and laptops. For instance, the Amazon Alexa and Google Home smart home systems allow users to interact with their smart devices.
THE ROLE OF IOT IN SMART BUILDINGS
Smart buildings are designed to provide an optimal environment for occupants while minimizing resource consumption and operational costs (Kim et al. 2022). Smart buildings come in various types, tailored to serve specific purposes and industries. These include commercial and residential buildings, factories and warehouses, as well as healthcare facilities, educational institutions, transportation hubs, and many others. IoT technologies have significantly impacted smart buildings and home automation. They allow devices and systems to be controlled automatically, which improves security and energy efficiency, optimizes building operations, enhances the user experience, and offers comfort to building occupants. Implementation of IoT for smart buildings requires a good understanding of technical needs to ensure that the system is stable, effective, and secure (Jia et al. 2019). Engineers, architects, planners, and others are currently challenged to keep abreast of research on IoT and smart building technology.
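To make the four-element ecosystem described above concrete in a building context, here is a minimal, self-contained Python sketch of the device, gateway, cloud, and user-interface layers applied to a room-temperature reading. It is an illustrative toy model, not code from the course; all class names and values are invented:

```python
import random
import statistics

class TemperatureSensor:
    """Device layer: produces raw readings (simulated here)."""
    def read(self) -> float:
        return 21.0 + random.uniform(-0.5, 0.5)  # degrees Celsius

class Gateway:
    """Connectivity layer: batches device data and forwards it upstream."""
    def __init__(self, sensor: TemperatureSensor):
        self.sensor = sensor
    def collect(self, n: int) -> list[float]:
        return [self.sensor.read() for _ in range(n)]

class Cloud:
    """Analytics and storage layer: stores readings, derives information."""
    def __init__(self):
        self.store: list[float] = []
    def ingest(self, batch: list[float]) -> None:
        self.store.extend(batch)
    def average(self) -> float:
        return statistics.mean(self.store)

# User-interface layer: a dashboard would render this value for occupants.
cloud = Cloud()
cloud.ingest(Gateway(TemperatureSensor()).collect(10))
print(f"Dashboard: average room temperature = {cloud.average():.1f} C")
```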
In recent years, IoT has begun to penetrate the building industry. Figure 1 depicts the research trends observed in the field of IoT and smart buildings between 2018 and mid-2023. The Scopus database was used to identify publications on the adoption of IoT and smart buildings. These publications covered a wide range of topics, including but not limited to energy efficiency, automation, security, privacy, and human-centric design. The trend shows an increase in yearly publications and will likely continue to evolve and expand in the future. Statista also predicts that the smart home market alone will continue to expand at an 11.43% annual growth rate and reach around USD 231 billion by 2028 (Statista 2023). Hence, there is a clear need for IoT technologies to be deployed in a smart building environment. Researchers have implemented sensors and IoT technologies to track and monitor various parameters, including temperature, humidity, water level, air quality, and energy consumption, within residential or general building settings. These data are analyzed to facilitate real-time adjustments and support long-term improvements. Lighting, room temperature, and other environmental factors can be adjusted to create a comfortable and productive atmosphere, and they can be monitored and controlled remotely through computer or mobile applications. In the event of an emergency, the IoT system has the capability to trigger an alarm or promptly issue an alert, such as by activating warning lights, sounding sirens, sending mobile and email notifications, or displaying an alert on a web dashboard. Table 1 summarizes selected examples of published articles related to IoT-based technologies, considering various scenarios for a smart building such as energy management, occupant comfort, ergonomics and user-friendly design, indoor air quality, indoor positioning, building analytics and data visualization, predictive maintenance, safety and security, and integration with other technologies.
EVOLUTION OF IOT IN TEACHING AND LEARNING
The emergence of IoT in teaching and learning has undergone a significant transformative process, reshaping education in various ways. Two important categories of IoT in education are utilizing IoT devices to teach any subject effectively and incorporating IoT courses into the curriculum. IoT technology was in its initial stages in the early 2000s. Educators began experimenting with basic technological applications, such as radio frequency identification (RFID), for tracking assets and monitoring student attendance. Smart classrooms also emerged with the introduction of interactive online whiteboards, stylus pens, and early e-learning platforms (Mircea, Stoica, & Ghilic-Micu 2021). The proliferation of mobile phones and portable devices and the availability of cost-effective sensors have facilitated the development of smart campuses. Academic institutions are increasingly adopting IoT solutions to address energy management, security, and building maintenance needs. Learning management systems (LMS) are also becoming more prevalent, enabling online assignments, quizzes, tests, and educational content sharing. Such an approach offers students improved accessibility to learning resources and communication channels while also allowing educators to evaluate students' progress in real time. The collected data can also be analyzed through IoT analytics platforms, which are a valuable resource for educators to monitor academic performance and identify those who need further assessment or intervention, as well as to improve teaching strategies.
The COVID-19 pandemic expedited the adoption of IoT in education (Khan, Tarimer, & Taekeun 2022). Academic institutions, including schools and universities, are increasingly utilizing IoT technology to facilitate remote and blended learning approaches. Consequently, the utilization of virtual classrooms, online proctoring, and AI-driven chatbots for student support increased significantly. The scope of smart campuses has also grown to include contactless access control, temperature monitoring, and social distancing enforcement. The integration of IoT with other technologies, such as AI and machine learning algorithms, helps to predict student outcomes, recommend personalized resources, and automate administrative tasks. This also includes gamification, driven by IoT sensors and mobile devices, which has transformed the way students engage with educational content. Augmented and virtual reality (AR/VR) technology, integrated with the IoT, is being employed for immersive and interactive learning experiences. Students can engage in virtual field trips, collaborate on projects remotely, and access resources from anywhere. Remote labs equipped with IoT devices enable science and engineering students to conduct experiments from home. The evolution of IoT in teaching and learning continues to adapt to the shifting educational landscape, with a strong focus on personalized and data-driven approaches to enhance the quality of education and the student experience. In the coming years, the integration of IoT technology promises to further transform education.
INCORPORATING IOT INTO CURRICULUM
As more IoT devices become connected, we must ensure that today's students are equipped with the necessary abilities to drive this technology forward. Developing the IoT learning module for smart buildings based on current trends offers several motivations, including giving students useful information about the latest developments, making the learning materials more relevant, and helping students stay at the cutting edge of the field. Many academic institutions offer IoT courses ranging from introductory to advanced levels. Some institutions also offer specialized courses on IoT as a certificate program. Most IoT courses emphasize hands-on, hardware-centric projects. Several core technical concepts must be covered for a thorough understanding of IoT (Nelke & Winokur 2020). These include the following topics:
INTRODUCTION TO THE CONCEPTS AND APPLICATIONS
This module covers the IoT fundamentals, the elements of the IoT ecosystem and architecture, IoT applications and trends in various sectors, and the benefits and challenges of the IoT. Students should be able to understand the definition and significance of the IoT and identify how it differs from traditional data collection systems.
IOT DEVICES
Connected IoT devices, namely sensors and actuators, play an important role in creating solutions using the IoT. In this module, the different types of sensors, actuators, and embedded development boards that are commonly used for IoT technology should be introduced. This includes an explanation of how they are designed as well as the connections between devices, networks, and sensors. Students should be able to determine the appropriate devices (sensors, microcontrollers, and others) to be used in a particular IoT system.
IOT COMMUNICATION PROTOCOLS
Connectivity is one of the main pillars of the IoT. Different communication and network protocols are required due to the variety of IoT data types and applications. This module should introduce the basic IoT networks and communication protocols such as cellular, Wi-Fi, Bluetooth, ZigBee, LoRaWAN, Message Queuing Telemetry Transport (MQTT), and many others. Students should be able to determine the appropriate protocol for communication between IoT devices.
CLOUD COMPUTING
In this module, the role of cloud computing in IoT, the cloud platform and its architecture, cloud types and services, and integration with the IoT platform should be covered. Students will then be able to construct IoT systems and utilize cloud services for the processing and storage of data generated by IoT devices.
IOT PLATFORM AND USER INTERFACE
The IoT platform is utilized for managing, collecting, storing, visualizing, and analyzing data from IoT devices. The functionality and demonstration of available IoT platforms on the market (AWS IoT, ThingSpeak, Node-RED, and Kaa, to name a few) can be introduced in this module. Students should be able to create online dashboards for data analytics and telemetry.
SECURITY AND RISK MANAGEMENT
This module should cover the factors impacting IoT security, how to build trust in IoT, security management for IoT systems, and the security of IoT devices themselves. This includes privacy problems, cyberattack threats, big data problems, and compliance with the applicable laws and regulations. Students should be able to understand the evolving risks and challenges in IoT.
Teaching IoT concepts to students with diverse backgrounds (both technical and non-technical) requires a variety of approaches. These include technical lectures, open discussions, continuous assessments such as assignments, projects and tests, laboratory exercises, blended learning, and many others. In the first week, students learn the fundamentals of IoT and the importance of IoT in society. Students also explore the ecosystem and architecture of IoT. Examples of IoT applications for smart built environments, including smart buildings, smart homes, and smart cities, were also introduced in the lectures to illustrate the technologies that have already been incorporated. These smart systems (refer to Figure 3) include security and access control, energy management, climate control (temperature, humidity, and ventilation), building management systems such as structural health monitoring and predictive maintenance, and automation systems (lighting control, fire detection, water monitoring, and many others). The drivers and emerging trends related to IoT-based smart buildings, as summarized in Table 2, were also discussed:
- Technology: enhances the application of existing methods and knowledge to make building systems work better or be easier to use.
- Health and Wellbeing: supports a healthy lifestyle through healthy noise management, optimal temperature, and clean air.
- Lifestyle: makes it possible to work and live in one place by combining work and living spaces and giving people access to the internet 24 hours a day.
- Climate Change: climate-resilient structures built to withstand natural disasters.
- Space Utilization: flexible buildings that are scaled to facilitate multiple functions.
- Security: enhances the procedures for the safety of the building and its occupants to minimize risks and reduce their consequences.
- Efficiency: reduces the consumption of natural resources and energy by enhancing the performance of the building's systems.
A blended learning (BL) approach was implemented in this session to encourage student engagement and active learning. The lecture is conducted online, and recorded videos are provided on an e-learning platform (UKMFolio), with a total duration of 1.5 to 2 hours for each session. Students may also download the lecture notes that accompany the videos. A post-quiz was conducted after the session to measure students' understanding of the topic taught; this also allows educators to evaluate their teaching and learning methods.
IOT DEVICES (SENSORS, ACTUATORS, AND MICROCONTROLLERS)
A variety of IoT-applicable sensors, actuators, and microcontrollers used in smart buildings are introduced in the second-week session. These devices should be capable of collecting environmental and operational data about the buildings. Topics include the types of sensors and actuators, the criteria for choosing them, the classification of sensors, and the main benefits of each. Some of the important smart building sensors are temperature and humidity sensors, motion sensors, smoke sensors, and light sensors. Different types of microcontroller development boards, including single boards such as the Arduino and Raspberry Pi, their applications, and integrated development environments (IDE) are also introduced. A demonstration and hands-on session is conducted to allow the students to learn how to use Arduino communication modules that can be applied to different IoT systems. Students can replicate an IoT prototype without the need for physical hardware by using the online simulator Tinkercad, which is suitable for beginners and those without technical backgrounds. In this session, two exercises with Arduino IoT-based projects were given to the students to familiarize them with the IoT devices and their functions. Arduino is a simple and versatile microcontroller platform that requires only a basic programming language such as C or C++. Sample source code is also provided for the students' reference. Figure 4 shows the students' work in Tinkercad for the smart lighting system and the buzzer sound for the alarm system.
COMMUNICATION PROTOCOLS AND CLOUD-IOT PLATFORM
IoT devices communicate using IoT protocols to ensure that data retrieved from sensors are received and understood by another device, a gateway, or an application. In the third week, students are introduced to the types of IoT communication protocols, their pros and cons, and their compatibilities. There are many IoT protocols, and each one of them has its own set of features. Each IoT protocol supports device-to-device, device-to-gateway, device-to-cloud, or a combination of these communications. The ideal IoT deployment methodology is determined by cost, power consumption requirements, geography, battery-operated options, and the presence of physical barriers. Some protocols work well for IoT systems within buildings, while others work well for IoT deployments across multiple buildings or outdoors.
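Bridging the week-two device exercises and the protocol discussion above, the course work itself is done in Arduino C/C++ inside Tinkercad; purely as a hedged illustration, a roughly equivalent device-side loop is sketched below in Python (MicroPython, as it would run on a physical board such as an ESP8266 rather than in Tinkercad). The pin numbers and threshold are assumptions, not values from the course material:

```python
# MicroPython sketch of a smart lighting loop: turn an LED on when the
# ambient light level falls below a threshold (ESP8266-style pins assumed).
from machine import ADC, Pin
import time

ldr = ADC(0)              # light-dependent resistor on the analog input
led = Pin(2, Pin.OUT)     # LED output pin (assumed board wiring)
DARK_THRESHOLD = 300      # on the 0-1023 ADC scale; tune to the circuit

while True:
    level = ldr.read()    # reading rises or falls with light, per wiring
    led.value(1 if level < DARK_THRESHOLD else 0)
    time.sleep(0.5)       # poll the sensor twice per second
```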
In addition, students explore the concepts, infrastructure, and capabilities of a cloud-based IoT platform. IoT platforms are middleware that facilitate data flow throughout the network and connect IoT devices to the cloud. There are numerous IoT platforms on the market, including Cisco IoT Cloud Connect, Microsoft Azure IoT Suite, IBM Watson IoT Platform, Google Cloud IoT Platform, and AWS IoT Platform. For a better understanding of IoT architecture, students learn how to develop a simple IoT application with Node-RED in the second session. Node-RED is a powerful and easy-to-use programming platform for the simulation of IoT scenarios. It is open-source software that connects the input and output (I/O) of devices, cloud-based systems, databases, and application programming interfaces (APIs). Figure 5(a) shows a sample of Node-RED programming flows used to display the IoT data, and Figure 5(b) shows the gauge and chart display in the Node-RED dashboard in real time. In the final week, students learn about the risks and challenges of IoT, including threats and attacks, risk management, and compliance with laws and regulations. Exploratory and case study activities are conducted during this session. The discussion covers several categories, including data and applications, technology acceptance, security and privacy, infrastructure, network and physical environment, and finance, as shown in Table 3. IoT devices without virus or malware protection are vulnerable to becoming bots that attack other network devices. The routing and forwarding functions of IoT devices can potentially be taken over by hackers, who can also gain access to sensitive information that IoT devices collect and transmit. Therefore, understanding risk management enables users to employ IoT technologies effectively while minimizing risks such as data loss, security breaches, system failures, and other scenarios.
STUDENT FEEDBACK AND PERFORMANCE
This section used primary data from the final-year Civil Engineering students who enrolled in the elective Smart Building course to determine their understanding of IoT and the challenges of implementing it. In this four-week experience, out of a 14-week course in total, students take part in a series of lectures and practical exercises that introduce them to all the fundamental IoT concepts. Direct and indirect assessments from the assignment, the mid-term examination and the teaching survey were analyzed to measure students' performance and the challenges and difficulties they struggled with. The students were assigned to write a report identifying the sensors available in smartphones and discussing their functions and applications in the context of smart buildings.
During class lectures, sensors on smartphones were not discussed; students were only exposed to sensor technology in general. This assignment is motivated by the fact that smartphones are arguably the most versatile IoT devices that we use daily. The investigation of mobile sensing is well suited to this introductory course, as smartphones are equipped with multiple communication interfaces (Bluetooth, Wi-Fi, and cellular communication, including 4G and/or 5G) and sensors for user identification, monitoring, tracking, localization, and even personality traits. Table 4 shows the rubric for the assignment, with two performance indicators chosen. By assigning this as an independent study project, the first indicator evaluates the students' ability to understand a smart sensor in the IoT. The other indicator measures students' understanding of the types of sensors that are commonly used in smart buildings. The outcomes scored by the rubric were separated into five performance levels ranging from excellent to poor. Figure 6 summarizes the results of the assessment for the IoT sensing device assignment. A total of 85% of students (refer to Figure 6(a)) were rated excellent for providing a detailed discussion of the types of sensors integrated in smartphones and their functions. Figure 6(b) shows that 45% of students discussed smart building sensing systems, including their concepts, features, and applications, at an excellent level. These results show that the students have a good ability to understand the main components of IoT devices. After the four-week session, all of the students were given a mid-term examination. In the context of smart buildings, five open-ended questions were designed, concentrating on the IoT ecosystem, devices, communication protocols, the IoT cloud platform, and security and risk management. In Figure 7, the grading statistics were reviewed to determine how well the newly introduced IoT topics were received by the students. It is observed that more than 90% of the students achieved excellent scores on the questions related to IoT devices (Q2), IoT-based cloud platforms (Q4), and IoT security and risk management (Q5). The results show that the students were able to understand at least three basic components of the fundamental IoT architecture, despite their limited background knowledge at the beginning of the course. However, 55% of students scored a fair grade on Q1, and 50% of students scored below average on Q3. Q1 relates to the IoT ecosystem in smart buildings, while Q3 focuses on IoT communication protocols. It can be concluded that students have a fair understanding of the transformation of conventional buildings into smart and sustainable buildings. In contrast, the students had difficulty understanding the IoT communication protocols, for which a technical background in building automation networks is needed. The findings from this study have significant implications for teaching and learning approaches.
At the end of the four-week class session, the teaching survey was conducted, and the students responded with both quantitative and qualitative feedback on the IoT module. Questions about teaching and learning approaches and knowledge content are included in the quantitative survey. The scores of the teaching survey respondents are tabulated in Table 5 and Table 6; the outcome was separated into five scales ranging from very dissatisfied to very satisfied, and the highest possible and weighted average scores are given in those tables. The course achieved an excellent overall rating in this teaching survey. One teaching survey comment, "The lecturer provides activities at the end of each class that really help the students understand the topic", shows the satisfaction of the students. According to some students, one of the IoT module's most valued aspects was their experience learning with Node-RED and Tinkercad, as reflected in the comment, "Lecturers have provided various platforms to give new exposure, and students can learn about IoT such as Node-RED and Tinkercad". The hands-on experience added an extra dimension to the course, which was interactive and interesting. In general, the module assists students in acquiring IoT knowledge. The assessment results show that the module was successful in providing students with a solid technical foundation for the IoT.
DISCUSSION AND CONCLUSION
In conclusion, a comprehensive IoT learning module has been presented, and student assessments, feedback, and issues have been analyzed and identified. The findings also show that there is room for improvement in future offerings of the course. Communication protocols are essential components of the IoT ecosystem; while 50% of students were found to be performing below average on this topic, it is hoped that this percentage will be significantly lower in future classes. To improve this, more detailed explanations will be provided through interactive teaching and learning methods, along with activities demonstrating the importance of IoT communication protocols, such as a simple hands-on project using a microcontroller, the NodeMCU ESP8266, with a Wi-Fi connection as the wireless transport layer and MQTT as the data communication protocol to transmit information between the device and the cloud. The proposed IoT learning module was successful in providing students of diverse backgrounds with a solid foundation in IoT technology. Adding demonstrations, laboratories, and/or hands-on activities would help them further understand and learn new skills, making them even more ready for the IoT.
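As a concrete sketch of the NodeMCU-plus-MQTT improvement suggested above, the following Python publisher sends simulated readings to a broker using the widely used paho-mqtt client (1.x API shown; version 2 additionally requires a callback-API-version argument to Client()). The broker host and topic are placeholders, not course infrastructure:

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"  # placeholder public broker
TOPIC = "smartbuilding/room101/environment"  # invented topic name

client = mqtt.Client()
client.connect(BROKER, 1883)   # 1883 is the standard unencrypted MQTT port
client.loop_start()            # handle network I/O on a background thread

# Publish a few simulated sensor readings, one per minute.
for _ in range(3):
    payload = json.dumps({"temperature_c": 22.4,
                          "humidity_pct": 48.0,
                          "timestamp": time.time()})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(60)

client.loop_stop()
client.disconnect()
```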
FIGURE 1. Trends in IoT and smart building research from the year 2018 to mid-2023

IOT LEARNING MODULE IN SMART BUILDING COURSE

In this paper, an introduction to IoT technology was integrated into the Smart Building elective course for final-year Bachelor of Engineering students in the Department of Civil Engineering, Faculty of Engineering and Built Environment at Universiti Kebangsaan Malaysia. The course aims to expose students to the technology and tools for operational control of smart buildings. Smart buildings use internet-connected devices to provide occupants with safety and comfort while monitoring and controlling many aspects of the building environment, such as energy efficiency and security. These devices are part of the IoT, which consists of electronic sensors and actuators connected to the internet through gateways, allowing real-time data collection. Students therefore need to understand the IoT ecosystem and how it can be applied to create a smarter building. The framework for the IoT teaching and learning approach in the Smart Building course is shown in Figure 2.

FIGURE 2. Smart Building Course IoT Teaching and Learning Framework
FIGURE 3. IoT applications in buildings and homes
FIGURE 4. Sample of IoT-based projects in Tinkercad
FIGURE 5. Node-RED IoT Platform: (a) a sample of Node-RED programming flows to display the IoT data; (b) the gauge and chart display in the Node-RED dashboard in real time
FIGURE 6. Results of the IoT sensing device assignment
TABLE 1. Selected research on IoT for the development of smart buildings
TABLE 3. IoT risk factors in the smart built environment
Auditory cortex activity related to perceptual awareness versus masking of tone sequences

Sequences of repeating tones can be masked by other tones of different frequency. Nevertheless, when these tone sequences are perceived, a prominent neural response in the auditory cortex is evoked by each tone of the sequence. When the targets are detected based on their isochrony, participants know that they are listening to the target once they have detected it. To explore whether the neural activity is more closely related to this detection task or to perceptual awareness, this magnetoencephalography (MEG) study used targets that could only be identified with cues provided after or before the masked target. In experiment 1, multiple mono-tone streams with jittered inter-stimulus intervals were used, and the tone frequency of the target was indicated by a cue. Results showed no differential auditory cortex activity between hit and miss trials with post-stimulus cues; a late negative response for hit trials was observed only for pre-stimulus cues, suggesting a task-related component. Since experiment 1 provided no evidence for a link between a difference response and tone awareness, experiment 2 was designed to probe whether the detection of tone streams was linked to a difference response in auditory cortex. Random-tone sequences were presented in the presence of a multi-tone masker, and each sequence was repeated without the masker thereafter. In experiment 2, targets presented in the masker evoked a prominent difference wave for hit compared to miss trials. These results suggest that perceptual awareness of tone streams is linked to neural activity in auditory cortex.

Introduction

The presence of other sound sources can impair the perception of a target sound, even when the sounds do not overlap in their tonotopic representation in the cochlea (Kidd et al., 2008). This informational masking has, for example, been studied with single tones presented together with multiple masker tones (Neff and Green, 1987). When the target is a sequence of repeated tones, masking is prominently reduced when the masker tones are newly randomized upon each target repetition, but not when the same masker is repeated together with the target (Kidd Jr. et al., 1994). This reduced masking has been ascribed to the perceptual grouping of the target tones into an auditory stream.

Multi-tone masking with repeated target tones is a powerful paradigm to study perceptual awareness, as the same physical stimulus can produce very different, salient percepts. Studies using magnetoencephalography (MEG) and electroencephalography (EEG) demonstrated that such repeated target tones evoke a negative-going wave in the auditory cortex when participants indicate that they are aware of the target stream, but not when it is masked (Giani et al., 2015; Gutschalk et al., 2008; Hausfeld et al., 2017). With reference to this experimental setup, this wave has been labeled the awareness-related negativity (ARN). In the paradigms used in MEG so far, the target could be identified, first, by an identical tone repetition and, second, by the constant repetition interval that segregated it from the irregular intervals between masker tones. In this setup, participants therefore immediately know that they detected the task-relevant tones, and it is possible that this task relevance then determines the processing of these tones in the auditory cortex.
Here, we probed the role of immediate target identification in the generation of the ARN with a modified paradigm, in which participants were informed about the target only after the target tones had been presented under informational masking. To this end, we used two different stimulus configurations: in the first, we jittered the target repetition by the same amount as the masker tones, and repeated the masker tones instead of randomizing them anew in each repetition. Because the target could be identified solely based on tone frequency in this case, and thus based on a single tone, a second paradigm was developed in which the target-tone frequency varied randomly, such that the target could only be identified when the masked target sequence was correctly matched to the unmasked sequence presented thereafter. Results showed that hit and miss trials differed only when participants correctly identified the whole target sequence, but not when they identified the tone frequency based on post-stimulus cues.

Participants

The study was approved by the ethical review board of Heidelberg University Medical School. All experiments were performed in accordance with the Declaration of Helsinki (2013 revision). Participants provided written informed consent and received payment for their participation. Exclusion criteria were any history of audiological, neurological, or psychiatric disorder, and magnetic implants that disturb MEG recordings. In both experiments, a screening of individual task performance was first obtained in a psychoacoustic test, typically on the day before the MEG recordings. Only listeners with d′ ≥ 0.9 were included in the MEG part of the respective experiment. In experiment 1, 37 listeners participated in the psychoacoustic test, of whom 29 subsequently participated in the main MEG experiment. Nine of those were excluded from the analysis because of insufficient signal-to-noise ratio of the MEG data (n = 1), insufficient task performance (d′ < 0.9) (n = 7), or data loss (n = 1) within the MEG session. The remaining 20 participants (10 females; 4 left-handers; mean age: 24.5 years; range: 19-39 y) were included in the analysis of experiment 1. In experiment 2, 20 listeners participated in the psychoacoustic test, and 15 of them were included in the main MEG experiment. Two participants were excluded because of insufficient performance (d′ < 0.9). Overall, 13 participants (4 females; one left-hander; mean age: 25 y; range: 20-39 y) were included in the analysis of experiment 2. Two of them had already participated in experiment 1.

Experiment 1

The multi-tone clouds were based on 9 frequency bands, equally spaced on a logarithmic scale between 200 and 5000 Hz (frequency spacing corresponding to approximately 6.2 semitones). The target frequency was restricted to one of the three middle frequencies (699 Hz, 1000 Hz, or 1430 Hz) to avoid the major frequency-dependent differences in detection rate observed for higher (and to some degree for lower) frequencies (Dykstra and Gutschalk, 2015; Gutschalk et al., 2008). Each of the remaining eight masker-tone frequencies was then randomized within its frequency band in a range of ±1.5 semitones. The randomization was restricted such that a protected-frequency region of ±2/3 of an octave around the target tones was avoided by the masker tones.
The masker tones next to the target were therefore generally chosen at the edge of the protected region.

Fig. 1. (A) Schematic spectrogram of target-present trials (upper part) and target-absent trials (lower part) for post- (left) and pre-stimulus cues (right), with every bar representing a pure tone. (B) Mean values of hit rate (green), false-alarm rate (red), and detectability (d′, black) for the two probed conditions; error bars indicate the standard error of the mean across listeners; only listeners included in the MEG analysis are evaluated here (n = 20). (C) Mean hit rates and standard errors plotted separately for the three target frequencies (F1: 699 Hz; F2: 1000 Hz; F3: 1430 Hz) and for the post-cue (gray) and pre-cue (black) conditions.

The multi-tone cloud comprised 5 tones of each target and masker frequency. Tone duration was 100 ms, including 5-ms raised-cosine ramps at the beginning and end. The average inter-stimulus interval (ISI) was 500 ms (i.e., 600 ms SOA from onset to onset), and the onset of each target and masker tone was uniformly jittered by ±220 ms. There were 5 tones for the target and 8 × 5 tones for the masker. In catch trials, 8 masker tones were configured around a protected region for one of the 3 target frequencies, but the target tones themselves were omitted. In the experiment, each target frequency was presented with equal probability, with the constraint that more than two repetitions were not permitted. For the psychoacoustic screening test, 100 trials of the post-stimulus cue condition were presented, with 50% target and 50% catch trials. For the main MEG experiment, 216 trials were presented per set, including 25% catch trials. Each scene lasted 6120 ms, with 1200 ms of silence at the beginning and 120 ms of silence at the end of each trial. The cue comprised two tones of the target frequency with a fixed 500-ms inter-stimulus interval and a minimum interval of 830 ms to the beginning/end of the multi-tone cloud (Fig. 1A).

Experiment 2

The temporal configuration of the tone cloud was similar to experiment 1, but the frequency of both target and masker tones changed in each 600-ms time interval. The frequency of the first target tone was randomly drawn from a logarithmic scale in the range from 400 to 2000 Hz. Each following target tone was randomized in a range of ±2 semitones around the preceding tone, with the constraint that the randomization was repeated when the value fell outside the range 400-2000 Hz. The masker tones were randomly drawn from a logarithmic scale in the range from 150 to 5000 Hz, excluding within each 600-ms time window a range of ±2/3 of an octave around the current target tone and ±2/3 of an octave around the directly preceding target tone. Six masker tones were drawn per time interval, with the additional constraint that each tone had a minimum distance of one semitone to the tones in the same and the previous time interval. (The different randomization for target and masker tones was based on the assumption that perceived streams within a tone cloud typically comprise tones that are close in frequency. If the same procedure had been used for the masker tones, however, the overall seven streams could not have complied with the frequency differences between streams outlined above at the same time. With the randomization used for the masker tones, the tone cloud was so dense that tones in subsequent intervals still had neighbors within a similar frequency range as in the target stream, allowing alternative lines to be heard by the listeners.) The post-stimulus probe sequence replayed the target without the masker, after a minimum silent interval of 1160 ms.
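To make the randomization rules above concrete, the following Python sketch draws the target and masker frequencies for one experiment-2-style trial under the stated constraints. It is a simplified illustration, not the authors' stimulus code; in particular, the one-semitone minimum spacing between masker tones is omitted.

```python
# Sketch of the experiment-2 target/masker frequency randomization (simplified).
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi):
    """Draw a frequency uniformly on a logarithmic scale between lo and hi (Hz)."""
    return np.exp(rng.uniform(np.log(lo), np.log(hi)))

# Target: first tone from 400-2000 Hz, then a random walk of +/- 2 semitones,
# resampling whenever a step would leave the 400-2000 Hz range.
target = [log_uniform(400.0, 2000.0)]
for _ in range(4):
    f = target[-1] * 2.0 ** (rng.uniform(-2, 2) / 12.0)
    while not (400.0 <= f <= 2000.0):
        f = target[-1] * 2.0 ** (rng.uniform(-2, 2) / 12.0)
    target.append(f)

# Masker: six tones per 600-ms interval from 150-5000 Hz, excluding
# +/- 2/3 octave around the current and the directly preceding target tone.
def masker_tones(cur, prev, n=6):
    tones = []
    while len(tones) < n:
        f = log_uniform(150.0, 5000.0)
        if abs(np.log2(f / cur)) > 2 / 3 and (prev is None or abs(np.log2(f / prev)) > 2 / 3):
            tones.append(f)
    return tones

masker = [masker_tones(t, p) for t, p in zip(target, [None] + target[:-1])]
```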
Two different kinds of catch trials were used to control that listeners could not rely on only the first or last tone of the target sequence. Standard catch trials did not comprise a target sequence, but instead a seventh masker tone randomized in the same way; the cue sequence was generated as described for the target sequences above. Alternative catch trials comprised a tone cloud with a structure identical to that described above, but the cue sequence differed from the masked target in the three middle tones, while the first and last tones were retained. For the three middle tones, the tone steps were randomized in a 2-semitone range with the constraint that the frequency step was always in the opposite direction to that of the original sequence. The timing was identical to the masked target sequence. Overall, 100 trials were presented in the psychoacoustic test (70 target trials, 15 standard catch trials, and 15 alternative catch trials). For the main MEG experiment, two blocks of 200 trials were presented (70% target trials, 15% standard catch trials, and 15% alternative catch trials). The sampling rate was generally 48,000 Hz.

Procedures and data acquisition

Each listener started with a psychoacoustic test. At first, the task was explained and 20 practice trials were provided to get used to the task; this part could be repeated if required. Then the number of trials specified above was presented and d′ was calculated. Only listeners whose detection and false-alarm rates resulted in a sensitivity index d′ > 0.9 were then asked to participate in the main MEG experiment. Stimuli were presented binaurally in a silent room with circumaural headphones (Sennheiser HDA200), with the sound intensity adjusted to a comfortable listening level. In the MEG experiment, sounds were generated and digital-to-analog converted with an ADI-8 DS external audio interface (RME, Haimhausen) controlled with SoundMexPro software (HörTech, Oldenburg) in the Matlab environment. The analog signal was passed through programmable PA5 attenuators and then amplified with an HB7 headphone buffer (Tucker-Davis Technologies, Alachua) before being presented with ER3 headphones (Etymotic Research, Elk Grove Village) via 1-m-long plastic tubes and foam earpieces. Stimuli were presented binaurally at a comfortable listening level. After each trial, listeners were prompted via visual instruction to indicate whether the cued target was present in the tone cloud or not by pressing one of two buttons on an optical response box (Current Designs Inc.). The minimum inter-stimulus interval between two trials was 1300 ms in experiment 1 and 500 ms in experiment 2. The post-stimulus pause was 500 ms, but in experiment 1 an additional 800 ms of visual feedback was presented. In experiment 1, the first set used post-stimulus cues and the second set pre-stimulus cues only. The average duration of each set was 32.4 min in both experiments. The MEG was acquired with a Neuromag-122 whole-head system (MEGIN, Helsinki) with 122 planar gradiometers arranged in pairs at 61 positions. The data were recorded continuously with a sampling rate of 1000 Hz, direct coupled, and with a 330-Hz low-pass filter. Prior to recording, four head-position-indicator coils were fixed to the subject's head and digitized with reference to the nasion and two pre-auricular points, together with 100 additional points equally distributed over the head surface and face. The position of the coils relative to the dewar was then determined at the beginning of each recorded set.
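The screening criterion (d′ > 0.9), together with the corrections for extreme rates described under "Statistical analysis" below, can be summarized in a few lines. The following is a minimal sketch, not the authors' code:

```python
# Sketch of the d-prime screening computation, with the corrections for
# 100% hit rates and 0% false-alarm rates described in the statistical analysis.
from scipy.stats import norm

def d_prime(hits, n_target, false_alarms, n_catch):
    hit_rate = hits / n_target
    fa_rate = false_alarms / n_catch
    if hit_rate == 1.0:                       # correction for perfect hit rate
        hit_rate = (n_target - 0.5) / n_target
    if fa_rate == 0.0:                        # correction for zero false alarms
        fa_rate = 0.5 / n_catch
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical listener: 40/50 hits, 10/50 false alarms -> d' ~ 1.68, included
assert d_prime(40, 50, 10, 50) > 0.9
```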
Data processing and analysis

Data analysis was performed with BESA 5.1.8 (BESA GmbH, Gräfelfing, Germany). The target tones were averaged in an interval from 200 ms before to 400 ms after target-tone onset, subtracting a baseline in the interval from 100 ms before tone onset to tone onset. Masked target tones and cue tones were averaged separately. When averaged according to behavior, all target tones in one trial were averaged together in the hit or miss category, respectively. Noise-contaminated epochs were identified with the interactive "artifact scan" tool; rejection thresholds were individually adjusted such that at least 90% of the intervals were accepted. Two dipoles (one in each hemisphere) were then fitted in a ~20-ms interval centered on the N1 evoked by the cue tones in the pre-cue condition (Fig. 2A). These dipoles were then applied to both sets as a spatial filter to obtain source waveforms for each condition. The spatial filter additionally included two regional sources in the position of the eyes to model artifacts caused by blinks and eye movements (Ille et al., 2002). Drifts and slow activity (caused, e.g., by passing streetcars) were individually modeled for each condition by including the first component of a principal component analysis (PCA) in the time interval 200 ms before tone onset and 350-400 ms after tone onset. Source waveforms were low-pass filtered in MATLAB 2007 by applying a 2nd-order zero-phase-shift Butterworth filter with a cut-off frequency of 20 Hz.

Statistical analysis

For statistical analysis of the MEG data, average amplitudes across the time interval 75-175 ms were measured in the source waveforms representing activity in the left and right auditory cortex. Statistical analysis of target trials was based on a repeated-measures ANOVA with the factors detection (hit, miss) and hemisphere (right, left). The main hypothesis to be tested was that of a stronger negativity in the 75-175 ms time range for hit compared to miss trials, reflected by a main effect of detection in the ANOVA. The significance level was p < 0.05 (two-tailed); the polarity of the change was separately confirmed in the plots presented. To test for priming effects on the post-stimulus cue tones, an ANOVA was calculated with the same parameters. For the behavioral analysis, hit and miss rates were based on all trials (including those rejected for the MEG analysis). For the calculation of d′, a hit rate of (n − 0.5)/n was inserted in case of a 100% hit rate, and a false-alarm rate of 0.5/n was inserted in case of a 0% false-alarm rate, with n being the number of target-present or target-absent trials, respectively (Macmillan and Kaplan, 1985).

Experiment 1

In experiment 1, both target and masker comprised repeated tones with the same temporal jitter (Fig. 1A). All participants were presented, first, with a set of stimuli in which the tone frequency was indicated after the masked target (post cue) and, second, with a set in which the tone frequency was indicated before the masked sequence (pre cue). Performance was prominently better for pre- compared to post-stimulus cues (Fig. 1C), with higher hit rates (97.04% vs 75.99%; F1,19 = 54.02; P < 0.001) and lower false-alarm rates (4.17% vs 16.30%; F1,19 = 33.88; P < 0.001). The comparison of target frequencies reveals that the middle frequency was somewhat more frequently detected in the post-cue condition (mean hit rates: 69.72% at 699 Hz and 84.54% at 1000 Hz; see Fig. 1C for all three frequencies). Dipole source waveforms for the target tones (Fig. 2B, C)
were estimated with dipoles in the left and right auditory cortex, fitted to the N1m evoked by the unmasked target tones (Fig. 2A). Statistical results are summarized in Table 1: no difference in source activity was observed between hit and miss trials in the post-cue condition. For the pre-cue condition, in contrast, the negative-going source activity was longer and significantly stronger for hit compared to miss trials. Note that the source activity for miss trials is strongly overlaid with alpha activity because of the small number of trials. Source amplitudes for hit trials were also significantly larger in the pre-cue condition than in the post-cue condition. A closer comparison of the overlaid waveforms (Fig. 2D) suggests that this difference emerges around and after the peak of the N1m, which is clearly recognizable only in the post-cue condition, whereas the onset up to the N1m peak is very similar. It therefore appears that the difference between pre- and post-cue hit trials is mostly caused by a separate long-latency component subsequent to the N1m, with a peak latency around 180 ms. We also compared the response evoked by the post-stimulus cue tones between hit and miss trials (Fig. 2E, F), to explore whether a priming effect was observed for detected targets. Results showed a stronger negative-going response for hit compared to miss trials (detection: F1,19 = 5.61; P = 0.0286; detection × hemisphere: F1,19 = 1.28; P = 0.2714).

The results of the post-cue condition in experiment 1 argue against an interpretation of the ARN as a neural correlate of the awareness of single tones under informational masking. One possibility, suggested by the pre-cue condition, is that enhanced negative-going activity is merely an indication of task-related attention. Alternatively, it could be that the pre-cue promotes the perception of the target frequency as one coherent stream, concurrently enhancing behavioral performance in the detection task. The latter interpretation cannot be tested with the data from experiment 1; it would predict that an ARN can be observed with post-stimulus cues, provided that the task requires that major parts of the target stream are perceived, rather than just single tones of that stream, as was sufficient in experiment 1.

Experiment 2

To probe the hypothesis that the ARN is related to the awareness of auditory streams, in experiment 2 random tone sequences were used as targets, which were newly randomized on each trial and were therefore unknown to participants when presented within the multi-tone masker (Fig. 3). Only post-stimulus cues were used in experiment 2, and they replayed the complete target sequence. Whereas the hit rates were lower than in both conditions of the first experiment (mean hit rate: 47.31%), the false-alarm rates when no target was present were in the same range as in the pre-cue condition of experiment 1 (3.46%), leading to an average d′ of 1.783. In a subset of trials, a target was present but a different sequence was used as the post-stimulus cue. This post-stimulus cue adopted the first and last tones of the target sequence, but comprised three different tones in between.
These trials were included to control for the possibility that the sequence could be identified by a subset of tones, in particular by the first or last tones; the false-alarm rate was 9.10% in this case, and the difference between the false-alarm rates for target-absent and different-sequence trials was highly significant (F1,13 = 26.61; p = 0.0002). This finding suggests that the presence of a target-like stream makes correct rejection more difficult than when no target-like stream is present, but the false-alarm rate for target-like streams was still much lower than the hit rate for correct targets.

Source waveforms differed between hit and miss trials (Table 2). The response to correct rejections of different-sequence trials lies in between hit and miss trials. This is consistent with the assumption that the tone sequence embedded in the masker was perceived in about half of the trials, but that the correct behavioral response (rejection) does not dissociate between trials where the tone sequence was heard and those where it was not. In contrast to experiment 1, the comparison of responses evoked by post-stimulus cue tones revealed no significant effect of detection (F1,19 = 1.21; P = 0.293; detection × hemisphere: F1,19 = 0.13; P = 0.7247; cf. Fig. 4D, E). If the ARN were related to serial streams rather than to single tones, a difference would be expected between the responses to the first and subsequent tones of the sequence (Giani et al., 2015; Wiegand and Gutschalk, 2012). To test this prediction, hit and miss trials were separately analyzed for the five subsequent tones of the target sequences. The results (Fig. 5) are generally in line with this consideration: the difference between hit and miss trials is significant for tones 2-5 and remains a non-significant numerical trend for tone 1 (Table 2). Whereas the N1 decreases from the first to the second tone of the sequence in miss trials, it remains at a steady level in hit trials (Fig. 5B). Moreover, from the second tone on, the evoked negativity for hit trials appears more broad-based than the N1 (Fig. 5C), and there appears to be a second peak with a latency around 180 ms.

Discussion

The results of experiment 2 of this study show that the long-latency negativity in auditory cortex, referred to here as the ARN (Gutschalk et al., 2008), covaries with perceptual awareness of random-tone streams in a multi-tone masker background. In contrast, experiment 1 did not provide such evidence of a difference response between hit and miss trials in a setting where the target was identified based on its frequency, provided by a post-stimulus cue. Since this task did not require that the target tones be perceived as a segregated stream, experiment 1 suggests that perceptual awareness of a tone's presence inside a multi-tone masker is not necessarily coupled to the ARN. The difference response observed for the pre-cue condition fulfills criteria for task-related attentional enhancement; it may also be associated with modulation of the perceptual organization, but this cannot be determined in retrospect. In contrast to previous studies, participants did not know whether they had detected a target while listening to the masked stimulus before the post-stimulus cue. In experiment 1, there were multiple mono-tone streams with random onsets. In experiment 2, the target-tone sequence was random but with a maximum distance of two semitones between directly subsequent tones, and a minimum distance of four semitones between target and masker tones in the same time interval.
The frequency distance between target and masker tones, which is one important parameter for segregating a tone sequence from a multi-tone masker (Micheyl et al., 2007), was similar in both experiments. However, the similarity between target and masker structure (Durlach et al., 2003) was lower in experiment 2, because the randomization was less constrained for the masker tones. It is therefore possible that the target was segregated more easily in experiment 2; however, no behavioral data regarding the perception of auditory streams are available for comparison from experiment 1.

Tone versus tone-pattern masking

While target tones were usually not cued in traditional informational-masking paradigms, the same, known frequency was typically used (Kidd Jr. et al., 1994; Neff and Green, 1987), which allows for a full focus of attention on that frequency. At least for the stimuli in experiment 1 of the present study, the question may therefore be raised whether missed target tones were really "masked", i.e., not perceived, or whether the target tones were simply difficult to single out from the cloud in retrospect, when their frequency was not known beforehand. Miss trials in the post-cue condition of experiment 1 might then be caused by a limitation of short-term memory rather than a lack of perception. Conversely, the higher performance in the pre-cue condition could readily be explained by the lower memory load required for this task. The enhanced negativity in the pre-cue task additionally demonstrates that the target tones are processed differently after pre-cues. Whether this also goes along with a different perception or perceptual organization cannot be dissociated based on the reporting task used. In contrast, we consider it very unlikely that the target-tone patterns in experiment 2 could be recognized in the context of the masker tones without already being organized into a distinct perceptual stream during perception and encoding in short-term memory. The computational load would here be much higher than for the recognition of a single tone, which was already difficult.

Based on these considerations, we suggest that there are two levels of informational masking. First, there is the masking that has traditionally been described with multi-tone maskers, where single (or repeated) tones are not perceived at all (Kidd Jr. et al., 1994; Neff and Green, 1987). Second, the higher-level quality of a stream, which is readily perceived without a masker, can be masked separately, without necessarily masking the single tones comprising the stream. We suggest that such tone-pattern masking dominates in experiment 2 of this study. Based on these data, one possible interpretation of the ARN is that it reflects the perception of streams rather than of single tones. Tone-pattern masking appears as the opposite side of stream (or figure) formation from a random-tone background (Micheyl et al., 2007; Teki et al., 2011), and it has been pointed out that a major part of, e.g., speech masking is likely related to the disruption of sequential stream formation (Kidd et al., 2008). While a listener typically remains aware of all tones in a classical two-tone streaming paradigm (Van Noorden, 1975), hearing out a stream from a multi-tone masker typically coincides with perceptual awareness of the tones (Micheyl et al., 2007). If or how these tones are perceived when the target stream is masked remains difficult to explore.
In particular when the masker tones are presented synchronously (Kidd et al., 1994; Micheyl et al., 2007; Teki et al., 2011), the masking is explained not only by the lack of stream formation, but also by the alternative grouping of the tones into serial chords. The temporal jitter used in this and previous studies (Elhilali et al., 2009; Gutschalk et al., 2008) reduces such grouping and allows for easier glimpsing of the single tones (Demany et al., 2011), which makes it easier to hear out single tones in the absence of stream formation, as we think is the case for the target tones in experiment 1 of this study. Still, the informational masking of tones and streams may often co-occur. In the present study, tones evoked a clear N1 when the tone pattern was not detected, and a larger, more broad-based ARN in detected trials. Other studies with similar multi-tone maskers and softer target tones did not observe an N1 for undetected trials (Dykstra and Gutschalk, 2015; Gutschalk et al., 2008), and similarly an N1 was observed only for detected but not for missed tones in noise (Hillyard et al., 1971). Possibly, ARN-related processes code for tone context, which can range from a solitary event to one tone among multiple similar tones in a multi-tone cloud, reflecting a continuum from salient to masked. Thus, when a tone (or speech) pattern is masked, its rhythm, melody (or prosody) may disappear, but the tone may still be perceived in a multi-tone context and evoke a small N1, as in the present study.

Interaction of attention and perceptual organization

The difference between hit trials in the pre-cue and post-cue conditions of experiment 1 could simply be explained by attention along the tonotopic axis (Riecke et al., 2018) that evokes an additional component after the N1, previously referred to as the negative difference wave Nd (Hansen and Hillyard, 1980; Rif et al., 1991) or processing negativity (Näätänen et al., 1978). However, it has been suggested that the Nd already operates on auditory streams rather than on pure tonotopy (Alain and Woods, 1994). This interpretation has received further support from similar, negative-going response enhancements observed for attended speech streams (Ding et al., 2014; Ding and Simon, 2012; Power et al., 2012) and for active versus passive listening to stochastic figure-ground stimuli (O'Sullivan et al., 2015). Along these lines, the longer-latency negativity for pre-cue hits could mean that the target tones were organized into a perceptual stream. More generally, it is possible that this Nd is the longer-latency component of the ARN, which is related to active listening to auditory streams. In experiment 2 and in earlier studies, the ARN additionally included the latency range of the N1 (Dykstra and Gutschalk, 2015; Gutschalk et al., 2008), and the earlier part of the ARN showed N1-like behavior with respect to stimulus lateralization (Königs and Gutschalk, 2012). We therefore expect that the N1 part of the ARN is also related to whether and how a stream of tones is perceived. The buildup of the response enhancement for hit trials in experiment 2 shows that the longer-latency part of the ARN is present only from the second tone on, but the difference between hit and miss trials is driven by both time intervals, as the N1 decreases for miss trials from the second tone on.
Previous studies observed a different build-up pattern, where the second (and subsequent) tones, but not the first tone of the sequence, evoked an early ARN (Giani et al., 2015; Wiegand and Gutschalk, 2012). The difference is probably related to the target onset coinciding with the masker onset in the present experiments, whereas the masker started before the targets in the previous studies, so that the response was already adapted. Once the target is perceived as a separate stream, however, the ARN appears to partly re-adapt to the longer time intervals of the target stream, similar to what has been suggested for streaming of two-tone sequences (Gutschalk et al., 2005). Overall, the representation of auditory streams in different background conditions remains closely coupled to selective attention, and two different concepts have been entertained for the negative difference response: one is response enhancement by the focus of attention (Ding and Simon, 2012; Elhilali et al., 2009; Hansen and Hillyard, 1980); the other, the ARN, assumes a direct relationship to perceptual awareness (Gutschalk et al., 2008; Snyder et al., 2015). The question then remains which of the two concepts captures the difference response better: if the response were required for perceptual awareness of a stream, the stream should be perceived only when this type of response is evoked, but not otherwise. If the response reflects enhancement by attention, then the stream may already be perceived before, and the perception may or may not change as the response is enhanced by attention. This dissociation has not been successfully tackled to date, and a third view is in fact that the two cannot be dissociated at all (Posner, 1994). One argument against a purely attention-based, task-related interpretation is that similar activity in secondary auditory cortex is observed when a single stream is presented in a passive setting (Dykstra et al., 2016; Gutschalk et al., 2008), when no major activity is observed in areas outside of the auditory cortex (Wiegand et al., 2018). While it has been shown that distraction of attention to another modality can further reduce the N1 in such a setting, the response was still present even for high visual loads (Molloy et al., 2015). We therefore suggest that this coupling of attention and perceptual awareness is found in particular in situations of perceptual competition, where multiple perceptual interpretations exist (Desimone and Duncan, 1995). However, the same network in auditory cortex may be active during the perception of a single stream in silence, without a requirement for selective attention; we therefore suggest that the ARN requires attention in the experiments presented here, but is at the same time closely coupled to the perceptual awareness of auditory streams.

Data and code availability statement

Single-subject source-level data and Matlab readers are available on heiDATA (https://doi.org/10.11588/data/Y8UEOY).

Declaration of Competing Interest

The authors declare no competing financial interests.
Improved unsupervised physics-informed deep learning for intravoxel-incoherent motion modeling and evaluation in pancreatic cancer patients

${\bf Purpose}$: Earlier work showed that IVIM-NET$_{orig}$, an unsupervised physics-informed deep neural network, was faster and more accurate than other state-of-the-art intravoxel-incoherent motion (IVIM) fitting approaches to DWI. This study presents IVIM-NET$_{optim}$, which overcomes IVIM-NET$_{orig}$'s shortcomings. ${\bf Method}$: In simulations (SNR = 20), the accuracy, independence and consistency of IVIM-NET were evaluated for combinations of hyperparameters (fit S0, constraints, network architecture, number of hidden layers, dropout, batch normalization, learning rate) by calculating the NRMSE, Spearman's $\rho$, and the coefficient of variation (CV$_{NET}$), respectively. The best-performing network, IVIM-NET$_{optim}$, was compared to least squares (LS) and a Bayesian approach at different SNRs. IVIM-NET$_{optim}$'s performance was evaluated in 23 pancreatic ductal adenocarcinoma (PDAC) patients: 14 of the patients received no treatment between two repeated scan sessions and 9 received chemoradiotherapy between sessions. Intersession within-subject standard deviations (wSD) and treatment-induced changes were assessed. ${\bf Results}$: In simulations, IVIM-NET$_{optim}$ outperformed IVIM-NET$_{orig}$ in accuracy (NRMSE(D) = 0.14 vs 0.17; NRMSE(f) = 0.26 vs 0.31; NRMSE(D*) = 0.46 vs 0.49), independence ($\rho$(D*,f) = 0.32 vs 0.95) and consistency (CV$_{NET}$(D) = 0.028 vs 0.185; CV$_{NET}$(f) = 0.025 vs 0.078; CV$_{NET}$(D*) = 0.075 vs 0.144). IVIM-NET$_{optim}$ showed superior performance to the LS and Bayesian approaches at SNRs < 50. In vivo, IVIM-NET$_{optim}$ showed less noisy and more detailed parameter maps, with lower wSD for D and f than the alternatives. In the treated cohort, IVIM-NET$_{optim}$ detected the most individual patients with significant parameter changes compared to day-to-day variations. ${\bf Conclusion}$: IVIM-NET$_{optim}$ is recommended for accurate IVIM fitting to DWI data.

Introduction

The intravoxel incoherent motion (IVIM) model (1) for diffusion-weighted imaging (DWI) shows great potential for estimating predictive and prognostic cancer imaging biomarkers (2)(3)(4)(5). In the IVIM model, the DWI signal is described by a bi-exponential decay, of which one component is attributed to conventional molecular diffusion and the other to the incoherent bulk motion of water molecules, typically credited to capillary blood flow. Hence, IVIM simultaneously provides information on diffusion (D [mm^2/s]; diffusion coefficient), capillary microcirculation (D* [mm^2/s]; pseudo-diffusion coefficient) and the perfusion fraction (f [%]) without the use of a contrast agent (6)(7)(8). However, despite IVIM's great potential (2)(3)(4)(5), it is rarely used clinically. Two major hurdles preventing routine clinical use of IVIM are its poor image quality and the long time needed to fit the data (9)(10)(11). Tackling these shortcomings will help towards wider use of IVIM (12).

Recently, a promising alternative for IVIM fitting was introduced: estimating IVIM parameters with deep neural networks (DNNs). Initially, Bertleff et al. (17) introduced a supervised DNN for IVIM parameter estimation, in which the network was trained on simulated data for which the underlying parameters were known. However, the strong assumption that the simulated training data and the test data are identically distributed could limit the network's performance in vivo, where noise behaves less orderly.
We addressed this shortcoming in earlier work (11), where we used unsupervised physics-informed deep neural networks (PI-DNNs) (18,19). PI-DNNs formulate a physics-informed loss function through which the parameters are learned in an iterative process. In this case, the PI-DNN used the consistency between the signal predicted by the IVIM model and the measured signal as the loss term of the DNN. This resulted in an unsupervised PI-DNN capable of training directly on patient data with no ground truth: IVIM-NET_orig. We demonstrated in both simulations and patient analysis that IVIM-NET_orig is superior to the conventional LS approach and even performs (marginally) better than the Bayesian approach. Furthermore, IVIM-NET_orig's fitting times were substantially lower (4 × 10^{-6} seconds per voxel (11)) than those of the LS and Bayesian approaches. However, that proof-of-principle IVIM-NET study did not explore many hyperparameters and focused on volunteer data.

In this work, we hypothesize that IVIM-NET_orig can be further improved by exploring the architecture of the network, its training features and other hyperparameters. To show this, we characterized the performance of IVIM-NET for different hyperparameter settings by assessing the accuracy, independence and consistency of the estimated IVIM parameters in simulated IVIM data. Finally, we compared the performance of our optimized IVIM-NET to the LS and a Bayesian approach to IVIM fitting in patients with pancreatic ductal adenocarcinoma (PDAC) receiving neoadjuvant chemoradiotherapy (CRT), both in terms of test-retest reproducibility and sensitivity to treatment effects.

IVIM-NET

The code for all of our network implementations, with some simple introductory examples in simulations and volunteer data, is available on GitHub: https://github.com/oliverchampion/IVIMNET. We initially implemented the original PI-DNN (IVIM-NET_orig) (11) in Python 3.8 using PyTorch 0.4.1 (20). The input layer consisted of neurons that took the normalized DWI signal S(b)/S(b=0) as input, where S(b) is the measured signal at diffusion weighting b (b value). The input layer was followed by three fully connected hidden layers. Each hidden layer had a number of neurons equal to the number of measurements (b values and the number of repeated measures), and each neuron, in turn, contained an exponential linear unit activation function (21). The output layer of the network consisted of the three IVIM parameters (D, f, D*). To enforce that the output layer predicts these IVIM parameters, two steps were taken. First, the absolute activation function was applied to the neuron's output (X) to constrain the predicted parameters; e.g., to compute D:

$D = |X_D|$    (1)

Second, a physics-based loss function was introduced that computed the mean squared error between the measured input signal, S(b), and the predicted IVIM signal, S_net(b). Hence:

$loss = \frac{1}{B} \sum_{b} \left( \frac{S(b)}{S(b=0)} - S_{net}(b) \right)^2$    (2)

where B is the total number of image acquisitions (b values and repeated scans). S_net(b) was obtained by inserting the predicted output parameters into the normalized IVIM model:

$S_{net}(b) = f\, e^{-b D^*} + (1-f)\, e^{-b D}$    (3)

Next, we evaluated whether seven novel hyperparameters (Table 1; Figure 1) of IVIM-NET improved the fitting results. First, instead of normalizing the input and predicted IVIM signals by S0, we added S0 as an additional output parameter, to allow the system to correct for noise in S(b=0).
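To make Eqs. 1-3 concrete, the sketch below implements the unsupervised physics-informed loss in PyTorch. It is a minimal illustration with hypothetical b values and random stand-in signals, not the published IVIM-NET code (which is available in the GitHub repository above).

```python
# Minimal sketch of the unsupervised physics-informed IVIM fit (Eqs. 1-3).
# Illustration only; the authors' code is at github.com/oliverchampion/IVIMNET.
import torch
import torch.nn as nn

class IVIMNetSketch(nn.Module):
    def __init__(self, n_b):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_b, n_b), nn.ELU(),   # three hidden layers with ELU,
            nn.Linear(n_b, n_b), nn.ELU(),   # as described in the text
            nn.Linear(n_b, n_b), nn.ELU(),
            nn.Linear(n_b, 3),               # raw outputs X for D, f, D*
        )

    def forward(self, signal, b):
        p = torch.abs(self.net(signal))      # Eq. 1: absolute activation
        D, f, Dstar = p[:, 0:1], p[:, 1:2], p[:, 2:3]
        # Eq. 3: normalized bi-exponential IVIM signal
        return f * torch.exp(-b * Dstar) + (1 - f) * torch.exp(-b * D)

b = torch.tensor([[0., 10., 50., 150., 400., 600.]])  # hypothetical b values (s/mm^2)
model = IVIMNetSketch(n_b=b.shape[1])
signal = torch.rand(128, b.shape[1])                  # stand-in normalized mini-batch
loss = torch.mean((signal - model(signal, b)) ** 2)   # Eq. 2: physics-informed loss
loss.backward()                                       # one unsupervised training step
```

Because the loss compares the network's prediction with the measured signal itself, no ground-truth parameter values are needed, which is what allows training directly on patient data.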
Second, to restrict parameter values to physiologically plausible ranges, scaled sigmoid activation functions were used instead of absolute activation functions to constrain the predicted parameters of the output layer (Table 1); e.g., to compute D:

$D = D_{min} + \mathrm{sigmoid}(X_D)\,(D_{max} - D_{min})$    (4)

where $D_{min}$ and $D_{max}$ are the fit boundaries. Bound intervals were D: −0.4 × 10^{-3} to 5 × 10^{-3} mm^2/s; f: −5% to 70%; D*: 5 × 10^{-3} to 300 × 10^{-3} mm^2/s; and S0: 0.7 to 1.3. The ranges were chosen broader than expected in vivo to compensate for the decreasing gradients at the asymptotes of the sigmoid function. Third, we varied the number of hidden layers between 1 and 9. Fourth, we used dropout regularization (22), which randomly removes a set percentage of network weights in each iteration during training. Fifth, we used batch normalization (23), which normalizes the input by re-centering and re-scaling and, consequently, preserves the representation ability of the network. Sixth, to reduce unwanted correlations between estimated parameter values, we implemented an alternative network architecture in which parameter values were predicted, in parallel, by independent sub-networks (Table 1; Figure 1). Furthermore, we evaluated different learning rates (LR) of the Adam optimizer (24), ranging from 1 × 10^{-5} to 3 × 10^{-2}, with constant β = (0.9, 0.999).

In traditional deep learning, training and evaluation are done on separate datasets, but as this is an unsupervised DNN approach, training was done on the same data as evaluation: for the simulations these were the simulated data, and in vivo these were the in vivo data. The data were split into two datasets, one containing 90% of the data, used for training, and one containing 10% of the data, used for validation. The validation set was used only to determine the early-stopping criterion. Early stopping occurred when the loss function did not improve over 10 consecutive training epochs. Given the large amount of training data and the limited number of network parameters, instead of feeding all the data at every epoch, we validated the network every 500 mini-batches; effectively, the network saw 500 × 128 IVIM curves in between validations. Validation was done on the entire validation set. The independence of the estimated parameters was assessed with the absolute Spearman rank correlation coefficient ρ, as both positive and negative deviations from zero are equally undesirable. Some networks always returned the same value for D*, independent of the input data (Supporting Information Figure S1). For such cases, ρ is technically undefined; as these cases are undesirable, ρ was set to 1.

Simulations: characterization and optimization

As training a DNN is a stochastic process (random initialization, random dropout, random mini-batches), training on the same dataset usually results in different final network weights and, consequently, different predictions on the same data. To assess the consistency of estimated parameter values, each network variant was trained 50 times on identical data, where each repeat had a new random initialization, dropout and mini-batch selection.
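The consistency metric CV_NET defined in the next paragraph amounts, per parameter, to the mean standard deviation across the m repeated trainings, normalized by the mean true value. A minimal numpy sketch under that reading (array names and values are hypothetical):

```python
# Sketch of CV_NET for an array `preds` of shape (m, n):
# m repeated trainings of the same network, n simulated decay curves.
import numpy as np

def cv_net(preds, x_true_mean):
    sd_per_curve = preds.std(axis=0, ddof=1)   # spread over repeated trainings
    return sd_per_curve.mean() / x_true_mean   # average over curves, normalize

rng = np.random.default_rng(1)
preds = 1.5e-3 + 1e-4 * rng.standard_normal((50, 1000))  # hypothetical D predictions
print(cv_net(preds, x_true_mean=1.5e-3))
```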
The normalized coefficient of variation of the network over the repeated simulations (CV_NET) was taken as a measure of the consistency of each estimated parameter:

$CV_{NET} = \frac{1}{\bar{x}_{true}} \cdot \frac{1}{n} \sum_{i=1}^{n} \sqrt{ \frac{1}{m-1} \sum_{j=1}^{m} \left( x_{i,j} - \bar{x}_i \right)^2 }$

where $\bar{x}_{true}$ is the mean value of the simulated IVIM parameter, i = 1,…,n indicates the summation over the different decay curves, with n the number of simulated curves, j = 1,…,m indicates the summation over the repeated neural-network trainings, with m the number of repeated trainings, $x_{i,j}$ is the j-th repeated prediction of the i-th simulated decay curve, and $\bar{x}_i$ is the mean over the m repeated predictions of the i-th simulated signal curve. As the LS and Bayesian approaches predict the same values for repeated fits to the same dataset, CV_NET was zero for them. As a result of the repeated training, we obtained 50 values for the NRMSEs and ρ's; therefore, the median and interquartile range are reported.

As a baseline for comparison, we evaluated the IVIM parameters (D, f, D*) from IVIM-NET_orig and from the LS and Bayesian approaches. We used the Levenberg-Marquardt non-linear algorithm for the LS fit (26,27). For the Bayesian approach, we used the algorithm from previous work (11). For both the LS and Bayesian approaches, S0 was included as a fit parameter. The Bayesian approach used a data-driven lognormal prior for D and D*, and a beta distribution for f and S0. The prior distributions were determined empirically by fitting these distributions to the results of the LS approach on the same dataset. The maximum a posteriori probability was used as the estimate of the IVIM parameters. The LS and Bayesian approaches were performed with fit boundaries of D: 0 to 5 × 10^{-3} mm^2/s; f: 0-70%; D*: 5 × 10^{-3} to 300 × 10^{-3} mm^2/s; and S0: 0.7-1.3.

After the baseline characterization, IVIM-NET was optimized by testing various combinations of the hyperparameters (Table 1; Figure 1). Previous studies reported reliable SNR values for IVIM in the abdomen between 10 and 40 (28)(29)(30)(31). Therefore, to simulate reliable abdominal IVIM signals, an SNR of 20 was chosen for the hyperparameter evaluation. We trained the network on the simulated signals using every combination of the following options: fitS0 parameter, absolute or sigmoid constraints, parallel network, dropout, and batch normalization, while fixing the number of hidden layers to 3 (as used in IVIM-NET_orig, Table 1) and the LR to 1 × 10^{-4}. In an exploratory phase, we found that reducing the LR from 1 × 10^{-3} (IVIM-NET_orig) to 1 × 10^{-4} was essential for obtaining networks with improvements in accuracy, independence and consistency. Based on these results, the best parameter combination was selected manually, considering the combination of (1) NRMSE, (2) ρ and (3) CV_NET. With the best options for the fitS0 parameter, constraints, parallel network, dropout and batch normalization, we then tested the performance of the network as a function of the LR and the number of hidden layers (Table 1). From those results, we finally selected the best-performing optimized network: IVIM-NET_optim.

Verification in patients with PDAC

To test how IVIM-NET_optim performed in vivo, two IVIM datasets of patients with PDAC were used: one to assess test-retest reproducibility, and one to see whether we could detect treatment effects. Both studies were approved by our local medical ethics committee and all patients gave written informed consent. Both datasets (NCT01995240; NCT01989000) were published earlier (9,33,34).
The first dataset consisted of 14 patients with locally advanced or metastatic PDAC who underwent IVIM in two separate imaging sessions (on average 4.5 days apart; range: 1-8 days) with no treatment in between. The second dataset consisted of 9 patients with (borderline) resectable PDAC who received CRT as part of the PREOPANC study (35), in which patients were scanned before and after CRT. DWI images were co-registered to a reference volume consisting of the mean DWI image over all b values, using deformable image registration in Elastix (36). A radiologist (10 years' experience in abdominal radiology) and a researcher (4 years' experience in contouring pancreatic cancer) drew a region of interest (ROI) in the tumor in consensus. IVIM parameter maps of D, f and D* were derived using the LS approach, the Bayesian approach and IVIM-NET_optim. Background voxels were removed automatically before fitting by removing voxels with S(b=0) < 0.5 × median(S(b=0)). Fitting was done without averaging over the diffusion directions. IVIM-NET_optim was trained on all patient data combined. All computations were carried out on a single core of a conventional desktop computer (CPU: Intel Core i7-8700 at 3.20 GHz). The average fitting time of each algorithm was recorded. Further analysis was performed with the median parameter values from within the ROIs.

To evaluate test-retest repeatability, the intersession within-subject standard deviation (wSD) (37) was calculated for each IVIM parameter using the data from the patients with repeated baseline scans. Bland-Altman plots were made for patients from both cohorts. We calculated the 95% confidence intervals (95CI) from the patients with repeated scans at baseline (assuming zero offsets). In the cohort receiving treatment, we used a paired t-test to test whether parameters had changed significantly due to treatment within the cohort. Furthermore, patients from the treatment cohort were added to the Bland-Altman plots, and individual patients with changes exceeding the 95CI were considered to have significant changes in tumor microstructure (38).

Simulations: characterization and optimization

The original network, IVIM-NET_orig, showed substantially lower NRMSE for all estimated parameters than the LS and Bayesian approaches. However, IVIM-NET_orig had strong correlations between D* and f (high ρ(D*,f); Table 2 and Figure 2D) and a considerable CV_NET. The NRMSE, ρ and CV_NET for all hyperparameter combinations are shown in Supporting Information Figures S2-S7. From these data, we selected IVIM-NET_optim (Table 1) as the optimal configuration of IVIM-NET. IVIM-NET_optim resolved the high dependency between D* and f found in IVIM-NET_orig (Table 2; Figure 2D, E) and substantially reduced the NRMSE and CV_NET. Figure 3 illustrates the effect of changing a single option away from IVIM-NET_orig (left side) or away from the selected optimum (right side). It is clear that the reduced ρ(D*,f) cannot be attributed to a single parameter, but was a result of the combination of the proposed changes. For IVIM-NET_orig, enabling the additional fit parameter S0, using sigmoid constraints and using the parallel network architecture all improved the NRMSE of all estimated IVIM parameters. Yet ρ(D*,f) remains high for single deviations away from IVIM-NET_orig, and the network's consistency remains relatively poor (Figure 3).
Single changes away from IVIM-NET_optim can lead to marginally better NRMSE, lower ρ or lower CV_NET (Figure 3), but only at a cost to the other two attributes. Supporting Information Figures S5-S7 show that the number of hidden layers did not have a substantial effect on the network's performance, although generally an increase in the number of hidden layers resulted in a higher ρ, whereas a decrease resulted in higher NRMSE and CV_NET. For the LR (Supporting Information Figures S5-S7), the order of magnitude was important, as too high or too low learning rates caused higher NRMSEs and less consistency. Regardless of which b-value distribution was used, the above results hold and IVIM-NET_optim outperformed IVIM-NET_orig (Supporting Information Figure S8).

Verification in patients with PDAC

An example of parameter maps computed with the LS approach, the Bayesian approach and IVIM-NET_optim for a PDAC patient before receiving CRT is presented in Figure 5. The parameter maps of IVIM-NET_optim were less noisy and more detailed than those computed by the LS and Bayesian approaches. In the test-retest cohort, IVIM-NET_optim showed the lowest wSD for D and f (Table 3), while the Bayesian approach had the lowest wSD for D*. When averaging IVIM parameters over the repeated patient scans, IVIM-NET_optim computed a higher D, lower f and higher D* than the LS and Bayesian approaches (Table 4). The repeated scans are visualized as black x's in the Bland-Altman plots, together with their 95% CIs, in Figure 6. When considering the CRT patients as a whole, IVIM-NET_optim found a significant (P < 0.05) increase in mean f after treatment, whereas the LS approach found a significant increase in D after treatment. Although non-significant, IVIM-NET_optim's P-value for D was 0.07.

Discussion

This study is the first to show the potential clinical benefit of DNNs for IVIM fitting to DWI data in a patient cohort. We successfully developed and trained IVIM-NET_optim, an unsupervised PI-DNN IVIM fitting approach to DWI that predicts accurate, independent and consistent IVIM parameters. Together with the fact that IVIM-NET_optim detected an almost significant positive trend in D for the whole cohort of patients receiving CRT, these findings strongly suggest IVIM-NET_optim as a good alternative for IVIM fitting in PDAC patients. Findings from other studies support this increase in D (39) and f (40) during CRT in PDAC patients. In general, PDACs tend to have lower diffusion due to the impeded water movement of compressed cells (41). Furthermore, PDACs are typically hypoperfused due to substantial tumor sclerosis creating elevated interstitial pressure, which compresses tumor-feeding vessels (40,42). Effective treatment leads to necrosis, which in turn leads to lower cell densities and reduced interstitial pressure, and consequently to increased diffusion (43,44) and perfusion (45). Not all patients demonstrated a significant treatment-induced change in both diffusion and perfusion parameters. Therefore, using IVIM to discriminate between individual treatment effects may become feasible in the future. As the treatment of these patients was part of induction therapy and patients underwent surgery directly afterwards, overall survival cannot be attributed purely to the CRT effect. Hence, given the limited number of patients and the diluted treatment effect, we did not compare overall survival between patients who showed potential treatment effects and others.
Our previous work (34) showed that the LS approach to IVIM fitting was sensitive to individual treatment effects. However, a high wSD was found, which limited that study's ability to detect individual treatment effects. Furthermore, that work (34) used denoised DWI b-images, which substantially degraded image sharpness, and tumor boundaries were harder to detect (e.g., compare the figures from that work to the example figures from (9)). Conversely, our present study demonstrates that DNNs can estimate parameter maps directly from the noisy data, resulting in sharp, high-quality IVIM parameter maps.

Although IVIM-NET showed consistently better results both in simulations and in vivo, IVIM-NET predicts different IVIM parameters in repeated trainings. This causes a new sort of variability that, until now, was not an issue in fitting parameter maps. There may be methods to mitigate this variability. First, when probing treatment response, we would advise using one network, such that this additional effect is not different pre- and post-treatment. Second, to reduce the variation, one could consider taking the median prediction from 10 repeated trainings instead. We did so in an exploratory study, in which we formed 5 groups of 10 networks and showed that the median of 10 networks was substantially more consistent, with CV_NET values of 7.6 × 10^{-3}, 6.3 × 10^{-3} and 18.7 × 10^{-3} for D, f and D*, respectively. Having a set of networks also allows the user to estimate the variation of the predicted parameter. Finally, although we note this additional uncertainty, we would like to stress that it is secondary to the overall error of the LS approach, which is apparent from the fact that, in the simulations, all 50 instances of IVIM-NET_optim had a lower NRMSE than the LS approach.

IVIM-NET_optim was comparable to or outperformed IVIM-NET_orig at SNRs of 8-100 and was superior to the LS and Bayesian approaches for SNRs of 8-50 (Figure 4). However, at extremely high SNR (SNR = 100; Figure 4), the LS approach outperformed IVIM-NET. The Levenberg-Marquardt algorithm for the LS fit is an iterative method that finds a minimum of the squared difference. For a relatively smooth loss landscape and a high-SNR signal, the LS algorithm is designed to find the correct parameter estimates. However, at low SNR, the LS approach has trouble finding the correct parameters. This occurs either because the loss landscape is no longer smooth and the algorithm gets stuck in a local minimum, or, what we believe is more common, because the noise has changed the signal such that the global optimum is no longer near the ground-truth parameters. A DNN, on the other hand, is a complex system that needs to encompass estimating the IVIM parameters for all voxels. It turns out that having been trained on all voxels enables better estimates for individual voxels at low SNR. We expect that DNNs focus on more consistent minima, with parameter values that are more frequently observed; this might be similar to data-driven Bayesian fitting approaches (15,46). Conversely, our DNN seems to reach a maximum accuracy at high SNR. Potentially, more complex DNNs optimized with simulations done at high SNR could capture the subtle signal changes of the IVIM parameters at these SNRs. However, typical SNR values for IVIM data are < 50. Therefore, our findings suggest that using our DNN instead of the LS and Bayesian approaches for IVIM fitting to DWI data would be beneficial in a clinical setting.
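The median-of-networks mitigation discussed above reduces to a single reduction over a stack of repeated predictions. A minimal sketch, with hypothetical array shapes:

```python
# Sketch of the median-of-networks idea: train the same unsupervised network
# several times and take the voxel-wise median of the predicted parameters.
import numpy as np

def ensemble_median(pred_stack):
    """pred_stack: (n_networks, n_voxels, 3) array of [D, f, D*] predictions."""
    return np.median(pred_stack, axis=0)

preds = np.random.default_rng(2).random((10, 5000, 3))  # stand-in predictions
median_map = ensemble_median(preds)                      # shape (5000, 3)
spread = preds.std(axis=0)                               # per-voxel variability estimate
```

As noted above, keeping the full stack also yields a per-voxel estimate of the training-induced variation (the `spread` array here).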
The choice of the hyperparameters for IVIM-NET_optim was based on an optimal combination of accuracy, independence and consistency across all IVIM parameters. However, other hyperparameter options may be more appropriate when characterizing an individual IVIM parameter (e.g., when an observer is interested only in D and IVIM is merely used to correct for perfusion). Figures S1-S6 of the Supporting Information can help interested readers select the best network for their purposes. The high dependency between D* and f that appears in IVIM-NET_orig could not be attributed to a single cause. Initially, we expected this dependency to originate in the fully connected shared hidden layers of the original network. Although ρ decreased when adding the parallel network architecture to IVIM-NET_orig, ρ remained substantial (Figure 3). Such dependencies between estimated parameters are not per se specific to DNNs: similar dependencies between D* and D or f were found in a different data-driven Bayesian fitting approach (9). For IVIM-NET_optim, these dependencies were small at clinical SNR values and similar to those of the LS approach.

Although simulation studies of parameter estimation are extremely valuable because the underlying parameter values are known, they also come with limitations. One limitation is that the noise characteristics of real data can be diverse and hard to model. For instance, DWI artifacts caused by motion are not considered in simulations and may affect the results of fitting the IVIM model (47). We are aware that Rician noise could be added to generate more realistic noise. However, to compare models, we believe that adding Rician noise, and hence additional systematic errors, would only complicate the comparison, as errors from the fit methods could cancel out errors induced by the Rician noise. We do not expect that adding Rician noise would have had a significant effect on our results, as we do not expect any of the fit approaches to deal differently with Rician noise. Hence, we employed Gaussian noise for a fairer and easier comparison between algorithms. Nonetheless, our in vivo results show that, in the presence of real noise, IVIM-NET_optim is superior to the alternatives. Another limitation is the underlying assumption that the IVIM model is complete and hence that the data are perfectly bi-exponential. In reality, the IVIM model is a simplification, and real data will be more complex.

Conclusion

We substantially improved the accuracy, independence and consistency of both the diffusion and perfusion parameters from IVIM-NET by changing the network architecture and tuning the hyperparameters. Our new IVIM-NET_optim is considerably faster and computes less noisy and more detailed parameter maps, with substantially better test-retest repeatability for D and f than alternative state-of-the-art fitting methods. Furthermore, IVIM-NET_optim was able to detect more individual patients with significant changes in the IVIM parameters throughout CRT. These results strongly suggest using IVIM-NET_optim for the detection of treatment response in individual patients. To stimulate wider clinical implementation of IVIM, we have shared IVIM-NET online and encourage our peers to test it.

Figure (network architecture; hyperparameters listed in Table 1): The input signal, consisting of the measured DWI signal, is fed forward either through (A) a parallel network design, in which each parameter is predicted by a separate fully connected set of hidden layers, or (B) the original single fully connected network design.
The blue circles indicate an example of randomly selected neurons for dropout. In this example, the output layer consists of four neurons with either absolute (Eq. 1) or sigmoid (Eq. 4) activation functions, whose values correspond to the IVIM parameters. Subsequently, the network predicts the IVIM signal (Eq. 3), which is used to compute the loss function (Eq. 2). With the loss function, the network trains the PI-DNN to give good estimates of the IVIM parameters.

Figure S1: Plots of the estimated IVIM parameters for which no Spearman rank correlation coefficient (ρ) can be determined; ρ is set to 1.
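A minimal sketch of the parallel, physics-informed architecture described in the figure caption above is given below, assuming PyTorch; the layer widths, dropout rate and parameter ranges are illustrative assumptions, and the authors' actual implementation is the IVIM-NET code shared online.

```python
import torch
import torch.nn as nn

class ParallelIVIMNet(nn.Module):
    """Sketch of a physics-informed IVIM network: one small fully connected
    branch per parameter (D, f, D*, S0), with sigmoid outputs rescaled to
    plausible ranges (the ranges below are illustrative assumptions)."""
    def __init__(self, n_b, hidden=32, depth=2):
        super().__init__()
        def branch():
            layers, width = [], n_b
            for _ in range(depth):
                layers += [nn.Linear(width, hidden), nn.ELU(), nn.Dropout(0.1)]
                width = hidden
            layers += [nn.Linear(width, 1)]
            return nn.Sequential(*layers)
        self.branches = nn.ModuleList(branch() for _ in range(4))
        # (min, max) scaling per parameter: D, f, D*, S0 (assumed ranges)
        self.ranges = torch.tensor([[0.0, 5e-3], [0.0, 0.7], [5e-3, 0.3], [0.8, 1.2]])

    def forward(self, signal, b):
        params = [torch.sigmoid(br(signal)) for br in self.branches]
        lo, hi = self.ranges[:, 0], self.ranges[:, 1]
        D, f, Dstar, S0 = (lo[i] + (hi[i] - lo[i]) * params[i] for i in range(4))
        # Physics-informed step: reconstruct the IVIM signal from the parameters.
        pred = S0 * (f * torch.exp(-b * Dstar) + (1 - f) * torch.exp(-b * D))
        return pred, (D, f, Dstar, S0)

# Unsupervised training: MSE between the measured and reconstructed signals.
net = ParallelIVIMNet(n_b=8)
b = torch.linspace(0, 700, 8)
sig = torch.rand(16, 8)                 # placeholder mini-batch of voxels
pred, _ = net(sig, b)
loss = nn.functional.mse_loss(pred, sig)
loss.backward()
```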
Study on heat transfer enhancement of finned tube boiling in water tank

A passive residual heat removal heat exchanger (PRHR HX) was taken as the research object, and an experimental device with heat conduction oil as the heat source was set up to study saturation boiling outside the tube. The improved Wilson plot method was used to process the experimental data, and the boiling heat transfer laws outside the smooth tube and the finned tube were obtained. Comparing the two tube types, the intensification rate of the smooth tube was 1.07 and that of the finned tube was 2.39; that is, the finned tube's rate was 2.23 times that of the smooth tube. From the intensification rates, modified Nu correlations for the heat transfer coefficients of the two tubes were further obtained.

Introduction

Atomic energy is a safe, clean and economical new energy source. At present, China requires a sustainable energy development strategy, and atomic energy is expected to replace traditional oil, coal and natural gas within a certain range. Since the world's first nuclear power plant was built in 1954, atomic energy has played an important role in the development of world energy [1]. Atomic energy is a clean energy source, but it is also extremely dangerous: in the event of an accident such as a leak, it causes pollution of the accident area that cannot be remediated. Therefore, for the development of atomic energy, safety is the first priority. Drawing on the lessons of nuclear energy development and accidents, scholars put forward a new safety design concept, namely passive safety design. Passive safety design ensures the safety of the plant by removing excess heat through physical mechanisms that cannot be disabled, such as gravity, evaporation, and natural circulation. The corresponding safety countermeasures can therefore be completed automatically after an accident, independent of the operators' actions and of any energy supply from outside the system. The passive safety design concept relies only on natural physical laws and represents the development trend of atomic energy safety. Yanzhen Ming, Wei Liu et al. [2] took the residual heat removal heat exchanger of a ship as the research object and carried out numerical simulations with CFD software, providing a theoretical reference for the design of this type of heat exchanger. Yang Song et al. [3] took the PRHR HX in the AP1000 as the research object and established a full-scale CFD model to simulate and analyze the flow and heat transfer characteristics of natural convection; the distributions of the temperature and flow fields in the upper water tank and in the central tube bundle were calculated by coupling the conditions inside and outside the tubes. Rushi Fu, Yong Li et al. [4,5] took different types of smooth, ribbed and wire-wound tubes as research objects and experimentally compared the boiling heat transfer outside the various tubes, finding that both reinforced tube types improve the boiling heat transfer coefficient to a certain extent. At present, there are many studies on special heat exchange tubes for passive residual heat removal systems, but few on slotted low-finned tubes.
In this paper, a smooth tube and a finned tube were used as the research objects, and an experimental passive residual heat removal device was set up with heat conduction oil as the heat source. The saturated boiling heat transfer outside the tube was studied for the two tube types. The improved Wilson plot method was used to process the experimental data, and the enhancement rates of the two tube types were then compared. The results have reference value for the design of finned-tube heat exchangers.

Experimental equipment

To simulate the operating conditions of the PRHR HX, the design of this experiment follows its design.

The experimental system

As shown in Figure 1, electric heating raises the oil in the heating tank to the set temperature, which is then held constant. Owing to the large temperature difference between the heating tank and the water tank, the heat conduction oil flows under the action of this temperature difference, transfers heat through the tube to the water in the tank, and then flows back to the heating tank through the closed-loop piping, forming a circulating heat transfer process. An oil pump was added to provide additional driving force when the natural convection driving force was too weak to sustain a complete circulating flow. The experimental equipment mainly comprises: (1) the heating system; (2) the experimental system; (3) the data acquisition system. The heating system, including the electric heating sleeve, the heating tank and some accessories, is the power source of the whole experiment. The experimental system includes a C-type heat exchange tube and a water tank; it simulates the operation of the PRHR HX and is the most critical part of the experiment. The data acquisition system collects: the fluid temperatures at the inlet, outlet and inside the heat exchange tube; the water temperature in the tank; the outer wall temperature of the heat exchange tube; the heating tank temperature; the inlet and outlet pressures of the heat exchange tube; and the flow rate in the tube. The temperature at the inlet of the C-type heat exchanger tube is controlled by adjusting the temperature in the heating tank.

Introduction of the experimental heat exchange tubes

Two types of heat exchange tubes, a smooth tube and a finned tube, are compared and analyzed. Both tubes are machined from 304 stainless steel for its excellent corrosion resistance. The length of the upper and lower horizontal sections is 400 mm, and the length of the vertical sections is 1200 mm. The technical parameters of the smooth and finned tubes are given in Table 1 and Table 2.

Finned tube

The finned tube used in the experiment has many "granular grooves" arranged in a spiral pattern at an angle of 30°, equally spaced along the fins. Its physical image and a partial enlarged view are shown in Figure 2. Structurally, these "granular grooves" increase the heat transfer area of the heat exchange tube without increasing the space it occupies, raising the heat flux. In addition, the "granular grooves" give the fluid outside the tube a higher local flow velocity, which strengthens the disturbance, disrupts the formation and development of the liquid-film boundary layer outside the tube, and enhances the heat transfer. The outer diameter of the base tube of the finned tube is 19 mm, and the inner diameter is 15 mm.
The fins are arranged in a spiral on the outside of the tube, the fin spacing is 0.6 mm, and 80 grooves are made around the circumference. The inside heat exchange area of the tube is Ai = π·di·L, and the external heat exchange area follows from the base-tube and fin geometry. Comparing the two tube types shows that the external heat transfer area of the finned tube is 3.58 times that of the smooth tube.

Original Wilson plot method

The Wilson plot method was first used by Wilson in 1915 to calculate the heat transfer coefficient outside a heat exchange tube [6]. It is a theoretical treatment based on a large amount of data and has been adopted and improved by many scholars. The principle of the original Wilson plot method is based on the thermal resistances of the heat exchange tube. The total thermal resistance Ra of the heat exchange tube is divided into the external thermal resistance Ro, the internal thermal resistance Ri, the fouling thermal resistance Rf on the inside and outside surfaces, and the thermal resistance Rw of the tube wall [7]. Because the heat exchange tube is cleaned and descaled, the fouling thermal resistance Rf is ignored in the calculation. The thermal resistance equation is:

Ra = Ro + Ri + Rw (1)

According to the basic theory of heat transfer, with all resistances referred to the tube surface, the above equation can also be written as:

Ra = 1/ho + Rw + 1/hi (2)

where Rw is the thermal resistance of the wall of the heat exchange tube, (m²·°C)/W; hi is the heat transfer coefficient of the fluid inside the tube, W/(m²·°C); and ho is the heat transfer coefficient of the fluid outside the tube, W/(m²·°C). During the experiment, the fluid flow outside the tube is kept constant, so:

1/ho + Rw = C1 (3)

where C1 is an undetermined constant. According to the Wilson plot method, when the fluid in the heat exchange tube is turbulent, its flow state and the pipe diameter can be expressed by a single parameter, namely the velocity u inside the tube. The flow velocity u has a significant influence on the heat transfer coefficient of the inner surface of the tube, which varies proportionally with it:

hi = C2·u^n (4)

where C2 is an undetermined constant, u is the velocity of the fluid in the heat exchange tube, m/s, and n is the velocity exponent. According to this formula, the thermal resistance inside the heat exchange tube is proportional to 1/u^n. Substituting Equations (3) and (4) into Equation (2):

Ra = C1 + 1/(C2·u^n) (5)

It can be seen from Equation (5) that Ra is proportional to 1/u^n. At the same time, the total heat transfer resistance Ra can be calculated from the heat balance of the whole device, that is:

Ra = Ao·LMTD/(qm·Cp·Δt) (6)

where LMTD is the logarithmic mean temperature difference, °C; Cp is the specific heat capacity of the fluid in the tube, J/(kg·°C); qm is the mass flow rate of the fluid in the tube, kg/s; and Δt is the temperature difference between the inlet and outlet of the fluid, °C. These parameters can be measured experimentally, so the value of Ra can be calculated from Equation (6). By adjusting the flow of the fluid, that is, by changing the velocity u, each velocity u corresponds to a total heat transfer resistance Ra, and the relation between flow velocity and thermal resistance obeys Equation (5). Taking Ra as the ordinate and 1/u^n as the abscissa, the 1/u^n-Ra plot is obtained, as shown in Figure 3.
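The following sketch illustrates the original Wilson plot regression described above; the velocities, resistances and the velocity exponent n = 0.8 are illustrative assumptions, not the measured data.

```python
import numpy as np

# Original Wilson plot (sketch): Ra = C1 + 1/(C2 * u**n).
# n = 0.8 is the usual assumption for turbulent in-tube flow.
n = 0.8
u  = np.array([0.6, 0.9, 1.2, 1.6, 2.0])                  # m/s (illustrative)
Ra = np.array([9.1e-4, 7.6e-4, 6.8e-4, 6.1e-4, 5.7e-4])   # (m^2*C)/W (illustrative)

x = 1.0 / u**n
slope, intercept = np.polyfit(x, Ra, 1)   # linear fit: Ra = intercept + slope * x
C1 = intercept                            # outside + wall resistance (constant term)
C2 = 1.0 / slope                          # in-tube coefficient: hi = C2 * u**n
print(f"C1 = {C1:.2e} (m^2*C)/W, C2 = {C2:.1f} W/(m^2*C)")
```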
Therefore, the in-tube heat transfer coefficient of a smooth tube can be expressed by Equation (4).

Improved Wilson plot method

The Wilson plot method rests on two assumptions: (1) the thermal resistance of the fluid on one side does not change with time; (2) the heat transfer coefficient of the fluid on the other side has a fixed functional relationship with the flow velocity. On this basis, many scholars have refined the Wilson plot calculation. Its general expression is:

Nu = C·Re^m·Pr^n

where Nu is the Nusselt number, C is an undetermined constant, Re is the Reynolds number, and Pr is the Prandtl number. This equation describes the relationship between heat transfer and velocity for the fluids inside and outside the tube. In addition, in order to find a more general relationship, some scholars have studied two-phase heat transfer with phase change, in which the heat transfer coefficient h, W/(m²·°C), is expressed as a general function of the fluid flow rate m(u), kg/s, and the dryness X of the inlet fluid. The total heat transfer resistance can then be expressed accordingly, and the resulting relation is fitted as shown in Figure 4; the inverse of the slope of the fitted line gives the value of C. Compared with the original Wilson plot method, the improved method has better adaptability and more accurate results. It should be noted that this method has its limitations, namely that the two assumptions above must hold. Because the working conditions of this experiment satisfy the requirements of the method, this will not be discussed further.

Experimental regression correlation

The relation between the thermal resistances of the various parts is:

1/K = 1/ho + Rw + Ao/(hi·Ai) (13)

where K is the total heat transfer coefficient, W/(m²·°C); Ai is the heat transfer area inside the tube, m²; and Ao is the heat transfer area outside the tube, m². ho is separated from Equation (13) using the Wilson plot method, as follows. The heat transfer coefficient inside the tube can be obtained from the Dittus-Boelter empirical correlation multiplied by the in-tube intensification rate Ci:

hi = Ci·0.023·Re^0.8·Pr^0.3·(λ/D) (15)

with Re = ρuD/μ and Pr = μCp/λ, where ρ is the density, kg/m³; D is the diameter of the heat exchange tube, m; μ is the dynamic viscosity, kg/(m·s); Cp is the specific heat capacity at constant pressure, J/(kg·°C); and λ is the fluid thermal conductivity, W/(m·°C). The heat transfer coefficient of the fluid outside the tube is obtained from the Nusselt experimental correlation; multiplying it by the strengthening ratio Co outside the heat exchange tube gives Equation (17). Substituting Equations (15) and (17) into Equation (13) and introducing the substitutions x and y, Equation (19) can be expressed as a linear relation:

y = a·x + b (20)

Referring to the material properties of the two heat exchange tubes and combining them with their structural parameters, the experimental data are taken as the values of x and y; a number of (x, y) pairs are selected and plotted in the coordinate system. Linear regression then gives the relation between y and x, as shown in Figure 5. Figure 5 shows that the points calculated by the improved Wilson plot method exhibit good linear regression. From the two lines in the figure, the slope of the smooth tube is a1 = 0.78 with intercept b1 = 1.77, and the slope of the finned tube is a2 = 0.3 with intercept b2 = 0.93.
Substituting these into Equation (20), the fitting formulas of the two tubes are obtained: for the smooth tube, y = 0.78x + 1.77; for the finned tube, y = 0.3x + 0.93. From these, the values of the intensification rates Ci and Co were obtained, with the results shown in Table 3. The intensification rate outside the smooth tube was 1.07, and the intensification rate inside it was 0.89; the intensification rate inside the finned tube was 1.05. The differences between these values and 1 represent the theoretical calculation error, namely 0.07, 0.11 and 0.05, respectively, all within the acceptable range. Therefore, it can be inferred that the calculated intensification rate outside the finned tube is also reliable. The Co of the finned tube is 2.39, which is 2.23 times that of the smooth tube. Substituting the calculated Co into Equation (17) yields the modified Nu formula for calculating the heat transfer coefficient outside each tube. According to the analysis of the intensification rates, the error is within the acceptable range; the resulting heat transfer coefficient correlations therefore have practical engineering significance.

Conclusion

This study takes the PRHR HX as the research object and sets up an experimental device using heat conduction oil as the heat source. The technical parameters of the smooth tube and the finned tube are introduced, the external heat transfer areas of the two tubes are compared, and the experimental study of saturation boiling outside the tubes is completed. The improved Wilson plot method was used to process the experimental data, and the boiling heat transfer laws of the smooth and finned tubes, together with the modified Nu formulas for their heat transfer coefficients, were obtained. The results show the following. The intensification rate outside the smooth tube was 1.07, and the rate inside it was 0.89; the rate inside the finned tube was 1.05. Because the internal and external surfaces of the smooth tube and the internal surface of the finned tube are all smooth, no intensification effect is expected there, so the three intensification rates calculated by the improved Wilson plot method agree well with the actual situation. This demonstrates that the method can be used to calculate the intensification rate of heat exchange tubes. The intensification rate outside the finned tube is 2.39, which indicates that the finned tube has a good heat transfer intensification effect.
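As a usage illustration of Equation (15), the sketch below evaluates the Dittus-Boelter-based in-tube coefficient; the oil properties and velocity are illustrative assumptions, not the paper's measured values (the Prandtl exponent 0.3 is the usual choice for a fluid being cooled, as the in-tube oil is here).

```python
def dittus_boelter_hi(u, D, rho, mu, cp, lam, Ci=1.0, n=0.3):
    """In-tube coefficient per Equation (15): hi = Ci*0.023*Re^0.8*Pr^n*(lam/D).
    n = 0.3 for a fluid being cooled (0.4 if heated)."""
    Re = rho * u * D / mu      # Reynolds number
    Pr = mu * cp / lam         # Prandtl number
    return Ci * 0.023 * Re**0.8 * Pr**n * lam / D

# Illustrative hot heat-conduction-oil properties (turbulent regime, Re ~ 17,000):
hi = dittus_boelter_hi(u=2.0, D=0.015, rho=850.0, mu=1.5e-3, cp=2100.0, lam=0.13)
print(f"hi ~ {hi:.0f} W/(m^2*C)")
```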
Towards a Rationalization of Ultrafast Laser-Induced Crystallization in Lithium Niobium Borosilicate Glasses: The Key Role of the Scanning Speed

Femtosecond (fs)-laser direct writing is a powerful technique for enabling a large variety of integrated photonic functions in glass materials. One possible route to functionalization is highly localized and controlled crystallization inside the glass volume, for example by precipitating nanocrystals with second-order susceptibility (frequency converters, optical modulators) and/or with larger refractive indices than their glass matrices (graded-index or diffractive lenses, waveguides, gratings). In this paper, this is achieved through fs-laser-induced crystallization of LiNbO3 nonlinear crystals inside two different glass matrices: a silicate (mol%: 33Li2O-33Nb2O5-34SiO2, labeled LNS) and a borosilicate (mol%: 33Li2O-33Nb2O5-13SiO2-21B2O3, labeled LNSB). More specifically, we investigate the effect of the laser scanning speed on the crystallization kinetics, as it is a valuable parameter for glass laser processing. The impact of pulse energy and scanning speed on the fabrication of oriented nanocrystals and nanogratings during fs-laser irradiation is studied. Fs-laser direct writing of crystallized lines in both LNS and LNSB glass is investigated using both optical and electron microscopy techniques. Among the main findings, we observed the possibility of maintaining crystallization during scanning at speeds about 5 times higher in LNSB than in LNS (up to ~600 µm/s in our experimental conditions). We found a speed regime in which the lines exhibited a large polarization-controlled retardance response (up to 200 nm in LNSB), attributed to the texturation of the crystal/glass phase separation with a low scattering level. These characteristics are regarded as assets for future elaboration methods and designs of photonic devices involving crystallization. Finally, by considering temperature and irradiation time together with the main laser parameters (pulse energy, pulse repetition rate, scanning speed), we propose an explanation of the origin of (1) the limitation of crystallization with increasing scanning speed, (2) the variation of the laser track width with scanning speed, and (3) the narrowing of the nanogratings volume but not of the heat-affected volume.

Introduction

As the world progressively evolves towards a photonic future, there is a need for miniaturization and functionalization of photonic devices (including photonic chips, wavelength converters, lenses, retardation waveplates, waveguides, etc.) [1]. In this context, femtosecond (fs)-laser irradiation of glass materials is an attractive route to functionalization [2,3]. By using temporally ultra-short laser pulses (e.g., 10-1000 fs) with very high intensities (up to 10-100 TW/cm²), highly selective local modifications (a few µm³ in volume) can be achieved thanks to multiphotonic absorption of the laser light. Different kinds of material modifications can be obtained, depending on the glass considered as well as on the laser parameters (e.g., pulse energy, repetition rate, and scanning speed). For instance, in silica glass, both positive and negative index contrasts can be achieved [4,5]. The glass response to the fs-laser is manifold and includes, for instance, the formation of defects, densification, fictive temperature changes, stress fields, crystallization, and so forth.
The latter (crystallization) is a promising pathway for the functionalization of optical devices. The precipitation in glasses of several crystals has drawn interest for their nonlinear or electro-optic properties, among which we can cite fresnoite (barium- or strontium-based [6]), barium borate (BaB2O4) [7], LaBGeO5 [8,9], or again lithium niobate (LiNbO3) [10], but many others exist (see, for instance, [11] for an extended and thorough review). Given the plethora of crystals that can be induced from glasses, it is important to understand the mechanisms that may lead to property tunability. In this work, we investigate the role played by the scanning speed (v) vs. the pulse energy during the crystallization process, and we specifically direct our attention to the nano-crystallization orientation and the self-organized phase separation (so-called nanogratings). From an application standpoint, the goal is to provide means of guiding future glass fabrication so as to maximize the scanning speed and the property response (e.g., second harmonic generation, i.e., SHG, and birefringence) while keeping the scattering losses as low as possible. From a fundamental perspective, this work aims to better understand the mechanisms that drive light-matter structuring in these glasses. More specifically, we intend to highlight the links between the scanning speed and the resulting structuring (crystallization, its size and orientation distribution, and nanograting formation). To conduct this study, two glasses were fabricated by the melt-quenching technique, with the following batch compositions (mol%): 33Li2O-33Nb2O5-34SiO2 (labeled LNS) and 33Li2O-33Nb2O5-13SiO2-21B2O3 (labeled LNSB). The addition of B2O3 is known to lower the crystallization temperature and increase the crystallization rate. Subsequently, each glass sample was irradiated by the fs-laser while the laser parameters were varied (scanning speed, pulse energy, and laser polarization). In the subsequent Introduction Sections (1.1 to 1.4), we briefly discuss several important points that give the reader context for the Results and Discussion Sections. First, we discuss background information on oriented crystallization in lithium niobium silicate glasses. Secondly, we consider the role played by the incubation time and its impact on the time-temperature-transformation (TTT) crystallization diagram in LNS glass. Then, the temperature profile evolution (in space and time) after laser irradiation inside the glass is addressed, as well as the impact of pulse temporal overlapping. Finally, we motivate the choice of laser parameters, using a pulse energy-repetition rate landscape, which are prerequisites for inducing oriented nano-crystallization.

Oriented Crystallization of LiNbO3 Induced by Fs-Laser in Lithium Niobium Borosilicate Systems

In a series of papers [10,12-16], our group has extensively investigated fs-laser-induced crystallization in lithium niobium silicate (generically labeled LNS) glass, and more specifically the material response to laser parameter changes, including the pulse energy (Ep), the pulse repetition rate (f), and the laser polarization direction (with respect to the scanning direction). Among these studies, it was pointed out that nanocrystals precipitated inside LNS could be oriented in space through control of the laser polarization.
At low pulse energies (typ. ~0.5 µJ/pulse as per this paper), a maximum SHG optical response was observed perpendicular to the writing laser polarization direction, whatever its orientation. This SHG response was attributed to the crystallization of LiNbO3 nanocrystals, the latter having their polar axes perpendicular to the laser polarization. This texturing of the LiNbO3 nanocrystals was later confirmed by electron backscatter diffraction (EBSD). At slightly higher pulse energies (typ. ~1.25 µJ/pulse as per this paper), a second texture appeared, indicating that the LiNbO3 polar axis was partly oriented along the laser polarization in these conditions. The existence domain of this texture still has to be confirmed. These results are in agreement with previous results [17]. Very recently [18], we reported that B2O3 incorporation into LNS glass preserved the dynamical modifications yielding the SHG response, while keeping its polarization dependence. This added an important parameter, namely the laser polarization, to the engineer's toolbox for optical device elaboration and tuning.

Incubation Time and Time-Temperature-Transformation (TTT) Curve in Static Mode

In the same previous work [18], we observed that the addition of B2O3 drastically reduced the incubation time with respect to the B2O3-free (i.e., LNS) glass. The incubation time is the time elapsed, upon fs-laser irradiation, until the SHG signal is observed. The variation of the incubation time with the average laser power, and upon the addition of B2O3 to the glass, produces a displacement of the time-temperature-transformation (TTT) curve. Using results from [18], the incubation time is represented in Figure 1a, while the TTT curve for LNS is reported in Figure 1b. Now, to further pinpoint the decrease in the incubation time when adding B2O3 to LNS glass, and to make the link with the TTT diagram, we must consider the temperature distribution upon (pulsed) laser irradiation. For a single pulse with a given pulse energy (Ep), the associated thermal deposition (transfer to the lattice vibrations) leads to a temperature distribution profile T(r, t). The latter is as wide as the beam waist diameter immediately after the pulse is deposited, which corresponds to the start of the period, defined as t1 in Figure 1c. Following this initial step, the T profile broadens very rapidly due to heat diffusion, keeping a bell shape, and consequently the maximum temperature (at the center) decreases, since the amount of heat given to the system is constant. Finally, T slowly reaches a minimum value, Tmin, towards the end of the period. This temperature profile evolution is illustrated in Figure 1c.

Figure 1. (a) Incubation time as a function of the average laser power, using results from [18]. (b) Representation of the crystallization domain for LNS glass in a TTT (time-temperature-transformation) scheme; Tp: crystallization peak temperature, Tg: glass transition temperature. (c) Scheme of the evolution of the spatial temperature distribution T(r, t) after pulse deposition inside the glass, as time evolves. Note that as the maximum temperature decreases with time, the width of the distribution increases (the heat energy in the system remains constant). As a side note, the beam waist radius w0 is typically around 1 to 1.5 µm.

Now, in the multi-pulse regime, at any radius r including the center, the temperature T(r, t) oscillates between Tmax(r, N) and Tmin(r, N) [19], where N is the number of pulses deposited before time t and τp is the period of the pulses.
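To make the single-pulse picture concrete, the sketch below evolves a Gaussian heat deposit with the standard heat-kernel solution; the on-axis temperature rise T0 is an illustrative assumption, and the exact profile used in the paper's Appendix A may differ in normalization.

```python
import numpy as np

# Post-pulse temperature evolution for a Gaussian heat deposit (sketch).
# Parameter values follow the text: w0 ~ 1 um, DT = 9e-7 m^2/s, f = 200 kHz.
w0 = 1e-6      # beam waist radius, m
DT = 9e-7      # thermal diffusivity, m^2/s
T0 = 3000.0    # illustrative on-axis temperature rise just after the pulse, K

def dT(r, t):
    """Temperature rise above ambient at radius r, time t after the pulse:
    the initial Gaussian spreads while the total heat is conserved."""
    w2 = w0**2 + 4.0 * DT * t
    return T0 * (w0**2 / w2)**1.5 * np.exp(-r**2 / w2)

tau_p = 1.0 / 200e3                     # pulse period at f = 200 kHz
for t in (0.0, 0.1 * tau_p, tau_p):     # start, middle, and end of one period
    print(f"t = {t*1e6:5.2f} us: dT(r=0) = {dT(0.0, t):7.1f} K")
# The on-axis rise collapses well before the next pulse arrives, illustrating
# the weakly time-overlapping regime discussed in the text.
```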
When T(r, t) is considered within one time period, it is clear that, close to the center (r < 3w0, w0 being the beam radius at 1/e) and for weakly time-overlapping pulses (τp >> τD, the heat diffusion time), there is no crystallization: (1) not at the beginning of the period, where T is too high (several thousand degrees) for a short time, (2) nor at the end of the period, where T is too low for a long part of the period (specifically when the pulse time overlap is small). So, the "active part" of the period lies between Tmax and Tmin. Consequently, our reasoning is based on the average power delivered within a pulse period. The average temperature T̄ (the mathematical expression can be found in Appendix A, Equations (2) and (3)) is a function of the average power P̄ = Ep·f and of the number of pulses N. Note that T̄ is constant within each period. For a given P̄, the T̄ distribution is thus a bell-shaped curve around the beam axis that no longer oscillates but slowly evolves with time, i.e., with N. Now, looking at the TTT diagram in Figure 1b, several scenarios can be envisioned. First, if the maximum T̄ is just above the crystallization onset temperature, Tx, and if the irradiation time is at least equal to the incubation time, crystallization will occur. If the maximum T̄ increases slightly (through an increase of P̄), approaching Tp (the crystallization peak temperature), a larger area will be above Tx and crystallize. Finally, if T̄max > Tp, there will be a radius r > 0 where T̄ = Tp, which will crystallize first rather than the center of the bell. The apparent incubation time measured with the laser is thus a uniformly decreasing function of T̄, and hence of P̄ (see Section 4.3), as in Figure 1a.

Heat Accumulation (Multi-Pulse Irradiation) Considerations and Impact on Crystallization

In the heat accumulation regime, the spatial temperature profile evolution described above for a single pulse (Figure 1c) is expected to be modified by subsequent pulses arriving while the contributions of the former ones have not yet vanished. This will influence the temperature distribution in space. The time needed to reach a steady state of (time-)overlapping pulses can be associated with a minimal number of pulses, labeled Nssm(r) in Appendix A (Equation (5)). In this case, a memory effect is at play from one pulse to the next; the growth of nanocrystals and nanogratings appears to be of this category. This may also explain why the incubation time depends on the average power and not only on the pulse energy. Several key features can be extracted from modeling the heat deposition and temperature profile in the multi-pulse regime (see Appendix A for more details regarding the hypotheses and calculations used):

(1) When there is no pulse overlap in time, typically when the pulse period (τp) is greater than a few hundred times the diffusion time (τD ~ w0²/4DT, where w0 is the beam waist radius at 1/e and DT is the thermal diffusivity):
• Ep (and more specifically the absorbed part of Ep) sets the temperature amplitude (T0 in Appendix A, T0 = 3·A·Ep/(4·π·ρ·Cp·w0³)), which does not depend on the thermal conductivity. On the contrary, the shape of T(r, t) follows from the solution of Fourier's equation for one pulse and is defined by w0, f and the thermal diffusivity DT.
• Note that f/v controls the density of pulses per µm (PD), i.e., the pulse overlap in space. PD ranges from < 2000 to 200,000 pulses/µm in the present work.
Moreover, the number of pulses deposited at a given point is 2w0·PD (w0 being the beam radius).

(2) When there is an important overlap between pulses in time (typically τp < τD/10):
• The average power P̄ = Ep·f controls the temperature amplitude, which now depends on the thermal conductivity κ, i.e., T̄ scales as α·P̄/κ, with α being the absorption coefficient. The profile in space and time is close to the solution of Fourier's equation with a continuous-wave (CW) laser source [20], and the width of the distribution depends simply on α.
• Note that, as in the previous situation, f/v controls the density of pulses per µm (i.e., their overlap in space), equivalent to the duration of irradiation per point (= τp·2w0·f/v = 2w0/v).

(3) When the pulse overlap is intermediate (typically τp ~ τD), an average approximation may be considered for the temperature:
• Case 1: T̄(r, N) = (1/τp)·∫ T(r, t) dt over the Nth pulse period; this is the temperature averaged over the whole period.
• Case 2: we can consider that below a given temperature (Tmin) nothing happens (e.g., below Tg for crystallization, though another Tmin can be used for another process, depending on the kinetics involved), and likewise that no effect occurs above a given Tmax (e.g., Tm, the liquidus temperature, in the case of crystallization). Consequently, the average temperature is bounded as T̄(r, N) = (1/(t2 − t1))·∫ from t1(N) to t2(N) of T(r, t) dt, where t1(N) = inf(Tmax(r, N), Tm) and t2(N) = sup(Tg, Tmin(r, N)). It is worth pointing out that, for this case, t2(r, N) − t1(r, N) < τp. Note: sup(.,.) and inf(.,.) denote the largest and smallest of the quantities in brackets.

Therefore, the important parameter to consider is the pulse overlap in time, governed by the ratio Rτ = τp/τD. For the glasses considered in this study, Rτ = 18, as DT = 9×10⁻⁷ m²/s for both LNS and LNSB glasses. The number of overlapping pulses is around 10-15 (see Appendix A, Equation (5)); this means that pulses deposited more than 10 to 15 periods before the last pulse contribute negligibly to the pulse summation. Note that our experiments are thus in the regime of weakly overlapping pulses. Consequently, from the beginning of irradiation and after 10-15 pulses, a steady state is reached in which the T oscillations become repetitive. However, for most of the observed phenomena the relevant quantity is the average temperature T̄(r, N), which becomes time-independent after about 15 pulses. It is written T̄(r, ∞) in Appendix A, Equation (4), and its time-independent shape is very convenient for discussing the effect of the thermal treatment in scanning mode. In this work, we mostly focus on the effect of the scanning speed. As shown above, the latter plays a different role from the pulse energy (Ep). It must be pointed out that the heat diffusion speed (~4DT/w0, i.e., of the order of 10 m/s in our experimental conditions) is much larger than the scanning speed (typically < 1 mm/s). Consequently, the scanning speed is expected to have a negligible effect on the temperature distribution profile.

Phase Separation Induced by Fs-Laser in Lithium Niobium Silicate Glass and Choice of Laser Parameters in This Study

In addition to precipitating oriented nano-crystals in LNS glass, it is possible to induce laser-polarization-oriented phase separation upon laser irradiation, in the form of lamellar-like structures perpendicular to the laser polarization [13].
These lamellas are quasi-periodic and form an array of crystalline phase (LiNbO3-rich) and glass phase (SiO2-rich) at the sub-wavelength scale [13]. A direct consequence of this re-organization is the appearance of measurable form birefringence, which can be characterized under an optical microscope with polarized light through optical retardance measurements [13,21]. To study the effect of the scanning speed on this phase separation, we first need to find adequate laser conditions falling within the domain of orientable nanocrystals. In a previous publication on an LNS glass with a composition close to the one used herein, three laser-induced modification regimes were identified [14], as highlighted in Figure 2a using an Ep-f landscape. There is a first regime at low pulse energies where the material remains vitreous. In this regime, both the refractive index and the etching rate are modified (isotropically), which is associated with a change of fictive temperature (a parameter characterizing the degree of disorder in the glass). A second regime appears at higher pulse energies (in fact, above an average incident laser power of 0.16 W), where nanocrystallization is observed, textured such that the polar axis of the nanocrystals is perpendicular to the writing laser polarization. Then, on further increasing the pulse energy, a third regime appears (above an average incident power of around 0.32 W), where larger crystallized zones with the same orientation appear. In this case, part of the heat-affected volume (HAV) is melted and crystallizes, driven by the peripheral part of the HAV that has not melted. In our conditions, and as detailed in the Materials and Methods Section, we used two pulse energies, 0.5 and 1.0 µJ/pulse (at f = 200 kHz), corresponding to 0.1 and 0.2 W, respectively. For completeness, based on previous experiments carried out on the samples investigated herein [18], we could identify the thresholds (T1, T2, T3) of the three regimes. In Figure 2b, we therefore place these thresholds on an Ep-f landscape similar to that found for LNS. It is worth pointing out that the LNSB glass exhibits slightly lower threshold values. With the experimental conditions considered in this paper, we are in the regime of oriented nano-crystal formation.

Figure 2. (a) The three regimes of laser-induced modifications in the pulse energy-repetition rate (Ep-f) plane. The scheme on the left is for an LNS glass with a composition close to the one used in this paper [12]. (b) Regimes established from results obtained with the glass composition and pulse duration used in the present paper. Note that the energy thresholds are slightly lower for the LNSB glass. N.B., "static" means no beam scanning (dot production; the relevant parameter is the duration of irradiation), contrary to "scanning" (where the relevant parameter is the scanning speed).

Finally, in [12], the effect of the scanning speed in regime 2 was not investigated deeply, but an effect was nevertheless pointed out. At an energy of 1.4 µJ, orientability was observed at 25 µm/s with the polar axis perpendicular to the laser polarization, but at 10 µm/s another texture appeared, with the polar axis parallel to the laser polarization [17]. On the other hand, at high pulse energies (4.2 µJ), the authors of [22] obtained a perfect single texture (a third one, with the polar axis aligned along the scanning direction) in the whole heat-affected volume (HAV) for speeds lower than about 25 µm/s.
However, at this pulse energy, the laser track width (25 µm) is a bit large for a single-mode waveguide. Nevertheless, these works showed that controlling the scanning speed may enable control of the nano-/micro-crystallization of the laser track.

Materials and Methods

The two glasses investigated in this work were fabricated using the conventional melt-quenching technique. These glasses are the same as those used in Ref. [18]. The targeted composition (in mol%) of each glass is: 33Li2O-33Nb2O5-34SiO2 (labeled LNS) and 33Li2O-33Nb2O5-13SiO2-21B2O3 (labeled LNSB). Briefly, powders of Li2CO3, SiO2 and Nb2O5 (and H3BO3 for the LNSB glass) were crushed together in acetone to homogenize the powder mixture. Subsequently, each powder batch was placed inside a platinum crucible and dried at 200 °C for ~2 h. Following this, the temperature was increased to 1400 °C (LNS) or 1350 °C (LNSB) and held there for 1.5 h. Then, the molten mixture was quenched between two metal plates preheated to around ~350 °C and maintained at this temperature for about 30 min. The quenched glass formed a plate about 1 mm thick. Finally, a few pieces of each glass sample were polished on the top and bottom surfaces until optical roughness was reached. Each glass sample was then placed on a motorized translation stage and irradiated using a femtosecond laser (Satsuma, Amplitude Systèmes Ltd, Pessac, France; λ = 1030 nm, pulse duration = 250 fs, numerical aperture (NA) = 0.6). Each irradiation was performed with the laser focal point situated approximately 240 µm (in air) below the sample top surface. The translation stage enables displacement in the XY plane, while the laser beam propagates along the Z direction. The experimental setup is displayed in Figure 3. Here, the reference for the writing laser polarization direction is given by +X in the XY plane (illustrated by e in Figure 3); the scanning direction is identified as S in the same figure.

Figure 3. Experimental laser setup, and convention for the reference on the sample and its relation to the reference on the laser. S and e correspond to the scanning direction and the polarization direction, respectively. Note: in this work, the Yy configuration corresponds to e parallel to S (e along Y), and the Yx configuration to e perpendicular to S (e along X).

Lines were written in the plane perpendicular to the laser propagation direction (i.e., the XY plane) at various scanning speeds (from 5 to 600 µm/s). A half-wave plate placed along the beam path controlled the linear polarization. Lines 5 mm long (at f = 200 kHz) were written inside each glass sample (LNS and LNSB), using two laser polarization configurations: parallel (labeled Yy) and perpendicular (labeled Yx) to the laser scanning direction (along the Y axis, or S). Other parameters were varied, including the pulse energy (0.5 and 1.0 µJ) and the scanning speed. It is worth pointing out that, prior to each scan, the sample was irradiated with a static beam for a time at least equal to the incubation time. The minimum scanning speed was set to 1 µm/s, and the maximum scanning speed was determined as the speed at which no green light could be detected during the irradiation. This green light is characteristic of SHG induced by LiNbO3, and its absence indicates that crystallization no longer occurred during the inscription (typically at speeds of a few hundred µm/s at a pulse energy of 1 µJ).
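The pulse-overlap bookkeeping implied by these scanning parameters can be checked with the short sketch below, using the paper's repetition rate and an assumed beam radius w0 ~ 1 µm.

```python
# Pulse density PD = f/v, pulses per point 2*w0*f/v, and dwell time per point
# for the scanning parameters used in the experiments (f = 200 kHz).
f  = 200e3   # repetition rate, Hz
w0 = 1.0     # assumed beam waist radius, um

for v in (1, 5, 75, 600):                # scanning speeds, um/s
    PD      = f / v                      # pulse density, pulses per um
    n_point = 2 * w0 * f / v             # pulses seen by one point (diameter 2*w0)
    dwell   = 2 * w0 / v                 # irradiation time per point, s
    print(f"v = {v:4d} um/s: PD = {PD:9.0f}/um, "
          f"N(point) = {n_point:9.0f}, dwell = {dwell:8.4f} s")
```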
To investigate the differences between the lines inscribed in LNS and LNSB as the scanning speed is varied, we used an Olympus BX51 optical microscope (Olympus). A first observation was made under white light to gain general insight into the laser track transparency, homogeneity and dimensions. A second analysis was performed under polarized light to detect birefringence, characteristic of textured nano-scale phase separation, and the neutral axis directions. Using a first-order (full-wave) retardation waveplate, the slow and fast axes of the lines (if birefringent) were determined. Finally, we measured the linear retardance (R) at the center of the irradiated lines using the De Sénarmont compensator technique. R is proportional to the birefringence (B) through the relation R = B × L, where L is the thickness of the birefringent object. Finally, to better understand the origin of the differences observed in optical microscopy, the laser track cross-sections of the LNSB sample (at different speeds and at 0.5 µJ/pulse) were examined under a field-emission-gun scanning electron microscope (FEG-SEM ZEISS SUPRA 55 VP, Carl Zeiss Microscopy GmbH) equipped with an electron backscatter diffraction (EBSD) detector. The samples were side-polished perpendicular to the scanning direction until the laser cross-section was exposed. The crystalline texture, size, and nano-/micro-texturing could thus be observed.

Effect of Scanning Speed: Optical Microscope Observation under White Light Illumination (Transmission Mode)

In Figure 4a, optical microscope images of sections of the inscribed lines (Yx configuration, transmission mode, top view, white light illumination) are displayed for the LNS and LNSB samples. It is worth pointing out that the black lines (at low speeds, typically < 25 µm/s) were associated with a bright, clearly visible green light during the laser irradiation, and the brown-colored lines with a less vivid (though still visible) green light. For the nearly transparent lines (at the highest scanning speeds), no green light could be detected during the irradiation. Detection of SHG is a direct indication of whether LiNbO3 crystals were precipitated during laser scanning. As can be seen from Figure 4, the samples are lightly colored (orange). Absorption spectra were recorded and are presented in Appendix B, Figure B1. They reveal a linear absorption tail starting below 500 nm and a cutoff wavelength in the UV region at around 350 nm for both glasses. However, no absorption band that could be a source of strong linear absorption is detected at the laser wavelength. From Figure 4a, a very different behavior is observed for each glass sample. For LNS at 0.5 µJ/pulse, no crystallization could be detected at scanning speeds above ~50 µm/s, with a sharp transition from crystallized lines to no crystallization. For LNSB glass, crystallization was detected up to ~225 µm/s for the same pulse energy and configuration; additionally, the transition from dark lines to transparent ones was gradual. A comparison between LNS and LNSB at higher pulse energy (1.0 µJ) shows no crystallization at scanning speeds beyond ~100 µm/s for LNS glass, versus ~600 µm/s for LNSB. Moreover, the line width fluctuates in a periodic fashion for LNS, whereas the fluctuation is very weak for LNSB. This is rather similar to what has been reported in [23] using a CW laser (1080 nm, 1.4-1.8 W, NA = 0.8) at the surface.
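Since the measured retardance is interpreted below as form birefringence of a sub-wavelength lamellar stack, the classical two-phase formulas (the Rytov / Born-and-Wolf limit) can be sketched as follows; the refractive indices, fill fraction and thickness are illustrative assumptions, not measured values.

```python
import numpy as np

def form_birefringence(n1, n2, f1):
    """Effective indices of a sub-wavelength lamellar stack in the
    form-birefringence limit (Rytov / Born & Wolf)."""
    f2 = 1.0 - f1
    n_par  = np.sqrt(f1 * n1**2 + f2 * n2**2)        # E parallel to the lamellas
    n_perp = 1.0 / np.sqrt(f1 / n1**2 + f2 / n2**2)  # E perpendicular to them
    return n_par, n_perp

# Illustrative indices: crystallized (LiNbO3-rich) vs. glassy (SiO2-rich) lamellas.
n_par, n_perp = form_birefringence(n1=2.25, n2=2.05, f1=0.5)
B = n_perp - n_par        # negative uniaxial behaviour of the stack
L = 20e-6                 # assumed thickness of the birefringent region, m
print(f"B = {B:.4f}, retardance R = B*L = {abs(B) * L * 1e9:.0f} nm")
```

With these assumed values, the sketch yields a retardance of the same order as the ~100-200 nm measured on the written lines, which supports the form-birefringence interpretation.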
Finally, the averaged heat-affected volume (HAV) width of each line is reported in Figure 4b for the Yy and Yx configurations. In both LNS and LNSB, increasing the pulse energy causes, to first order, an increase of the HAV width. For LNSB glass at low pulse energy, the laser track width slowly decreases as the scanning speed increases. On the other hand, at 1.0 µJ/pulse, the laser track width increases, reaches a maximum at around 150-250 µm/s, and then progressively decreases.

Birefringence Measurements and Polarization-Dependent Orientation

Based on the observations under optical microscopy, we can classify three line inscription regimes: one at low speed (dark lines, with little to no light passing through them), one at medium speed (brownish lines, with light passing through them), and one at high speed (no detectable crystallization, with light fully passing through them). These three speed regimes are illustrated in Figure 5a for lines written in LNSB at increasing speeds: 1, 75, and 350 µm/s. To further investigate the differences between these speed regimes, we used different illumination conditions: (1) transmission with natural light, in which the gross modifications are seen; (2) transmission with crossed polarizer (P) and analyzer (A) (polarized light); (3) the same as (2) with a full waveplate inserted, oriented perpendicular to the scanning direction; see Figure 5a. In condition (2), the background (glass) appears dark whatever the sample orientation, owing to the isotropy of the glass. The lines are also dark when placed parallel to either P or A. In contrast, when the lines (low- and medium-speed regimes) were rotated to any angle other than parallel to P or A, light passed through them, with a maximum at an angle of 45° with respect to both the A and P axes (to within ±5°). This contrast was strongest in the medium-speed regime (75 µm/s in Figure 5a), indicating that the neutral axes are parallel and perpendicular to the line scanning direction. In condition (3), with the full retardation waveplate inserted and the neutral axes at 45° from A and P, the slow axis can be distinguished from the fast axis: when the slow axis of the line is parallel or perpendicular to the slow axis of the full retardation plate (at 550 nm), it appears blue or yellow, respectively. Here, in all the investigated configurations, the slow axis of the written line is found perpendicular to the writing laser polarization. This agrees with [21] and is attributed to (i) the formation of quasi-periodic lamellas with two different refractive indices, and (ii) not to the birefringence of the oriented LiNbO3 nanocrystals contained in one set of the lamellas. Finally, the retardance (R) was measured quantitatively at the center of each irradiated line, as reported in Figures 5b,c. In LNS glass, the retardance arises from the formation of a sub-wavelength, lamellar-like structure with a periodic alternation of LiNbO3-rich (crystallized) regions and LiNbO3-poor (glass-rich) ones, leading to form birefringence and hence retardance [21]. From Figures 5b and 5c, we observe general trends in the retardance of the lines. At the lowest speed (1 µm/s), the measured retardance is null for each glass and at every pulse energy (0.5 or 1.0 µJ). In LNSB, as the scanning speed is progressively increased, the retardance also increases until it reaches a maximum. Lowering the pulse energy (from 1.0 to 0.5 µJ) results in an enhancement of the retardance.
In LNSB, this increase corresponds approximately to a factor of 4 (up to 200 nm). At 0.5 µJ the retardance disappears beyond 350 µm/s, whereas at 1.0 µJ it is roughly constant up to 600 µm/s. For LNS, the maximum is always roughly 100 nm; however, whereas at 0.5 µJ the retardance increases from a few µm/s up to 50 µm/s, at 1.0 µJ it starts to increase later, and in both cases it decays steeply at 200 µm/s. Although the lines are not crystallized at the higher speeds, crystallization does occur at low speeds. The low-speed regime therefore exhibits a birefringent response, but the white light is scattered as it travels through the cross-section, owing to the presence of large crystallites along the laser track. Finally, no effect of the writing polarization on the retardance amplitude is detected, and the maximum retardance response found for LNSB is about twice as large as that of LNS.

Investigation of the Scanning Speed Effect on the Laser Track Morphology in LNSB Glass

To better understand the various trends observed when the scanning speed is varied, we investigated the morphology and nanostructure using scanning electron microscope (SEM) imaging and electron backscatter diffraction (EBSD). In Figure 6a, we report SEM micrographs of the laser track cross-sections (0.5 µJ/pulse, Yy configuration) and EBSD diffraction pattern images taken in the LNSB sample at three different speeds: 1, 10, and 75 µm/s. As the lamellas can be rotated by rotating the laser polarization, the Yy configuration was chosen in order to investigate the crystallization in the plane of the lamellas (see, for instance, [14]). We first confirm the presence of LiNbO3 nanocrystals; no other crystalline phase was detected, i.e., all Kossel figures were indexed. Secondly, we observe that the crystallization is rather homogeneously distributed. As seen before in a similar analysis performed on SiO2 [24], some regions are not phase separated. We can suppose that this is the case at the center of the laser track, especially at 25 and 50 µm/s. However, we cannot completely rule out the possibility of partially probing areas lying between two crystalline zones, which would then appear amorphous. We observe a concentration of nanocrystals at the center of the laser track head as the speed increases, whereas at low speed we observe larger regions of similar crystalline orientation, i.e., of similar color in the inverse pole figure (IPF). The average size of the regions of the same orientation (single texture) as a function of scanning speed is reported in Figure 6b (in magenta), along with error bars corresponding to the standard deviation (providing information on the size distribution). We also report the evolution of the number of regions as a function of scanning speed. This figure emphasizes the observation of larger crystalline domains with the same orientation in the low-speed regime relative to the high-speed one. Interestingly, beyond 25 µm/s, we observe a plateau in the crystalline size evolution (average size centered at 100 nm), with a narrower distribution (small standard deviation). This is accompanied by a rather constant number of single-texture regions: approximately 600 to 800 regions from 10 to 175 µm/s, while this number drops to 300 at the lowest scanning speed (1 µm/s). Lastly, in Figure 6c, the pole figures of the LiNbO3 c-axis reveal a dominant texture in which the c-axis is preferentially perpendicular to the laser polarization for all laser tracks.
This is consistent with previous works (see, for instance, [10] and [18]). However, we observe that, as the scanning speed increases, the preferential orientation of the c-axis becomes less contrasted. In addition, SEM imaging was performed on the laser track cross-section of the LNSB sample, this time in the Yx configuration. This allows any lamellar-like nanostructuring, already suspected from the observation of birefringence, to be seen. This variety of nanogratings, previously observed in LNS glass [13] within the crystallized region, "grows" perpendicular to the laser polarization. The SEM micrographs reported in Figure 7 show some features worth pointing out. First, the observed crystallized volume (in yellow) becomes smaller as the scanning speed increases. Additionally, the length, but not so much the width, of the region where lamella-like structuring is observed (framed by the orange dashed line) progressively decreases as the scanning speed increases. At low speeds (< 25 µm/s), there is a complete overlap between the crystallized and lamellar regions, while the overlap is much reduced at higher speeds. Similar observations have been reported in [12]: at low pulse energy (0.5 µJ/pulse), the lamella area coincides with the crystallized one, but when the energy is increased, the lamella area becomes smaller with respect to the crystallized one, i.e., the overlap is reduced. Note that the lamellas forming the nanogratings result from direct interaction with the laser light, whereas the crystallization does not.

Temperature and Its Distribution Profile during Irradiation in LNS and LNSB Glasses

The laser energy deposition inside the material is achieved through a nonlinear, multiphotonic absorption effect. The absorbed energy is then redistributed in the lattice in the form of heat, typically with a Gaussian-like shape (in space). After a short nonlinear stage, the absorbed energy fraction is independent of the pulse energy [25]. One advantage of using a converging beam is that the absorption is triggered in depth and is thus controllable. In the pulsed regime, the time for the instantaneous temperature to reach a steady state for our compounds and laser conditions can be shown from Fourier's equation to be around 270·τD (see Appendix A), with τD = w0²/4DT, w0 ~ 1 µm and DT(LNS) = 9×10⁻⁷ m²/s [21]. This gives τD ≈ 0.28 µs, and thus 76 µs to reach the steady state, or 15 pulses in our conditions (200 kHz). This indicates that we have weak temporal overlapping between pulses and that we are close to the so-called "single pulse regime" discussed in the Introduction. Moreover, the thermal diffusivity DT = κ/(ρ·Cp), where κ is the thermal conductivity, ρ the density, and Cp the heat capacity, should not be drastically different when moving from LNS to LNSB. Indeed, the partial substitution of 21 mol% SiO2 by B2O3 in the glass system is not expected to change the thermal diffusivity much (see, for instance, Table 8.20 in Ref. [26]). Additionally, the ρ and Cp values were extracted from the SciGlass software (e.g., ρ(LNS, 20 °C) = 3482 kg/m³). Consequently, the spatial temperature distribution profiles of the LNS and LNSB glasses are very similar for a given set of conditions. This is in good agreement with our experimental results, where the HAV widths appear not to vary much with the chemical composition (Figure 4).
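The thermal timescales quoted above can be re-derived in a few lines; the sketch below uses the values given in the text (w0 ~ 1 µm, DT = 9×10⁻⁷ m²/s, f = 200 kHz).

```python
# Recompute the timescales quoted in the text: tau_D = w0^2/(4*DT),
# steady state after ~270*tau_D, converted to a pulse count at f = 200 kHz.
w0, DT, f = 1e-6, 9e-7, 200e3
tau_D = w0**2 / (4 * DT)      # heat diffusion time, s (~0.28 us)
t_ss  = 270 * tau_D           # time to reach the steady state, s (~75 us)
print(f"tau_D = {tau_D * 1e6:.2f} us, t_ss = {t_ss * 1e6:.0f} us "
      f"= {t_ss * f:.0f} pulses, R_tau = tau_p/tau_D = {(1 / f) / tau_D:.0f}")
# Output reproduces the paper's numbers: ~15 pulses to steady state, R_tau = 18.
```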
Nucleation and Growth in LNS and LNSB Glasses: Tendencies and Comparisons

One interesting parameter for investigating differences in crystallization kinetics between the two glasses is the reduced glass transition temperature, Tgr, defined as Tgr = Tg(K)/TL(K), where Tg is the glass transition temperature and TL is the liquidus temperature (or the melting temperature). To obtain Tg values, we ran differential scanning calorimetry (DSC) scans on the samples (LNS and LNSB), in addition to two other glasses with incremental amounts of B2O3. The results are reported in Figure 8. The Tg could not be easily distinguished for LNS glass, and we therefore took the value provided by SciGlass (Tg ≈ 579.1 °C). On the other hand, we obtained Tg ≈ 520 °C for the LNSB glass with 21 mol% of B2O3. For both glasses, we used TL = 1257 °C (the melting temperature of LiNbO3, as in [27]); we obtained Tgr(LNS) ≈ 0.56 and Tgr(LNSB) ≈ 0.52. Additionally, and for completeness, we observe that the progressive addition of B2O3 shifts the crystallization peak down to lower temperatures: by about 150 °C between LNS and LNSB with 21 mol% B2O3. The Tgr values are only indicative, and further work (including viscosity measurements) is required to ascertain them more precisely, in particular the melting temperatures (which in a glass can be defined as the temperature at which log(viscosity, Pa·s) = 1). This will be important in the future, since the melting temperature of LNSB glass is lower than that of LNS, which would tend to shift the Tgr value of LNSB slightly towards higher values. These caveats aside, the differences in Tgr between the glasses can be, at least qualitatively, compared to the maximum nucleation rate (Imax) and the time lag of nucleation (incubation time) reported in Figure 4.3 of [28]. Interestingly, these observed differences in Tgr can lead to Imax values several orders of magnitude higher for LNSB relative to LNS, while the incubation time is correspondingly two orders of magnitude lower [28,29]. The detection of SHG (through intense green light detection) upon laser irradiation is much faster for LNSB than for LNS (up to 20× faster) [18], indicating a faster nucleation rate. Additionally, from Figure 2 of [29], the maximum growth rate can be crudely estimated. For LNS, it would be in the 100 µm/s range, while it would be higher for LNSB by an order of magnitude. Once again, these trends agree well with the experimental observations: at a pulse energy of 1.0 µJ, crystallization (through SHG observation) is observed up to 100 µm/s in LNS, while it is detected up to 600 µm/s in LNSB. For completeness, Tgr is determined for a glass excluding migration of chemical species. If the latter occurs (e.g., in Sr2TiSi2O8 glasses as in [30] or other silicate glasses [3]), the situation becomes more complex and growth becomes composition- and temperature-dependent. Another interesting point is that the net crystal growth rate in a glass is directly related to the mobility of the atoms, i.e., the diffusion coefficient (Wilson-Frenkel theory [31]), which in turn is proportional to the inverse of the viscosity through the Stokes-Einstein relation [28]. Moreover, the incorporation of B2O3 into a silicate glass melt is known to decrease the viscosity [26]. Another observation is that the LNSB glass exhibited occasional surface crystallization upon cooling, whereas none was observed for LNS glass. These are the first indications of the more pronounced tendency of LNSB to crystallize relative to LNS. Additionally, we can estimate the Hruby criterion kH = (Tx − Tg)/(Tm − Tx), which is indicative of the tendency of a glass to crystallize (especially upon heating) [32]. In this criterion, Tx stands for the onset crystallization temperature, Tg for the glass transition temperature, and Tm for the melting temperature (here taken as the Tm of LiNbO3). For LNSB, Tg ≈ 520 °C and Tx ≈ 560 °C. For LNS, Tg ≈ 579.1 °C (SciGlass) and Tx ≈ 700 °C (see for instance Ref. [33]). This yields kH ≈ 0.06 for LNSB and kH ≈ 0.22 for LNS. The smaller kH value for LNSB (≈ 4 times lower than for LNS) indicates its higher tendency to crystallize.
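The two figures of merit above are simple ratios of characteristic temperatures; the short sketch below reproduces both calculations with the values quoted in the text (temperatures are converted to kelvin for Tgr):

```python
# Reduced glass transition temperature and Hruby criterion (values from the text).
def t_gr(tg_c: float, tl_c: float) -> float:
    """Tgr = Tg(K) / TL(K)."""
    return (tg_c + 273.15) / (tl_c + 273.15)

def hruby(tg_c: float, tx_c: float, tm_c: float) -> float:
    """kH = (Tx - Tg) / (Tm - Tx); a smaller kH means a stronger tendency to crystallize."""
    return (tx_c - tg_c) / (tm_c - tx_c)

TM = 1257.0  # melting temperature of LiNbO3 (deg C), Ref. [27]

print(f"Tgr(LNS)  = {t_gr(579.1, TM):.2f}")          # ~0.56
print(f"Tgr(LNSB) = {t_gr(520.0, TM):.2f}")          # ~0.52
print(f"kH(LNS)   = {hruby(579.1, 700.0, TM):.2f}")  # ~0.22
print(f"kH(LNSB)  = {hruby(520.0, 560.0, TM):.2f}")  # ~0.06
```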
What Does Limit Crystallization as the Scanning Speed Is Increased?

To answer this question, let us first briefly recall what was learned in the Introduction:

• The incubation time ti at a given temperature is the time needed for the glass to produce a volumetric number of nanocrystals that gives rise to a detectable SHG intensity. This time provides information on the TTT curve that bounds the domain of crystallization within the framework of the homogeneous nucleation assumption [34] (see Figure 1b).

• The spatial temperature distribution T(r, t(N)) (Figure 1c) has a maximum directly related to the pulse energy (Ep), the heat capacity (Cp) and the irradiated volume (width and length); this situation is valid for pulses with no overlapping in time.

• An average curve can be obtained by averaging T(r, t(N)) over the pulse period: T̄(r, N) = (1/τp) ∫ T(r, t(N)) dt, where the integral runs over one period τp. This curve becomes stationary after about 270·τD, i.e., for N > Nss(0) = 15 pulses (see Appendix A, Equation (5)).

• The time-averaged temperature curve is a convenient means of considering the spatial temperature distribution and of understanding the effective incubation time (teff) obtained under laser irradiation. The latter is close to the shortest incubation time in the accessible crystallization range, from Tg to the maximum average temperature (Tmaxi) controlled by the average light power P̄: teff(P̄ = Ep·f) = min ti(T) for Tg ≤ T ≤ Tmaxi, where ti(T) is the domain boundary in Figure 1b.

Thus, for a low average power P̄, T̄ is close to Tg and teff is large; as P̄ increases, teff decreases drastically down to a value defined by the TTT curve (at the temperature Tp), and then decreases only weakly upon further increase of P̄. This explains the shape of the curve shown in Figure 1a. When the beam is moving (say along x), and considering that the steady state is reached rapidly, T̄(x − vt, y, z, ∞) is a moving bell-shaped curve. Considering one point in the material, its temperature travels along this curve, increasing and then decreasing: this is the local treatment, and the bell shape known in space is translated into the time domain. This is what we call the thermal treatment curve. Following these considerations and based on our results, in Figure 9 we sketch the TTT diagrams for LNS (Figure 9a) and LNSB (Figure 9b), taking into account the differences in Tg, Tx, Tp and teff, while Tm is assumed constant. In this figure, several heat treatments (heating-cooling curves) are drawn; note that the mean power is fixed, so the maximum temperature is constant and does not change as a function of the scanning speed, as discussed earlier. Consider first the diagram of Figure 9a, which is for LNS glass. The scanning speed (v) is chosen so that the heat treatment does not reach the crystallization domain.
Now, if we consider scanning at a speed of v/2, the treatment may penetrate the crystallization domain (i.e., the nuclei have time to form and to grow). Conversely, at higher scanning speeds (e.g., 2v in the sketched diagram) no crystallization will be detected. The situation is different for LNSB, as the incubation time is much shorter and Tp much lower. This has the effect of shifting the crystallization domain to lower temperatures and closer to the ordinate axis, as illustrated in Figure 9b. Therefore, for the same scanning speed v, the treatment curve will now penetrate the crystallization domain. It is thus possible to increase the scanning speed up to a value given by the experiment (i.e., 5v) to reach the critical value beyond which crystallization stops.

Figure 9. Scheme of the effect of the scanning speed on the crystallization for the two glasses ((a) LNS and (b) LNSB) investigated in this paper. 1) The crystallization domains are sketched based on the following rules: they are limited by Tg on the bottom side and Tm on the top side; the top side of the nucleation domain is not changed, because it depends only on the thermodynamic driving force of the nuclei, which we assume to be the same for both glasses [34]. However, the growth rate increases with the strong decrease of the viscosity. These changes lead to a strong decrease of the incubation time. 2) The maximum average temperature is defined by the pulse energy and the repetition rate (Ep × f). It does not depend on the scanning speed (v) and depends very little on the chemical composition. Whatever the speed, the coincidence of the time scales between the TTT and treatment curves is fixed when the treatment curves cross Tg, considered to be the starting time for crystallization (what happens below Tg is expected to have an insignificant effect on the crystallization kinetics).

Evolution of the Heat-Affected Volume (HAV) Width and Length with Respect to the Scanning Speed v

Based on our previous discussion, we can now better understand the evolution of the HAV width with respect to the scanning speed (as observed in Figure 4, for instance). To observe crystallization, the temperature should overcome Tx, and this temperature depends slightly on the irradiation time (or its equivalent under laser irradiation: the number of pulses). This can be visualized in Figure 10: when the heat treatment time increases, crystallization becomes possible at slightly lower temperatures (see the red portion of the TTT diagram). Consequently, the slight decrease of the HAV width with v observed in Figure 4b is attributed to the combination of 1) the incubation time needed to reach the crystallization growth boundary at a given temperature and 2) the decrease of the deposited pulse density as v is increased. There is also the possibility that a smaller number of pulses creates fewer defect levels absorbing within the bandgap [5], decreasing the energy transfer from the light through linear absorption and thus decreasing T and the HAV width. The length (or depth) of the HAV after the steady state is, like the width (defined by the beam waist w0), set by the light absorption length (i.e., 1/α at 1/e, with α being an effective absorption coefficient beyond the non-linear absorption stage [25]). The related diffusion time is τDz ≈ 1/(4α²DT). As 1/α >> w0 (by at least a factor of 10), τDz >> τDr, and the steady-state temperature along the beam propagation axis is reached much later (two to three orders of magnitude later) than along the radial direction.
The total number of pulses deposited at a given point (at f = 200 kHz) is above 400,000 for v = 1 μm/s and about 4000 for v = 100 μm/s. This is larger than the number of pulses needed to reach the steady-state temperature (~1500). In this case, the length follows a trend similar to that of the width, so the length shortens as the speed increases. This is no longer the case at speeds such as v = 500 μm/s, for which only about 800 pulses are deposited. The steady state is then not reached, the temperature stays below its steady-state value, and the length over which the averaged temperature T̄(r, N) reaches the crystallization onset decreases even further.

Figure 10. Sketch explaining the effect of scanning speed on the heat-affected volume (HAV) width. As the scanning speed decreases, the number of pulses deposited in a given volume increases. Consequently, the crystallization onset is reached at lower temperatures.

Nanogratings Region Inside the HAV Area, but Not Always Coinciding with It

In Figure 7, we highlighted the difference between the HAV area and the area where lamellas are present. At low speed (< 25 µm/s) and low energy (0.5 µJ/pulse), both regions coincide, but this is no longer the case when either the energy or the speed is increased (> 25 µm/s). We mentioned before that the HAV dimensions are related to the temperature distribution, because the HAV boundary is defined by the minimum temperature needed to induce an effect (e.g., Tx for crystallization). When the temperature amplitude increases, the shape of the temperature distribution (related to the diffusion time) remains the same, but the radius at which T reaches Tx is pushed further out. On the other hand, the width of the nanograting region (where lamellas are present) does not change much. Indeed, it is defined by the width of the beam, since nanogratings are enabled by a light-plasma interaction. At low energy and low speed, the top of the temperature distribution penetrating the crystallization domain (as for speed v/2 in Figure 9a) may define a minimum width that coincides with the width of the beam. At higher energy, the amplitude of the temperature distribution increases and the HAV width becomes larger than the beam width. When the speed is increased, the temperature distribution does not change much, but the lamellar-structured volume is likely to shrink. The latter can be explained with the following arguments (a numerical sketch follows this list): 1) The length and the number of the nanoplanes increase with the number of deposited pulses (2w0·PD) above some threshold number, which is why the birefringence also increases [35]. 2) The form birefringence also increases with the pulse energy above a threshold. We can thus assume that the self-organized volume depends on the dose exceeding a threshold, I(x,y,z)·PD − D0, where I(x,y,z) is the local light intensity and D0 is the dose threshold above which self-organized structures appear. 3) Noting that I(x, y, z) = I(x, y, z0)·exp(−α(z − z0)), where z0 is the depth at which the nanostructure begins to appear [36], 4) the extension of the self-organized volume is therefore defined by the local dose exceeding the minimum dose D0, i.e., PD·I(x, y, z0)·exp(−α(z − z0)) ≥ D0, or z − z0 ≤ (1/α)·ln(PD·I(x, y, z0)/D0). Hence, when v increases, PD decreases, and so does the self-organized volume, as observed experimentally.
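To make the orders of magnitude explicit, the sketch below computes the number of pulses deposited per spot (taking the dwell time as 2w0/v, consistent with the figures quoted above) and the dose-limited depth of the self-organized volume from argument 4). The values of α and of the ratio PD·I(x, y, z0)/D0 are illustrative placeholders, not measured quantities:

```python
import math

f = 200e3    # repetition rate (Hz)
w0 = 1e-6    # beam waist radius (m)

def pulses_per_spot(v: float) -> float:
    """Number of pulses deposited at a point: dwell time (2*w0/v) times f."""
    return 2 * w0 / v * f

for v in (1e-6, 100e-6, 500e-6):              # scanning speeds (m/s)
    print(f"v = {v*1e6:5.0f} um/s -> {pulses_per_spot(v):8.0f} pulses")
# -> 400000, 4000 and 800 pulses, as quoted in the text.

# Dose-limited depth of the self-organized volume:
# z - z0 <= (1/alpha) * ln(PD * I(x, y, z0) / D0).
alpha = 1e5            # effective absorption coefficient (1/m), illustrative
dose_ratio = 50.0      # PD * I(x, y, z0) / D0, illustrative
depth = math.log(dose_ratio) / alpha
print(f"dose-limited depth ~ {depth*1e6:.0f} um")  # shrinks as v grows, since PD drops
```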
These "particles" with a size around λare strongly scattering light resulting in "black" lines in natural light transmission as we observe in Figure 5a. On the other hand, increasing the scanning speed will avoid crystallites to grow, and they will only be a few ten percent (in size) of the microscope light wavelength (550 nm). Interestingly, we observed that this size distribution of crystallized domains is rather similar for lines written at speeds > 50 µm/s in LNSB ( Figure 6). Under optical microscopy, there is strong light scattering but nevertheless some light still passes through them. This tuning of scanning speed may be an efficient way to control photonic device scattering losses related to the crystals' size. From Figure 7, the presence of lamellar-like structures is taken responsible of the birefringent response of the inscribed lines [21]. The volume occupied by the lamellar-like structure becomes smaller as the scanning speed increases (and the retardance decreases). Maximizing this lamellar-like structuring would therefore bring higher retardance values and low speeds may be preferred. However, an associated low speed tradeoff is the formation of larger regions with single texture, which ultimately would cause light scattering as well. Therefore, there exists a sweet spot in the medium speed range regime where the laser track retardance response is maximized. This sweet spot is much wider for LNSB relative to LNS: typ. by a factor of 4 or 5 as it can be seen in the retardance measurements reported in Figure 5b,c. Based on SEM-EBSD analysis, we observed that both the width and the length of the crystallized volume change slightly as the scanning speed is varied. We have determined the orientation distribution of the nanocrystals by SEM-EBSD. We found that a preferential texture with LiNbO3 c axis perpendicular to laser polarization could be preserved when scanning speed was increased, but its contrast decreased. In addition, while the width of the nanogratings region does not change much, the length is drastically reduced when the scanning speed increases. By using a simple approach based on temperature distribution averaged on a pulse time period, along with its characteristics in a stationary state, we showed that the irradiations were performed in the regime of weak time overlapping between pulses or heat accumulation: only 15 pulses are needed to reach 94% of the temperature steady-state value. Key results can be summarized as follows: • We identified the variations of the thermal properties (Tx, Tp, incubation time) with respect to the glass chemical compositions. From these data, we deduced the variation of the crystallization domain in the time-temperature framework. The lower part of the crystallization domain in the TTT diagram (growth limited) is shifted to lower temperatures when B2O3 is added to the glass matrix. As a result for LNSB, the minimum incubation time is reduced by a factor larger than 5 with respect to LNS. • We stated that the effective incubation time is better translated by a number of pulses whatever the repetition rate and is a function of the average laser power (rather than pulse energy only) and the number of pulses (equivalent to irradiation duration). • We specified the concepts of spatial pulse overlap (pulse density that means irradiation time) and time pulse overlap, which govern the heat accumulation and the average temperature. 
• We explained the scanning-speed limit for inducing crystallized lines as follows: there exists an effective irradiation time, related to the width of the average spatial temperature distribution, which is translated into the temporal domain during scanning. When the heat treatment time becomes smaller than the incubation time, crystallization cannot occur (a small numerical illustration of this criterion is given after this list). Interestingly, we can deduce that faster scanning speeds that still preserve the nano-crystallization are possible if the mean power is increased, as illustrated in Figure 11. The objective is that the treatment curve crosses the nose of the crystallization domain. If the speed is increased, the mean power must be increased accordingly. Quite rapidly, however, the treatment curve overcomes the melting temperature, and the mechanism evolves from solid-solid to liquid-solid crystallization.

• We explained the weak decrease of the HAV width when the scanning speed is increased (or the decrease of the dot diameter with decreasing irradiation time in static irradiation) by the decrease of the incubation time with the increase of the onset temperature Tx. Crystallization thus starts at a higher temperature, which corresponds to a radius closer to the center. Two processes explain the decrease in length: the same phenomenon as above plus, in addition, the decrease of the temperature amplitude due to a smaller absorption coefficient at smaller pulse densities.

• The width of the nanograting region coincides with the HAV at low speeds and low energy, because only a small part of the temperature distribution (in fact, its top) touches the crystallization domain. Assuming that these self-organized structures depend on the light dose (I(x,y,z)·PD) exceeding a threshold dose (D0), when the speed is increased the local dose decreases at every point of the irradiated volume. Consequently, the volume where the dose overcomes D0 also decreases.

Finally, we observed that the birefringence response of the inscribed line is a function of the scanning speed. This is related to the variation of the nanograting region volume, as explained above. In contrast, the crystallized volume varies only weakly with the scanning speed, as does the nanocrystal density. As a matter of fact, the latter is defined by the average temperature distribution, which is fixed by both the pulse energy and the repetition rate. However, the orientation contrast is larger at low speed, and this is likely due to the larger volume occupied by the nanogratings.
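The scanning-speed criterion in the first bullet can be stated compactly: a material point stays inside the hot zone for roughly Δt ≈ W/v, where W is the width of the averaged temperature distribution, and crystallization requires Δt to exceed the effective incubation time teff. The sketch below evaluates this criterion for illustrative values of W and teff (neither is a measured quantity here; they stand in for the quantities discussed above):

```python
# Crystallization criterion during scanning: residence time in the hot zone vs teff.
W = 4e-6        # width of the averaged temperature distribution (m), illustrative
t_eff = 1e-2    # effective incubation time (s), illustrative

def crystallizes(v: float) -> bool:
    """A point crystallizes if its residence time in the hot zone exceeds t_eff."""
    residence_time = W / v
    return residence_time >= t_eff

v_max = W / t_eff  # critical scanning speed beyond which crystallization stops
print(f"critical speed ~ {v_max*1e6:.0f} um/s")
for v in (50e-6, 200e-6, 800e-6):
    print(f"v = {v*1e6:4.0f} um/s -> crystallized: {crystallizes(v)}")
```

Raising the mean power widens the hot zone and shortens teff, which is why both the criterion and Figure 11 predict that faster writing remains possible at higher average power.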
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Foreword: the formulas below are deduced from Fourier's equation with temperature-independent and uniform coefficients, for a fs multipulse irradiation, under the hypothesis that the pulse energy is deposited in a time short enough to precede active thermal diffusion, with a spherical geometry around the focus. This simplified model is sufficient for reasoning and yields simple expressions. After a number of pulses Nss, the system reaches a steady state, beyond which the temperature at a distance r from the center of the focus oscillates between Tmin(r, ∞) and Tmax(r, ∞) within each period [19,37]. In addition, when the temperature is averaged over the period, the time dependence arising from the pulsed character disappears, and discussions of the structural modifications become easier. We point out that a paper to be published in the near future will extensively describe the equations established below, which express analytically the results obtained numerically in Refs. [19,37] and elsewhere. To help the reader, the designation, definition, and units of each variable employed in the equations below are listed in Table A1 at the end of Appendix A.

The instantaneous temperature increase T above room temperature, at time t and distance r from the center of the beam, can be shown [38] to be a sum of single-pulse diffusion contributions (Equation (1)), with τp = 1/f, τD = w²/(4DT), Rτ = τp/τD and T0 = 3·A·Ep/(4·π·ρ·Cp·w³), where f is the pulse repetition rate, w is the beam waist radius (at 1/e) and A is the absorbed energy fraction. Averaging over the period gives the average temperature expression (Equations (2) and (3)). This quantity increases at the beginning of the irradiation and reaches a steady-state limit (Equation (4)); the number of pulses needed to approach this limit is Nss (Equation (5)). The maximum of the average temperature over the radius is located at the center (Equation (6)). N.B.: for a cylindrical geometry, the expression of the maximum temperature T̄(0, ∞) is slightly modified by the introduction of an effective absorption coefficient resulting from the non-linear absorption (Equation (7)).
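Since the analytic expressions themselves could not be reproduced here, the sketch below illustrates the model's behaviour numerically. It assumes the standard spherical-diffusion solution in which each pulse contributes T0·[1 + (t − n·τp)/τD]^(−3/2)·exp(−(r/w)²/(1 + (t − n·τp)/τD)); this functional form and the normalized value of T0 are our assumptions, chosen to be consistent with the definitions above, and are not the paper's exact Equation (1):

```python
# Numerical sketch of the multipulse temperature model (assumed standard form).
import math

f = 200e3                      # repetition rate (Hz)
tau_p = 1.0 / f                # pulse period (s)
tau_D = 0.28e-6                # diffusion time (s), from the main text
w = 1e-6                       # beam waist radius (m)
T0 = 1.0                       # per-pulse peak temperature rise (normalized)

def temperature(r: float, t: float) -> float:
    """Temperature rise at radius r and time t, summing all earlier pulses."""
    total = 0.0
    n = 0
    while n * tau_p <= t:
        dt = t - n * tau_p
        s = 1.0 + dt / tau_D
        total += T0 * s**-1.5 * math.exp(-(r / w) ** 2 / s)
        n += 1
    return total

# Period-averaged temperature at the focus over pulse N: with tau_p >> tau_D it
# approaches its stationary value within a few tens of pulses, consistent with
# the weak heat accumulation discussed in the main text.
for N in (1, 5, 15, 50):
    samples = [temperature(0.0, (N - 1) * tau_p + k * tau_p / 100) for k in range(100)]
    print(f"N = {N:3d}: mean T/T0 over the period = {sum(samples)/len(samples):.3f}")
```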
Transcriptomic Profiling of DNA Damage Response in Patient-Derived Glioblastoma Cells before and after Radiation and Temozolomide Treatment

Glioblastoma is a highly aggressive, invasive and treatment-resistant tumour. The DNA damage response (DDR) provides tumour cells with an enhanced ability to activate cell cycle arrest and repair treatment-induced DNA damage. We studied the expression of the DDR, its relationship with standard treatment response and patient survival, and its activation after treatment. The transcriptomic profile of DDR pathways was characterised within a cohort of isocitrate dehydrogenase (IDH) wild-type glioblastomas from The Cancer Genome Atlas (TCGA) and 12 patient-derived glioblastoma cell lines. The relationship between DDR expression and patient survival and cell line response to temozolomide (TMZ) or radiation therapy (RT) was assessed. Finally, the expression of 84 DDR genes was examined in glioblastoma cells treated with TMZ and/or RT. Although distinct DDR cluster groups were apparent in the TCGA cohort and cell lines, no significant differences in overall survival (OS) and treatment response were observed. At the gene level, high expression of ATP23, RAD51C and RPA3 was independently associated with poor prognosis in glioblastoma patients. Finally, we observed a substantial upregulation of DDR genes after treatment with TMZ and/or RT, particularly in RT-treated glioblastoma cells, peaking within 24 h after treatment. Our results confirm the potential influence of DDR genes on patient outcome. The observation of DDR genes in response to TMZ and RT gives insight into the global response of DDR pathways after adjuvant treatment in glioblastoma, which may have utility in determining DDR targets for inhibition.

Introduction

Glioblastoma is the most common and aggressive type of primary malignant brain tumour in adults. The diagnosis of glioblastoma, updated in the WHO Classification of Tumors of the Central Nervous System 2021, involves a combination of histological (microvascular proliferation or necrosis) and molecular characteristics, including the criterion of having the isocitrate dehydrogenase (IDH) wild-type gene [1]. Standard treatment involves safe maximal surgical resection of the tumour; however, since glioblastoma has an indistinct tumour border, complete resection is not usually possible.

Ethics

This study was approved by the Human Research Ethics Committee of the University of Newcastle (H-2020-0389).

TCGA Glioblastoma Cohort

To study the baseline DDR profile in glioblastoma, fragments per kilobase million (FPKM) values were collated from 140 IDH wild-type glioblastoma patients within the TCGA database [11]. FPKM values were converted to transcripts per kilobase million (TPM) values [12], and single-sample gene set enrichment analysis (ssGSEA) was performed on the TCGA data in R. Gene sets were assigned corresponding to the major DDR pathways, including BER, NER, mismatch repair (MMR), HR and NHEJ (Table S1). Enrichment scores for each pathway were converted to log-transformed z-scores for data visualisation. RNA-Seq by expectation maximisation (RSEM) data were collated for 84 DDR genes (Table S2). Patient groups were stratified into "high" and "low" expression based on a median split of RSEM expression values per gene. The log-rank test was used to find potential differences in overall survival (OS) between groups. Multiple Cox regression was implemented on genes with a significant difference in OS from the log-rank test (p < 0.05), using clinical covariables determined to be significant through univariate Cox regression (Table S3).
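The FPKM-to-TPM conversion used above is a per-sample renormalisation (TPM_i = FPKM_i / Σ_j FPKM_j × 10⁶), and the median split is a simple per-gene dichotomisation. A minimal pandas sketch of both steps is shown below (the column and variable names are ours, not taken from the study's code):

```python
import pandas as pd

# Rows = genes, columns = patient samples, values = FPKM (toy data).
fpkm = pd.DataFrame(
    {"patient_1": [10.0, 30.0, 60.0], "patient_2": [5.0, 5.0, 90.0]},
    index=["ATP23", "RAD51C", "RPA3"],
)

# TPM_i = FPKM_i / sum_j(FPKM_j) * 1e6, applied column-wise (per sample).
tpm = fpkm.div(fpkm.sum(axis=0), axis=1) * 1e6

# Median split per gene: "high" if a sample is above that gene's median, else "low".
groups = tpm.ge(tpm.median(axis=1), axis=0).replace({True: "high", False: "low"})
print(tpm.round(1))
print(groups)
```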
RNA Sequencing Analysis

RNA sequencing data from all 12 glioblastoma cell lines were obtained from the publicly available QIMR Berghofer Q-Cell database (https://www.qimrberghofer.edu.au/commercial-collaborations/partner-with-us/q-cell/, accessed on 10 May 2021). RNA extraction methods and RNA sequencing analysis for the cell lines are described in Stringer et al. [13]. ssGSEA was performed on TPM values for each cell line, using the same gene sets as for the TCGA cohort (Table S1).

Cell Viability Assay

To assess cell viability, 96-well plates were coated with Matrigel under ice-cold conditions prior to plating. Adherent glioblastoma cell lines were passaged and seeded at 4000 cells/well in 96-well plates overnight, before treatment with a clinically relevant dose of TMZ (35 µM) [14] or RT (2 Gy). After 7 days, 50 µL of a 2 mg/mL solution of MTT (Sigma-Aldrich, Saint Louis, MO, USA) in Dulbecco's Phosphate Buffered Saline (DPBS, without magnesium chloride and calcium chloride; Gibco™, Waltham, MA, USA) was added to each well and incubated for 3 h. The medium/MTT solution was aspirated and DMSO (120 µL) was added to each well. Each plate was shaken using an IKA® MS 3 basic shaker (Sigma-Aldrich, Saint Louis, MO, USA) at 600 rpm for 2 min. Absorbance was read at 570 nm using a SPECTROstar® Nano microplate reader (BMG LABTECH, Ortenberg, Germany).

Comparison of Cell Line DDR Profile and Treatment Response

Glioblastoma cell lines were stratified into two cluster groups (C1 and C2) based on hierarchical clustering of ssGSEA scores in each DDR pathway. Cell lines were also assigned to "high" or "low" expression groups for each DDR pathway based on a median split of ssGSEA scores. Differences in TMZ or RT cell viability between cluster groups, or between "high" and "low" DDR expression groups, were assessed with an unpaired Student's t-test.

Time Course and Quantitative PCR

Four glioblastoma cell lines (HW1, FPW1, SB2b and MN1) were seeded at 300,000 cells/well in 6-well Matrigel-coated plates overnight and treated with a clinically relevant dose of RT (2 Gy), followed by TMZ (35 µM) one hour later. Cells were harvested at 2, 24 and 48 h after TMZ treatment, and RNA was extracted using the AllPrep DNA/RNA/Protein Mini Kit (Qiagen, Germantown, MD, USA). RNA was converted to cDNA using the High-Capacity cDNA Reverse Transcription Kit (ThermoFisher Scientific, Waltham, MA, USA). In accordance with the supplier's instructions, the expression of 84 DDR genes (Table S2) was examined using a TaqMan™ Gene Expression Custom Array Card (ThermoFisher Scientific, Waltham, MA, USA). Samples were run as biological triplicates using the QuantStudio™ 7 Pro Real-Time PCR System (ThermoFisher Scientific, Waltham, MA, USA). The geometric means of housekeeping genes (Table S2) were used to determine the absolute expression and fold changes of target genes for each cell line using the ∆Ct and ∆∆Ct methods. Differentially expressed genes (DEGs) were identified within each cell line as genes significantly expressed for a particular treatment and time point when compared with the untreated control, using Dunnett's multiple comparison test on absolute expression values [15].
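The ∆Ct/∆∆Ct quantification referred to above normalises each target gene's Ct to the geometric mean of the housekeeping-gene Cts and then to the untreated control, with fold change 2^(−∆∆Ct). A small sketch of this arithmetic follows (the Ct values are illustrative, not study data, and applying the geometric mean directly to Ct values is our simplifying reading of the method):

```python
from statistics import geometric_mean

def fold_change(ct_target_treated: float, hk_cts_treated: list,
                ct_target_control: float, hk_cts_control: list) -> float:
    """Relative expression by the delta-delta-Ct method.

    Ct values are normalised to the geometric mean of the housekeeping-gene Cts,
    then treated is compared with the untreated control: fold = 2**(-ddCt).
    """
    d_ct_treated = ct_target_treated - geometric_mean(hk_cts_treated)
    d_ct_control = ct_target_control - geometric_mean(hk_cts_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative values: the target amplifies ~2 cycles earlier after treatment.
print(fold_change(24.0, [18.0, 19.0], 26.0, [18.0, 19.0]))  # ~4-fold upregulation
```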
Statistical Analysis

Statistical analyses, including the Mann-Whitney test, the Kruskal-Wallis test and Dunnett's multiple comparison test, were conducted in GraphPad Prism 7. Unsupervised hierarchical clustering (Ward's method) of ssGSEA scores was performed in R using the 'stats' package and visualised using the 'ComplexHeatmap' package. Survival analyses using the log-rank test and Cox regression were performed in R using the 'survival' package. p-values < 0.05 were considered significant.

Expression of DDR Genes and Association with Patient Survival

First, we analysed RNA sequencing data from 140 IDH wild-type glioblastoma patient samples in the TCGA to determine distinct DDR profiles using ssGSEA. This method calculates enrichment scores of gene sets within a single sample and thus represents the degree to which such gene sets are up- or downregulated within that sample [16]. Five gene sets were used in this study, representing the five canonical DDR pathways (BER, MMR, NER, HR and NHEJ) [17]. Hierarchical clustering of ssGSEA scores from each DDR pathway identified three distinct clusters (TC1-TC3), wherein TC3 had the highest gene expression in each DDR pathway, followed by TC2, and lastly TC1, which had the lowest gene expression in each DDR pathway (Figure 1A). There was a trend of small increases in the proportion of MGMT methylated patients when comparing cluster TC1 through to TC3 (TC1 = 34.2% (13/48), TC2 = 41.5% (17/41), TC3 = 53.3% (16/30)). A similar trend occurred for the proportion of TP53 alterations (TC1 = 13% (7/54), TC2 = 20.8% (11/53), TC3 = 33.3% (11/33)). Although distinct DDR gene profiles were apparent, no survival difference was found across clusters (Figure 1B).

Figure 1. (A) ssGSEA scores of the DDR pathways across the three clusters (TC1-TC3). The Kruskal-Wallis test was used to compare ssGSEA scores of each pathway between clusters, with the same trend followed across all DDR pathways: TC1 < TC2 < TC3 (p < 0.01). MGMT methylation status and TP53 alterations (SNVs or homozygous deletions) are also depicted for the respective patient samples. (B) Kaplan-Meier plots of OS are shown for patients within the DDR clusters (TC1, red; TC2, yellow; TC3, green). The log-rank test was performed between each combination of clusters, revealing no significant OS differences between clusters. Statistical significance was determined with a p-value < 0.05.

Next, using the same TCGA patient cohort, we asked whether the expression of individual DDR genes could predict OS outcomes of glioblastoma patients. "High" and "low" expression groups were determined for DDR genes and Kaplan-Meier survival analysis was performed. After accounting for covariates using Cox regression (Table S3), high expression of the DDR genes ATP23, RAD51C and RPA3 was independently associated with poorer overall patient survival (Figure 2).
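The per-gene survival comparison described above (median split, log-rank test, then Cox adjustment) can be sketched as follows; we use the lifelines package for illustration, although the study itself used the R 'survival' package, and the data frame values are placeholders:

```python
import pandas as pd
from lifelines.statistics import logrank_test

# Placeholder cohort table: expression of one DDR gene plus survival data.
df = pd.DataFrame({
    "expr":  [5.1, 9.8, 3.2, 12.0, 7.7, 6.4],   # RSEM expression of the gene
    "time":  [300, 120, 450, 90, 200, 330],     # survival time (days)
    "event": [1, 1, 0, 1, 1, 0],                # 1 = death observed, 0 = censored
})

# Median split into "high" and "low" expression groups.
high = df["expr"] > df["expr"].median()

# Log-rank test between the two groups.
result = logrank_test(
    df.loc[high, "time"], df.loc[~high, "time"],
    event_observed_A=df.loc[high, "event"],
    event_observed_B=df.loc[~high, "event"],
)
print(f"log-rank p = {result.p_value:.3f}")
# Genes with p < 0.05 would then enter a multivariable Cox model
# (e.g., lifelines.CoxPHFitter) together with the significant clinical covariates.
```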
Figure 2. Kaplan-Meier plots of OS for high and low expression of ATP23, RAD51C and RPA3 in the TCGA glioblastoma cohort. Patients were stratified into "high" (red) and "low" (blue) expression groups based on a median split of RNA expression for each respective DDR gene. Across all samples (n = 140), high expression of ATP23 (A), RAD51C (B) and RPA3 (C) was significantly associated with lower OS. Log-rank p-values and hazard ratios prior to multiple Cox regression are shown. Significance was established for genes with a p-value < 0.05 after both a log-rank test and multiple Cox regression of significant clinical features (i.e., therapy).

Expression of DDR Pathways Influences Treatment Response in Glioblastoma Cell Lines

Next, we examined the baseline gene expression profiles of DDR pathways in 12 patient-derived glioblastoma cell lines (Table S4) using ssGSEA. Hierarchical cluster analysis identified two distinct clusters, C1 and C2 (Figure 3A). C1 had high expression of BER and NER genes, while C2 was significantly upregulated in MMR and HR (p = 0.004) genes (Figure 3A). NHEJ, although appearing upregulated in C1 (Figure 3A), was not significantly altered between C1 and C2 (p = 0.154). To determine whether the cell line DDR gene expression clusters had similarities to the TCGA gene expression clusters, hierarchical clustering was performed on the combined ssGSEA scores of TCGA samples and cell lines. All cell lines from C1 clustered within a combined cluster resembling TC1, while the cell lines belonging to C2 did not associate with any TCGA DDR cluster (Figure S1). To investigate the extent to which baseline gene expression in DDR pathways contributes to differential treatment response, the cell viability of glioblastoma cells treated with clinically relevant doses of TMZ (35 µM) and/or RT (2 Gy) was assessed (Figure 4). Across the cell lines, there was a differential response to TMZ, while most cell lines had similar sensitivity to single-dose RT (Figure 4). MGMT methylation status is a clinical biomarker of TMZ sensitivity [18]. As expected, MGMT methylated cell lines had significantly lower cell viability than MGMT unmethylated cell lines (p = 0.004) in response to TMZ (Figure S2).
Three cell lines (HW1, FPW1 and SJH1) were identified as TP53 mutants; however, their sensitivity to TMZ or RT was not significantly different from that of the TP53 wild-type cell lines (p = 0.39 and 0.86, respectively). Whilst distinct differences in DDR gene expression were observed between clusters C1 and C2, no significant differences in cell viability were observed in response to TMZ (p = 0.32) or RT (p = 0.097) (Figure 3B,C). When stratified into "high" and "low" DDR expression groups, treatment response varied in certain pathways (Figure 3D,E). High gene expression of the NER pathway was associated with greater TMZ resistance (p = 0.032), while cells with high MMR gene expression were more sensitive to TMZ (p = 0.0039) (Figure 3D). Interestingly, TMZ response did not significantly differ between high and low expression of BER, HR and NHEJ genes. When comparing RT response, high HR gene expression was associated with increased resistance to RT (p = 0.0038), whilst there was no significant difference in the other DDR pathways (Figure 3E). Collectively, these results suggest that transcriptomic expression of the NER and MMR pathways may influence TMZ sensitivity, while gene expression of the HR pathway may influence RT response.

Figure 3. Cell lines were treated with a clinically relevant dose of TMZ (35 µM) or RT (2 Gy) for 7 days, and cell viability was assessed using the MTT assay. Cell lines were grouped and compared using a Student's t-test to assess differences in TMZ and RT response based on DDR cluster (B,C) as well as "high" and "low" expression of DDR pathways (D,E). There was no significant difference in cell viability between the C1 and C2 cell line clusters after TMZ or RT treatment (B,C). TMZ sensitivity was associated with high MMR gene expression, while TMZ resistance was associated with high NER gene expression (D). RT resistance was associated with high HR gene expression (E). p-values < 0.05 were considered significant (* p < 0.05; ns = non-significant).
Upregulation of DDR Genes after Standard Treatment in Glioblastoma Cell Lines

Few studies have specifically examined the expression of multiple DDR pathways in glioblastoma cells treated with TMZ or RT to determine the timing of treatment-induced DDR pathway activation and the extent to which they repair DNA. Here, we used qPCR to determine the expression of 84 DDR genes in glioblastoma cell lines (SB2b, FPW1, MN1 and HW1) treated with a clinically relevant dose of TMZ and/or RT at 2, 24 and 48 h post-treatment. These cell lines represented, to various degrees, different cluster groups, MGMT statuses, and responses to TMZ or RT treatment (Figures 3 and 4). Within the context of this study, DEGs were identified as significantly up- or downregulated genes in treated cells compared with the untreated control at each specific time point. When observing the frequency and distribution of DEGs across all cell lines, several trends appeared. Figure 5 summarises the accumulated degree of DDR upregulation and downregulation in all four cell lines treated with TMZ, RT, or TMZ + RT (Figure 5A), as well as the proportion of DEGs belonging to each DDR pathway for the average cell line (Figure 5B-G). Furthermore, Figure 6 depicts the 16 genes most frequently significantly up- or downregulated across all cell lines, treatments, and time points. Notably, the majority of DEGs across all four cell lines were upregulated (88%). There was variability between cell lines and treatments (Figure S3); however, an overall trend across cell lines was the predominant upregulation of genes within 24 h after treatment, especially in RT- and TMZ + RT-treated cells (Figure 5A). Differences in the frequency of DEGs were apparent between treatments across time points. On average, TMZ induced the lowest frequency of DEGs, whereas RT induced the most robust response, causing the highest frequency of DEGs at each time point (Figure 5B,C). The combination treatment induced more DEGs than TMZ alone but, surprisingly, fewer DEGs than RT alone (Figure 5D). This suggests that the addition of TMZ to RT-treated cells disrupts the dynamics of the DDR in this context. (Colour key, genes listed in Table S2: BER, blue; MMR, red; NER, green; cell cycle, purple; HR, orange; NHEJ, black; other, brown.) (Colour key, genes listed in Table S2: BER, blue; MMR, red; NER, green; HR, purple; NHEJ, orange; other, black.) The frequency of DEGs belonging to specific DDR pathways was also distinct for each treatment between time points. Although variability in response was observed between cell lines (Figure S4), trends appeared when considering the average expression across all four cell lines.
On average, cells treated with TMZ had an upregulation of NER genes at 2 h, while several genes involved in BER, HR, NHEJ and MMR were upregulated 24 h after treatment (Figure 5B). Cells treated with RT had upregulation in all DDR pathways, which remained consistent at the 2 and 24 h time points before a reduction in expression at 48 h (Figure 5C). For TMZ + RT-treated cells, the BER, MMR, NER and HR pathways were upregulated at 2 h, while NHEJ and NER genes increased in expression at 24 h post-treatment (Figure 5D). Then, at 48 h post TMZ + RT treatment, most genes returned to baseline levels, fewer genes were upregulated, and NER expression was the most prominent. When considering the frequency of specific genes across all time points and treatments, genes from several pathways were represented, in particular HR and BER genes. NEIL3 and CCNO (BER), XRCC2, RAD54L and ATM (HR), DDB2 (NER), and MSH2 (MMR) were among the most differentially expressed and upregulated (Figure 6). The NER gene ERCC8 was the most frequently downregulated (n = 5) and appeared in cells treated with TMZ or TMZ + RT (Figure 6), suggesting that TMZ may influence its expression. Of the three prognostically significant genes identified in the TCGA cohort, RPA3 and RAD51C were upregulated only once across all cell lines, while ATP23 was upregulated in FPW1 cells and downregulated in the HW1 cell line (Table S5). Overall, these results emphasise the broad response of DDR genes and pathways after DNA-damaging treatment.
Discussion

Glioblastoma is an extremely aggressive and treatment-resistant disease, often prone to recurrence and poor patient survival due to the failure of standard treatment. The activation of DDR pathways is a significant factor in reducing treatment efficacy, enabling efficient repair of treatment-induced DNA damage and increasing the likelihood of tumour cell survival [5,6]. We investigated the DDR through a transcriptional lens to identify whether it is a significant feature in glioblastoma survival and response to standard treatment. Firstly, we identified three distinct clusters (TC1-3) of TCGA glioblastoma patients based on the overall expression of DDR pathway genes using ssGSEA. Despite the clusters displaying low, moderate, and high DDR gene expression, respectively, no significant OS differences were observed between clusters. A study by Meng et al. [19] found low DDR gene expression to indicate favourable prognosis in a combined cohort of low-grade glioma (LGG) and glioblastoma patients; however, consistent with our findings here, no survival difference was apparent in glioblastoma patients alone. Interestingly, a trend appeared whereby the proportion of TP53 alterations increased from TC1 to TC3 and thus aligned with the extent of DDR expression across the clusters. This may be because TP53 alterations can enhance genomic stress within rapidly dividing cells and thus induce an increased activation of the DDR to counteract such stresses [20]. Furthermore, a similar trend occurred for MGMT methylation, where the proportion of MGMT methylated patients increased from TC1 to TC3. Given that MGMT methylation plays a significant prognostic role in determining longer overall survival in glioblastoma patients [18], its higher proportion within TC3, together with the low sample sizes, may be a factor in why no significant survival difference was observed even though higher DDR expression was evident. Despite this, we investigated individual DDR gene expression and its influence on OS outcomes. Of the 84 DDR genes assessed, high expression of ATP23, RAD51C and RPA3 was independently associated with poor OS outcomes in the TCGA IDH wild-type glioblastoma cohort. All three genes play roles in the repair of DNA double-strand breaks (DSBs) and may enhance treatment resistance. For instance, ATP23, a commonly amplified gene within glioblastoma, is involved in NHEJ and is upregulated in response to RT [21]. RAD51C plays an important role as a stabiliser of complexes involved in HR [22], while RPA3 is part of the three-subunit replication protein A (RPA) complex involved in HR and DSB repair and has been implicated in glioblastoma OS outcome [23]. These data suggest that DDR gene expression influences patient outcome and warrant further investigation of the role the DDR plays in glioblastoma treatment resistance. Stringer et al. [13] described efforts to fully characterise the 12 patient-derived glioblastoma cell lines, including their use in a xenograft model to show that they are morphologically representative of the patients' original tumours. This suggests that the cell lines used in the current study are a good representation of the original tumours. We investigated the baseline DDR gene expression in these cell lines and found two distinct clusters (C1 and C2) with differential pathway expression. When compared with the glioblastoma TCGA cohort, four cell lines in the C1 cluster aligned with the TC1 cluster of the TCGA cohort with respect to baseline DDR gene expression.
The TCGA cohort had a considerably larger sample size (140 cases), and it was thus surprising that 8 of the cell lines did not align with any TCGA cluster in baseline DDR gene expression. Furthermore, the C1 and C2 cell line clusters do not appear to have any relationship to known characteristics of glioblastoma such as MGMT methylation status. Regarding these discrepancies between the TCGA data and the baseline data from the 12 patient-derived glioblastoma cell lines, Stringer et al. showed in their original publication of the cell line data that 7 of the 12 cell lines maintained a molecular signature equivalent to that of the original tumour tissue [13]. As such, we cannot exclude the possibility that the cell culture conditions affected the baseline gene expression signatures [24]. However, with respect to the cell lines treated with TMZ and/or RT, our analysis used a non-treated control, which negates the effect of factors such as cell culture conditions, as the only difference between groups is the treatment. With respect to the aim of investigating DDR gene expression and its effects on treatment response, there was no difference in response to either RT or TMZ between the two cell line clusters, suggesting that the cell lines responded in a similar manner to treatment despite differences in baseline DDR gene expression. Further investigation of the characteristics of the glioblastoma cell lines may help to explain the differences between the C1 and C2 DDR clusters. When examining individual pathway expression in the cell lines, we showed that genes in DDR pathways are associated with response to TMZ or RT. Cells with higher expression of NER genes were more resistant to TMZ, while high expression of MMR genes conferred TMZ sensitivity. Previous studies have suggested similar trends with respect to NER components [8,25], while alterations in MMR genes such as MSH6 have been associated with TMZ resistance and recurrent glioblastoma tumours [26]. Our data also showed that gene expression of the HR pathway was inversely associated with RT response. The inhibition of HR components has enhanced radiosensitivity in glioblastoma cells [27,28]; hence, cells with higher expression of HR genes may be more likely to survive than tumours with lower baseline expression [4,6,17,29]. Further investigation is needed to explore these results in more depth. A limitation of this analysis is the bulk transcriptomic lens applied to tumours prior to any treatment, which does not adequately reflect changes in DDR expression induced by standard treatment. Thus, studying the DDR across a pre- and post-treatment time course may reveal greater insight. In this regard, one of the important aspects of the work presented here is the time course analysis of the DDR gene expression response, showing that several genes and pathways are upregulated in response to standard treatment, especially RT. These data may inform the feasibility of targeting DDR components to enhance treatment response in glioblastoma patients. One such approach is being explored through the development of small-molecule inhibitors of DDR proteins, including poly(ADP-ribose) polymerase (PARP), Wee1, checkpoint kinases 1 and 2 (Chk1 and Chk2), ataxia-telangiectasia mutated (ATM), ataxia-telangiectasia and Rad3-related (ATR), and DNA-dependent protein kinase (DNA-PK) [4,30]. Across all treatments, DDR gene expression changes occurred within a 24 h period after treatment.
RT elicited the greatest response, with numerous pathways showing gene upregulation, suggesting activation at 2 and 24 h after treatment. TMZ, on the other hand, induced a lower level of DDR gene upregulation. The composition of genes also differed from RT, with a predominance of NER genes expressed at 2 h before a sharp increase in BER gene upregulation, as well as upregulation of HR, MMR and NHEJ genes, at 24 h. These observations agree with the understanding that O6-meG adducts caused by TMZ form DNA breaks only after several replication cycles have occurred [31,32], in contrast to RT, which results in immediate DNA damage [29]. The combination of TMZ and RT resulted in a comparative decrease in the number of differentially expressed DDR genes relative to RT alone across all time points, and the up- or downregulation of DDR genes seen with either RT or TMZ alone was not always observed in cells treated with TMZ + RT. This may be because the two treatments alone produce varying degrees of DDR activation across different time points and pathways. Across the four cell lines, several DDR genes were frequently observed as differentially expressed. The most notable and frequent included NEIL3, XRCC2, CCNO, RAD54L, ATM, DDB2, and MSH2, all of which have been linked to treatment resistance in glioma or in solid tumours such as colorectal cancer [10,26,33-38]. Interestingly, the expression of RPA3 and RAD51C rarely changed despite their association with OS outcomes in the TCGA cohort, while ATP23 was either upregulated or downregulated depending on the cell line. This suggests that baseline expression and associated patient outcome may not entirely capture the direct role such genes play in the DDR when examined across time, and further investigation is needed. This study also shows the upregulation of several NER genes in response to TMZ and RT, an underreported DDR pathway in glioblastoma. ATR, a global sensor of DNA damage, has been implicated in NER activation [39] and is thus a potential therapeutic target. The transcriptional lens of this study, however, cannot conclusively answer this question, and an in-depth analysis is required to elucidate the exact function that NER and its components play in the response to standard glioblastoma treatment, which may reveal druggable targets for its inhibition. Furthermore, our study of DDR expression focused on characteristic DDR genes, with less emphasis on the DNA polymerases and ligases that play overlapping roles across pathways [40-42]. Thus, future work will seek to include these genes with overlapping functions to gain greater insight into the response of DDR pathways and their influence on treatment resistance. Overall, this study reveals an influence of DDR genes and their pathways on glioblastoma cellular responses to treatment. Specific clusters of DDR expression failed to show significant differences in patient survival outcome or cell line response to TMZ or RT. However, our analysis revealed that high expression of three DDR genes was associated with poorer overall patient survival, while expression of MMR, NER and HR influenced sensitivity to TMZ or RT in glioblastoma cell lines. Our results suggest that the DDR is primarily upregulated within a 24 h period after treatment with TMZ and/or RT, with distinct trends of DDR activation apparent between treatments.
Such data give insight into the changes in DDR gene expression in response to standard treatment and the potential for targeting commonly upregulated DDR components to produce radio- or chemo-sensitising agents.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells11071215/s1. Figure S1: Combined hierarchical cluster of glioblastoma cell lines and TCGA patients; Figure S2: Cell viability of MGMT methylated and unmethylated glioblastoma cell lines treated with TMZ; Figure S3: Distribution of DEGs across time points and treatments; Figure S4: Frequency of DEGs per cell line; Table S1: Gene list of DDR pathways for ssGSEA analysis; Table S2: List of DDR genes and housekeeping genes; Table S3: Overall survival Cox regression analysis of clinical features in glioblastoma TCGA patients (n = 140); Table S4: Molecular characteristics and DDR gene expression profiles of patient-derived glioblastoma cell lines; Table S5.

Informed Consent Statement: Not applicable.

Data Availability Statement: Research data are stored in an institutional repository and will be shared upon request to the corresponding author.
7,725.2
2022-04-01T00:00:00.000
[ "Biology", "Medicine" ]
A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented.

Introduction

Since the mid-1980s, hyperspectral imaging data have been collected with different types of imaging spectrometers from aircraft and satellite platforms. Because solar radiation along the sun-surface-sensor path in the 0.4−2.5 μm visible and near-IR spectral regions is subject to absorption and scattering by atmospheric gases and aerosols, hyperspectral imaging data contain atmospheric effects. In order to use hyperspectral imaging data for quantitative remote sensing of land surfaces and ocean color, the atmospheric effects must be removed. There are now a number of model-based atmospheric correction algorithms for retrieving surface reflectances from hyperspectral imaging data. These include, but are not limited to, the ATmosphere REMoval algorithm (ATREM) [1,2], the High-accuracy ATmospheric Correction for Hyperspectral Data (HATCH) [3], the Atmosphere CORrection Now (ACORN) [4], the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) [5], the Imaging Spectrometer Data Analysis System (ISDAS) [6], and a series of Atmospheric and Topographic Correction (ATCOR) codes [7,8]. The surface reflectance spectra retrieved with radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra can also contain artifacts due to errors in radiometric and spectral calibrations. Although models have been improving with time, they are not yet at the level where all artifacts are smaller than sensor noise. Figure 1 shows an example of a reflectance spectrum derived with ATREM from AVIRIS data [9] acquired over Cuprite (NV, USA) in June 1995. Due to small errors in assumed wavelengths and errors in line parameters compiled in the HITRAN database [10], small spikes (particularly near the centers of the 0.94- and 1.14-µm water vapor bands) are present in this spectrum. These spikes have distracted geologists who are interested in studying surface mineral features. Clark et al. [11] pioneered a hybrid approach for the derivation of laboratory-quality surface reflectance spectra from AVIRIS data. They used a combination of ATREM and field spectral measurements over a single ground calibration site.
In this case the use of ATREM allows improved atmospheric corrections at elevations that are different from the calibration site, and the ground calibration removes residual errors commonly associated with sensor artifacts and radiative transfer models. In many situations, researchers do not have any field-measured reflectance spectra for suppressing residual errors. Boardman [12] first developed the Empirical Flat Field Optimal Reflectance Transformation (EFFORT) method to suppress residual errors in ATREM-derived surface reflectance spectra without the need for field-measured reflectance spectra. In this method, the complete spectrum for each pixel in the 0.4-2.5 μm range is fitted with a low-order polynomial. It is sometimes difficult to make reasonable matches over the entire (or "global") spectral range. In this article, we describe another technique, which fits spectra "locally" in the spectral domain using moving filters derived with a cubic spline smoothing algorithm, for quick post-processing of ATREM-derived reflectance spectra from imaging spectrometer data. Results from analysis of AVIRIS data acquired over the Cuprite mining district in Nevada in June of 1995 and over Ivanpah in California in April of 2010 are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented.

Methodology

In order to describe the smoothing technique, we first describe the commonly used cubic spline "fitting" technique, then we describe the cubic spline "smoothing" technique.

Cubic Spline Fitting

The cubic spline fitting technique is a powerful numerical method and has been widely used in engineering and scientific computing. For example, Numerical Recipes [13] provides standard subroutines, using the cubic spline fitting method, for interpolating data between points. In order to describe the cubic spline fitting technique mathematically, we consider an interval $a \le x \le b$ and subdivide it by a mesh of points corresponding to the locations of the data at $a = X_0 < X_1 < \dots < X_{j-1} < X_j < \dots < X_J = b$. An associated set of observed data is prescribed by $y_0, y_1, \dots, y_j, \dots, y_J$. We seek an interpolating function $h(x)$ that is defined on $[a, b]$, has continuous first and second derivatives on $[a, b]$, coincides with a cubic polynomial in each subinterval $X_{j-1} \le x \le X_j$, and satisfies the relationship $h_j = h(X_j) = y_j$. Figure 2 illustrates the function $h(x)$. As adapted from Ahlberg [14], the function $h(x)$ in the interval $X_{j-1} \le x \le X_j$ can be expressed (for convenience, we assume the problem of equally spaced samples with a step size of $\Delta$) in terms of the normalized coordinates

$$u = \frac{X_j - x}{\Delta}, \qquad v = \frac{x - X_{j-1}}{\Delta}, \qquad (1)$$

as

$$h(x) = u\,h_{j-1} + v\,h_j + s_{j-1}\,(u^3 - u) + s_j\,(v^3 - v), \qquad (2)$$

where $\{s_j\}$, the spline coefficients, can be interpreted as the normalized second derivatives, $s_j = \Delta^2 h''(X_j)/6$.

Figure 2. An illustration of the interpolating function h(x).

The polynomials (2) in adjacent segments are continuous at the knots:

$$h(X_j^-) = h(X_j^+) = h_j. \qquad (3)$$

The first derivative is continuous at the knots provided that:

$$s_{j-1} + 4 s_j + s_{j+1} = h_{j-1} - 2 h_j + h_{j+1}, \qquad j = 1, \dots, J - 1. \qquad (4)$$

The second derivative is continuous at the knots:

$$h''(X_j^-) = h''(X_j^+) = \frac{6 s_j}{\Delta^2}. \qquad (5)$$

The polynomials (2) are determined by specification of $\{s_j\}$. The selection of these spline coefficients can involve any number of imposed weak constraints that characterize the spline fitting. One of the constraints is the minimization of the second derivative. Because:

$$\int_a^b \left[ h''(x) \right]^2 dx = \frac{12}{\Delta^3} \sum_{j=1}^{J} \left[ s_{j-1}^2 + s_{j-1} s_j + s_j^2 \right], \qquad (6)$$

it follows that the quantity $\sum_{j=1}^{J} [\,s_{j-1}^2 + s_{j-1} s_j + s_j^2\,]$ is to be minimized in any kind of variational selection of $\{s_j\}$. The simplest quadratic form to minimize is:

$$Q = \sum_{j=1}^{J} \left[ s_{j-1}^2 + s_{j-1} s_j + s_j^2 \right]. \qquad (7)$$

However, this is not enough to guarantee continuity of the derivatives at the knots.
A method to incorporate the condition

$$s_{j-1} + 4 s_j + s_{j+1} - h_{j-1} + 2 h_j - h_{j+1} = 0, \qquad j = 1, \dots, J - 1,$$

must be found. This is done by introducing Lagrangian multipliers. The simple spline formulation for the minimization is:

$$F = Q + \sum_{j=1}^{J-1} \lambda_j \left( s_{j-1} + 4 s_j + s_{j+1} - h_{j-1} + 2 h_j - h_{j+1} \right), \qquad (8)$$

where the $\lambda_j$s are the Lagrangian multipliers. These conditions are exactly satisfied upon completion of the minimization, so that zeros are in effect added to the quantity to be minimized. The procedure for solving for those $\lambda_j$s, and therefore the spline coefficients $\{s_j\}$ and the interpolating splines $\{h_j\}$, is similar to that of the spline smoothing described in the next section.

Cubic Spline Smoothing

In the spline fitting technique described above, the $\{h_j\}$ are taken to represent errorless data or observations, and the spline passes through each point $y_j$. However, there can be circumstances in which the observations are contaminated and unwanted noise is present. For example, in our case raw spectra exhibit coherent saw-tooth-like "noise". Under these circumstances, the data integrity condition should be relaxed. This can be done by adding a weak constraint term to (8), where $y_j$ is the observed data, and only a "best fit" should be sought. The smoothed spline $\{h_j\}$ does not necessarily pass through the original observed data $\{y_j\}$, unlike in the case of the spline fitting. An appropriate discrepancy sum can be formed as:

$$E = \sum_{j=0}^{J} \left( h_j - y_j \right)^2 + \tau^2 Q + \sum_{j=1}^{J-1} \lambda_j \left( s_{j-1} + 4 s_j + s_{j+1} - h_{j-1} + 2 h_j - h_{j+1} \right), \qquad (9)$$

where $\tau^2$ is an adjustable weighting factor. As it increases, the tension of the spline smoothing increases, i.e., the curve "flattens out". On the other hand, as it decreases, the observed data are reproduced more closely at the expense of increased curvature. The variations on the spline coefficients are tabulated as (with the convention $\lambda_0 = \lambda_J = 0$, since no continuity constraint is imposed at the end knots):

$$\tau^2 \left( s_{j-1} + 4 s_j + s_{j+1} \right) + \lambda_{j-1} + 4 \lambda_j + \lambda_{j+1} = 0, \qquad j = 1, \dots, J - 1,$$
$$\tau^2 \left( 2 s_0 + s_1 \right) + \lambda_1 = 0, \qquad \tau^2 \left( s_{J-1} + 2 s_J \right) + \lambda_{J-1} = 0. \qquad (10)$$

The variations on the multipliers lead to:

$$s_{j-1} + 4 s_j + s_{j+1} = h_{j-1} - 2 h_j + h_{j+1}, \qquad j = 1, \dots, J - 1. \qquad (11)$$

Since the spline does not pass through the data $\{y_j\}$, the $\{h_j\}$ are no longer fixed; their variations are listed below:

$$2 \left( h_j - y_j \right) + 2 \lambda_j - \lambda_{j-1} - \lambda_{j+1} = 0, \qquad j = 0, \dots, J, \qquad (12)$$

with $\lambda_{-1} = \lambda_{J+1} = 0$. Combining terms in (12), we have:

$$h_j = y_j + \frac{1}{2} \left( \lambda_{j-1} - 2 \lambda_j + \lambda_{j+1} \right). \qquad (13)$$

Each of the combinations $\left[ s_{j-1} + 4 s_j + s_{j+1} \right]$, $j = 1, \dots, (J - 1)$, in (10) can be replaced by their equivalents from (11) to obtain the following equations:

$$\tau^2 \left( h_{j-1} - 2 h_j + h_{j+1} \right) + \lambda_{j-1} + 4 \lambda_j + \lambda_{j+1} = 0, \qquad j = 1, \dots, J - 1. \qquad (14)$$

The $\{h_j\}$ in (14) can be replaced by the groupings in (13):

$$\frac{\tau^2}{2} \lambda_{j-2} + \left( 1 - 2 \tau^2 \right) \lambda_{j-1} + \left( 4 + 3 \tau^2 \right) \lambda_j + \left( 1 - 2 \tau^2 \right) \lambda_{j+1} + \frac{\tau^2}{2} \lambda_{j+2} = b_j, \qquad (15)$$

where:

$$b_j = -\tau^2 \left( y_{j-1} - 2 y_j + y_{j+1} \right). \qquad (16)$$

The $\{\lambda_j\}$ are then found as solutions of:

$$\mathbf{A} \boldsymbol{\lambda} = \mathbf{b}, \qquad (17)$$

where $\mathbf{A}$ is the pentadiagonal matrix:

$$\mathbf{A} = \begin{pmatrix} a_0 & a_1 & a_2 & & \\ a_1 & a_0 & a_1 & a_2 & \\ a_2 & a_1 & a_0 & a_1 & a_2 \\ & \ddots & \ddots & \ddots & \ddots \end{pmatrix}. \qquad (18)$$

Here the matrix elements are constants and given by:

$$a_0 = 4 + 3 \tau^2, \qquad a_1 = 1 - 2 \tau^2,$$

and:

$$a_2 = \frac{\tau^2}{2}.$$

Given the $\{\lambda_j\}$, the $\{h_j\}$ can be determined directly from (13), and the spline coefficients $\{s_j\}$ can be found as a tridiagonal solution of (10).
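The banded system above is small enough to solve directly band by band. The following is a minimal Python sketch of the solver, assuming equally spaced bands; the function name smooth_spectrum and the use of scipy.linalg.solve_banded are our illustrative choices, not the authors' operational AVIRIS code.

```python
import numpy as np
from scipy.linalg import solve_banded

def smooth_spectrum(y, tau2):
    """Cubic-spline smoothing of one equally spaced spectrum y[0..J],
    following the variational formulation above: solve the pentadiagonal
    system (17) for the interior multipliers lambda_1..lambda_{J-1}
    (lambda_0 = lambda_J = 0, since no continuity constraint is imposed
    at the end knots), then recover the smoothed values via (13)."""
    y = np.asarray(y, dtype=float)
    J = len(y) - 1
    n = J - 1                           # number of unknown multipliers
    # Right-hand side (16): b_j = -tau2 * (y_{j-1} - 2*y_j + y_{j+1})
    b = -tau2 * (y[:-2] - 2.0 * y[1:-1] + y[2:])
    # Constant band elements (18)
    a0 = 4.0 + 3.0 * tau2               # main diagonal
    a1 = 1.0 - 2.0 * tau2               # first off-diagonals
    a2 = 0.5 * tau2                     # second off-diagonals
    ab = np.zeros((5, n))               # banded storage for solve_banded
    ab[0, 2:] = a2                      # second superdiagonal
    ab[1, 1:] = a1                      # first superdiagonal
    ab[2, :] = a0                       # main diagonal
    ab[3, :-1] = a1                     # first subdiagonal
    ab[4, :-2] = a2                     # second subdiagonal
    lam = solve_banded((2, 2), ab, b)
    # Apply (13) with lambda_{-1} = lambda_0 = lambda_J = lambda_{J+1} = 0
    lam_full = np.zeros(J + 3)
    lam_full[2:J + 1] = lam
    return y + 0.5 * (lam_full[:-2] - 2.0 * lam_full[1:-1] + lam_full[2:])
```

In this form, the limit $\tau^2 \to 0$ returns the input spectrum unchanged, while a large $\tau^2$ drives the result toward a straight line, matching the "flattening" behavior described above.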
Procedures for Post-Processing of ATREM-Derived Reflectance Data

The small spikes, as seen in Figure 1, are systematically present in all spectra in an ATREM output data cube (2-d spatial plus 1-d spectral). We hope to make mild "gain" adjustments to remove these small spikes during the post-processing of the ATREM data cube. Specifically, we hope to find a gain function, g(λ), which contains all the weak spikes and which has a mean value close to 1. The multiplication of g(λ) with the ATREM output spectra should allow the removal of the systematic small spikes. Several steps are involved in the post-processing of an ATREM output data cube. They are: (a) The cubic spline smoothing technique described in Section 2 is applied to each of the spectra in the ATREM data cube. As a result, an intermediate "smoothed" data cube is produced. Because the cubic spline smoothing technique fits a spectrum "locally" in the spectral domain, most of the smoothed spectra at this stage match quite well with the ATREM spectra. If the spectra were fit with low-order Legendre polynomials "globally", only a minor fraction of the smoothed spectra would match well with the ATREM spectra. (b) The average reflectance, ρ_avg, is calculated for each of the spectra in the ATREM output data cube. (c) For each pixel, the standard deviation, σ, between the ATREM spectrum and the "smoothed" spectrum is calculated. (d) For an AVIRIS scene, a scatter plot of σ/ρ_avg vs. ρ_avg is made. Pixels with σ/ρ_avg values in the lower twentieth percentile are identified. (e) For each of the pixels identified in Step (d), a ratio spectrum ("smoothed" spectrum/ATREM spectrum) is calculated. The desired gain spectrum, g(λ), is obtained by averaging all the ratio spectra. Figure 3 shows an example of a gain spectrum, which contains a number of weak spikes in the 0.4-2.5 µm spectral region. (f) Each of the spectra in the ATREM output data cube is multiplied by the gain spectrum to obtain the "final" smoothed data cube (see the code sketch below). Our algorithm for smoothing the ATREM output data cube is fast. It takes approximately 30 s on a Mac computer with a 2.66 GHz Quad-Core Intel Xeon processor to process one complete data cube with a dimension of 614 samples, 972 lines, and 224 bands.
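Steps (a) through (f) map directly onto array operations over the data cube. The sketch below is an illustration under assumed conventions (a NumPy cube of shape (lines, samples, bands), positive reflectances, and the smooth_spectrum sketch from the Methodology section); it is not the operational code running on the AVIRIS server.

```python
import numpy as np

def derive_and_apply_gain(cube, tau2=1.0, percentile=20.0):
    """Post-processing steps (a)-(f): smooth every spectrum, select
    spectrally well-behaved pixels, derive a common gain spectrum
    g(lambda), and apply it to the whole ATREM data cube."""
    nlines, nsamples, nbands = cube.shape
    flat = cube.reshape(-1, nbands)
    # (a) intermediate "smoothed" data cube
    smoothed = np.apply_along_axis(smooth_spectrum, 1, flat, tau2)
    # (b) average reflectance of each ATREM spectrum
    rho_avg = flat.mean(axis=1)
    # (c) standard deviation between each ATREM and smoothed spectrum
    sigma = (flat - smoothed).std(axis=1)
    # (d) pixels whose sigma/rho_avg falls in the lower 20th percentile
    metric = sigma / np.where(rho_avg > 0.0, rho_avg, np.nan)
    selected = metric <= np.nanpercentile(metric, percentile)
    # (e) gain spectrum: mean of the "smoothed"/ATREM ratio spectra
    #     (this sketch assumes strictly positive ATREM reflectances)
    gain = (smoothed[selected] / flat[selected]).mean(axis=0)
    # (f) multiply every spectrum by the gain spectrum
    return (flat * gain).reshape(nlines, nsamples, nbands), gain
```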
Sample Results

The cubic spline smoothing algorithm described above was implemented on an AVIRIS server computer at the NASA Jet Propulsion Laboratory for post-processing large volumes of ATREM-derived reflectance data cubes from AVIRIS radiances acquired during major field experiments. Below we present results from one set of AVIRIS data acquired over the Cuprite Mining District in Nevada in June 1995 and another over Ivanpah playa in California in April 2010. Figure 4 shows a comparison among an ATREM reflectance spectrum (lower line) over a single pixel within the 1995 AVIRIS Cuprite scene, the smoothed spectrum (middle line), and the reflectance spectrum obtained with the well-known empirical line method (upper curve) [15]. For clarity, the spectra in Figure 4 are vertically displaced. The general shapes of these spectra in the 0.4-1.26 µm, 1.5-1.75 µm, and 2.0-2.5 µm wavelength intervals are very similar. Major mineral features in the 2.0-2.5 µm region are seen in all the spectra. The un-smoothed ATREM spectrum has quite a few weak spikes. These spikes are largely removed in the smoothed spectrum. The spectrum derived with the empirical line method shows weak inverse water vapor features near 0.94 and 1.14 µm. This indicates that the method results in a slight over-correction of atmospheric water vapor absorption effects for this pixel. Figure 5A shows six ATREM reflectance spectra (vertically displaced for clarity). These spectra have distinct mineral absorption features in the 2.0-2.5 µm spectral region. Weak spikes (for example near 1.14 µm) are systematically present in all the spectra. Figure 5B shows the corresponding smoothed spectra, which look very similar to laboratory-measured reflectance spectra, particularly in the 2.0-2.5 µm spectral region. Weak spikes are all removed. A broad iron feature near 0.9 µm is seen nicely in one spectrum, the 4th spectrum from the top. Figure 5C shows six spectra derived from the AVIRIS data with the empirical line method. Mineral features in the 2.0-2.5 µm region are recovered quite well with this method. However, water vapor features in the 0.94 and 1.14 µm regions are either over- or under-corrected. The broad iron feature in the 4th spectrum from the top is not clearly seen due to the over-correction of atmospheric water vapor absorption effects. By comparing Figures 5A-C, it is seen that major mineral features are preserved after the spectral smoothing.

Ivanpah, California

Figure 6B is the ATREM-derived surface reflectance spectrum over a soil pixel located at the center of the red box in Figure 6A. The curve is not spectrally smooth, particularly in the 0.86-1.20 μm range, due to residual errors in the ATREM atmospheric correction process. The solid line (vertically displaced for clarity) in Figure 6B is the smoothed spectrum. The spectrum in the 0.86-1.20 μm range becomes much smoother after the application of the cubic spline smoothing algorithm. By comparing the two curves in Figure 6B, it is also seen that major mineral absorption features centered near 2.20 and 2.34 μm are preserved after spectral smoothing, and no artificial absorption features in the entire 0.4−2.5 μm spectral range are introduced by the smoothing algorithm. Figure 7 is similar to Figure 6, except for a spectrum over a green-vegetation-covered area. The general shapes of the spectrum in the 0.5−2.5 μm range after smoothing (solid line in Figure 7B) are similar to those of green vegetation reflectance spectra measured in laboratories. Figure 8A is similar to Figure 6A. Figure 8B shows comparisons among the ATREM-derived surface reflectance spectrum (dotted line) over an Ivanpah playa pixel located at the center of the red box in Figure 8A, the smoothed spectrum (solid line, vertically displaced by 0.075 in reflectance units), and a field-measured spectrum (dash-dotted line, vertically displaced by 0.15 in reflectance units). A major mineral feature centered at 2.2 μm is seen in all the spectra. In order to quantify improvements in smoothness, first derivatives for all the spectra in the Ivanpah scene before and after application of the smoothing algorithm were calculated. After smoothing, the average absolute spectral derivative over the scene for all the bands, excluding those centered in the strong 1.38- and 1.88-μm water vapor absorption regions, decreased by 14%. The magnitude of the decrease is larger for a number of bands, including some bands located within the 0.94- and 1.14-μm water vapor band absorption regions where larger spectral residuals are observed (see Figure 7B). The scene-averaged spectral derivative for a band centered near 1.11 μm decreased by 20%. The decrease in spectral derivatives demonstrates the improvement in smoothness after application of the cubic spline smoothing algorithm.

Discussion and Summary

During our development of the cubic spline smoothing technique for post-processing of surface reflectance spectra retrieved from AVIRIS data, we also tried another well-known filter, i.e., the Savitzky-Golay (SG) filter [16]. The use of the SG filter did not produce satisfactory results. We observed that the peak positions of spectral features after application of the SG filter were not preserved. Therefore, we decided not to use the SG filter for our spectral smoothing purposes. In summary, we have described a technique, which fits spectra "locally" in the spectral domain based on cubic spline smoothing, for quick post-processing of apparent reflectance spectra derived from AVIRIS data using the ATREM code. Results from analysis of AVIRIS data acquired over the Cuprite mining district in Nevada in June of 1995 and over Ivanpah in California in April of 2010 are presented. Very good agreement between our results and those of the empirical line method in the 2.0-2.5 µm spectral region is obtained.
It is expected that the use of the ATREM code for retrieving surface reflectance spectra from AVIRIS data, plus the application of the additional spectral smoothing, should yield high-quality surface reflectance spectra comparable with reflectance spectra measured under laboratory conditions.
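The derivative-based smoothness check reported for the Ivanpah scene can be reproduced with a short routine. The exclusion windows around the 1.38- and 1.88-μm water vapor bands below are our assumptions for illustration; the text does not specify their exact limits.

```python
import numpy as np

def mean_abs_derivative(cube, wavelengths, exclude=((1.33, 1.43), (1.82, 1.94))):
    """Scene-averaged mean absolute spectral first derivative, skipping
    bands inside the strong water vapor absorption regions (window limits
    in micrometers are illustrative). Derivatives that span an excluded
    gap are only approximate in this sketch."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    keep = np.ones(wavelengths.size, dtype=bool)
    for lo, hi in exclude:
        keep &= ~((wavelengths >= lo) & (wavelengths <= hi))
    spectra = cube.reshape(-1, wavelengths.size)[:, keep]
    deriv = np.diff(spectra, axis=1) / np.diff(wavelengths[keep])
    return np.mean(np.abs(deriv))

# Relative improvement after smoothing, e.g.:
# 1.0 - mean_abs_derivative(smoothed_cube, wl) / mean_abs_derivative(atrem_cube, wl)
```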
3,967.6
2013-10-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Tailoring Vibrational Signature and Functionality of 2D-Ordered Linear-Chain Carbon-Based Nanocarriers for Predictive Performance Enhancement of High-End Energetic Materials

A recently proposed, game-changing transformative energetics concept based on predictive synthesis and preprocessing at the nanoscale is considered as a pathway towards the development of the next generation of high-end nanoenergetic materials for future multimode solid propulsion systems and deep-space-capable small satellites. As a new door for the further performance enhancement of transformative energetic materials, we propose the predictive ion-assisted pulse-plasma-driven assembling of various carbon-based allotropes, used as catalytic nanoadditives, by the 2D-ordered linear-chain carbon-based multicavity nanomatrices serving as functionalizing nanocarriers of multiple heteroatom clusters. The vacant functional nanocavities of the nanomatrices available for heteroatom doping, including various catalytic nanoagents, promote heat transfer enhancement within the reaction zones. We propose the innovative concept of fine-tuning the vibrational signatures, functionalities and nanoarchitectures of the mentioned nanocarriers by using surface acoustic wave-assisted micro/nanomanipulation of the pulse-plasma growth zone combined with the data-driven carbon nanomaterials genome approach, which is a deep materials informatics-based toolkit belonging to the fourth scientific paradigm. For the predictive manipulation of the micro- and mesoscale spatial distribution of the induction and energy release domains in the reaction zones, we propose the activation of the functionalizing nanocarriers, assembled by the heteroatom clusters, through the earlier proposed plasma-acoustic coupling-based technique, as well as by the Teslaphoresis force field, thus inducing the directed self-assembly of the mentioned nanocarbon-based additives and nanocarriers.

Introduction

The predictive control and enhancement of the energetic materials (EM) reactivity and energy output, coupled with the mutual stabilization of high-energy components, are extremely important for the development of low-cost and efficient small-scale solid propulsion systems and solid microthrusters, as well as high-efficiency micropower systems. The objective of this study is to uncover new potential opportunities for the predictive extraction of extra energy from EM systems at the nanoscale by manipulating the combinations of the unique structural and physicochemical properties of functionalized low-dimensional nanocarbon allotropes with the deep materials informatics-based toolkit [1-4]. Application of the ion-assisted pulse-plasma functionalization and predictive heteroatom doping techniques opens up a set of novel opportunities, transforming nanocarbon catalytic additives into nanohybrid systems with multifunctional properties. The key difference between nanomaterials and nanohybrids is that nanomaterials exhibit a certain property in one material, while nanohybrids exhibit multiple structural and physicochemical properties within one nanomaterial. Novel functionalities of the catalytic additives based on GFs and modified thermally expandable graphite-based fibers can be unlocked when they are used within the framework of the transformative energetics concept, based on the EM predictive synthesis and preprocessing at the nanoscale.
Game-Changing Transformative Energetics Concept

Recently proposed game-changing transformative energetics concepts [17,18], based on the EM predictive synthesis and preprocessing at the nanoscale, are considered as a pathway towards the development of the next generation of advanced propulsion materials for future multimode solid propulsion systems and deep-space-capable small satellites. This technological concept brings extra energy into the EM system, which goes beyond what was historically possible using conventional processing technologies, and opens novel opportunities for the mutual stabilization of high-energy components inserted into the EM system. This technique allows the extraction of extra energy from the EM system at the nanoscale. The technological chain of transformative energetic materials (TEM) preprocessing and synthesis includes the following main stages: preprocessing of the EM components whose material properties change upon conversion to the nanoscale; the resonant acoustic mixing (RAM) of the EM composition, leading to changes in the material properties; and the final 3D printing (additive manufacturing) of the blended EM composition into the high-end EM elements and solid propellant charges. Numerous experimental studies have confirmed that nanosized EMs can be considered as a source of extremely high heat release rates, along with outstanding possibilities for burning-rate programming, reliability, extremely high burning efficiency, safety of use and reduced sensitivity to external influences [19,20]. The fundamental distinguishing feature of the nanosized EMs is a significant increase in the specific surface area and a decrease in the distances between nanocomponents, which provides a significant increase in the chemical reaction rates while simultaneously reducing the ignition delay and providing the required safety level [20]. The RAM is a relatively new and highly effective mixing technique that applies programmed low-frequency, high-intensity acoustic energy for blending highly viscous materials [21,22]. The RAM is a contactless mixing technique which ensures increased process safety and affords the potential to incorporate into the EM composition a higher proportion of high-energy-density solids, including hard-to-cut materials such as nanoenergetic systems. Three-dimensional printing allows the growing of complex three-dimensional structures with programmed structural and physicochemical properties, which gives superior control over the micro- and mesoscale spatial distribution of the induction and energy release domains, as well as the nature of the energy release [23,24]. One of the important benefits of the TEM concept is connected with the possibility of additive manufacturing of the new generation of electrically activated solid propellants, or, more shortly, ePropellants, developed by Digital Solid State Propulsion, LLC, Reno, NV, USA [25,26]. In this new technique, the electric fields are used to control the ignition and extinguishment properties, and thus the throttle, of solid propellants. These EMs and solid propellants, based on ionic salts, are mainly inert until an electric current is passed through them, which activates the electrochemical reactions. Currently, the ePropellant compositions include ammonium nitrate (AN) or hydroxylamine nitrate (HAN) as oxidizing components and, in this regard, demonstrate low energy performance.
The new approach for the development of green ePropellants with enhanced energy performance includes using ammonium dinitramide (ADN) as the main oxidizing agent. As a fundamentally new opportunity to further improve the energy efficiency of the TEMs, we propose to use the predictive ion-stimulated pulse-plasma modification of various carbon-based allotropes, used as catalytic nanoadditives, at the stage of the preliminary preparation of components (conversion to the nanoscale) by the plasma-driven attachment of multicavity nanomatrices of the 2D-ordered linear-chain carbon, which serve as functionalizing nanocarriers of multiple heteroatom clusters. The vacant functional nanocavities of the nanomatrices available for heteroatom doping, including various catalytic nanoagents, create new pathways of heat transfer enhancement within the EM reaction zones. In this case, as a carbon-based allotrope applied as a catalytic nanoadditive, we propose to use GFs or modified thermally expandable graphite-based fibers. The structural features and physicochemical properties of the 2D-ordered linear-chain carbon multicavity nanomatrices, the ion-assisted pulse-plasma growth and the heteroatom doping toolkit, as well as the predictive activation and multifunctionality property tuning, will be discussed in detail in the following sections.

2D-Ordered Linear-Chain Carbon as a Functionalizing Nanocarrier

The spatial configuration of a 2D-ordered linear-chain carbon looks like a two-dimensionally distributed hexagonal set of parallel carbon chains interconnected by the van der Waals forces. In this case, the distance between the carbon chains is estimated as 5 Å [27,28]. The 2D-ordered linear-chain carbon can also be considered as a carbyne-enriched nanomatrix containing encapsulated oriented linear chains of carbon atoms-the monatomic carbon filaments. Our understanding of the nanoarchitecture and the physicochemical properties of the 2D-ordered linear-chain carbon is still developing. For practical use of the carbyne-enriched nanomaterials, the ability to ensure the high stability of this nanomaterial is of key importance. Carbyne, the "holy grail" of carbon allotropes, represents one of the examples of the unknown in science. The ideal one-dimensional form of carbon, an infinitely long linear chain of carbon atoms, named carbyne, has attracted much interest due to its advanced mechanical and physicochemical properties, including a mechanical strength predicted to be an order of magnitude higher than that of diamond [29-31]. Like graphene, carbyne is just one atom thick, which gives it an extremely large surface area in relation to its mass. Carbyne was first described in 1885 by Adolf von Baeyer. Back in the 1930s, astronomers discovered carbynes as one of the first molecules in interstellar space. Later, astronomers found signs of the presence of cosmic carbyne crystals in interstellar dust clouds. One of the possible routes for the formation of cosmic carbyne crystals from carbonaceous dust is connected with intensive synchrotron radiation under a coronal photon flux in the interplanetary medium. In this connection, carbyne crystals can be considered as out-of-this-world interstellar material. The unique space environment, along with the intensive electromagnetic and radiation fields and microgravity conditions, provides ideal conditions for the growth of carbyne crystals that are more perfect than their counterparts grown on Earth.
The Earth-grown carbyne crystals usually contain defects that induce instability in the crystal structure. Research has shown that crystal growth in microgravity conditions has a benefit due to the lack of buoyancy-induced convection, which affects the transport of molecules in the crystal. The discovery of interstellar carbyne crystals can be considered as a key for understanding the mechanism of its formation and the environments in which its formation occurs, and also supports the concept of self-organization phenomena during the carbyne chain's growth. The electronic configuration of a linear-chain carbon molecule contains two types of bonds: the (σ)-bond is responsible for the mechanical stability of the linear-chain carbon molecule, and the (π)-bond, along with mechanical stability, is responsible for the electrical properties, because the π-electrons are delocalized and hence belong to the whole chain of atoms. The electronic configuration of a part of a linear-chain carbon molecule is presented in Figure 1. The linear carbon chains are observed in carbon vapor at temperatures above 5000 K. The technology for growing this unique carbon nanomaterial looks quite simple, since it demonstrates the ability to self-organize in a vacuum when condensed from carbon vapor at temperatures above 3150 K. However, pure carbyne in the condensed phase exhibits extreme instability due to its high chemical activity. The growth of the macroscopic crystals of the carbyne is inhibited by the instability and high reactivity of this allotropic form of carbon. Carbyne physicochemical properties can be manipulated through the chain length, doping by nanoclusters and by a type of chain termination. Nanostructural stability depends on the linear carbon chain's length. Free carbon-atom wires (CAWs) of any length must be terminated by molecular complexes to ensure their stability. Most current research efforts are concentrated on searching for possibilities for the stabilization of the sp-hybridized carbon chains. The growth process determines the resulting properties of the carbyne-enriched nanomaterials. During a long period, attempts to grow long carbyne molecules were limited by its extreme chemical instability. In 2016, a research team at the University of Vienna proposed and demonstrated a fundamentally new strategy to ensure the structural stability of extremely long sp-hybridized carbon chains containing more than 6000 carbon atoms through growing them within the long nanomatrix formed by double-walled carbon nanotubes [31]. The double-walled carbon nanotubes provided efficient structural stabilization of the encapsulated one-dimensional carbyne molecule, which is extremely important for designing the advanced functional nanostructural metamaterials. Such kinds of carbon nanotubes serve as nanoreactors and protect the sp-hybridized carbon chains from interaction with the environment. This outstanding result showed the new fundamental possibility of using the control of the nanomatrix spatial structure for programming the stability of the inserted sp-hybridized carbon chains. The strategy, connected with the growth of the sp-hybridized carbon nanostructures in the composition of multicavity nanomatrices, seems to be the most promising way for the creation of the advanced carbon-based nanostructured metamaterials. In 2019, an alternative modification of this strategy was proposed by creating a new class of carbon allotropes synthesized by encapsulating linear sp-hybridized carbon chains into cylindrical nanocavities formed within the sp-3 carbon hexagonal structure [32]. Because the sp-hybridized carbon chains are highly unsaturated, they tend to react to form additional bonds, resulting in the decay into both sp-2 and sp-3 structures. In this regard, it is possible to ensure the incorporation and isolation of sp-hybridized carbon chains inside some sufficiently wide and long nanocavity formed by sp-3-hybridized carbon to prevent the spontaneous reaction and decay of the sp-hybridized carbon chains. As such an insulating nanostructure, we chose a diamond hexagonal structure, which has a sufficiently extended nanocavity along the axis.
The result was the growth of an allotrope demonstrating the characteristic high-frequency vibrations associated with the sp-hybridized chain stretching modes and having a long-time stability in room-temperature environments. Relatively recently, a research team in the Carbon Nanosystems Lab of the Physical Electronics Chair of the Physics Department of Moscow State University (Babaev, V.G., Guseva, M.B., Streletsky, O.A., Khvostov, V.V., Savchenko, N.F.) found new routes to encapsulate the oriented linear chains of carbon atoms-the monatomic carbon filaments-into the matrix of amorphous carbon, thus creating bends and controlling the end groups in the process of ion-assisted pulse-plasma growth [27,28]. This is one of the most promising routes to obtain stable carbyne-enriched nanostructured metamaterials, by growing them within the composition of the multicavity nanomatrix through the ion-assisted pulse-plasma deposition from the carbon plasma [28]. This technique also opens possibilities for the further increase of the long carbon chain stability through the assembling of atomic clusters of different chemical elements, for instance, silver, gold, titanium, etc. Within a 2D-ordered linear-chain carbon nanomatrix, the carbon atom wires very weakly interact with each other (due to the van der Waals interaction), and therefore the properties of such nanomatrices are actually determined by the properties of individual CAWs. The ordered array of the one-dimensional carbon chains packed parallel to one another in hexagonal structures is oriented perpendicular to the substrate surface. In the carbon atom chains, each carbon atom is connected to the two nearest neighbors by the sp-1 bonds. The geometric characteristics of the 2D-ordered linear-chain carbon nanomatrix are presented in Figure 2. Along the vertical axis (Y), the electrical conductivity of the nanomatrix is metallic. The charge transfer along the carbon atom chains along the Y axis is carried out by delocalized electrons of the π-bond. Due to the absence of coupling between the carbon chains along the longitudinal and transverse axes (X, Z) (where only the van der Waals forces exist), the nanomatrix demonstrates dielectric properties along these axes. Such nanomatrices can be considered as an array of mutually-stabilizing CAWs, which also includes a set of different sp-phases that provides stabilization of the sp-hybridized carbon chains. The straight carbon chain becomes unstable as it lengthens. A carbon chain with self-forming kinks (see Figure 2) is a more stable system (energetically more favorable) than a straight carbon chain. To ensure the stability of the growing carbon chains, it is necessary to provide a special requirement for the plasma-pulse specific energy. The electron temperature of the plasma should not exceed the bond-breaking energy in the carbon chains, since this leads to the "crosslinking" of these carbon chains and the formation of amorphous carbon with a short-range order of the diamond or graphite type. The specific energy of the plasma pulse should exceed the breaking energy of the sp-2 bonds (614 kJ/mol) and the sp-3 bonds (348 kJ/mol) but should not exceed the breaking energy of the sp-1 bonds (839 kJ/mol) in the evaporating carbon chains. The controlled bond-breaking and sp-phase transformation can be provided through the predictive ion-assisted stimulation with specific energy levels. At large thicknesses of the growing carbyne-enriched nanomatrix, the probability of interchain interaction and the formation of crosslinks between the carbon chains increases. In this regard, the growing nanomatrix is stimulated by the argon ions and, with an increase in thickness, its structure is additionally stabilized by the hydrogen ions injected into the arc discharge plasma during the carbon condensation process [28].
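As an aside, the bond-energy window quoted above reduces to a simple numerical criterion. The check below is a hypothetical illustration only; the threshold values come from the text, while the function itself is not part of the source.

```python
# Bond-breaking energies quoted above, in kJ/mol
E_SP2 = 614.0   # sp-2 (C=C) bonds
E_SP3 = 348.0   # sp-3 (C-C) bonds
E_SP1 = 839.0   # sp-1 bonds in the evaporating carbon chains

def pulse_energy_preserves_chains(e_pulse: float) -> bool:
    """True if a plasma-pulse specific energy (kJ/mol) is high enough to
    break sp-2 and sp-3 bonds yet low enough to leave the sp-1 chain
    bonds intact, i.e. the growth window described in the text."""
    return max(E_SP2, E_SP3) < e_pulse < E_SP1
```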
The 2D-ordered linear-chain carbon nanomatrix represents a multicavity nanostructure containing vacant functional nanocavities available for the assembling by atom clusters of various chemical elements. The schematic representation of the vacant functional nanocavity of the nanomatrix available for nanocluster assembling is presented in Figure 3. By the cluster-assembling of the spatial structure of the 2D-ordered linear-chain carbon nanomatrix with various molecules and specific catalytic agents and chemical elements, the properties of the nanomatrix can be modified or new structural and physicochemical properties can be added. Cluster-assembly of the nanomatrix can occur both without chemical interaction (so-called intercalation) and with the rupture of the π bonds, which can lead to an additional reaction. For instance, by assembling the 2D-ordered linear-chain carbon nanomatrix with calcium clusters, which suck up hydrogen molecules, a high-density, reversible hydrogen storage device is created. In accordance with the geometric characteristics of the 2D-ordered linear-chain carbon-based nanomatrix and the configuration of the vacant functional nanocavity available for nanocluster assembling, as shown in Figures 2 and 3, the scheme of the multiple heteroatom doping functionalization of the nanocarrier, based on the multicavity nanomatrix, is presented in Figure 4. The 2D-ordered linear-chain carbon nanomatrix could serve as an efficient basis for the design and the growth of the new carbon-based nanostructured metamaterials with unique electrophysical, optical, structural, topographic and chemical properties. The spatial structure of such a nanomatrix can also self-adjust to the structure of the embedded atom clusters. With a small chemical doping, the carbyne-enriched functionalizing nanocarriers can be transformed into a controllable piezoelectric material. Assembling the carbyne-enriched functionalizing nanocarriers by the piezoelectric nanomaterials clusters, for instance, by lithium atoms or zinc oxide (ZnO) nanoparticles, can transform them into the piezoelectric nanogenerators that can be used to control the electric charge distribution within the nanomatrix growing zone. This effect can also be reversed: applying an electric field to a piezoelectric nanogenerator will cause it to change shape or deform. The deformation generated in the piezoelectric nanogenerators is proportional to the magnitude of the applied electric field.
Pulse-Plasma Deposition Reactor for Growing the Functionalizing Nanocarriers

The cathodic arc plasma deposition (CAPD), or arc-PVD (PVD is physical vapor deposition), is a physical vapor deposition technique in which an electric arc is used to vaporize material from a cathode target. The vaporized material then condenses on a substrate, forming a 2D-ordered linear-chain carbon nanomatrix. The experimental set-up for the ion-stimulated pulse-plasma deposition of the 2D-ordered linear-chain carbon nanomatrix with the capability of cluster-assembly by various chemical elements is presented in Figure 5. The main components of the pulse-plasma deposition reactor for growing the 2D-ordered linear-chain carbon nanomatrix are as follows: the vacuum chamber; the pulse-plasma carbon generator; the ion source for ionic stimulation; the target assembly with removable target material. The ion and plasma beams intersect above and at the substrate surface. The ion beam irradiation of the deposition zone forms bends in the attached carbon chains which stabilize the growing chain ensemble. The evaporation of the carbon plasma sheaf from the main discharge graphite cathode 3 is caused by the local heating of the graphite surface by electron bombardment to T = 3000 °C. The chains of carbon atoms, Cn (where n = 1, 2, 3, . . .), formed in the plasma sheaf, are directed by the electrode system to impinge upon the surface of the substrate, where the polycondensation of the carbon chains takes place.

Figure 5. Schematic representation of the pulse-plasma deposition reactor for growing the 2D-ordered linear-chain carbon nanomatrix: 1-vacuum chamber; 2-substrate; 3-pulse-plasma carbon generator (graphite cylindrical main discharge cathode); 4-the ion source for ionic stimulation; 5-target assembly with removable target material; 6-vacuum sensor.

The schematic representation of the pulse-plasma carbon generator installed in the reactor of the experimental set-up (Figure 5) is shown in Figure 6. An arc discharge is ignited between the main discharge cathode 1 and the main discharge anode 2 (which are preferably separated by a voltage of about 200 V) by means of an auxiliary discharge between the ignition anode 6, the main discharge cathode 1 and the auxiliary discharge anode 4 surrounding the main discharge cathode 1. The auxiliary discharge is ignited by means of ignition cathode 5. The main discharge cathode (item 1 in Figure 6) design depends on the purposes of deposition and can be manufactured as a composite structure, containing cylindrical rods from various materials, used for heteroatom doping, for instance, with silver, tungsten, gold, etc. An example of the design of a cylindrical main discharge cathode is presented in Figure 7. The capacitors C1 and C2 shown in the diagram are connected with a power source that supplies an alternating voltage (100-300 V). In this case, a pulsed voltage with an amplitude of 800 V is applied to the ignition electrodes. The inductance L shown in the diagram is used to reduce the rate of the current increase to the required value. The growth of the carbon nanomatrix is stimulated by the irradiation with Ar+ ions. The flow of Ar+ ions is formed using a source of low-pressure ions, which is installed in a separate section of the reactor vacuum chamber (see Figure 5).

Figure 6. Schematic representation of the pulse-plasma generator installed in the reactor of the experimental set-up (Figure 5): 1-a cylindrical main discharge cathode (evaporated material, the high-purity graphite) containing cylindrical rods manufactured from various materials and used for heteroatom doping; 2-the main discharge anode; 3-a solenoid final focusing system with plasma neutralization; 4-second auxiliary discharge anode; 5-ignition cathode; 6-ignition anode; 7-dielectric insulator; 8-substrate holder.

Figure 7. Example of design of a cylindrical main discharge cathode (evaporated material, the high-purity graphite), containing cylindrical rods manufactured from various materials, used for heteroatom doping, for instance, silver, tungsten, gold, etc. This is item 1 in Figure 6.

The energy of ions bombarding the substrate surface depends on the substrate bias voltage, being varied in the range 0-300 eV by both the carbon plasma parameters and the ion source extractor voltage, depending on the parameters of the ion-stimulated pulse-plasma deposition. The nanomatrix can be deposited onto Si wafers, various metals and NaCl single crystals at an ion energy of 150 eV. Before deposition, the reactor chamber was pumped down to a residual pressure of 10^−4 Pa. The operating pressure during the deposition was 10^−4 Pa. The structure of the bonds in the grown carbon-based nanomatrices can be programmed by the processes of self-organization and autosynchronization of the growing nanostructures. A number of experimental studies show that the sp-bonds and the sp-hybridized carbon nanostructures are formed only in a narrow range of optimal ion-stimulated pulse-plasma deposition parameters and modes [33].
The thickness of the grown nanomatrix is programmed by the number of pulses, the energy per pulse, the capacitance of the main discharge capacitors and the charging voltage of these capacitors. The minimum coating thickness is 0.1-0.5 nm. At a deposition frequency of 1-5 Hz, the temperature of the samples does not exceed 60-80 °C. Ion assistance during the pulse-plasma growth of the nanomatrix has a significant influence on its structural and physicochemical properties. Transmission electron microscopy of the samples shows that the ion-assistance energy level significantly affects the shapes, sizes and spatial localization of the surface nanopatterns. The specific conductivity of ion-assisted samples is 10³-10⁴ times higher than that of samples deposited without ion assistance.

Ordered Pattern Formation at the Nanoscale

Structural self-organization and ordered pattern formation are the universal and key phenomena observed during the growth and cluster-assembly of the 2D-ordered linear-chain carbon-based nanomatrix under ion-stimulated pulse-plasma deposition [34,35]. Self-organization phenomena are also observed during the cluster-assembling of the nanocavities by atoms of various chemical elements due to the formation of new chemical, interatomic and intermolecular bonds. The pulse-plasma deposition zone is a vibration-sensitive medium for which the universal laws of cymatics are valid [36]. The pattern excitation phenomena in the pulse-plasma deposition zone are programmed by the interaction of several competing mechanisms, in particular through thermoelectric convection excitation, the state of stress in the deposited nanomatrix and the self-synchronization of the self-excited oscillatory cells in the deposition region. The self-organization and formation of surface patterns are most pronounced when the growing system is supplied with extra energy and with nanosized active centers. Transmission electron microscopy has demonstrated that the structure of a nanomatrix grown without ion assistance is homogeneous, while the structure of nanomatrices grown with ion assistance becomes inhomogeneous [37]. For the case of doping a 2D-ordered linear-chain carbon nanomatrix with silver clusters, transmission electron microscopy has shown that, with an increase in the energy flux into the growth region, the average size of the active nucleation centers decreased while their number simultaneously increased [38]. Recent research demonstrates that the growth of 2D-ordered linear-chain carbon-based nanostructures from high-temperature carbon plasma occurs in accordance with unified templates, the nanoarchitectures of which are determined by the vibrational state of the carbon plasma molecules.

Model Experimental Systems for Vibrational Activation

A set of model experimental systems demonstrates how vibrations significantly influence the structures of the deposited model nanomaterial. Paper [39] demonstrated a new approach to vibration-assisted thermal deposition of evaporated material in vacuum chambers. Earlier, papers [40,41] were devoted to atomic deposition experiments in which the authors studied nanoscale self-organization phenomena on the substrate surface using standing surface acoustic waves (SAW). Molecular dynamics simulations were used to describe the physical mechanism driving the structuring. However, this research did not take into account the influence of vacuum conditions during the deposition process.
Acoustic waves cannot propagate in a low-pressure environment. At the same time, acoustic waves can be transmitted from the vibration source to the solid as mechanical vibrations with a specific frequency. As a result, mechanical vibrations of the substrate during deposition can influence the surface morphology and the structure of the deposited nanomaterial. Selenium, having a variety of allotropic phases, is a convenient material for fundamental research into the influence of mechanical vibration during nanostructure deposition. Tellurium, like hexagonal selenium, is a typical crystalline semiconductor whose atoms form polymeric, covalently bonded helical chains packed into a hexagonal lattice through van der Waals forces. Due to this specific nature of tellurium, the formation of nanowires, nanotubes, nanorods, etc., is observed during nanostructure deposition. Such specific nanostructures can be considered model experimental systems that demonstrate the fundamental phenomena observed during vibration-assisted thermal deposition of evaporated material in a vacuum at acoustic wave frequencies. During the nanostructure deposition, mechanical oscillations were applied to the substrate at input frequencies of 0, 50 and 150 Hz and 4 kHz, at a deposition rate of 0.3 nm/s and a vacuum chamber pressure of 7 × 10⁻³ Pa. As can be seen from the atomic force microscopy (AFM) images of 150 nm thick tellurium nanostructures presented in paper [39], the acoustic waves applied to the substrate resulted in morphological changes, demonstrating the self-organization of the nanostructures. As demonstrated by the model experimental systems, vibrational activation is capable of transforming the orientation of the nanostructures in the grown nanomatrix.

In the late 18th century, the German physicist Ernst Chladni demonstrated the organizing power of sound and vibration in a visually striking manner. In the 1950s, the study of wave phenomena was continued by the Swiss scientist and anthroposophist Hans Jenny, who named the research field "Cymatics" ("kyma" is the Greek word for wave) [36]. Under this term he summarized all phenomena that appear when vibration and sound meet substance. Sound is both a wave and a geometric pattern at the same time. Hans Jenny also discovered that higher frequencies produced more complex shapes. As the frequency increases, the disappearance of one pattern may be followed by a short chaotic phase before a new, more complicated and stable structure appears. As the amplitude increases, the movements become more and more rapid and violent, sometimes with small eruptions. The forms, shapes and patterns of motion that emerged turned out to be primarily a function of frequency, amplitude and the inherent characteristics of the various materials. A significant detail in Jenny's research on vibrational forms in fluids and gases is that, after the first disturbance is activated in a fluid, gas or flame, the medium becomes sensitive to the influence of sound or vibrations. The acoustic hologram generated in the nanostructure growth zone through external vibrational-acoustic activation is capable of controlling the growth process, cluster-assembling and the formation of chemical bonds.
In accordance with the universal laws of cymatics and the unified template (Mereon Matrix) approach, during the vibration-assisted activation of the pulse-plasma deposition zone, the excitation of the self-organized patterns occurs in accordance with the three-dimensional unified template [42,43]. The Mereon Matrix is a three-dimensional template of a dynamic geometric process. The structure of material systems is formed on the basis of unified templates, the forms of which are defined by the presence of vibrations in the system. The connection between shape and vibration is defined through the Mereon Matrix. Accordingly, for the case we are considering, the formation of the 2D-ordered linear-chain carbon-based nanomatrix structure occurs in accordance with the unified template patterns described by the laws of cymatics.

The unique structural and physicochemical properties of nanostructured metamaterials arise not from the properties of the initial constituent materials, but from the specific design of their arrangement, geometry and orientation. The geometrical structure of metamaterials usually includes a repeated pattern at a scale smaller than the wavelengths of the phenomena they influence. Nanostructured metamaterials demonstrate unique physicochemical properties determined by their geometrical structure.

Surface Acoustic Wave-Assisted Micro/Nanomanipulation

Assuming that the 2D-ordered linear-chain carbon-based functionalizing nanocarriers are acoustically sensitive nanomaterials [27], we propose the predictive tailoring of the nanoarchitecture, patterning, vibrational characteristics and multifunctionality of the nanocarriers using a surface acoustic wave (SAW)-based toolkit. In particular, we propose to use the technology of growing the nanomaterials on acoustically excited, piezoelectrically active substrates. Assisting the nanomaterial growth by generating Rayleigh-type SAWs leads to patterning phenomena characterized by substantial lateral changes in the nanostructures, thicknesses and properties [44]. Certain frequencies of acoustic vibrations are capable of forming various geometric shapes; in addition, changes in the crystal structure are also induced. There are two basic types of bulk acoustic waves. The first is the longitudinal wave, in which the particles oscillate only in the direction of wave propagation. The second is the shear wave, in which the particle displacements are orthogonal to the wave propagation. The size and distribution of the pattern formation can be controlled by adjusting the deposition parameters and the SAW properties. Changing the acoustic driving frequency can be employed to modify the nanopattern size. The use of combinations of vibrations in different frequency ranges makes it possible to purposefully control the nanostructure of the grown metamaterial. The piezoelectric elements and layers are also capable of generating electromagnetic emission [44]. Acoustic and electromagnetic holograms with specified frequencies and spatial characteristics are capable of providing the spatial marking of the structure of the grown 2D-ordered linear-chain carbon-based nanomatrix. We propose to apply a new synergistic effect: simultaneous vibration-assisted excitation of self-organized wave patterns along with the manipulation of their structural and physicochemical properties through the electromagnetic field.
The interaction between the inhomogeneous distribution of the electric field generated on the vibrating piezoelectric layer and the plasma ions will serve as an additional energizing factor that controls the local pattern excitation as well as the self-organization of the nanostructures. The inverse piezoelectric effect can be applied during the ion-assisted pulse-plasma deposition of the 2D-ordered linear-chain carbon-based nanomatrix through a deposited piezoelectric layer (see Figure 8). The piezoelectric layer converts the supplied electrical signals into mechanical acoustic oscillations that propagate through the substrate and the growing 2D-ordered linear-chain carbon-based nanomatrix. Applying an alternating current or radio frequency excitation to the electrodes deposited onto the piezoelectric material generates an acoustic wave that propagates either perpendicular to the surface of the deposited nanomatrix into the bulk medium (bulk acoustic wave) or along the surface of the growing nanomatrix (SAW). Using different acoustic excitation frequencies and waveforms generated by the piezoelectric elements excites the specific unified templates that mark the spatial geometry for growing the nanostructures. Such vibration-acoustic activation can be used to program the required nanoarchitecture of the grown carbyne-enriched nanomatrices. The simultaneous use of the direct and inverse piezoelectric effects opens up the possibility of interactive ion-assisted pulse-plasma growth of the 2D-ordered linear-chain carbon-based nanomatrix (see Figure 9).
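As a rough illustration of how the driving frequency sets the spatial scale of the excited pattern, the sketch below evaluates the elementary relation λ = v/f for a Rayleigh SAW. The velocity used is a typical literature value for a LiNbO3 substrate and is an assumption, not a parameter reported here.

```python
# Minimal sketch (assumption-labeled): relation between SAW driving
# frequency and wavelength, lambda = v / f. The Rayleigh SAW velocity
# of ~3980 m/s (typical for 128-degree Y-cut LiNbO3) is a literature
# value, not a parameter measured in this work.

V_SAW = 3980.0  # m/s, assumed Rayleigh SAW velocity on LiNbO3

for f_hz in (4e3, 50e3, 1e6, 100e6):
    wavelength = V_SAW / f_hz
    print(f"f = {f_hz:>10.0f} Hz -> lambda = {wavelength:.4g} m")
```

At sonic and low-ultrasonic frequencies the wavelength exceeds typical substrate dimensions, so the substrate effectively vibrates as a whole; sub-millimeter acoustic patterning emerges only toward the megahertz range, which is consistent with the claim that changing the acoustic driving frequency modifies the nanopattern size.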
Using the direct and inverse piezoelectric effects in a nanostructure monitoring system, one piezoelectric transducer is electrically excited, causing a vibration which can be detected by a multitude of piezoelectric transducers operated in sensor mode. By evaluating and comparing the excitation signal and the output signals of the various sensors, it is possible to obtain information on the current state of the nanostructure growth. Our research on model experimental systems shows that SAW-assisting the ion-plasma growth of the 2D-ordered linear-chain carbon-based nanomatrix leads to cymatic patterning phenomena characterized by significant changes in the matrix nanoarchitecture, as well as to chemical bond transformation and nanocarbon structural phase transformation. In the model experimental system under study, the complex synergistic effect is realized through the merging of several physicochemical processes: the interaction between the pulsed flow of high-temperature plasma and the ion flow, and the self-organization of carbon chains during condensation from carbon vapor in a vacuum, accompanied by the activation of the growing nanolayers by the SAWs.

In our model experimental system, to excite the SAWs we used lithium niobate (LiNbO3) piezoelectric substrates with a distributed electrode system as well as piezoelectric ceramic elements with a circular electrode system. These SAWs are like seismic waves excited by earthquakes, and their stress is highly concentrated near the surface. The advantage here is that the transducer on the piezoelectric substrate electrically generates nanoquakes on demand, launching spatially and temporally tailored waves of controlled magnitude. In particular, we also provide excitation of standing SAWs of the Rayleigh type on a LiNbO3 substrate. The research was conducted with SAW excitation in different ranges, including the sonic and ultrasonic ranges (4-50 kHz). Our studies have shown that the growth rate of a 2D-ordered linear-chain carbon-based nanomatrix is significantly higher with vibration than without it. With ultrasonic vibrations, the nanomatrix growth rate increases with frequency up to a certain value, after which it remains almost constant. The surface morphology of the nanomatrix grown with SAW assistance, investigated by cross-section scanning electron microscopy (SEM), is more compact and smoother than that of the nanomatrix grown without SAW assistance.

The sp-phase transformation in the nanomatrix growth zone during vibrational-acoustic manipulation through the SAWs is associated with bond breaking when the input high-frequency vibrational energy exceeds the sp2 and sp3 bond-breaking energies. We have used the SAWs for manipulating the phase transformations during the growth of mixed nanocarbon structures in which sp, sp2 and sp3 bonds occur. SAWs of the ultrasonic range are capable of influencing the formation of chemical bonds during the ion-assisted pulse-plasma growth of the 2D-ordered linear-chain carbon-based nanomatrix and of breaking the sp carbon chemical bonds as well as the long carbon chains. In particular, it was found that oscillations in the low-frequency sound range stimulate the formation of the sp2 and sp3 phases (the graphite and diamond phases, respectively).
With a shift to the high-frequency, ultrasonic region, the formation of the sp phase is stimulated. An example of the binding energy ratio for carbyne (sp) and graphene-like (sp2) hybridized bonds, obtained using X-ray photoelectron spectroscopy (XPS), is shown in Figure 10.

Figure 10. An example of the binding energy ratio for carbyne (sp) and graphene-like (sp2) hybridized bonds, obtained using X-ray photoelectron spectroscopy (XPS).

Raman Spectrum-Based Vibrational Signature

The 2D-ordered linear-chain carbon-based functionalizing nanocarriers can be considered as both an acoustically and electromagnetically sensitive nanostructured metamaterial [27]. The vibration of the sp-hybridized carbon chains encapsulated in the multicavity nanomatrix occurs due to the van der Waals interactions between them. Each grown 2D-ordered linear-chain carbon-based nanomatrix can be characterized by a unique vibration signature formed by a set of individual CAWs [45]. The sp-hybridized carbon chains oscillate like elastic strings: like the tuning of a guitar string, their vibration behavior is determined by length and tension (Figure 11).

The analysis of a 2D-ordered linear-chain carbon-based nanomatrix that can be performed with Raman spectroscopy goes beyond simple chemical classification and belongs to the category of vibrational spectroscopy [46-49]. Raman spectroscopy analyzes a sample through the excitation of molecular vibrations and the subsequent decryption of this vibrational interaction. The Raman spectrum of a multicavity nanomatrix contains a series of peaks that correspond to the different vibrational modes in a molecule. The position and linewidth of these peaks can be so unique to a system that they are considered a fingerprint for the molecular species.

The Raman vibrational spectrum contains a unique, high-resolution vibrational signature of the scattering molecule and can be used for the precise identification of molecules. In this regard, Raman spectroscopy is considered a key analytical characterization technique for the deep study of carbon-based nanomaterials and nanoscale surfaces [46-50]. Raman spectroscopy can be used as an effective tool for determining the structural arrangements that characterize two different forms of the same type of carbon-based nanomaterial, distinguishing between different allotropes of the same type of carbon nanomaterial, identifying phases and phase transitions, determining which regions of a nanomaterial are amorphous or crystalline, identifying whether any defects are present within the nanomaterial and determining the shape of the nanomaterials. As a self-sufficient technique, Raman spectroscopy has enabled unprecedented insight into the physicochemical and structural properties of carbon-based nanomaterials, and especially of low-dimensional nanocarbon allotropes. In particular, Raman spectroscopy is a key technique for bond structure characterization in nanocarbon allotropes; for instance, the spectral region between 1800 and 2400 cm⁻¹ is associated with sp-hybridized carbon (the carbyne phase).

In the process of the 2D-ordered linear-chain carbon-based nanomatrix growth, its nanoarchitecture can be modified using SAWs generated in a certain frequency range. The SAWs can also be regarded as nanoquakes, which are capable of modifying the phonons of the 2D-ordered linear-chain carbon-based nanomatrix and its Raman response, and can therefore be used for fine-tuning the vibrational properties of the growing nanomatrix. Changing the modes and parameters of the ion-assisted pulse-plasma growth of a 2D-ordered linear-chain carbon-based nanomatrix allows its vibrational signature to be fine-tuned. By analogy with the earlier proposed "genetic barcode" and "molecular barcode" concepts, a new concept of a universal "Raman Barcode" was proposed in 2021, obtained by converting the sequences of bands in the Raman spectrum for the accurate express recognition of various combinations of molecules [50,51]. The Raman spectra contain hidden quantum information that can be extracted and recognized by sonification into acoustic holograms [52].
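As a minimal illustration of reading such a fingerprint, the sketch below locates peaks in a synthetic Raman spectrum and flags those falling in the 1800-2400 cm⁻¹ window associated with sp-hybridized carbon. The spectrum itself is invented for the example, and this is not the authors' analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): flagging sp-carbon
# (carbyne) signatures in a Raman spectrum by locating peaks inside
# the 1800-2400 cm^-1 window mentioned in the text. The synthetic
# spectrum below is illustrative only.
import numpy as np
from scipy.signal import find_peaks

shift = np.linspace(1000, 3000, 2001)  # Raman shift axis, cm^-1
# Synthetic spectrum: a G-band-like peak (~1580 cm^-1), a carbyne-like
# band (~2100 cm^-1) and a little noise.
spectrum = (
    1.0 * np.exp(-((shift - 1580) / 25) ** 2)
    + 0.6 * np.exp(-((shift - 2100) / 35) ** 2)
    + 0.02 * np.random.default_rng(0).normal(size=shift.size)
)

peaks, _ = find_peaks(spectrum, prominence=0.1)
for i in peaks:
    region = "sp (carbyne window)" if 1800 <= shift[i] <= 2400 else "other"
    print(f"peak at {shift[i]:.0f} cm^-1 -> {region}")
```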
As in the science of stellar sound waves, molecular vibration frequencies are inaudible to the human ear. The understanding of these molecular vibration frequencies represents a revolution in nanomaterials science. The sonification of the vibrational signatures of the 2D-ordered linear-chain carbon-based nanomatrix opens the possibility of transforming them into acoustic holograms, which significantly expands the research opportunities.

Data-Driven Tailoring of Architecture and Functionality of Nanocarriers

Nanomaterials science is rapidly entering a data-driven age. For the predictive growth of the 2D-ordered linear-chain carbon-based functionalizing nanocarriers with a unique set of programmable microarchitectures and physicochemical properties, supported by extensive experimental testing, we propose to apply a new paradigm in materials science: a science based on data and deep materials informatics [53-55]. In this research area, experimental data is a new resource, and knowledge is extracted from materials datasets. In recent years, the application of data science techniques to nanomaterials research and development has demonstrated significant achievements, owing to their outstanding capability to extract the critically important data-driven linkages from various representative nanomaterial input data to the output nanoarchitectures and physicochemical properties [56,57]. Applying a deep materials informatics approach opens up unprecedented possibilities for the predictive programming of the spatial structure of the grown 2D-ordered linear-chain carbon-based nanomatrix at a fundamentally new level.

We propose the fine-tuning of the vibrational signature, heteroatom doping, functionality and nanoarchitecture of the 2D-ordered linear-chain carbon-based functionalizing nanocarriers by using precision SAW-assisted manipulation of the pulse-plasma growth zone, combined with the data-driven carbon nanomaterials genome approach. This approach represents a digital toolkit based on deep materials informatics belonging to the fourth scientific paradigm. The proposed data-driven carbon nanomaterials genome approach establishes linkages between the key modes and parameters of the ion-assisted pulse-plasma growth of the functionalizing nanocarriers and their resulting nanoarchitectures and physicochemical properties through a set of multifactorial computational models developed using extensive experimental data for a selected set of key descriptors or fingerprints. A critical condition for developing the data-driven carbon nanomaterials genome approach is the correct selection of the set of descriptors or features, with well-established correlations between the target and other properties, for inclusion in the multifactorial computational models that characterize the nanomaterials under study. The descriptors can be classified as numeric or categorical. Based on the use of these formalized universal linkages, it becomes possible to predict changes in the target nanoarchitectures and physicochemical characteristics for various ion-assisted pulse-plasma synthesis conditions and, vice versa, to predict the synthesis technological modes based on the required structural and physicochemical characteristics of the carbyne-enriched nanomatrix.
The set of multifactorial computational models mentioned above can be developed using modern data mining methods (feedforward neural networks, deep learning neural networks, multivariate adaptive regression splines, etc.). The artificial neural network toolkit opens unique possibilities to identify and describe, through the multifactorial computational models, all the hidden linkages of the carbyne-enriched nanomatrix vibrational signatures with their ion-assisted pulse-plasma synthesis modes and their target physicochemical properties and nanoarchitectures. The artificial neural network (ANN) is a machine learning (ML) methodology currently used for predictive modeling in many research areas; a detailed description of the ANN methodology can be found in a number of papers, e.g., [56,58-62]. The most modern approach to building multifactorial models, based on the use of deep learning neural networks, which makes it possible to confidently extrapolate the identified patterns and solve forecasting problems, is implemented in the data science software platform PolyAnalyst, developed by Megaputer Intelligence [63,64]. The data science platform PolyAnalyst is an industry standard for extracting usable knowledge from large amounts of structured and unstructured data [64].

We propose to consider the Raman spectra-based molecular fingerprints, the vibrational signatures, as one of the key descriptors for incorporation into the data-driven carbon nanomaterials genome approach, because this descriptor can capture the main peculiarities of the carbon-based nanomaterials. In this case, the multifactorial computational models will be developed using a set of Raman vibrational spectra of the investigated 2D-ordered linear-chain carbon-based functionalizing nanocarriers along with the key modes and parameters of the ion-assisted pulse-plasma synthesis, and the models are considered carriers of information about the linkages between the vibrational characteristics of the nanomatrix and the modes and parameters of its ion-assisted pulse-plasma growth.

Compared to existing computational approaches such as density functional theory (DFT), used for modeling the properties of carbon-based nanomaterials, deep materials informatics allows much faster prediction of properties and opens up the possibility of describing disordered structures. This circumstance is of decisive importance for the discovery of new carbon-based nanomaterials that would not have been obvious from intuition alone. The utility and versatility of artificial neural networks, one of the most promising data science methods, for predicting certain macroscopic properties of energetic compounds from large training data sets are demonstrated in [56,57]; this approach shows predictive accuracy comparable to that of well-known empirical models. The proposed schemes for tracking and tailoring the key descriptors and linkages for incorporation into the data-driven carbon nanomaterials genome approach are presented in Figure 12.
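A minimal sketch of the kind of multifactorial model described above, on entirely synthetic data: a small feedforward network maps four assumed synthesis descriptors (pulse energy, repetition rate, ion energy, SAW frequency) to an invented target standing in for a measured property such as the sp-phase fraction. The feature set, ranges and target function are illustrative, not the authors' models or data.

```python
# Minimal sketch (hypothetical data, not the authors' models): a small
# feedforward neural network mapping assumed ion-assisted pulse-plasma
# synthesis descriptors to an invented "sp-phase fraction" target.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 400
# Descriptors: pulse energy (J), repetition rate (Hz), ion energy (eV),
# SAW frequency (kHz) -- all ranges assumed for illustration.
X = np.column_stack([
    rng.uniform(10, 100, n),   # pulse energy
    rng.uniform(1, 5, n),      # repetition rate
    rng.uniform(0, 300, n),    # ion assistance energy
    rng.uniform(4, 50, n),     # SAW frequency
])
# Synthetic target: an invented smooth dependence standing in for
# measured sp-phase fractions.
y = 0.3 + 0.002 * X[:, 2] - 0.001 * X[:, 0] + 0.004 * X[:, 3]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
).fit(X, y)

print("Predicted sp fraction:", model.predict([[50, 3, 150, 20]])[0])
```

In a real pipeline, Raman-spectrum-derived features (the vibrational signature descriptor proposed above) would replace or extend the scalar descriptors used here.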
Electromagnetic and Acoustic Activation of the Functionalizing Nanocarriers

The proposed modifications of the production technology based on the transformative energetics concept open up new prospects for further enhancing the energy resources and controllability of the EM reaction zones. The 2D-ordered linear-chain carbon-based functionalizing nanocarriers in this case can serve as transformers of the acoustic hologram into electromagnetic emission, as well as directed self-assembly agents in an external electromagnetic field for manipulating the energy release domain localizations in the EM reaction zones.

For the predictive manipulation of the spatial distribution of the induction and energy release domains in the EM reaction zones, we propose the activation of the above-mentioned nanocarriers with the use of the earlier proposed plasma-acoustic coupling technique [65], as well as with the Teslaphoresis force field [66]. This activation includes the directed self-assembly of the above-mentioned nanocarbon-based additives and functionalizing nanocarriers. As has been shown recently, predictive manipulation by self-organized wave pattern excitation and by micro- and nanoscale oscillatory networks is one of the most effective ways to access the properties of the EM and solid propellant reaction zones [65,67].

Programmed acoustic emission from the oscillating plasma arc disc into the EM reaction zones opens possibilities for manipulating the networks of micro- and nanostructures. Applying different acoustic frequencies emitted by the plasma arc emitter will activate specific properties in the EM reaction zones and will change their spatial localization and properties. Moreover, such acoustic emissions are capable of exciting the self-organization of the EM reaction zones. The plasma-acoustic coupling mechanism transforms the input electrical energy into directed acoustic energy. The plasma glow discharge, corona discharge or electric arc then acts as a massless radiating element that creates compression waves in the gas; the plasma arcs themselves have zero weight. Generating the plasma arc within a magnetic field perpendicular to its current path results in a Lorentz force on the charged particles, causing the arc to sweep about the center of the coax and form a plasma disc. The technology of plasma arc emitters modulated by acoustic frequencies can provide manipulation of the self-organization and self-synchronization of the micro- and nanoscale oscillatory networks and of the self-organized wave pattern formation in the EM reaction zones.
For instance, the plasma arc force field emitter can be installed over the burning surface of the solid propellant end-burning charge with a special design of the electrode system. Recent experimental studies have confirmed that, during an arc discharge, a vibrating carbon nanotube (CNT) is capable of generating an electromagnetic field [68]. Doping the 2D-ordered linear-chain carbon-based functionalizing nanocarriers with clusters of piezoelectric nanomaterials turns them into piezoelectric nanogenerators that can convert a plasma-excited acoustic hologram into electromagnetic emission. This electromagnetic emission can then be used for manipulating the micro- and nanoscale oscillatory networks in the EM reaction zones. The transformation of an acoustic hologram into an electromagnetic hologram through the use of functionalizing nanocarriers doped with clusters of piezoelectric nanomaterials is schematically shown in Figure 13.

A high-frequency electromagnetic field applied through the plasma arc emitters over the EM reaction zones is capable of inducing the directed self-assembly of the carbon-based catalytic nanoadditives together with the 2D-ordered linear-chain carbon-based functionalizing nanocarriers. Such self-assembly is capable of changing the spatial distribution of the induction and energy release domains in the EM reaction zones. Relatively recent research conducted at Rice University experimentally proved a new phenomenon that opened the way for the directed self-assembly of the low-dimensional carbon allotropes. According to this experimental study, a set of carbon allotropes, as well as other nanomaterials, can assemble themselves at a large distance depending on the level of the applied high-frequency electromagnetic emission [66]. The traditional directed self-assembly of nanomaterials using electric fields has been limited to small-scale structures, but with the Teslaphoresis force field (an electrokinetic phenomenon), this limitation has been lifted. For example, CNTs exposed to a Teslaphoresis force field demonstrate polarization and self-assembly into long parallel arrays at the macro level and can serve as electric current conductors.

Application of the plasma-acoustic coupling-based technique, as well as the Teslaphoresis force field driving nanocarrier-directed self-assembly in the EM reaction zones, opens new ways for inertia-free control of the structure and properties of the reaction zones with minimum energy expenditure, and opens new doors for producing extremely small thrust impulses for the extra-precise attitude control of deep-space-capable small satellites.
Outlook and Perspectives

The next generation of aerospace propulsion systems requires energetic and propulsive materials that further increase the stored potential energy and thermodynamic performance. In this regard, the predictive programming of the storage and release of high-density energy on the required time scales, to ensure the required efficiency, is of great importance. Current research trends in the area of nanoenergetic materials are oriented toward performance and safety enhancements through the application of a variety of catalytic nanoadditives and the development of environmentally friendly EMs. The recently proposed transformative energetics concept, based on predictive synthesis at the nanoscale, can be considered a pathway towards the development of the next generation of nanoenergetic materials. Within the transformative energetics concept, we propose a set of new approaches and technological solutions that will significantly expand its capabilities and versatility for the development and synthesis of promising EMs and solid propellants. Our research has also provided a new strategy for developing a new generation of green high-energy ePropellants.

An important feature of the 2D-ordered linear-chain carbon-based functionalizing nanocarriers is the ability to fine-tune their nanoarchitectures and physicochemical properties. However, without the deep materials informatics approach, such tuning through trial and error is extremely difficult. The application of various acoustic excitation frequencies and waveforms generated in the growth zone of the 2D-ordered linear-chain carbon-based functionalizing nanocarriers excites and creates specific unified templates for the growth of the nanostructures and can be used for programming the required nanoarchitectures of the grown functionalizing nanocarriers. The SAW toolkit makes possible precise manipulation of bond breaking and formation during the ion-assisted pulse-plasma growth and, accordingly, of the sp-phase transformations in the nanomatrix growth zone. The possibility of programmable Chladni pattern excitation at the nanoscale is a key approach to controlling the nanoarchitectures and properties of the grown 2D-ordered linear-chain carbon-based nanomatrices. Since the required combination of sp-phases in the composition of the grown nanomatrices can only be provided within a narrow range of technological growth parameters, whose selection and exact maintenance is a difficult task, the SAW toolkit opens up new possibilities for the precision tuning of parameters in the nanomatrix growth zone to provide the required combination of sp-phases.

The 2D-ordered linear-chain carbon-based functionalizing nanocarriers, assembled with piezoelectric nanomaterial clusters, can also serve as converters of acoustic radiation into electromagnetic emission, and vice versa. Acoustic and electromagnetic holograms with specified frequencies and spatial characteristics are capable of providing the spatial marking of the structure of the grown 2D-ordered linear-chain carbon-based functionalizing nanocarriers. The combined use of the SAW toolkit, heteroatom doping and the data-driven carbon nanomaterials genome approach during the ion-assisted pulse-plasma growth of the functionalizing nanocarriers creates a synergy effect that multiplies the efficiency of the approaches used.
The vibrational signature of the functionalizing nanocarriers can be programmed in several ways: through fine tuning of the energy and repetition rate of the plasma pulses and the ion stimulation energy, through heteroatomic doping with clusters of atoms of various chemical elements, by programming the composition of the main discharge cathode and the distribution of cathode spots, and by activating the nanomatrix growth zone with the SAW. Taking into account the above-described approaches, the modified concept of the TEM multistage synthesis is schematically presented in Figure 14.

Application of the proposed new approaches and technological solutions opens new possibilities for smart control of the excited state of the EM reaction zones and of the self-organized wave pattern excitation in order to access the properties of the EM reaction zones: the micro- and mesoscale and spatial distributions of the induction and energy release domains. The proposed application of the plasma-acoustic coupling-based technique, as well as the Teslaphoresis force field driving nanocarrier-directed self-assembly in the EM reaction zones, opens new ways for inertia-free control of the structural and physicochemical properties of the reaction zones with minimum energy expenditure, and opens new doors for producing extremely small thrust impulses for the extra-precise attitude control of deep-space-capable small satellites.

In summary, we have proposed several concepts and technological approaches that are capable of unlocking new potential opportunities for the predictive extraction of extra energy from nanoenergetic material systems at the nanoscale through the synergistic use of heteroatom doping, the SAW-based toolkit and the deep materials informatics-based toolkit in the ion-assisted pulse-plasma growth of the 2D-ordered linear-chain carbon-based functionalizing nanocarriers. As means of extracting extra energy from transformative energetic materials, we propose the use of the plasma arc force field emitter-based technique and the Teslaphoresis force field-based technique.
The novelty of this research is that we have proposed new pathways for performance and safety enhancements of the next generation of high-end nanoenergetic materials, which can be used for the development of future multimode solid propulsion systems and deep-space-capable small satellites.
Research on the Evaluation and Promotion of Employees from the Perspective of Competency

With the gradual deepening of economic globalization, Chinese private enterprises face both opportunities and challenges. Under the new situation, the employees of developing private enterprises have encountered problems such as a lack of ability, technology and initiative, which hinder the realization of corporate development strategies and inhibit the competitive vitality of enterprises. The overall ability of the most basic level of the enterprise plays an important role in its production and sustainable development. Therefore, evaluating and improving the ability of grass-roots employees has become a focus of enterprise development. In the 1970s, scholars represented by McClelland proposed the "competency model" for employee development and formulated corresponding human resource management methods by quantifying employee competence. With the competency model, the performance of employees in specific positions can be clearly displayed, providing an important criterion for judging employees' work willingness and ability. This article takes Company A as an example, with the company's workshop employees as the research object. It reviews the development of competence theory at home and abroad, the definition of the competency concept and an overview of competency models, as well as the production and operation status of Company A. Based on the four key elements of the competency model (motivation, traits, self-concept and social role), the actual situation of Company A, and research methods including questionnaires and factor analysis, it proposes a development path for improving the ability of Company A's employees, intended to resolve Company A's production and operation difficulties and to promote the flexible application of the competency model in the modernization of the textile industry.

Introduction

Over the forty years since reform and opening up, private enterprises have driven China's economic growth and are a vital force in the development of the national economy. They have played an important role in stabilizing growth, promoting innovation, increasing employment and improving people's livelihood. With the gradual deepening of economic globalization, China's private enterprises face both development opportunities and severe challenges. Under the new situation, private enterprises have encountered problems in their development, such as low ideological and moral qualities of employees, insufficient work ability, low technical level and insufficient initiative and creativity. These problems are most evident in the textile and apparel, housekeeping, medical and education industries; they hinder the realization of corporate development strategies and inhibit the competitive vitality of enterprises. The overall ability of the most basic level of the enterprise plays an important role in its production and sustainable development. In the textile industry, workshop employees are the key cultivation objects of enterprises, and their capabilities bear on the overall development of textile enterprises. An employee evaluation mechanism can stimulate employees' work enthusiasm and improve the overall competitiveness of the enterprise.
Therefore, objectively evaluating and improving the ability of private-enterprise employees at the grass-roots level is an urgent problem in the development of private enterprises in our country. This article takes the employees of Company A as the research object, summarizes the relevant theories of competence of scholars at home and abroad through a literature review, and studies the construction of a competency evaluation system. A questionnaire survey is then used to analyze and evaluate the current competencies of Company A's workshop employees, identify the current problems in Company A's competency evaluation, and propose related improvement countermeasures.

Analysis of Employee Status

Company A is a private enterprise mainly in the textile industry, a typical labor-intensive enterprise. The company has a total of 517 employees, of whom 308 are production employees. Production relies mainly on production technicians, and the requirements for specialized knowledge and professionalism in textile production are low. In addition, there are operating guidelines for the production-related links. On the whole, the qualification requirements for employees are not very strict. From the perspective of the distribution of academic qualifications, the higher the level of educational qualification, the smaller the number of employees, showing a decreasing trend. In the age distribution, middle-aged workers account for a relatively high share. These employees are an important force for enterprise development, and it is their presence that maintains the normal operation of the workshop and ensures stable production.

Problems with Post Capabilities

The ability of Company A's workshop staff lags behind that of other enterprises in the same industry. The company is dominated by long-serving employees, leading to a lack of vitality. Job capability development for workshop employees focuses on technical operation skills and neglects other capabilities; more attention is paid to training employees' post skills, while little consideration is given to employees' careers or to the comprehensive capability requirements of specific posts.

Problems in Production Efficiency

The overall production efficiency of the workshop is relatively low, technology updates are slow, and the hardware configuration level of the workshop has gradually fallen behind. At the same time, employees are accustomed to the old production equipment and lack learning and training for new equipment, and training takes a long time, making technological innovation difficult and greatly reducing production efficiency. Judging from employee enthusiasm, most of the older employees, accustomed to the traditional assessment methods, have not changed their mode of thinking. They still maintain a working mentality of completing the specified number of pieces and clocking in and out on time, which makes it difficult to achieve higher production performance.

Problems in the Training System

The workshop staff training system is incomplete and training efficiency is low. The timing and content of staff training need to be improved. The personnel trained in the company are a small, fixed group, and employees can rarely improve their technical level and post quality through professional training.
The training content focuses on the sharing and popularization of technical knowledge and ignores the training of teamwork spirit, innovation ability, contingency ability and execution ability. For workshop employees, a post requires not only proficiency in equipment operation and technical knowledge, but also comprehensive quality in other respects, so the overall training effectiveness is weak.

Literature Review

The emergence of the concept of competence is inseparable from the industrialization of human society. After the Industrial Revolution, human society gradually entered the era of large-scale industry, and labor-intensive enterprises represented by the textile and processing industries rose rapidly. The differences in the quality and level of work among workers gradually came to be valued by management scholars. Harvard psychology professor David McClelland further elaborated the meaning of competence in his research. In his article "Testing for Competence Rather Than Intelligence", published in American Psychologist, he pointed out that the traditional model for assessing workers' capabilities has inherent flaws, and that "competency" must be the core element for evaluating employees' performance at work, because competence includes the core conditions and behavioral characteristics that really affect employees' job performance. Only those elements that significantly distinguish job performance can serve as criteria for judging competency. In addition, McClelland argued that judging the quality and level of employees' work through competence must be based on objective data, to minimize the impact of subjective judgment and ensure scientific and objective evaluation results. The management scholars Hamel and Prahalad built on the concept of competence and published the paper "The Core Competence of the Corporation". In this article, they extended the concept of competence beyond the personal dimension and applied it to the company's organizational structure, attempting to build a "workers-posts-departments" human resource management framework and creatively introducing the concept of competency into a new area of corporate management.

Through a summary analysis of the academic viewpoints, some commonalities in the definition of competence can be found both at home and abroad. First of all, competency is not a single-dimensional element but a collection of multiple elements; motivation, knowledge, personal characteristics and skills appear most often in this collection. Therefore, when studying employee competence, these elements need to be considered. Second, the concept of competence is an open, constantly evolving concept. Judging from the development experience of domestic enterprises, in addition to some common elements, the concept of competence will also cover individual elements depending on the industry. The concept of competence has a certain personal color, and its meaning may differ from person to person. Therefore, from the perspective of competence, the analysis of employees' capabilities must be closely integrated with all aspects of employees' work and lives to draw conclusions that are as practical as possible.
From the perspective of classification, the elements included in the competency model can be divided into two categories: the first is for ordinary employees, and the second is for middle and senior management personnel. To analyze the ability training of Company A's workshop employees based on the competency model, it is necessary to analyze not only the competency model itself, but also the characteristics of the workshop employees' own abilities.

Guo Yu (2019) pointed out that the comprehensive ability of production workshop employees affects not only the company's efficiency in applying technology and equipment, but also its overall production efficiency and product quality. Poor professional skills and a weak sense of responsibility in the production process, together with inadequate cooperation among employees, lead to poor communication and improper operation of production equipment, causing unnecessary losses, reducing production efficiency and increasing input costs. Zhu Jia (2019) pointed out that the quality of employees largely determines the production efficiency and quality level of an enterprise. At present, workshop employees in labor-intensive industries generally lack the necessary professional knowledge and understanding, and their adaptability and learning ability are generally poor. When faced with updates to corporate management and business concepts, they find it difficult to accept them quickly, so new production methods are difficult to implement effectively and further development is constrained. Chang Yanfeng (2019) divided the employees of the production workshop into four categories based on their duties and responsibilities: the first category is the basic staff, who account for three quarters of the entire workshop; the remaining three categories are grassroots management staff, middle management staff and senior leadership, who together account for a quarter of the total number of employees in the workshop. From the perspective of personnel distribution, the basic staff are concentrated in the production area of the workshop and include production personnel, quality inspection and monitoring personnel, product distribution personnel and security inspectors. Judging from the level of education, most of these grassroots staff do not have a bachelor's degree, and most have only a junior high school or college education. In addition, because these grassroots workers have been engaged in mechanical production for a long time, the habits and inertia formed in the process make it difficult for them to accept the company's new management and business philosophy, which hinders innovation in the company's overall management.

Combining the above domestic and foreign studies, we can find that the current research on the competency model and the abilities of workshop employees has the following characteristics. First, when applying theory to practice, most studies ignore the gaps between different production industries, making their suggestions formulaic and lacking practicality and pertinence. Second, there is a large gap between domestic and foreign research on the capabilities of workshop employees.
Due to China's special national conditions, the development of its labor-intensive industries differs from overall world development, so foreign studies on the work ability of workshop employees are of limited relevance to China. Secondly, owing to the different industrial structures at home and abroad, domestic and foreign workshop employees differ greatly in vocational skills, education level and learning ability, even when they hold comparable posts. Methodology The main purpose of building the competency model in this study is to summarize the factors that affect the work ability and quality of Company A's workshop employees and to establish a systematic, scientific and accurate employee competency evaluation system, providing guidance for the subsequent improvement of the workshop employees' working ability and quality and exploring new ideas for employee ability training. When screening the elements of competence, the literature research method and the behavioral event interview method were mainly used. On the basis of these two methods, the competence factors in McClelland's "Competency Dictionary" were further analyzed and verified, and the relevant elements of the competency model in this paper were adjusted accordingly. Apply Literature Research The research results on the construction of the sales-representative competency model of textile company G, the workshop-employee competency model of a textile enterprise in city S of province A, and a related survey of textile-industry competence by a human resources consulting company in city B are summarized. Taking into account the characteristics of Company A's corporate culture and regional industry, the factors in McClelland's "Competency Dictionary" that are inconsistent with Company A's actual operating conditions are excluded. Use Behavioral Event Interview Method Through structured communication with ordinary employees, grassroots managers and middle-level managers in Company A's workshop, ordinary employees were asked to review the typical events and real cases that had occurred since they joined the workshop and to evaluate their own competence; grassroots and middle management personnel were asked to explain the evaluation results and reasons in the evaluation forms they submit each month and to assess their employees' competence. Design the Questionnaire A total of 308 questionnaires were distributed to the 308 workshop employees surveyed, of which 297 were valid. The basic information is shown in Table 1. Determine the questionnaire parameters. This study involved employees' assessment of their own competence and their assessment of others' competence. For the evaluation criteria, a five-point Likert scale ("very important / more important / generally important / less important / not important") was selected, so that respondents could judge each competency element on the questionnaire and score it accordingly. The average scores of the six competency elements were sorted, the standard deviations were then analyzed and processed, and, combined with the results of the preceding analysis, the results of the questionnaire analysis are shown in Table 2.
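For illustration only, the scoring step described above can be sketched in a few lines of Python; the response matrix and the six element names here are hypothetical stand-ins, not the study's data (the actual figures are those of Tables 1 and 2):

```python
import numpy as np

# Hypothetical Likert responses, coded 1 ("not important") to 5 ("very important");
# rows are the 297 valid questionnaires, columns the six competency elements.
elements = ["motivation", "knowledge", "skills",
            "traits", "self-concept", "social role"]  # assumed names
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(297, len(elements)))

means = responses.mean(axis=0)
stds = responses.std(axis=0, ddof=1)

# Sort the elements by mean importance, as done when compiling Table 2.
for rank, idx in enumerate(np.argsort(means)[::-1], start=1):
    print(f"{rank}. {elements[idx]}: mean={means[idx]:.2f}, sd={stds[idx]:.2f}")
```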
By investigating the factors affecting the work quality and work level of Company A's workshop employees, and combining the company's corporate culture, its development strategy and the future direction of the textile-industry market, the performance standards of Company A's workshop employees were confirmed; the steps of sample selection, data analysis, expert interviews and inspections were then completed to construct the competency model. Conclusions and Recommendations The textile workshop evaluates employees' competence in order to help the company find the best candidates, while at the same time enabling employees to improve their own abilities; the two promote each other and grow together, which ultimately benefits the development of the enterprise. Analyze Employee Training Needs and Formulate Specific Training Programs 1) Based on the competency model, quantitative and qualitative analyses of employee training needs should first be conducted. A series of tests were conducted on the current strengths and weaknesses of Company A's workshop employees in their competencies: their desire for achievement, their sense of responsibility for innovation and creativity, the four types of personality temperament, communication skills, and self-awareness and self-evaluation. The specific analysis results are shown in Table 3. Among the factors affecting employee competence, the motivation component has the highest importance but also the highest degree of deficiency; employees generally lack awareness of their social status and role. Traits and self-concept, the other two major factors affecting competence, are still only average among Company A's current workshop employees and need to be strengthened through employee training. 2) Based on the analysis results, a training plan needs to be developed. The job-capacity training scale should be re-determined for Company A's workshop employees, and the methods that help improve employee competency should be listed, such as external exchange training, internal training by external lecturers, mutual assistance within employee groups, mentor tracking and coaching, daily science and technology lectures, and micro-learning platforms. The specific channels, advantages and disadvantages of these six job-capacity training methods were studied, leading to the conclusion that the training of Company A's workshop employees should flexibly combine the methods according to their advantages, disadvantages and the competencies they target. In the daily training process, it is necessary to pay attention to the needs of employees as well as to strengthen their scientific literacy, so that employees can absorb advanced textile science and technology as soon as possible and promote the modernization of Company A's workshop production chain. Improve the Recruitment Mechanism and Select Employees That Match the Position of the Company To improve the recruitment mechanism, it is necessary to plan the entire recruitment system of Company A. In terms of recruitment methods, the company can cooperate with local governments.
In order to promote the overall development of local light and heavy industry, the government can incorporate candidates' job motivation, personal characteristics, self-recognition and social status into the scope of recruitment assessment in conjunction with the competency model, and use its authority to issue a series of corporate recruitment announcements. Internet technology, combined with the local talent market, can broaden the scope of online recruitment. A matching recruitment review mechanism should be established, with the website backend and the HR recruitment mailbox synchronized regularly to simplify the recruitment process and save recruitment costs. The ways of recruiting technical talent should also be optimized: the traditional recruitment method can be retained to avoid a large-scale brain drain; school-enterprise cooperation can be strengthened to attract outstanding graduates; and the company can partner with selected colleges and universities, investing funds and technology to establish a textile-industry incubation base, thereby stimulating students' enthusiasm for learning and, combined with preferential employment policies, attracting graduates. Improve the Company's Management Rules and Regulations and Enhance Implementation Strengthen the workshop employees' learning of management concepts: by disseminating advanced textile-industry management concepts to employees, help them gradually understand the original intention of the company's management and dissolve the psychological gap between employees and company management. Encourage and urge employees to put forward suggestions for improving the company's management rules and regulations, so that employees become the formulators, executors and supervisors of the company's rules and management systems, thereby improving their implementation among Company A's workshop employees. Actively learn from the experience of advanced domestic and foreign enterprises in building management rules and regulations. Select the workshop employees with a high education level, initiative and strong learning ability, give them certain management rights, and establish a workshop technical-leader system, so that advanced employees take the lead and the execution of the entire workshop improves. In summary, an effective employee competency evaluation mechanism is of great significance for improving the overall competitiveness of the enterprise and for its future development. With the rapid development of the modern knowledge-based economy, talents are increasingly required to be diversified and comprehensive. To enhance the value of research results from the perspective of the competence of textile-workshop employees, a comprehensive analysis is needed, covering both production competence and organizational competence. Enterprises need to improve their management rules and regulations, strengthen the study of management concepts, establish a sense of rules and a contractual spirit, ensure the team's execution, attach greater importance to employee training, build a professional team and create a complete staff evaluation mechanism, so as to cultivate comprehensively developed personnel for the enterprise to the greatest extent and provide important personnel support.
4,853.8
2020-02-10T00:00:00.000
[ "Business", "Education", "Economics" ]
A statistical model for early estimation of the prevalence and severity of an epidemic or pandemic from simple tests for infection confirmation Epidemics and pandemics require an early estimate of the cumulative infection prevalence, sometimes referred to as the infection "Iceberg," whose tip consists of the known cases. Accurate early estimates support better disease monitoring, more accurate estimation of the infection fatality rate, and an assessment of the risks from asymptomatic individuals. We find the Pivot group, the population sub-group with the highest probability of being detected and confirmed as positively infected. We differentiate infection susceptibility, assumed to be almost uniform across all population sub-groups at this early stage, from the probability of being confirmed positive. The latter is often related to the likelihood of developing symptoms and complications, which differs between sub-groups (e.g., by age, in the case of the COVID-19 pandemic). A key assumption in our method is the almost-random sub-group infection assumption: the risk of initial infection is either almost uniform across all population sub-groups or not higher in the Pivot sub-group. We then present an algorithm that, using the lift value of the Pivot sub-group, finds a lower bound for the cumulative infection prevalence in the population, that is, gives a lower bound on the size of the entire infection "Iceberg." We demonstrate our method by applying it to the case of the COVID-19 pandemic. We use UK and Spain serological surveys of COVID-19 in its first year to demonstrate that the data are consistent with our key assumption, at least for the chosen Pivot sub-group. Overall, we applied our methods to nine countries or large regions whose data, mainly during the early COVID-19 pandemic phase, were available: Spain, the UK at two different time points, New York State, New York City, Italy, Norway, Sweden, Belgium, and Israel. We established an estimate of the lower bound of the cumulative infection prevalence for each of them. We have also computed the corresponding upper bounds on the infection fatality rates in each country or region. Using our methodology, we have demonstrated that estimating a lower bound for an epidemic's infection prevalence at its early phase is feasible and that the assumptions underlying that estimate are valid. Our methodology is especially helpful when serological data are not yet available to gain an initial assessment of the prevalence scale, and more so for pandemics with asymptomatic transmission, as is the case with COVID-19. Introduction A common problem when attempting to manage epidemics and pandemics at their beginning is assessing the total infection prevalence of the disease. For example, this was often a key issue in the case of the COVID-19 pandemic in its first year. The problem was often referred to colloquially as assessing the total Infected "Iceberg's" size (including the portion of the "Iceberg" that is "underwater," which is composed of asymptomatic infected individuals) [1][2][3][4][5]. Correct estimation of the total infection prevalence also bears directly on the infection fatality rate (IFR); a good lower bound for the first estimate indirectly provides a good upper bound for the second. One suggestion to solve the problem is to use serological testing of the population, preferably sampled randomly, to assess the overall infection prevalence [6][7][8][9][10].
For example, in the case of the COVID-19 pandemic in Spain, serology yielded a mean of 5% positive seroprevalence using point-of-care (PoC) testing and a mean of 4.7% positive seroprevalence using laboratory-based immunoassay testing [11]. In the case of COVID-19 in the UK, a large-scale self-administered immunoassay with over 100,000 volunteers suggested that, by the time the serological tests were performed, a mean of 6.4% of the population had been infected [12,13]. However, serological and antibody home testing have a known caveat, since previously symptomatic people might be more likely to participate in these tests [14]. Another caveat is that in COVID-19 the kinetics of the neutralizing antibody response is typical of an acute viral infection, with declining neutralizing antibody titers observed after an initial peak, the magnitude of which depends on disease severity [15]. Furthermore, serological tests are often difficult to administer and costly [16][17][18]. An alternative strategy for determining infection prevalence is the performance of massive acute disease testing during an epidemic. In the case of the COVID-19 pandemic, one suggested strategy, which attempts to reduce costs, is pooled testing [19]. However, pooled testing requires a dedicated testing infrastructure and overcoming multiple technical hurdles. Other researchers have assessed through simulation the effect of various assumptions on the proportion of asymptomatic cases and their infectivity and compared the results to actual data [20,21]. Here, we suggest a simple statistical method that uses only the distribution of the data of the patients who are confirmed as positive for the disease in question to set a lower bound on the size of an epidemic's cumulative infection prevalence (and, correspondingly, an upper bound on the IFR). Our method is tailored for the early days of a pandemic, before vaccinations are available, and offers an easy-to-implement tool. Methods We first define our terms and key assumption and then outline our estimation methodology as an algorithm. Definition and assumptions Our method for estimating the minimal cumulative infection prevalence relies on finding a sub-group in the population for which the relative risk of being positively confirmed as infected is the highest. We refer to this high-risk sub-group as the Pivot group. Since the cumulative infection prevalence was often referred to as the "size of the Iceberg," we define an Iceberg Factor (IF) as the ratio of the total size of the infected population to the number of confirmed infected individuals. We further define the Minimal Iceberg Factor (MIF) as the smallest IF that explains the number of individuals in the Pivot group. A key assumption in our method is the almost-random sub-group infection assumption: the overall risk of initial infection, which we refer to as S_0, and which is composed of several different components (in particular, the risk of exposure as well as the susceptibility to being infected), is in total (disregarding for a moment the specific values of these components) either almost uniform across all population sub-groups or at least not higher in the Pivot sub-group; to compute a valid lower bound on the IF, it is enough that S_0 is not greater for the Pivot sub-group.
That is, we assume that the initial infection process is a random stochastic process, and thus the proportion of each infected sub-group within the total infected population is similar to its proportion in the overall population. We demonstrate in the Results section, using serological data, that the data are consistent with our key assumption, at least for the chosen Pivot sub-group. This assumption holds at the early stages of a pandemic, before vaccinations are developed and employed. We differentiate the probability of initial infection, S_0, from the conditional probability of being symptomatic given that the patient is infected, S_1, which is known to be age-related in the case of COVID-19. Thus, even though the initial infection probability S_0 is similar across all sub-groups (such as different age groups), some sub-groups might well be over- or under-represented within the group of patients confirmed as positive. For example, the elderly sub-group might be over-represented in the positively confirmed group of COVID-19 patients despite an almost-uniform S_0, because elderly patients have a higher S_1: they are more likely to be symptomatic after being infected, and thus more likely to be confirmed as positive (the Results support our assumptions). In this study, we demonstrate our methodology by applying it to the first year of the COVID-19 pandemic and only for demographic sub-groups, specifically age-related sub-groups. In general, this focus can be broadened, and other sub-groups, such as those defined by gender or ethnicity, might be used in the analysis. As we shall see when presenting our algorithm, the MIF is, in fact, the Pivot sub-group's relative risk (Lift). Thus, given the almost-random infection assumption (an almost uniform S_0), this IF is the minimal one that can explain the existence of all of the Pivot sub-group members that were confirmed as positive. However, the MIF, and the respective cumulative infection prevalence, might be smaller if, by chance, more people from the Pivot sub-group within the overall population were "sampled" by the random infection process. Thus, we also need to test whether a sufficiently large number of people from the Pivot sub-group (specifically, the number required to explain their existence within the known positively confirmed group) might have been sampled from the overall population in a reasonably likely manner (i.e., in a statistically insignificant fashion), even given a smaller IF, and thus a smaller overall cumulative infection prevalence. Therefore, we test the reasonable likelihood of each potential cumulative infection prevalence (corresponding to a given IF) by applying a proportion test. The test examines whether the proportion of the Pivot sub-group in the cumulative infection prevalence of a particular country might be, purely by chance, sufficiently higher than its proportion within the country's overall population to explain the actual confirmed positive numbers of that Pivot sub-group, yet higher only in a statistically insignificant fashion. Thus, in our study, in addition to computing the MIF, we calculated the smallest IF that still explains the number of Pivot group members confirmed as positive but for which the assumption of a "reasonably likely" sampling process due to the infection is not rejected, which we refer to as the Statistically Insignificant Minimal Iceberg Factor (SIMIF).
That is, the SIMIF is the minimal IF for which the proportion test (for the Pivot sub-group's proportion within the Infected "Iceberg") was still insignificant. An algorithmic description of the method Our suggested method is as follows:
1. Split the population into disjoint, exhaustive sub-groups, for example by age, gender, or both.
2. Find the Pivot sub-group, the population sub-group that displays the maximal relative risk (Lift) of being positively confirmed as infected. This is the sub-group for which its proportion within the confirmed (positive) infected patients, compared with its percentage in the population, is the highest.
3. Given the key assumption, and thus assuming that the distribution of sub-groups within the infected population is similar to their population distribution, set the MIF to be the Lift of the Pivot sub-group. Thus, the resulting cumulative infection prevalence includes enough members of the Pivot group. Note that the almost-random infection assumption can be relaxed to the assumption that the infection rate of the Pivot sub-group is not greater than that of the rest of the population, while maintaining the MIF as a lower bound on the IF.
4. To allow for statistical deviations, compute the minimal IF that, even allowing for an insignificant statistical deviation from the Pivot group's proportion in the population during the infection process, might still contain a sufficient number of Pivot group members to explain the number found in the "visible" part of the "Iceberg". That is the Statistically Insignificant Minimal Iceberg Factor (SIMIF).
Given the MIF, which provides a lower bound on the overall infection prevalence, we compute the upper bound on the Infection Fatality Rate (IFR) by dividing the number of deaths due to the disease by the size of the estimated Infected "Iceberg" (i.e., the number of positively confirmed cases multiplied by the MIF). In the Results section, we demonstrate in detail the application of this method to the COVID-19 pandemic, using data from two countries (the UK and Spain), and then summarize the results for a total of eight different countries; in one of them (the UK) we performed the computation for two data sets acquired at different time points (June and September 2020), and in another case (USA) we performed the calculation for data sets acquired at two different time points from two different regions (New York City and New York State). Table 1 summarizes, for each country, the aggregated number of PCR-RT COVID-19 confirmed individuals and the corresponding date, the source from which the data were obtained, and the country's population. We analyzed only secondary data available in the public domain, with no need for approval by a research ethics committee. Results We first demonstrate our method in detail, using data from Spain and then the UK, gathered from the early months of the disease. Then, we consider statistical randomness in the infection probability by computing the interval for the found minimal factor that allows for statistical randomness in the infection within a certain insignificance interval. We then apply our method to nine countries or large regions. We then show that our key assumptions and method are consistent with the serological data for Spain and the UK. We conclude by computing the corresponding Infection Fatality Rate (IFR) for these countries at that time.
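Before turning to the data, a minimal Python sketch (our own illustration, not the authors' code) of steps 1-4 may help. The proportion test below uses the plain null-variance z-statistic, so the SIMIF it returns can differ in the second decimal place from the values reported in the following subsections; the figures in the example are the Spanish ones quoted there.

```python
from math import sqrt

def find_pivot_and_mif(confirmed, population):
    """Steps 1-3: return the sub-group with the maximal Lift and that Lift (the MIF)."""
    c_tot, p_tot = sum(confirmed.values()), sum(population.values())
    lifts = {g: (confirmed[g] / c_tot) / (population[g] / p_tot) for g in confirmed}
    pivot = max(lifts, key=lifts.get)
    return pivot, lifts[pivot]

def z_proportion(successes, n, p0):
    """One-proportion z-statistic for H0: proportion == p0 (null-variance form)."""
    return (successes / n - p0) / sqrt(p0 * (1 - p0) / n)

def simif(c_pivot, c_total, pop_pivot, pop_total, z_crit=1.96, step=0.01):
    """Step 4: smallest IF for which the proportion test is no longer significant."""
    p0, f = pop_pivot / pop_total, 1.0
    while z_proportion(c_pivot, int(f * c_total), p0) >= z_crit:
        f += step
    return f

def ifr_upper_bound(deaths, c_total, mif):
    """Upper bound on the IFR implied by the MIF lower bound on prevalence."""
    return deaths / (c_total * mif)

# Spain, May 22, 2020: 59,797 of 252,283 confirmed cases were 80+ years old,
# against 2,901,252 of 46,736,782 inhabitants; this gives MIF ~3.82 and a
# SIMIF close to the reported 3.77.
print(find_pivot_and_mif({"80+": 59_797, "rest": 252_283 - 59_797},
                         {"80+": 2_901_252, "rest": 46_736_782 - 2_901_252}))
print(round(simif(59_797, 252_283, 2_901_252, 46_736_782), 2))
```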
Demonstration of the method on early Spanish PCR-RT data We shall first demonstrate the value and outcomes of our methodology using the COVID-19 PCR-RT data for Spain on May 22, 2020 [23]. At that point, C_pos = 252,283 positive (confirmed) cases were known, as can be seen in Table 2. The table further depicts the number and percentage of individuals in each age group out of the country's population, and the number of confirmed individuals (PCR-RT COVID-19 confirmed) and their percentage out of all confirmed individuals in each age group. Consider the Spanish age distribution of the confirmed cases. Out of C_pos = 252,283 positive cases, the number of cases 80 years or older, C_pos.80+, was 59,797 (23.7%), 3.82 times their proportion in the Spanish population [32], POP_prop.80+, which is only 6.21% (2,901,252 of 46,736,782). This sub-group has the highest relative risk (Lift) of being confirmed as positive. Thus, the sub-group of people 80+ years old is the Spanish Pivot sub-group, and its Lift is 3.82. Thus, 3.82 would be the MIF for Spain at that point in time. In other words, at least 963,721 people must have already been infected at that point in time in Spain to explain the number of positively confirmed cases from its Pivot sub-group. Demonstration of the method on early UK PCR-RT data When we follow the same procedure for the United Kingdom using its Jun. 10, 2020 data [24] (see Table 2), the minimal "Iceberg" size that explains the number of positive confirmed cases in the UK's Pivot, or highest-risk, sub-group, the 80+ years old age group (4.68% of the British population [33]) at that point in time, C_pos.80+ = 50,372, must be at least their relative risk of being confirmed as positive, namely 4.48 times the number of total positive cases found at that time (C_pos = 222,441). Thus, the UK MIF on Jun. 10, 2020, was 4.48. Therefore, a total of at least C_tot = 1,112,205 British people must have been infected at that point in time, most of them being "underwater" (unconfirmed), to explain the finding at that time of 50,372 positive cases in the 80+ years old age group. Allowing for statistical deviations However, based on statistical reasoning, another option might be suggested to explain the number of positively confirmed cases from the Pivot sub-group in Spain or in the UK using a smaller IF, but without leading to a smaller number of positively confirmed patients from the Pivot sub-group. Perhaps the proportion of infected adults 80+ years old in the cumulative infection prevalence was, by chance, higher than their proportion within the population (even assuming that the likelihood of infection does not depend on age); and somehow, all of the infected older adults were tested and found positive. Could that explain the number of positively confirmed octogenarians while using a smaller IF, namely, a smaller Infected "Iceberg"? In the case of the Spanish example, note that if the cumulative infection prevalence's age distribution were similar to that of the Spanish population, it would contain, for an IF of 3.0, only I_80+ = 46,982 cases. Thus, we are short of 12,815 positive patients in that age group. But perhaps the proportion of infected people 80+ years old in the Spanish cumulative infection prevalence is, by chance, higher than their proportion within the Spanish population?
To explore this explanation, we applied a proportion test to see whether it is reasonable that, given the proportion of the 80+ years old population in Spain, enough positive cases might have existed at random within the Spanish cumulative infection prevalence. That is, whether the 2,901,252 people who are 80+ years old, out of Spain's population of 46,736,782 citizens (i.e., 6.21%), might have randomly produced, through the "random sampling" of being infected, the minimal necessary number of 59,797 positive cases within an only threefold (i.e., IF = 3.0) "Iceberg" size of 756,849 (i.e., 7.9%), assuming an age-oblivious infection process. The result is: z-statistic = 60.92103; significance level p < 0.0001; 95% CI of the observed proportion: 7.84% to 7.96%. (Compare this confidence interval to Spain's 80+ years age group, which includes only 6.21% of the population.) Thus, the null hypothesis is rejected at enormous odds, and the IF is highly likely to be larger than three times the total number of confirmed positive cases, to explain the number of confirmed cases in the 80+ years age group. In fact, any IF ≤ 3.76 would result in rejecting the null hypothesis at a significance level of p < 0.05. So, the Spanish SIMIF at that point was 3.77. For the British data and an example factor of four, the results are similar: z-statistic 43.76; significance level p < 0.0001; 95% CI of the observed proportion: 5.61% to 5.71%. (Compare this confidence interval to the UK's 80+ years age group, which includes only 4.68% of the population.) Thus, the British IF at that time must have been larger than four. In fact, for the UK on Jun. 10, 2020, any IF ≤ 4.43 would result in rejecting the null hypothesis at a significance level of p < 0.05, so the UK SIMIF at that point was 4.44. Since not all infected cases were confirmed as positive, the prevalence of both the Spanish and the UK cumulative infections must have been larger. We followed this procedure for multiple countries or large regions whose data, mostly during the early COVID-19 pandemic phase, were available, as detailed in Table 1. The results for all these countries are detailed in S1 Table in S1 File. Assessment of the key assumption's consistency and of the method's computational results All that remains now is to assess whether our key assumption, namely that the initial sub-group infection susceptibility S_0 is indeed age-invariant and, in particular, not significantly higher for our Pivot group, which in this case consists of the older people, is consistent with the data. We can easily validate this assumption by examining serological testing results from Spain and the UK (Table 3), depicted by age group and type of test. In the Spanish population, blood samples were taken from Apr. 27 to May 11 from 61,075 participants who received a point-of-care antibody test; if they agreed, a more definitive chemiluminescent microparticle immunoassay was also performed. The mean portion of older adults demonstrating evidence of previous COVID-19 infection was, for both test types, quite similar to the portion of seropositive cases within the other age groups. In the case of the laboratory-based immunoassay, it was even lower than that portion within all other age groups, except for children and adolescents. Serological tests in the UK were performed from Jun. 20 to Jul. 13, using a self-administered lateral flow immunoassay (LFIA) test for IgG among a random population sample of 100,000 adults over 18 years.
The results certainly do not suggest a higher infection-susceptibility risk, S_0, for the elderly population: the portion of adults 75+ years old demonstrating evidence in their blood samples of previous COVID-19 infection was the lowest of all age groups for which the test was performed, thus further validating our assumption. In both countries, the actual IFs computed from the serological tests (9.32 by PoC or 8.49 by immunoassay for Spain, and 17.00 for the UK) were, as predicted, considerably higher than the MIF lower bound computed by our method (3.82 and 4.48) for chronologically similar periods, and certainly higher than the SIMIF. Thus, these results are consistent with our key assumption and with our method's computational results. Based on a proportion test, there might be several statistically significant differences in the serological prevalence of the infection among several of the population's age sub-groups, at least in the case of the UK (Table 3: serology results for COVID-19 in Spain from Apr. 27 to May 11, 2020, using two different serological tests, and in the United Kingdom from Jun. 20 to Jul. 13, 2020, using a lateral flow immunoassay (LFIA); estimates of prevalence are adjusted for imperfect test sensitivity and specificity, and a 95% confidence interval is specified for each estimate). However, the key assumption, namely that the Pivot sub-group is not infected at a higher rate than the rest of the population, still holds. We note here that the MIF and the SIMIF were quite close in the cases we analyzed in detail, implying that calculating the MIF is probably sufficient for most cases. Computing the infection fatality rate (IFR) We proceed to compute the upper bound on the IFR. Recall that the MIF is a lower bound on the IF. Then, the estimated Infected "Iceberg" size is the number of positively confirmed cases multiplied by the MIF. We can compute an upper bound on the IFR by dividing the fatalities from the infection by the size of the estimated cumulative infection prevalence. Given the COVID-19 disease data, we chose the fatality-rate date to be two weeks later than the date of the serology test. The IFR upper bound computed from the lower bound provided by the MIF in Spain (given the number of deaths by Jun. 6, 2020, two weeks after May 22, 2020) was 3.03%, while the IFR computed from the serology test results (which can be considered closer to the true IFR) was 1.24% (PoC) or 1.36% (immunoassay); for the UK, the IFR upper bound computed from the MIF lower bound was 4.04%, while the IFR computed from serology was 1.06%; for New York State, the IFR upper bound computed from the respective MIF lower bound was 9.29%, while the IFR calculated from the serological test results was 0.59%. The full details appear in Table 4, showing, for the countries for which we have serological data, the date at which the serology data were reported and the serology test type, the COVID-19 fatality-rate date and the corresponding death toll at that date, the serology-based IF and cumulative infection prevalence, and the IFR calculated according to the serology and according to the MIF. Discussion Estimating a lower bound for the total number of infected individuals in a given population is key to monitoring and managing a new epidemic, and certainly a new pandemic, in its early days. It is also beneficial when a new strain of an existing virus appears.
Along with other benefits, it supports an assessment of the risk due to asymptomatic cases and the creation of a more realistic upper bound on the IFR, which is important for quick evaluation of the risk to the population from a new epidemic or pandemic. Here, we suggest a cheap and easy method for rapid estimation of this lower bound, using data from the readily available tests taken by the public (in the case of COVID-19, the PCR-RT test). The line of reasoning of our method is based on finding the highest-Lift Pivot group for being infected, that is, the sub-group for which its proportion within the confirmed (positive) infected patients, compared with its percentage in the population, is the highest. Our algorithm then utilizes the found Lift to determine the minimal factor which, when multiplied by the number of positively confirmed cases, yields an estimate of the cumulative infection prevalence, and shows that this estimate is a lower bound on the cumulative infection prevalence. A key assumption in our method, which we have demonstrated to be consistent with the serological data in the countries we analyzed, is the almost-random sub-group infection assumption: the risk of initial infection is either almost uniform across all population sub-groups or not higher in the Pivot sub-group. We differentiate here the susceptibility to infection, S_0, assumed to be almost-randomly uniform across all population sub-groups at this early stage, from the probability of developing symptoms and complications, and hence being confirmed as positive, S_1, which differs between sub-groups (e.g., by age). We also explicitly consider statistical randomness in the infection probability by computing the interval for the found minimal factor that allows for statistical randomness in the infection within a certain insignificance interval. Our results suggest that calculating the minimal factor, a quick and easy calculation, is probably sufficient. We have demonstrated the consistency of our assumption and the validity of our method using UK and Spain serological surveys of COVID-19 in its first year. Our key assumption was consistent with the data of the age-related Pivot groups: the immunoassay results show that in Spain, the cumulative infection rate for ages 65+ was the lowest other than for the youngest group, and that in the UK, the cumulative infection rate was actually lowest for the older population groups. Furthermore, the serology-based infection-prevalence factors computed from the Spanish and UK data were consistent with our lower-bound MIF predictions. Overall, we applied our methods to nine countries or large regions whose data, mostly during the early COVID-19 pandemic phase, were available: Spain, the UK at two different time points, New York State, New York City, Italy, Norway, Sweden, Belgium, and Israel. For each, we established the minimal lower bound on the factor by which the number of positively confirmed patients should be multiplied, which explains the population's age-based distribution, assuming an infection susceptibility that is independent of age. The lower bound ranged from 1.35 (NYC) to 5.1 (Belgium). We have also computed the corresponding IFRs for each country or region. The most prevalent solution for assessing the cumulative infection prevalence in the population is the use of serological or home antibody testing of the population, preferably sampled randomly [35,36].
However, such tests enable detection only a few weeks after the infection, as antibodies are produced only 1-2 weeks after its onset, and they require trained laboratory staff [37]. Thus, unlike the COVID-19 PCR-RT tests, which were administered in vast numbers, serology tests were performed on a much more limited scale [7]. Comparing the results of our method to serological data of a similar period in Spain and the UK, we find that the estimates from the serological data were 2.2 and 3.7 times higher, respectively, than our estimated lower bound for the epidemic's infection prevalence in these areas. While the antibody tests give a more accurate estimate, we believe that gaining insight into the prevalence of a disease in the presence of asymptomatic individuals who spread it is important in itself. There are several advantages to using our method with the PCR-RT tests in the case of COVID-19. The underlying nucleic acid-based method is highly accurate and readily available, and allows for high throughput through simple automation while returning the results within a few hours. Thus, it has been administered in very large numbers in almost all countries, the results are readily available, and, due to the large number of administered tests, randomness is easily assumed [7,38,39]. Longitudinal PCR-RT tests were used for evaluating the percentage of asymptomatic people among all those tested, based on self-reports of a lack of symptoms [20,[40][41][42]. Compared with our method, longitudinal test results require monitoring a specific region over time and thus cannot be used at scale. Thus, our method is the only one that uses readily available information from large-scale PCR-RT tests to infer a realistic lower bound on the prevalence of the disease in the population. Unlike alternative strategies for determining infection prevalence, which advocate the performance of massive acute disease testing during an epidemic or a pandemic, such as pooled testing [19], our simple methodology requires no dedicated testing infrastructure and does not necessitate overcoming multiple technical hurdles. Our method, which is based on the actual tests being performed and which provides lower bounds, is also simpler and arguably more sound than approaches using simulation [20,21]. Our method requires identifying a Pivot group. For COVID-19, we use the age sub-groups. This seems to have worked reasonably well in the cases that we analyzed, but other options are quite possible. For example, in the case of New York City, our computed MIF (1.35) is much smaller than the serology-based one (22.23); although the result is still consistent with the lower bound, the extreme gap in value might be due to the fact that in New York City a better Pivot group might have been based on socio-economic and ethnic considerations; these factors have been shown to play a key role in the prediction of morbidity, and especially mortality, across different counties in the USA [43]. Furthermore, the MIF is only a lower bound: in practice, only a portion of the Pivot sub-group's infected members are likely to be confirmed as positive. There are several limitations to our methodology.
In particular, it is useful when positively confirmed cases are detected mostly due to a symptomatic presentation by the patients (which is governed by the symptom-manifestation probability S_1), as was common during the early phase of the COVID-19 pandemic, or when some underlying process creates high variability between different sub-groups regarding the probability of being positively confirmed. It is less useful when positively confirmed cases are detected at random, such as when a general screening of the population is performed (whose results are governed by the infection-susceptibility probability S_0). The latter situation became more common during the more advanced phases of the COVID-19 pandemic, as the number of tests grew and the indications for performing them expanded. Often, it will be easy to determine an appropriate Pivot sub-group (based on gender, age, or ethnicity); in other cases, it might be more challenging. However, even in a new outbreak whose characteristics are relatively unknown, as long as one manages to find a population sub-group whose infection rate seems to be no higher than that of the other sub-groups, but whose confirmed-positive rate (presumably due to a higher symptomatic presentation rate) seems to be higher than that of most other sub-groups, it can be used as a useful Pivot sub-group for calculating a lower bound on the prevalence of the overall infection. Furthermore, judging from the outbreaks of the past several decades, it usually does not take too long to find a reasonable Pivot sub-group. For example, in the case of the SARS-CoV-2 outbreak, it was clear after only a few months that one such sub-group is the elderly population; in the case of the USA, it also seemed to include certain ethnic and socio-economic sub-groups, as documented in several studies; in other outbreaks it might be, for example, diabetic patients. In the case of the COVID-19 pandemic, we ignored the PCR-RT sensitivity and specificity; we assume they do not vary across age groups. Note also that the number of confirmed cases is itself only a lower bound, due to the PCR-RT's limited sensitivity. Conclusions We have demonstrated that estimating a lower bound on an epidemic or pandemic's infection prevalence, and an upper bound on its infection fatality rate, at its early phase using our methodology is feasible, and that the assumptions underlying that estimate are consistent with the data. Our methodology might often be necessary in the early phases of an epidemic or pandemic, when serological data are not yet available, when vaccines are not available, or when new mutations of a known virus appear that are resistant to an existing vaccine. Computing the MIF might also add insights into pandemic-related differences across different countries and times.
7,546
2023-01-26T00:00:00.000
[ "Mathematics", "Medicine" ]
Deep Bayesian baseline for segmenting diabetic retinopathy lesions: Advances and challenges Early diagnosis of retinopathy is essential for preventing retinal complications and visual impairment due to diabetes. For the detection of retinopathy lesions from retinal images, several automatic approaches based on deep neural networks have been developed in recent years. Most of the proposed methods produce point estimates of pixels belonging to the lesion areas and give little or no information on the uncertainty of the predictions. However, the latter can be essential in the examination of the medical condition of the patient when the goal is early detection of abnormalities. This work extends the recent research with a Bayesian framework by considering the parameters of a convolutional neural network as random variables and utilizing a stochastic variational dropout-based approximation for uncertainty quantification. The framework includes an extended validation procedure and allows analyzing lesion segmentation distributions, model calibration and prediction uncertainties. The challenges related to the deep probabilistic model and uncertainty quantification are also presented. The proposed method achieves an area under the precision-recall curve of 0.84 for hard exudates, 0.641 for soft exudates, 0.593 for haemorrhages, and 0.484 for microaneurysms. Introduction Diabetic retinopathy (DR) is the most common complication of diabetes mellitus and can lead to vision loss if not treated properly [1]. Screening of the condition and early detection of retinal abnormalities are essential and consist of examining retinal images for diabetic lesions. In the early stages of the disease, these lesions are small, typically have low contrast, and are sometimes difficult for humans to detect. The core of the screening problem is, however, the amount of images that need to be analyzed. Thus, automatic retinal image analysis methods are required. One way to build an assisting system is to train an end-to-end classifier that processes an input image and yields a diabetic retinopathy grade [2]. These systems are often criticized for being black boxes producing results that are difficult to interpret [3]. As an alternative, one can train a segmentation model that processes the input image and produces a segmentation map where each element represents the probability of being a lesion. This way, the diagnosis can be inferred from the segmentation maps by counting the detected lesions. In recent years, the field of DR lesion segmentation has advanced with the introduction of new retinal image datasets, making it possible to accelerate research in related computer vision methods [4]. One of the most widely used benchmarks is the Indian diabetic retinopathy image dataset (IDRiD), providing high-quality ground-truth masks for hard exudates, soft exudates, haemorrhages and microaneurysms. Porwal et al. [5] published the results of the IDRiD challenge held in 2018. The best-performing algorithms were represented by deep convolutional architectures such as U-Net [6], the dense fully-convolutional network (Dense-FCN) [7] and Mask-RCNN [8], or their variants. It should be noted that the data are very unbalanced, and achieving high sensitivity was a challenge for many algorithms. To overcome this issue, the authors used balanced cross-entropy [9] and dice loss [10]. Due to the high resolution of the images, the models were trained in a patchwise manner.
Guo et al. [11] proposed a multi-scale feature fusion method to handle issues with small-lesion detection. Binary cross-entropy (BCE) loss with balancing coefficients was used to improve the sensitivity. The model was trained with full images resized to 1440 × 960 pixels without any further preprocessing. Yan et al. [12] proposed a mutually local-global U-Net mitigating the problems of patchwise training, which does not capture the global context. The proposed architecture consists of two U-Nets (global and local) that share the last layers of their decoders. Both networks are jointly trained by minimizing a weighted cross-entropy loss to deal with the class imbalance. The aforementioned approaches consider only point estimates of the trained models and the produced results; thus, the question of the reliability of a trained model arises. In this work, the problem is addressed by using Bayesian deep learning, modeling a distribution over the learned parameters of the model and producing the segmentation results in the form of a posterior predictive distribution. Recently, Bayesian deep learning models have started finding their applications in the area of retinal image analysis. Leibig et al. [13] evaluated dropout-based uncertainty measures and demonstrated improved diagnostic performance using uncertainty-informed decisions. Filos et al. [14] proposed a new benchmark for deep Bayesian models with application to DR diagnosis, also assessing the robustness of the models to out-of-distribution examples and distribution shift. This work extends the preceding research with Bayesian DR lesion segmentation. To the best of the authors' knowledge, this is the first work discussing the Bayesian approach for DR lesion segmentation. The aim is to establish a baseline that will inspire future research on the topic. The contributions of this work can be highlighted as follows: 1. The introduction of a novel Bayesian baseline for DR lesion segmentation allowing the analysis of segmentation distributions. 2. An assessment and analysis of model calibration and prediction uncertainties. 3. The presentation of an extended validation procedure for the DR lesion segmentation task beyond point estimates. The rest of the paper is organized as follows: Section 2 describes the utilized dataset and gives information about the class imbalance and the statistics of the labels; Section 3 explains the Bayesian image segmentation setup, the utilized data sampling approach and the training details. Section 4 explains the evaluation protocol and presents the performance metrics together with visualizations of the inferred results. Section 5 discusses the faced issues and directions for future research. The results of the work are summarized in Section 6. IDRiD dataset The IDRiD dataset is a common benchmark for diabetic retinopathy lesion segmentation [5]. It contains 54 train and 27 test images of resolution 4288 × 2848, with segmentation masks aiming to be spatially accurate for four lesion types: hard exudates, soft exudates, haemorrhages, and microaneurysms. An example image from the dataset is shown in Fig. 1. The class imbalance can be visualized as a bar graph with the number of positive pixels for lesions for each image separately, as well as for the whole dataset. The calculated statistics for the train and test sets are presented in Fig. 2 and Fig. 3. Background The classical approaches give only point estimates for the class label probabilities, and the model parameters are considered to be deterministic.
In order to capture imperfect data labeling and image noise, the model outputs and the learned parameters can be considered as random variables. The first approach captures the heteroscedastic aleatoric uncertainty that depends on the input data, whereas the second represents the epistemic uncertainty that models a distribution over the learned parameters. Here, a brief explanation for the lesion segmentation task is given; more detailed explanations of the uncertainties can be found in Refs. [15,16]. Let f be a model, with parameters θ, that maps an input image x to a map of logits ŷ, accompanied by a map of standard deviations σ of the logits:

[ŷ, σ] = f_θ(x).   (1)

Then, the probabilities of the class labels can be calculated as follows:

p = sigmoid(ŷ + σ ⊙ ε), ε ~ N(0, I),   (2)

where ⊙ stands for the Hadamard product and the ε are sampled during inference. Epistemic uncertainty can be captured by considering the model parameters to be a random variable and making use of the following posterior predictive:

p(y*|x*, D) = ∫ p(y*|x*, θ) p(θ|D) dθ,   (3)

where D denotes a dataset of input-output pairs. Typically, the parameter posterior p(θ|D) for complex models such as deep neural networks is intractable and variational approximations are used [16]. The posterior in (3) can be replaced by a simpler distribution q_θ(ω) with variational parameters ω. In this work, Monte-Carlo dropout [16] is used as a framework to perform stochastic variational inference. The relation between the true and approximate parameters is given by

ω = θ ⊙ M_D,   (4)

where M_D is a dropout mask that randomly sets the model weights to zero. The training procedure can then be formulated as the minimization of the Kullback-Leibler divergence D_KL between the true posterior and the approximation. This is equivalent to minimizing the negative variational lower bound [16]:

L = −∫ q_θ(ω) log p(Y|X, ω) dω + D_KL(q_θ(ω) ‖ p(ω)),   (5)

where X, Y represent the inputs and outputs of the model, respectively, and p(ω) is the prior for the variational parameters ω. The expectation in the first part of (5) is typically approximated using Monte-Carlo integration [16]. In this work, it is approximated using one sample from the variational distribution. Therefore, the optimization objective becomes

L ≈ −(1/N) Σ_i log p(y_i|x_i, ω̂_i) + ℛ, ω̂_i ~ q_θ(ω),   (6)

where i is the index of the training example and N is the total number of samples in the training set. ℛ is a regularization term that depends on the form of the prior distribution over the parameters of the model; in this case, the prior is a normal distribution, corresponding to L_2 weight decay. The loss function chosen for this work is binary cross-entropy (BCE), and it is summed over the aleatoric samples:

L_BCE = Σ_{k=1}^{N_A} BCE(y, p_k),   (7)

where N_A is the number of aleatoric samples and p_k is computed as in (2). The training scheme described above does not take class imbalance into account. In this work, a straightforward oversampling scheme based on class-frequency statistics is used; it is described in the next section. Oversampling One way to handle class imbalance is to perform oversampling of the underrepresented classes. Here, three-stage sampling is performed:
1. Positive samples are selected with probability π_+ and negative samples with probability 1 − π_+.
2. An image of the selected class is sampled with probability p_i^image proportional to the logarithm of the pixel count of the given class, that is, p_i^image = log N_i^image / Σ_j log N_j^image, where N_i^image is the number of positive pixels for the class of interest in the image with index i.
3. The final step is to select an image patch containing pixels of the class of interest. In order to select such a patch, we follow a scheme similar to the previous stage.
The image is divided into a set of overlapping patches, and the patch is selected with probability p_i^patch = log M_i^patch / Σ_j log M_j^patch, where M_i^patch is the number of positive pixels for the class of interest in the patch with index i. The log scale here is used in order to increase the diversity of the chosen samples. π_+ is a tunable hyperparameter and should be chosen depending on the class imbalance in a particular case. In this work, π_+ = 0.5 is used, as it was experimentally found to provide the best results. Architecture The architecture utilized in this work is a Dense-FCN [17]. It has been shown that Dense-FCNs have fewer parameters and may outperform other fully-convolutional network (FCN) architectures in a variety of segmentation tasks [17]. Here, we adapt the Dense-FCN architecture to the lesion segmentation task. The main building block of Dense-FCNs is the dense convolutional block (DCB), where the input of each layer is a concatenation of the outputs of the previous layers. The block consists of repeated batch normalization (BN), rectified linear unit (ReLU), convolution and dropout (p = 0.5) layers, resulting in g feature maps (the growth rate). The main concept of Dense-FCNs is similar to other encoder-decoder architectures, in the sense that the input is first compressed to a hidden representation by a downsampling part, after which the segmentation masks are recovered by an upsampling part. The downsampling part consists of DCBs and downsampling transitions with skip connections to the upsampling part; the upsampling part consists of DCBs and upsampling transitions. An example of two blocks in the downsampling and upsampling paths of a Dense-FCN is shown in Fig. 4. The total number of trainable parameters is 9,319,778. The architectural parameters used are as follows: • The growth rate for all DCBs: g = 16. Image preprocessing It was noticed in the experimental part of the work that the simple preprocessing proposed in Ref. [18] improves the results. The preprocessing is implemented in two steps: 1. Luminosity enhancement employs a luminance gain matrix G that is applied in the red-green-blue (RGB) color space, multiplying each channel pixelwise with the gain G_i = V′_i / V_i, where r, g and b are the red, green and blue image channels, respectively, x′ is the enhanced image, and V′_i is the enhanced luminance value at the pixel with index i. The enhanced luminance value is calculated by converting the image to the hue-saturation-value (HSV) color space and enhancing the luminance V using gamma enhancement. Here, we choose γ = 1/2.2 as in the original work [18]. 2. Contrast enhancement is performed using the Contrast Limited Adaptive Histogram Equalization [19] algorithm with clip limit 0.1 and grid size 8 × 8. In order to reduce the requirements for computing resources, the images were resized to a resolution of 2144 × 1440 pixels. Two examples of original and enhanced images are presented in Fig. 5. Training details The Dense-FCN was trained for 100 epochs with 500 steps per epoch on random 224 × 224 patches with a batch size of 6. The patches were generated with an overlap of 192 × 192. Data augmentation by vertical and horizontal mirroring was applied. The parameter values were empirically tuned based on initial experiments with the IDRiD dataset. The weights were initialized using He normal initialization [20]. In addition to dropout, L_2 regularization with a weight-decay factor of 10^−4 was used. As the optimizer, Adadelta [21] with learning rate l = 1 and decay rate ρ = 0.95 was used.
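As a concrete illustration of Eqs. (2) and (7), the following PyTorch sketch (our own illustration, not the authors' code) draws aleatoric logit samples and accumulates the binary cross-entropy; the tensor shapes and the averaging over samples (the paper sums instead) are assumptions:

```python
import torch
import torch.nn.functional as F

def aleatoric_bce(logit_mean, logit_std, target, n_aleatoric=100):
    """BCE over N_A aleatoric logit samples, cf. Eqs. (2) and (7).

    logit_mean, logit_std: network output maps, e.g. (B, 1, H, W);
    target: binary ground-truth mask of the same shape.
    """
    total = 0.0
    for _ in range(n_aleatoric):
        eps = torch.randn_like(logit_mean)       # ε ~ N(0, I)
        logits = logit_mean + logit_std * eps    # ŷ + σ ⊙ ε
        total = total + F.binary_cross_entropy_with_logits(logits, target)
    return total / n_aleatoric                   # mean here; Eq. (7) uses a sum

# Example with random tensors standing in for the Dense-FCN outputs.
mu = torch.randn(2, 1, 224, 224)
sigma = torch.rand(2, 1, 224, 224) * 0.1
mask = torch.randint(0, 2, (2, 1, 224, 224)).float()
print(aleatoric_bce(mu, sigma, mask).item())
```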
The learning rate was adjusted according to a predefined schedule. Evaluation protocol In [5], many authors processed images in a patchwise manner during the validation stage. In this work, it was noticed that with Bayesian neural networks this can lead to checkerboard artifacts that have a negative impact on the segmentation performance. Therefore, in the inference stage, images are not divided into patches but are processed as full images. It is also worth noting that full-resolution processing is much faster; it takes approximately 14 min to process an image with 50 epistemic and 100 aleatoric samples. The input and output images have a resolution of 2144 × 1440 pixels. In order to evaluate the segmentation performance, the following classification metrics are used: • Sensitivity (SE) is used to assess the ability of the model to discover lesions:

SE = TP / (TP + FN),

where TP and FN are the numbers of true positive and false negative pixels, respectively. • Positive predictive value (PPV) is used in addition to sensitivity, taking the false positives FP into account:

PPV = TP / (TP + FP).

• Specificity (SP) is used to assess the ability of the model to correctly segment healthy pixels:

SP = TN / (TN + FP),

where TN is the number of true negative pixels. • Area under the receiver-operating-characteristic curve (ROC-AUC) is an integral metric that does not depend on the threshold value; it is the area under the curve of the true positive rate plotted against the false positive rate as the threshold is varied. • Area under the precision-recall curve (PR-AUC) is another threshold-independent integral metric. PR-AUC represents the segmentation performance more realistically than ROC-AUC, because it is not inflated by the large number of true negatives under heavy class imbalance. • Expected calibration error (ECE) is used to assess the model's calibration [22]:

ECE = E_p[ |π(p) − p| ], π(p) = P(ŷ = y | p̂ = p),

where p is the confidence estimate of the predicted class ŷ, y is the true label, and π(p) is the true probability of being correct at confidence p. Together with the ECE, reliability diagrams are also presented. These reliability diagrams are graphs showing the expected accuracy against the classification confidence, thereby representing the calibration quality; in the case of perfect calibration, the graph is the identity function. In the evaluation, sensitivity, specificity and positive predictive value are calculated by thresholding the output predictive mean at T = 0.5. In the inference, the model parameters are sampled 100 times and the number of inferred aleatoric samples is N_A = 100. The final posterior predictive mean is calculated over all the predicted samples, and the aleatoric uncertainty U_A and the epistemic uncertainty U_E of the outputs are calculated as in Ref. [23]:

U_A = E_ω[V_ε(p̂)], U_E = V_ω[E_ε(p̂)], U_T = U_A + U_E,

where E and V denote expectation and variance, respectively, and U_T is the total predictive uncertainty (a code sketch of this procedure is given after the segmentation results below). Apart from characterizing the total uncertainty, it is also important to evaluate the meaningfulness of the produced uncertainty maps. This is a more challenging task, since only point estimates of the ground truth labels are available. However, it is reasonable to assume that incorrectly segmented areas should have higher uncertainties. Mobiny et al. [24] proposed to use the uncertainty as a tool to predict incorrect classification results by thresholding the output uncertainties. Camarasa et al. [25] analyzed different uncertainty measures for medical image segmentation and concluded that the averaged variance and the averaged entropy perform equally well and are better than the other metrics. In this work, the standard deviation is used. We follow the same approach and use the following metrics:
1. The area under the uncertainty precision-recall curve (PR-AUC) is used as an integral metric to assess the quality of the uncertainty estimates. 2. Uncertainty sensitivity (U-SE) is used to assess the ability of the uncertainty estimates to discover misclassifications. 3. Uncertainty specificity (U-SP) is used to assess the ability of the uncertainty estimates to correctly identify correctly classified pixels, i.e., not to flag them as misclassifications. 4. Uncertainty expected calibration error (U-ECE) is also used to validate the uncertainty calibration. U-SE and U-SP are calculated using a threshold equal to half of the maximum uncertainty value. To summarize, the extended validation approach consists of the analysis of the produced segmentation masks as well as a comparison of the produced uncertainties with the misclassification maps. Evaluation of segmentation results The precision-recall (PR) and receiver operating characteristic (ROC) curves are shown in Fig. 6. The ROC curves suggest close-to-optimal classification results, which is largely an artifact of the large class imbalance; the PR curves represent the classification performance more realistically. The corresponding performance metrics are given in Table 1. Based on the figures and the table, it is clear that the easiest task is the segmentation of hard exudates, whereas the most difficult one is the segmentation of microaneurysms. Low sensitivities are a common problem in the DR lesion segmentation task [5]; this can be explained by the relatively low contrast and small size of the lesions. Apart from the analysis of true positive classifications, it is also essential to have classifiers with high specificity. From Table 1 it can be seen that the specificities are very high, close to one, for all types of lesions. However, this is easily achieved under class imbalance. PPVs, on the other hand, give more insight into the problem of false positive classifications by relating them to the true positives. In the worst case, for microaneurysms, there are almost as many falsely classified pixels of healthy tissue as correctly discovered pixels of microaneurysms. This fact gives additional motivation for analyzing the uncertainties. The reliability diagrams are given in Fig. 7. It can be seen that the trained models are miscalibrated, the one for haemorrhages being the best calibrated. Guo et al. [22] have shown that deep neural networks are typically poorly calibrated and proposed methods for decreasing the degree of miscalibration; they claimed that an ECE of approximately 0.01-0.02 can be achieved for standard classification benchmark datasets and Dense architectures. In this work, no methods for improving the calibration were used, and the reliability is assessed for the baseline model (a sketch of the binned ECE computation is given below). The segmentation results for two example images from the test set (shown in Fig. 5) are illustrated in Figs. 8 and 9; in these figures, the first column shows the ground truth masks, the second the mean inferred probabilities, and the third the epistemic uncertainty masks (standard deviations of probabilities). From the images, it is possible to observe visual similarities between the ground truth and the mean inferred probability maps. Higher uncertainties are concentrated around the areas with high predicted confidence and around false positive segmented pixels. A more detailed discussion of the inference results and the estimated uncertainties is given in the next section.
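The inference protocol and the uncertainty decomposition above can be made concrete with a short sketch. The following is a minimal illustration, assuming a PyTorch model that returns a logit map and a standard-deviation map; the function names, tensor shapes, and the way dropout is kept stochastic are assumptions of this sketch, not details taken from the paper. The same sampling of ε implements Eq. (2), and summing binary cross-entropy over such samples during training yields the loss of Eq. (7).

```python
import torch

@torch.no_grad()
def predict_with_uncertainty(model, x, n_epistemic=100, n_aleatoric=100):
    # Keep dropout stochastic at inference; only the dropout modules are
    # switched to train mode so that batch-norm statistics stay frozen.
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    means, variances = [], []
    for _ in range(n_epistemic):                     # weight samples via dropout
        logits, sigma = model(x)                     # maps of shape (B, 1, H, W)
        eps = torch.randn((n_aleatoric,) + tuple(logits.shape),
                          device=logits.device)
        probs = torch.sigmoid(logits + sigma * eps)  # aleatoric samples, Eq. (2)
        means.append(probs.mean(0))
        variances.append(probs.var(0))
    means, variances = torch.stack(means), torch.stack(variances)
    p_mean = means.mean(0)           # posterior predictive mean
    u_aleatoric = variances.mean(0)  # U_A = E_w[V_eps(p)]
    u_epistemic = means.var(0)       # U_E = V_w[E_eps(p)]
    return p_mean, u_aleatoric, u_epistemic          # U_T = U_A + U_E
```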
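Similarly, the binned expected calibration error used for the reliability analysis can be sketched as follows, in the spirit of Guo et al. [22]; the bin count and the function name are illustrative choices.

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """Weighted mean of |bin accuracy - bin confidence| over equal-width bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # empirical accuracy in the bin
            conf = confidence[in_bin].mean()    # mean confidence in the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# usage with a predictive-mean map p in [0, 1] and a binary label map y:
# confidence = np.maximum(p, 1 - p).ravel()
# correct = ((p > 0.5) == (y > 0.5)).ravel().astype(float)
```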
Uncertainty quantification The PR curves and reliability diagrams are shown in Fig. 6 and the evaluation metrics are given in Table 2. From the results, it is clear that the normalized uncertainties are not efficient predictors of misclassifications and have low sensitivities. It is worth noting that the evaluation procedure is straightforward and considers only soft uncertainties against hard misclassifications. Moreover, the uncertainties are not necessarily high only near the misclassification areas, but also near areas of relatively low confidence, as shown below; this can also explain the uncertainty miscalibration. The uncertainty PR curves are given in Fig. 10 and the uncertainty reliability diagrams are presented in Fig. 11. From the reliability diagrams it is clear that the uncertainties are mostly underestimated, since the accuracy stops keeping up with the growing confidence values. Inference results for hard exudates of the magnified example image are shown in Fig. 12. The misclassifications and epistemic uncertainties are mostly concentrated around the edges of the lesions, which can be explained by the unclear boundaries of the lesions. The aleatoric uncertainties, acting as a learned loss attenuation, are also higher around the borders. Boundary uncertainties are a general pattern for segmentation models and can be observed in a wide variety of tasks. It is also possible to see small yellow lesions being incorrectly classified as background, which highlights the problem of detecting small-scale lesions. It is worth noting that there is a soft exudate to the left of the hard exudate cluster, and the model is confident in not classifying it as a hard exudate. Inference results for soft exudates of the magnified example image are shown in Fig. 13. High boundary uncertainties are present in this case as well. Soft exudates typically have low contrast, no texture and unclear edges, and can easily be confused with the background. False positive detections of soft exudates can be seen in the lower left part of the image, which is slightly more yellow compared to the other background pixels. The soft exudate in the lower right part of the image has uneven contrast, and the low-contrast part of the lesion is incorrectly classified as background. In both cases, the model yielded non-maximum mean confidence, and the incorrectly classified pixels also have high uncertainties. In Fig. 14, the inference results for the haemorrhages of the magnified example image are presented. The lesion is surrounded by blood vessels, and a part of the macula is visible in the magnified input image. The part with blood vessels to the left is incorrectly classified as a haemorrhage, and the model's confusion about the part with the macula is also visible. Epistemic uncertainty is in general higher near areas with similar colors, highlighting the surrounding blood vessels and the macula. Inference results for microaneurysms of the magnified example image are given in Fig. 15. Microaneurysms are the smallest of all lesions, and the epistemic uncertainty is high over the whole area of the lesions; the aleatoric uncertainties, on the other hand, are still higher near the edges. Being small-scale lesions with no texture, microaneurysms are confused with any small red spots, which is visible in the epistemic uncertainty maps. Discussion The approach presented in this work shows classification performance comparable to previously reported methods [5].
The uncertainty maps can be used for visual inspection and analysis of the performance. The estimated uncertainties and the produced confidence maps provide more information about the model's behaviour. Nevertheless, a few challenges remain; they are discussed in this section, together with brief explanations of failed experiments. One of the main issues in lesion detection is the low sensitivity of the segmentation model. This problem is present in related previous works [4,26] and also in this study. In medical image analysis and segmentation, it is common to use custom heuristic loss functions [26] to improve sensitivity [27] or to deal with lesion boundary issues [28]. We also experimented with other loss functions, including focal loss [29], Tversky loss [27], generalized Dice loss [28] and boundary loss [30]. Nevertheless, results outperforming the proposed baseline were not achieved; this negative outcome is likely due to the loss functions' hyperparameters not being tuned. These objectives are typically synthetic in the sense that they are formulated directly as loss functions and not as log-likelihoods, which means that they are not derived from specific distributions encoding information about the class imbalance. Binary cross-entropy, on the other hand, is derived as the negative logarithm of the Bernoulli likelihood. To study the issue of low sensitivity, more focused research is required to evaluate modern loss functions for medical image segmentation in the context of Bayesian deep learning and model calibration. In this work, a straightforward scheme based on label statistics is used to balance the lesion and background data. A potentially more efficient approach would be Bayesian active learning [31], where uncertainty-based acquisition functions are used to select the training samples; however, such methods typically do not work well with unbalanced data, which is another topic for future research. Model and uncertainty calibration are also subjects for further improvement. Apart from the classical calibration methods described in Ref. [22], alternative ways of improving calibration exist. Thulasidasan et al. [32] proposed using mix-up augmentation to improve model calibration. Seo et al. [33] proposed single-shot calibration by regularizing the model with the uncertainty of its outputs. Laves et al. [34] considered uncertainty calibration in the context of deep Bayesian regression and discovered that the predicted uncertainties are typically underestimated; the problem was solved using simple temperature scaling of the aleatoric and epistemic uncertainties. During the development of this work, experiments with uncertainty calibration using Platt scaling and isotonic regression were conducted, but no improvements over the baseline were found. It is likely that a more systematic approach aiming to solve both calibration problems is required. Conclusion In this paper, a Bayesian baseline for diabetic retinopathy lesion segmentation is proposed, allowing the analysis of segmentation distributions, model calibration and prediction uncertainties. An extended validation approach is also provided, consisting of the analysis of the segmentation performance and of the ability of the uncertainty estimates to detect false classifications. The presented results from the uncertainty quantification experiments show that the estimates are qualitatively similar to the misclassification maps and can be used to assess issues in the lesion segmentation.
Overall, the main challenges for the deep probabilistic model are small-scale lesions and areas with low contrast and unclear boundaries. Color information is also essential for successful segmentation, and healthy tissue can be confused with lesions of similar color. Further research and development are required to make the predicted lesion segmentation uncertainties suitable for numeric quantification. Declaration of competing interest None of the authors have any conflict of interest.
Comparative muscle irritation and pharmacokinetics of florfenicol-hydroxypropyl-β-cyclodextrin inclusion complex freeze-dried powder injection and florfenicol commercial injection in beagle dogs Florfenicol (FF) is a novel animal-specific amidohydrin broad-spectrum antibiotic. However, its aqueous solubility is extremely poor, far below the effective dose required for veterinary clinical use. Thus, FF is often used in large doses, which significantly limits its formulation and application. To overcome these shortcomings, FF-hydroxypropyl-β-cyclodextrin (FF-HP-β-CD) inclusion complexes were developed using the solution-stirring method, and the physical properties of FF-HP-β-CD were characterized. The muscle irritation and pharmacokinetics of the FF-HP-β-CD freeze-dried powder injection were compared with those of the FF commercial injection. The drug loading and saturated solubility of FF-HP-β-CD at 37 °C were 11.78% ± 0.04% and 78.93 ± 0.42 mg/mL, respectively (35.4-fold that of FF). Results of scanning electron microscopy, differential scanning calorimetry, X-ray diffraction, and Fourier transform infrared spectroscopy showed that FF was entrapped in the inner cavity of HP-β-CD and that the inclusion complex formed in an amorphous state. In comparison with the FF commercial injection, FF-HP-β-CD increased the elimination half-life (t1/2β), the transport rate constants (K10, K12, K21), and the maximum concentration (Cmax) after intramuscular injection in beagle dogs. Conversely, it decreased the distribution half-life (t1/2α), the absorption rate constant (Ka), the apparent volume of distribution (V1/F), and the peak time (Tmax). These results suggest that the FF-HP-β-CD freeze-dried powder injection is a promising formulation for clinical application. Florfenicol (FF), also known as fluthiracnamline, is chemically named D(+)-threo-1-p-sulfonylphenyl-2-dichloroacetamido-3-fluoropropanol. FF is an excellent broad-spectrum antibiotic for animals, with a broad antibacterial spectrum, wide distribution in the body, and a safe and efficient profile; in particular, it does not cause aplastic anemia and has no teratogenic, carcinogenic, or mutagenic effects 1. FF is used extensively in the clinic for the treatment of bacterial diseases in livestock and poultry caused by susceptible strains. Since chloramphenicol was banned from use in food-producing animals, the market demand for FF has expanded further. To date, FF has been used extensively as a mainstay anti-infective drug in many countries, and worldwide consumption of FF amounts to tens of thousands of tons per year 2. However, the poor water solubility of FF severely limits its bioavailability and increases the cost of the drug. Therefore, improving the water solubility of FF is of great importance for clinical application. The chemical methods used to increase FF solubility include the synthesis of FF phosphate, FF sulfonate, and FF succinate 1. However, the preparation conditions of FF phosphate are reported to be complicated and unsuitable for industrial production. Although succinate prodrugs increase the water solubility of the drug, they reduce its biological activity; other problems also occur, such as low yield, poor stability, and numerous byproducts. The physical methods used for FF solubilization include the addition of cosolvents, micronization, nanoemulsions, in-situ gels, β-cyclodextrin (CD) inclusion complexes, and solid dispersion nanoparticles [3][4][5][6].
Although reports are available on FF-β-CD and FF-hydroxypropyl (HP)-β-CD inclusion complexes, most of them cover only the preparation methods and physical properties; in vivo evaluation has rarely been studied 7,8. These methods can increase the solubility of FF to a certain extent; however, it remains difficult for them to meet the needs of clinical use as a commonly used preparation. At present, the commonly used FF preparations in the clinic include powders, injections, solutions, and premixes. Moreover, with the large-scale, intensive farming of animal husbandry, oral administration via feed and drinking water has become the main administration mode; therefore, premixes, powders, and oral solutions of FF are in considerable demand. However, for individual animals of special economic value, such as boars, breeding bulls, and pets, owners tend to devote additional resources to diagnosis and treatment; such animals are therefore the main recipients of FF injections. Given the advantages of injection, such as less frequent administration, rapid onset, and suitability for animals with difficulty swallowing, injections also enjoy strong market demand. However, the commonly used FF injection (30%) is formulated with organic solvents (e.g., dimethylformamide, pyrrolidone, and propylene glycol), which can cause local muscle irritation and toxicity 5. For these reasons, developing a water-soluble FF injection is of great importance. HP-β-CD is a derivative of CD and thus shares its advantages; it has good water solubility at room temperature, is stable to heat, and is nontoxic to the kidneys. When administered parenterally, HP-β-CD is excreted in the urine and does not accumulate in the body; moreover, it promotes the rapid release of the encapsulated substances. It also has low surface activity and low hemolytic activity, and it is safe for use. When a drug and HP-β-CD form an inclusion compound, the effect is to increase drug stability, prevent volatilization, reduce drug irritation, and increase drug solubility. In this study, FF-HP-β-CD was prepared using the solution-stirring method to increase the solubility of FF, and its characterization, muscle irritation, and pharmacokinetics were investigated. Results and Discussion Preparation of FF-HP-β-CD. Commonly used inclusion complex preparation methods include grinding, ultrasonic, solution-stirring, freeze-drying, and spray-drying methods. The current study selected the solution-stirring method, and FF-HP-β-CD was successfully prepared on the basis of a single-factor experiment and an orthogonal design. The average drug loading is 11.78% ± 0.04%, and the prepared FF-HP-β-CD shows a greatly enhanced solubility of 78.93 ± 0.42 mg/mL at 37 °C, which is 35.4-fold that of FF (2.23 ± 0.04 mg/mL) and exceeds the previously reported solubility of 40.76 mg/mL 9. Characterization of FF-HP-β-CD. Scanning electron microscopy (SEM). In the SEM images (Fig. 1), the raw FF drug has an obvious flaky crystal structure, while HP-β-CD exhibits a typical spherical particle structure containing a cavity. The physical mixture (PM) of FF and HP-β-CD prepared at a molar ratio of 1:2 clearly shows flaky FF crystals and spherical HP-β-CD particles, indicating that the physical form of the two components does not change. However, the morphology and crystal structure of the FF-HP-β-CD lyophilized powder have undergone substantial changes.
Neither a flaky crystal structure nor a distinct spherical structure has emerged, but rather a new irregular amorphous state. Thus, FF is embedded in the HP-β-CD cavity to form an amorphous-state inclusion complex. X-ray diffraction (XRD). XRD analysis is a common method used to detect crystalline and amorphous forms. Figure 2 shows the XRD patterns of FF, HP-β-CD, PM, and FF-HP-β-CD, which are consistent with the literature 10. FF has high-intensity characteristic diffraction peaks at diffraction angles (2θ) of 7.92°, 16.03°, and 26.72°. In contrast, HP-β-CD shows no obvious characteristic diffraction peaks; it only exhibits two broad diffraction bands around 10° and 20°, indicating its amorphous state. PM shows a superposition of FF and HP-β-CD; the lower relative content of FF after mixing, however, leads to a decrease in the intensity of the characteristic diffraction peaks. This finding indicates that the physical form of PM does not change. By contrast, FF-HP-β-CD shows a diffraction pattern similar to that of HP-β-CD: the typical characteristic diffraction peaks of FF completely disappear and the material is amorphous, which may be related to FF being embedded in the HP-β-CD lumen. This result is consistent with the SEM results; both reflect the amorphous state of the obtained FF-HP-β-CD. Differential scanning calorimetry (DSC). In the DSC spectra (Fig. 3), FF has a characteristic endothermic peak (melting point) at 154.64 °C, consistent with the results of Yang 11; HP-β-CD has an endothermic peak at 279.71 °C. PM shows two endothermic peaks, a superposition of those of FF and HP-β-CD. As expected, the spectrum of FF-HP-β-CD differs from that of the PM: only the characteristic endothermic peak of HP-β-CD is present, with no characteristic endothermic peak of FF near 154 °C. This most likely occurs because the melting behaviour of the original drug FF changes after it is embedded in the HP-β-CD cavity, resulting in the disappearance of the original characteristic endothermic peak of FF. Therefore, FF forms a clathrate with HP-β-CD and becomes a new amorphous phase. Fourier transform infrared spectroscopy (FTIR). Figure 4 illustrates the FTIR spectra of FF, HP-β-CD, PM, and FF-HP-β-CD. FF shows an -OH stretch band at 3456 cm−1, a -C=O band at 1683 cm−1, a -CN stretch band with -NH bending at 1534 cm−1, and an -NH stretch band with -CN bending at 1274 cm−1, consistent with the literature 12. HP-β-CD has an -OH stretch band at 3393 cm−1. The infrared spectrum of the PM is a superposition of the FF and HP-β-CD spectra. In contrast to the PM, the four characteristic peaks of FF significantly weaken or disappear in FF-HP-β-CD. 1H nuclear magnetic resonance (1H-NMR). In this study, we performed docking studies of FF in the HP-β-CD cavity, because the chemical and electronic environments of the protons are affected during complexation and are reflected in changes in the δ values (Fig. 5). H-3 and H-5 are the two protons located inside the CD cavity. When FF enters HP-β-CD, the hydrogen atoms of FF and of the HP-β-CD cavity undergo corresponding changes due to the anisotropic shielding effect of the aromatic ring. Comparison of the chemical shift values of HP-β-CD with those of FF-HP-β-CD shows that, except for H-5, the protons at the remaining CD positions all exhibit an appreciable chemical shift change, which can be interpreted in combination with the molecular structure of HP-β-CD.
H-5 is located at the narrow rim and H-3 at the wide rim; thus, the FF molecules enter the HP-β-CD cavity from the wide-mouth end. This result is consistent with the FTIR results. Comparison of the 1H-NMR spectra of FF and FF-HP-β-CD shows that the hydrogen atoms at positions 8-11 of the FF molecular structure all exhibit a chemical shift difference of 0.01-0.02 ppm. Thus, FF is inferred to be encapsulated in the HP-β-CD cavity from the side containing the dichloromethyl (-CHCl2) group, whereas the sulfonyl side of FF remains outside the CD cavity (Fig. 6). These slight changes in chemical shift indicate that the two components are bound non-covalently, which is consistent with the principle of clathrate formation. Muscle irritation test of the FF-HP-β-CD injection. A large clinical demand exists for injections, and the commonly used FF injection can cause local muscle irritation and toxicity; the current study is committed to developing a water-soluble FF injection based on the FF-HP-β-CD freeze-dried powder. Figure 7 illustrates the muscle histopathologic sections. After continuous injection for 3 days, the saline group showed no visible sign of degeneration, necrosis, or inflammatory reaction, and no hyperemia of the muscle interstitium (Fig. 7A1,A2). One case (1/3) in the FF-HP-β-CD aqueous solution group shows a small amount of muscle fiber atrophy and necrosis; no obvious fibrous tissue hyperplasia or inflammatory cell infiltration is observed (Fig. 7B1,B2). The clinically used 10% FF injection (Fig. 7C1,C2) presents a visibly large number of atrophied and necrotic muscle fibers with a markedly large proliferation of fibroblasts. Histopathology thus shows that FF-HP-β-CD significantly reduces injection irritancy compared with the 10% FF injection. In vivo pharmacokinetic parameters in beagle dogs. The pharmacokinetic parameters of FF after intramuscular injection in beagle dogs were assessed by fitting a two-compartment open model to the individual plasma concentration-time data. Figure 8 shows the mean FF concentrations in plasma, and Table 1 presents the corresponding pharmacokinetic parameters. t1/2α, Ka, V1/F, and Tmax are significantly lower than those of the 10% FF injection (P < 0.05, P < 0.01), whereas t1/2β, K10, K12, K21, and Cmax are significantly higher (P < 0.05, P < 0.01). Because FF-HP-β-CD increases FF solubility, the dissolution of FF from the HP-β-CD inclusion complexes is accelerated at the early stage, increasing the drug absorption rate and the peak plasma concentration. Therefore, after administration of the FF-HP-β-CD injection, pharmacokinetic parameters such as Cmax and t1/2α all differ significantly from those of the 10% FF injection. The Cmax values of the FF injection and FF-HP-β-CD in the plasma of beagle dogs, reached after 0.58 ± 0.13 h and 0.44 ± 0.09 h, are 11.56 ± 1.54 mg/L and 15.90 ± 3.15 mg/L, respectively; t1/2α is 0.77 ± 0.08 h for the FF injection and 0.29 ± 0.13 h for FF-HP-β-CD. This indicates that FF-HP-β-CD could avoid the common use of FF in large doses: owing to the enhanced solubility, a lower dose of FF in the form of FF-HP-β-CD can reach a higher plasma concentration. In the later stage, however, as FF is released from the cyclodextrin cavity, it lasts longer in the body than crude FF in the plasma 13.
That is why the elimination half-life (t1/2β) of FF-HP-β-CD is significantly longer than that of FF. Although the area under the concentration-time curve (AUC0-∞) of FF-HP-β-CD shows no significant increase relative to FF, it is larger, indicating that FF-HP-β-CD increased the bioavailability of FF to some extent. The pharmacokinetics of FF has previously been studied in animal species including rabbits, rats, dogs, carp, swine, and sheep 9,14-17, and species differences in FF elimination exist: one-compartment open, two-compartment open, and non-compartmental models have been reported after intravenous, intramuscular, and oral administration in different species 18,19. However, no pharmacokinetic report of FF or FF-HP-β-CD after intramuscular injection in beagle dogs is available. In our study, the plasma concentration-time data of FF-HP-β-CD and the 10% FF injection after intramuscular injection in beagle dogs were adequately described by the two-compartment open model. The absorption of FF-HP-β-CD is faster and higher than that of FF, and its elimination half-life is extended. Owing to its poor aqueous solubility, FF is often used in large doses, which significantly limits its formulation and application; in addition, the commonly used FF injection is formulated with organic solvents, which can cause local muscle irritation and toxicity. Our study suggests that the FF-HP-β-CD freeze-dried powder, a water-soluble FF injection, is a promising formulation for solving these problems in clinical application. The animals were obtained from a supplier in Sichuan, China. Before the experiment, the animals were acclimatized at 25 °C ± 2 °C under natural light/dark conditions for 1 week with free access to food and water. Materials. Preparation of FF-HP-β-CD. FF-HP-β-CD was prepared using the solution-stirring method. A single-factor experiment and an orthogonal design were adopted to optimize the formulation and manufacturing process of FF-HP-β-CD. In brief, FF and HP-β-CD (mole ratio of 2:1) were weighed, and FF was dissolved in 50% methanol. HP-β-CD was prepared as a 20% (w:v) aqueous solution. The FF organic solution was then added to the HP-β-CD aqueous solution. The pH of the mixture was adjusted to 5, and the mixture was stirred at 100 r/min at 30 °C for 2 h. The inclusion complex solution was transferred to a rotary evaporator (Shanghai Yarong Biochemistry Instrument Factory, Shanghai, China), and the organic solvent was evaporated at 65 °C, yielding the FF-HP-β-CD solution. Finally, the FF-HP-β-CD solution was filtered through a 0.22 μm microporous membrane and lyophilized. The lyophilization process was as follows: pre-freezing at −30 °C for 4 h and then freeze-drying at −50 °C for 16 h. Muscle irritation test of the FF-HP-β-CD injection. Six rabbits were randomly divided into two groups; the control group rabbits were administered 20 mg/kg of the 10% FF injection, and the test group rabbits were injected with 20 mg/kg of FF-HP-β-CD aqueous solution (lyophilized powder dissolved in sterile saline) into the left leg muscle. An intramuscular injection of an equal volume of sterile saline was administered on the right side. Injections were continued for 3 days. Rabbits were sacrificed 48 h after the last dose; the muscle at the injection site was then isolated and fixed in 10% neutral buffered formaldehyde.
The fixed tissues were paraffin-embedded and sliced using a microtome. Hematoxylin and eosin-stained sections of the muscle tissues were evaluated. In vivo pharmacokinetic studies. A pharmacokinetic study was performed on healthy beagle dogs. Twelve beagle dogs were randomly divided into two groups: the control group was administered the 10% FF injection, whereas the other group was administered the FF-HP-β-CD aqueous solution, both into the left hind limb muscle (20 mg FF/kg body weight) 9,14. At predetermined time points, blood samples were collected from the auricular vein. Samples were centrifuged at 3000 rpm for 10 min within 1 h after sampling, and plasma was collected and stored at −70 °C until analysis. FF was extracted from the plasma by liquid-liquid extraction with acetonitrile. In brief, the samples were prepared by adding 0.4 mL of plasma to 1.2 mL of acetonitrile. The mixture was vortexed for 3 min and centrifuged at 12000 rpm for 10 min. After filtration through 0.22 μm cellulose acetate membranes, 20 μL of the collected filtrate was injected into the high-performance liquid chromatography (HPLC) system for analysis. All samples were analyzed on a Shimadzu LC-2010CHT system (Shimadzu Corporation, Kyoto, Japan) with a UV detector. HPLC analysis was performed using a 5 μm C18-silica-based stainless steel Kromasil column (4.6 mm × 250 mm, Akzo Nobel, Bohus, Sweden) as the stationary phase. The mobile phase was acetonitrile:0.1% phosphoric acid solution (35:65, v:v) at a flow rate of 0.5 mL/min. The drug concentration-time data in plasma were then fitted using DAS 2.0 software supplied by the Pharmacological Society of China (Beijing, China). The most appropriate pharmacokinetic model was selected on the basis of the coefficient of determination (r²) and comparison of Akaike's information criterion values. Statistical analysis. The statistical analysis of the data was performed using analysis of variance in SPSS 15.0 software (SPSS Inc., Chicago, USA). The results are expressed as mean ± standard deviation. A difference of p < 0.05 was considered statistically significant.
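To illustrate the two-compartment open model with first-order absorption used in the fitting above, the following is a minimal sketch; the concentration-time values and starting parameters are hypothetical, and the study itself performed the fitting in DAS 2.0.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment_im(t, A, B, ka, alpha, beta):
    """C(t) = A*exp(-alpha*t) + B*exp(-beta*t) - (A+B)*exp(-ka*t),
    with t1/2(alpha) = ln2/alpha and t1/2(beta) = ln2/beta."""
    return (A * np.exp(-alpha * t) + B * np.exp(-beta * t)
            - (A + B) * np.exp(-ka * t))

# hypothetical sampling times (h) and plasma concentrations (mg/L)
t_obs = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
c_obs = np.array([9.8, 14.9, 12.1, 7.6, 4.0, 1.5, 0.7, 0.1])
(A, B, ka, alpha, beta), _ = curve_fit(
    two_compartment_im, t_obs, c_obs, p0=[10.0, 5.0, 3.0, 0.9, 0.2],
    maxfev=10000)
print("t1/2(alpha) =", np.log(2) / alpha, "h;",
      "t1/2(beta) =", np.log(2) / beta, "h")
```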
2-Allylaminothiazole and 2-allylaminodihydrothiazole derivatives: synthesis, characterization, and evaluation of bioactivity Abstract Some reactions of selected chlorooxoesters and haloesters with 1-allylthiourea under various conditions have been performed. The reactions were carried out in methanol in alkaline and neutral environments. Condensation of 1-allylthiourea with chlorooxoesters was additionally conducted via an acetal as an intermediate compound. As a result, compounds containing a thiazole or a 4,5-dihydrothiazole ring were obtained in good yields. The structures of the compounds were verified by 1H NMR, 13C NMR as well as X-ray diffraction analysis. Owing to the potential biological activity of the synthesized compounds, the parameters governing their bioavailability were determined, and the probability of pharmacological action was predicted. All of the obtained compounds fulfilled the rule of five, indicating good absorption after oral intake. The probability of pharmacological action and the potential targets calculated for the obtained compounds show that they can be potential drugs. Graphical abstract Introduction Thiazole and 4,5-dihydrothiazole derivatives exhibit promising biological activity. Attention should be paid, among others, to Biovitrum BVT-2733 and BVT-14225 [1][2][3][4], and Amgen 2922 [4][5][6], which are described as inhibitors of 11β-hydroxysteroid dehydrogenase type 1 (Fig. 1). 11β-Hydroxysteroid dehydrogenase type 1 regulates glucocorticoid action, and the inhibition of this enzyme is a viable therapeutic strategy for the treatment of type 2 diabetes and the metabolic syndrome. Furthermore, compounds containing a thiazole ring are used as inhibitors of thymidylate synthase [7,8]. In view of this biological activity, efficient methods of thiazole and dihydrothiazole ring formation are worth pursuing. One of the ways of forming a dihydrothiazole ring is the reaction of thiourea or its derivatives with haloacids. Such reactions have been described before [9,10]; they have been carried out in various solvents, i.e., water, pyridine, benzene, and acetonitrile. The obtained products had the form of hydrohalide salts; to obtain the free dihydrothiazoles, the hydrohalides were treated under alkaline conditions with sodium carbonate or an aqueous solution of ammonia. Having previously conducted condensation reactions of 1-allylthiourea (1) with oxoesters under alkaline conditions, which allowed us to obtain N-allylthiouracil derivatives [11,12], we became interested in carrying out the reactions of haloesters with 1 under similar conditions. Encouraged by the high yields of the synthesized products as well as the ease of their isolation, we extended the synthesis to chlorooxoesters. Results and discussion Chemistry The condensation reactions for three different haloesters 2-4 were carried out under alkaline conditions (Table 1). The reaction of 1 with the haloesters was carried out in methanol with a 10 % excess of ester and a double excess of sodium methoxide (procedure A); the products were isolated in 50-65 % yield. Additionally, the analogous reactions without the base afforded products 6 and 7 in lower yields (40-47 %), whereas only trace amounts of product 5 were obtained from ester 2 (procedure B). The structures of 5-7 were verified by 1H and 13C NMR spectral data, as well as mass spectra; the structures of 5 and 7 were also verified by X-ray diffraction analysis (Fig. 2).
These highly selective, good-yield reactions under alkaline conditions encouraged us to carry out analogous reactions of 1 with chlorooxoesters. The condensation reactions of 1 with 3-oxoesters lead to the formation of 3-allyl-2-thiouracil (Fig. 3) [13,14]. The presence of a chlorine atom in the oxoester molecules 9 and 10 causes the carbonyl group and the halogen atom to be involved in the reaction, while the ester group is not (Fig. 3; Table 2). Furthermore, the nucleophilic substitution occurs on the sulfur atom rather than on the nitrogen atom, as it does in the addition-elimination reaction at the ester group of a 3-oxoester molecule. Consequently, a five-membered (not six-membered) ring containing nitrogen and sulfur atoms is formed, in the same way as with haloesters 2-4 (Table 1). As a result of the addition of 1 to the carbonyl group of the chlorooxoester, followed by the elimination of water, the cyclic products 11 and 12, containing not one but two double bonds in the ring, were formed. The reaction of 1 with chlorooxoesters and with haloesters was carried out under the same alkaline conditions (procedure A). In the first case, the products were obtained in a much lower yield (4-15 %), probably because of two simultaneous condensation reactions: the aldol and the Claisen condensation (Table 2). The poor yield of products 11 and 12 made us change the reaction conditions: the reaction of 1 with chlorooxoesters in ethanol gave products 11 and 12 in higher yields (35-42 %) (procedure B). A further increase in yield was obtained by transforming the carbonyl group into an acetal, followed by the reaction of the obtained acetals with 1 in an acidic environment, according to the procedure described above (procedure C) [14]. The structures of 11 and 12 were verified by 1H and 13C NMR spectral data, mass spectra, as well as X-ray diffraction analysis (Fig. 4). Evaluation of biological activity Owing to the biological activity of compounds containing thiazole and dihydrothiazole rings [1][2][3][4][5][6][7][8], we calculated the parameters conditioning the bioavailability of the synthesized compounds and their probable activity. Lipinski's rule of five describes molecular properties important for drug pharmacokinetics in the human body, especially oral absorption. The rule says that an orally active drug must not violate more than one of the following criteria: ≤5 hydrogen bond donors (nOHNH), ≤10 hydrogen bond acceptors (nON), MW ≤ 500 Da, log Pcalc ≤ 5 [17]. These properties, calculated with the Molinspiration program [18], are presented in Table 3. All compounds 2-4, 11, 12 fulfilled the rule of five, which indicates their good absorption after oral intake. Topological polar surface area (TPSA) has become another popular indicator of drug absorption [19]. TPSA values lower than 140 Å² suggest good cell membrane permeability for derivatives 2-4 and 11, 12. Moreover, calculated TPSA values below 60 Å² suggest a high chance of penetrating the blood-brain barrier for compounds 2-4 and 12. The probability of pharmacological action and the potential targets for compounds 2-4 and 11, 12 were calculated with the PASS online program [20], which is based on the analysis of structure-activity relationships (SAR) of chemical compounds and predicts over 4000 kinds of biological activities, including pharmacological effects, mechanisms of action, and toxic and adverse effects, with an average accuracy above 95 % [21].
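The rule-of-five screen described above can be reproduced with open-source tools. The following is a minimal sketch assuming RDKit (the paper itself used the Molinspiration program), with an illustrative SMILES string for a 2-allylaminothiazole core rather than an actual compound from the paper.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def lipinski_violations(smiles):
    """Count rule-of-five violations; <= 1 suggests good oral absorption."""
    mol = Chem.MolFromSmiles(smiles)
    return sum([
        Lipinski.NumHDonors(mol) > 5,       # hydrogen bond donors (nOHNH)
        Lipinski.NumHAcceptors(mol) > 10,   # hydrogen bond acceptors (nON)
        Descriptors.MolWt(mol) > 500,       # molecular weight, Da
        Crippen.MolLogP(mol) > 5,           # calculated log P
    ])

print(lipinski_violations("C=CCNc1nccs1"))  # 2-(allylamino)thiazole -> 0
```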
Selected results of the predicted activities and mechanisms of action of 2-4 and 11, 12 are presented in Table 4. The best results were obtained for derivative 12, which showed the highest probabilities for mucomembranous-protector, antieczematic, antiulcerative, and antiviral activity. The predictions for compounds 2-4 and 12 showed that they are also possible mucomembranous protectors. Moreover, derivatives 3 and 4 may be radioprotectors and may have antiviral activity. According to the PASS calculations, a possible mechanism of action for compounds 2-4 and 11, 12 is enzyme inhibition: chloride peroxidase, muramoyltetrapeptide carboxypeptidase, and Cl−-transporting ATPase. The other predicted mechanisms, membrane permeability inhibition and gastrin inhibition, should be associated with the mucomembranous protection and the antiulcerative activity, respectively. Conclusion In conclusion, we have successfully developed a simple synthesis of compounds containing a thiazole or a 4,5-dihydrothiazole ring via the reaction of 1-allylthiourea with haloesters and oxoesters, respectively. By changing the reaction conditions, we managed to obtain the products in very good yields. The method is very universal and can be applied to the synthesis of compounds containing thiazole and dihydrothiazole rings. Experimental 1H NMR and 13C NMR spectra were recorded on Bruker 300 MHz, 400 MHz, and 700 MHz instruments (TMS as an internal standard). MS spectra were recorded on a Finnigan MAT 95. UV spectra were recorded on an Aquarius 7250 spectrophotometer (Cecil Instruments). All chemicals and solvents were purchased commercially and used without further purification. The progress of the reactions and the purity of the obtained compounds were monitored by TLC on ALUGRAM SIL G/UV254 sheets with ethyl acetate as the eluent. General procedures for the synthesis of compounds 5-7, 11, 12 Procedure A: To a solution of sodium methoxide (prepared from 0.1 mol of sodium in 60 cm³ of anhydrous MeOH), 0.05 mol of 1 and 0.055 mol of the ester (2-4, 9, 10) were added, and the mixture was refluxed (reflux times for the individual compounds are given in Tables 1 and 2). The solvent was evaporated, and the residue was dissolved in water and neutralized with HCl to pH 7-8. The product was extracted with CHCl3, dried, concentrated, and crystallized from diethyl ether. Procedure B: To 30 cm³ of anhydrous EtOH, 0.05 mol of 1 and 0.055 mol of the ester (2-4, 9, 10) were added, and the mixture was refluxed (reflux times as above). The solvent was evaporated, and the residue was dissolved in water and neutralized with NaOH to pH 7-8. The product was extracted with CHCl3, dried, concentrated, and crystallized from diethyl ether. Procedure C: To 30 cm³ of anhydrous MeOH, 0.03 mol of triethyl orthoformate, 0.075 g of p-toluenesulfonic acid, and 0.05 mol of chlorooxoester 9 or 10 were added, and the mixture was stirred for 24 h. Diethyl ether (30 cm³) was added, the mixture was neutralized with sodium ethoxide to pH 7-8, and then 30 cm³ of distilled water was added. The product was extracted with diethyl ether and dried over sodium carbonate. The solvent was evaporated; to the residue, 0.02 mol of 1 and 0.11 cm³ of concentrated sulfuric acid were added, and the mixture was refluxed for 1 h. The solvent was evaporated, and the residue was dissolved in 30 cm³ of water and neutralized with NaOH to pH 7-8. The product was extracted with CHCl3, dried, concentrated, and crystallized from diethyl ether.
X-ray diffraction study The X-ray data for the reported structures were collected at 293(2) K with an Oxford Sapphire CCD diffractometer using MoKα radiation (λ = 0.71073 Å) and the ω-2θ method. Numerical absorption corrections were applied (RED171 package of programs, Oxford Diffraction, 2000) [22]. All structures were solved by direct methods and refined with the full-matrix least-squares method on F² using the SHELX-97 program package [23]. The hydrogen atoms were located from difference electron density maps and constrained during refinement. The crystallographic data have been deposited with the Cambridge Crystallographic Data Centre under the CCDC numbers 1034864 for 5 and 1034865-1034867 for 7, 11, and 12, respectively. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Editorial: Brain-Computer Interfaces and Augmented/Virtual Reality SCOPE In recent years, Augmented and Virtual Reality (AR/VR) has matured technically, delivering higher levels of immersion and presence to users, while also becoming a widely available tool for creating a new range of applications and experiences. AR/VR technology allows the creation of scenarios that are much more stimulating and expressive than standard desktop applications, covering a wide variety of areas, namely entertainment, education, art, and health, among others. The fusion of Brain-Computer Interfaces (BCI) with AR/VR can provide additional communication channels by increasing the bandwidth of human-AR/VR interaction. This is achieved either explicitly through active BCIs or implicitly using passive BCIs. Active BCIs notably allow users to issue commands to devices or to enter text without physical involvement of any kind, while passive BCIs monitor a user's state (e.g., workload level, attentional state) and can be used to proactively adapt the VR/AR interface. See Lotte et al. (2012) for a general review on mixing VR and BCI technologies. BCIs, together with AR/VR, offer the possibility of immersive scenarios through induced illusions of an artificially perceived reality, which can be utilized not only in basic BCI research but also in many fields of application. In basic research, AR/VR can be used to adjust the intensity, complexity, and realism of stimuli smoothly while maintaining full control over their presentation. In therapeutic applications, AR/VR can create motivating training paradigms that make use of gamification approaches or believable illusions, e.g., of virtual limbs interacting with the environment. In Human-Computer Interaction, AR/VR can be used for rapid prototyping of new interface paradigms in realistic settings. To live up to these expectations, methodological advances are required for BCI interaction and stimulus design, synchronization, and dealing with VR/AR-specific artifacts (Tremmel et al., 2019) and distractions. The papers in this Research Topic show both the upsides and the potential of BCI research for or with AR/VR technology, as well as the challenges that lie on the way to such an integration. Moreover, current studies on both passive and active BCIs are presented. In terms of passive BCIs, attentional state in AR is investigated by Vortmann et al. Overall, we see that BCI can be used in VR/AR-based training for workload and attention assessment, but also in rehabilitation contexts, where the VR component creates a much more immersive experience compared to traditional training paradigms. RESEARCH HIGHLIGHTS In this Research Topic, we find works on both passive and active BCI technology. Regarding passive BCI, research concentrates on two aspects that are often tackled for adaptive technology: attention and workload.
From their motivation, these papers aim to improve the AR/VR interface itself through user-state adaptation. Vortmann et al. perform a study on the classification of internal vs. external attention in an AR setting. The authors develop a novel AR task requiring continuous spatial alignment, mimicking typical AR interactions. They show that, using frequency features, the classifier achieves an average classification accuracy of 85% on windows of 13 s length. Recently, they demonstrated a real-time implementation of the attention model (Vortmann et al., 2019), enabling online adaptation of AR-based user interfaces, such as a smart home control in AR (Putze et al., 2019b) using Steady-State Visually Evoked Potentials (SSVEP) and eye tracking to select from virtual menus displayed in the environment. Other related work is by Si-Mohammed et al. (2018), a pioneering study on the feasibility and design of 3D user interfaces exploiting AR for SSVEP-based BCI. Tremmel et al. perform a study to measure mental workload in VR. For this purpose, they adapted the well-studied n-back task for interactive VR and made it publicly available. They recorded neural activity through electroencephalography (EEG) from 15 participants performing 0-, 1-, and 2-back trials and showed that workload levels could be discriminated from the scalp recordings despite the large amount of physical movement occurring during VR usage. Additionally, they showed the feasibility of using functional near-infrared spectroscopy (fNIRS) as an alternative modality (Putze et al., 2019a), as both fNIRS (Herff et al., 2014) and the combination of fNIRS and EEG (Herff et al., 2015) have been shown to be successful in workload classification. This opens the door for future multimodal systems for workload classification in VR. Other papers concentrate on the integration of active BCI technology in VR settings. In contrast to the passive BCI contributions, this research is less about the potential improvement of the VR interface and more about leveraging this immersive technology to improve traditional BCI paradigms, for example in rehabilitation. Vourvopoulos, Pardo et al. combined the principles of VR and BCI in a rehabilitation training platform called REINVENT, assessing its effects on four chronic stroke patients across different levels of motor impairment. The acquired post-stroke EEG signals that indicate an attempt to move drive a virtual avatar arm (on the affected side), allowing patient-driven action-observation BCI in VR. They show that EEG-based BCI-VR may benefit patients with more severe motor impairments by increasing their communication bandwidth to VR, in contrast to patients with milder impairments, who can still harness existing sensorimotor pathways with EMG-based feedback in the same VR training. Extending BCI-VR training from uni-manual control (of the affected arm) to bi-manual control, Vourvopoulos, Jorge et al. performed a study using NeuRow, a self-paced BCI paradigm in VR (Vourvopoulos et al., 2016), together with brain-imaging data (fMRI) in a chronic stroke patient. They found important improvements in upper extremity clinical scales (Fugl-Meyer) and identified increases in brain activation measured by fMRI that suggest neuroplastic changes in brain motor networks.
These results suggest that BCI with ecologically valid VR could be useful for chronic stroke patients with reduced upper limb motor function, while the accompanying brain-imaging data move us toward identifying the specific benefits of brain-controlled VR training environments for neurorehabilitation. Finally, Škola et al. presented a study with gamified BCI-VR training including a virtual avatar, designed with the aim of maintaining high levels of attention and motivation. This was achieved using progressively increasing training that was event-driven rather than self-paced and that provided participants with score points reflecting their progress. Classifier performance was above chance level (65%), and the strength of event-related desynchronizations (ERDs) during the session was positively correlated with the subjective magnitude of the sense of ownership, while the perceived ownership of the avatar body was correlated neither with BCI performance nor with the sense of agency. SUMMARY The presented articles demonstrate the large potential of combining AR/VR technology with BCIs to further increase the immersiveness of AR/VR and to improve the usability of BCIs for rehabilitation and control. All presented papers use elaborate, task-specific experimental setups with custom AR/VR environments and custom solutions for setting up sensors, input devices, and displays [see Putze (2019) for an overview of emerging methods and tools and Si-Mohammed et al. (2017) for a general overview of potential applications and the main scientific challenges related to the combination of AR and BCI]. Future research can build on these pioneering works and the resulting best practices to derive more standardized, common experiment protocols for a large variety of research questions. This would lower the entry barrier and make this promising technology more accessible. AUTHOR CONTRIBUTIONS FP created the structure and initial draft. CH, AV, and FP contributed the paper summaries. All authors contributed to the overview of the field.
Four new triterpene saponins from Cephalaria speciosa and their potent cytotoxic and immunomodulatory activities Four new triterpene saponins, namely speciosides A-D (1-4), along with six known saponins, were isolated from the n-butanol extract of Cephalaria speciosa. In addition, three new prosapogenins (2a-4a) were obtained after alkaline hydrolysis. Elucidation of the structures of the isolated compounds was carried out by 1D and 2D NMR, HR-ESI/MS, and GC-MS analyses. Cytotoxic activity was investigated against A549, CCD34-Lu, MDA-MB-231, PC-3, U-87MG, HeLa, and HepG-2 cells by the MTT method. Additionally, the immunomodulatory effect of the compounds on macrophage polarization toward M1 or M2 was evaluated, with and without inactivated IBV D274 antigen treatment, on THP-1-derived macrophages. According to the cytotoxicity results, compound 1 and prosapogenin 2a exhibit significant cytotoxicity in comparison with doxorubicin. The results demonstrated that saponin-treated THP-1-derived macrophages were induced toward M1 and/or M2 polarization. Additionally, macrophages treated with the saponin compounds, with or without IBV D274 antigen, showed significantly triggered M2 polarization relative to M1. Notably, the monodesmosidic saponins (1 and 2a-4a) demonstrated the strongest effect on M2 polarization in comparison with the bisdesmosidic ones (2-4). In conclusion, the results showed that all the isolated new saponins and their prosapogenins have immunomodulatory potential on macrophage cells, increasing the immune response without a significant cytotoxic effect on THP-1-derived macrophages. Plant material All methods were carried out in accordance with the guidelines on good agricultural and collection practices for medicinal plants of the World Health Organization and the European Medicines Agency 17,18. The plant material was collected in accordance with relevant institutional, national, and international guidelines and legislation. The appropriate permission for the collection was provided by the Republic of Turkiye Ministry of Agriculture and Forestry, General Directorate of Nature Conservation and National Parks, with the number E-21264211-288.04-6345319. Cephalaria speciosa 17 was gathered in August 2013 from a hillside at 1600 m altitude between Sivas and Erzincan (39°54′01″ N, 38°51′08″ E). The plant was identified by R. Suleyman Gokturk and archived at the Akdeniz University Herbarium Research Centre under the number R.S. Gokturk 7673. The aerial parts of the plant were harvested by hand, dried at room temperature, and kept away from sunlight.
Monosaccharide analysis To investigate the exact structure of the carbohydrate units, acidic hydrolysis and GC-MS analysis were performed. 3-5 mg of each of compounds 1-4 was hydrolyzed with 5 ml of 5% HCl (aq) under reflux for 2 h at 90 °C. The reaction mixture was then neutralized using a 5% KOH (aq) solution. The resulting aqueous solution was extracted with 10 ml of CHCl3; the extraction procedure was repeated three times for the organic and aqueous layers from each extraction step. The water phase was dried and kept under high vacuum for 24 h. It was then dissolved in 2 mL of pyridine; after dissolution, 2 mL of HMDS-TMCS (1:1) (hexamethyldisilazane-trimethylchlorosilane) was added, and the silylation was carried out under reflux and argon at 85 °C for 2 h. A silylation reaction was carried out for the standard sugar mixture simultaneously. The monosaccharide analysis of compounds 1-4 was performed by the GC-MS method of Sarikahya and Kirmizigul 10 under the following conditions: column, HP-5MS (60 m × 0.25 mm ID × 0.25 µm df); flow rate, 1.0 mL/min; split ratio, 1:20; ion source temperature, 230 °C; inlet and interface temperatures, 250 °C; temperature program starting at 60 °C (held for 1 min) followed by 10 °C/min to 240 °C (held for 2 min), with a final temperature of 320 °C (held for 3 min); EI mode. Alkaline hydrolysis The ester linkages of the sugar moieties in the bisdesmosidic triterpene glycosides (compounds 2-4) were hydrolyzed by the following method. All glycosides (3 mg each) were refluxed with 5% KOH in water (pH 12-13) at 80 °C for 1 h. The reaction mixtures were neutralized with 5% HCl in water and then concentrated to dryness 10. The residues were extracted with n-BuOH:H2O (1:1) and the organic layers were analyzed by 1H-NMR to give the pure prosapogenins (2a-4a). MTT assay After the formazan crystals were completely dissolved, the optical density (OD) was measured at 570 nm with a UV/Vis microplate spectrophotometer (Thermo Fisher Scientific, USA). All data are the results of at least three independent experiments carried out in triplicate. Data are presented as mean ± standard error of the mean (SEM). The IC50 value was calculated with the GraphPad Prism 8 (USA) program from the calculated percent viability and absorbance values. The percentage of viability was determined by Eq. (1).
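The expression of Eq. (1) is not reproduced above; assuming the conventional MTT calculation, it takes the form Viability (%) = (OD of treated cells / OD of untreated control cells) × 100, where OD denotes the background-corrected optical density at 570 nm.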
Determination of macrophage polarization by flow cytometry To differentiate THP-1 cells into M0-THP-1 cells, cells were seeded at an initial concentration of 1 × 10^6 cells/mL in 6-well plates (Corning, USA), treated with 15 ng/mL phorbol 12-myristate 13-acetate (PMA, Sigma, USA), and incubated at 5% CO2 and 37 °C for 48 h 24. To evaluate the potential of the saponins, with or without infectious bronchitis virus (IBV) D274 antigen, on macrophage polarization, THP-1 cells were treated with saponins at 3 µg/mL and then incubated for 24 h with or without 4 µL of inactivated IBV D274 antigen (final titer of 1 HA). The inactivated IBV D274 antigen used in this study was prepared in our previously published study 25. Based on the results of our previous research, the saponin concentration was selected as a non-cytotoxic concentration 9. At the end of the incubation period, the cells were collected by trypsinization and washed three times with PBS. Subsequently, 1 µL each of PE anti-human CD80 (Biolegend, CA), FITC anti-human CD163 (Biolegend, CA), and FITC anti-human CD11b (Biolegend, CA) antibodies was added to the cell pellet and incubated for 45 min at room temperature in the dark. The cell pellets underwent two PBS washes following incubation. Finally, the cells were analyzed by flow cytometry (BD Accuri C5); 1 × 10^4 cells were counted for each sample group. All data are the results of at least three independent experiments carried out in triplicate. Data are presented as mean ± standard error of the mean (SEM). Untreated macrophage cells stained with PE anti-human CD80, FITC anti-human CD163, and FITC anti-human CD11b were compared with the treated cells to assess the shift from M0 polarization to M1, M2 or both. Statistical analysis The results of at least three independent experiments were compared between the variable groups and the control group as mean ± standard deviation values. One-way analysis of variance (ANOVA) with Dunnett's test was performed in GraphPad Prism 8 (USA). P < 0.05 was considered significant. Results and discussion Phytochemical studies on the aerial parts of Cephalaria speciosa yielded a total of 10 saponins, 4 of which are new triterpene saponins, namely speciosides A-D (1-4), together with 3 new prosapogenins (2a-4a) (Fig. 1). A total of 6 known saponins, macranthoidin A (5) 26, elmalienoside A (6) 27, dipsacoside B (7) 28, decaisoside E (8) 29 and scoposide A (9) 30, were also isolated, purified and structurally identified. Compound 1 was obtained as a pale yellow amorphous powder with [α]D25: -10.8 (c 0.19, DMSO). The IR spectrum of compound 1 showed absorption bands for the following functional groups: aliphatic C-H (2923 cm−1), alkene -C=C- (1693 cm−1), carboxylic acid carbonyl C=O (1727 cm−1), C-O (1268, 1048 cm−1) and hydroxyl (3347 cm−1). The 13C-NMR spectrum showed that compound 1 consists of 42 carbons (Table 2), of which 12 were assigned to carbohydrate groups and 30 to the aglycone. Compound 2 was obtained as a white amorphous powder with a specific rotation of [α]D25: -6.1 (c 0.33, MeOH). The IR and NMR spectroscopic features of 2 were similar to those of 1 except for the sugar units and the free hydroxy group typically linked at C-23 (Tables 1 and 2). Spectral analysis showed that the structure of compound 2 consists of 54 carbons, as seen in the 13C-NMR spectrum (see Sup. Info. Fig. S13).
The aglycone was determined to be oleanolic acid based on the chemical shift difference at C-23 compared with hederagenin: compound 2 shows a methyl signal for C-23 at δC 27.8 (Table 2). The structure of the aglycone moiety was defined based on HSQC, COSY and HMBC spectral analysis, in the same way as for compound 1. The 1H- and 13C-NMR data for the aglycone part also agree with data reported in the literature for the oleanolic acid aglycone 10. The sugar carbons were assigned according to COSY and HMBC correlations. Compound 3 was obtained as a white amorphous powder with a specific rotation of [α]D25 -15.9 (c 0.13, MeOH). The IR and NMR data of the aglycone moiety of 3 were identical to those of the other new compounds (1 and 4). The C-3 oxymethine carbon and C-28 carbonyl carbon were observed at δC 79.5 and 175.7, respectively, which suggests that 3 is a bisdesmosidic triterpene saponin. The other aglycone carbon positions were determined from COSY and HMBC correlations. For the sugar moieties, the 1H-NMR spectrum of 3 displayed four anomeric proton signals at δH 4.26 (d, J = 7.8 Hz), 5.19 (brs), 5.18 (d, J = 8.4 Hz) and 4.17. Compound 4 was obtained as a pale yellow amorphous powder with a specific rotation of [α]D25 -11.7 (c 0.34, MeOH). Compound 4 consists of 54 carbons according to its 13C-NMR spectrum (Table 2); 24 of the carbon signals were assigned to carbohydrate groups and 30 to the aglycone, confirming that compound 4 has a hederagenin aglycone. The C-3 oxymethine carbon and C-28 carbonyl carbon were observed at δC 80.0 and 175.7, respectively, which suggests that compound 4 is a bisdesmosidic hederagenin-type triterpene saponin. The aglycone carbon positions were determined from COSY and HMBC correlations. For the carbohydrate moieties, the 1H-NMR spectrum of compound 4 showed four anomeric proton signals at δH 4.26 (d, J = 7.2 Hz) and 5.17; HMBC correlations showed the sugar linkage points to the aglycone. The cytotoxic effects of compounds 1-4 and of the prosapogenins 2a-4a obtained after alkaline hydrolysis were examined using the MTT method. The estimated IC50 values are given in Table 3, and the percent viability graphs according to the MTT results are given in Fig. S60-S67 in the Sup. Info. According to the results, compounds 2-4 and 3a had no cytotoxic effect on THP-1-derived macrophage cells, whereas compounds 1, 2a and 4a (27.57 ± 0.52, 37.34 ± 8.17 and 35.74 ± 4.90 µM) exhibited only weak cytotoxic effects compared with doxorubicin (6.84 ± 0.18 µM). Compounds 1, 2a and 4a demonstrated significant cytotoxicity against most of the cancerous cell lines. These compounds showed significant IC50 values on A549 and MDA-MB-231 cells of 8.59 ± 0.19 and 15.09 ± 1.02 µM for compound 1, 20.77 ± 0.46 and 24.29 ± 2.65 µM for prosapogenin 2a, and 8.13 ± 0.01 and 8.43 ± 0.15 µM for 4a, compared with doxorubicin (40.01 ± 0.02 and 70.00 ± 0.02 µM). Compounds 2-4 and prosapogenin 3a did not show significant cytotoxicity on any of the tested cell lines (Table 3). These results demonstrate that the bisdesmosidic compounds (2-4) show no cytotoxic activity, whereas the monodesmosidic compounds (1, 2a, 3a and 4a) have cytotoxic effects on cancerous cell lines. A similar study in the literature is that of Tlili et al.
31, who identified the biochemical profiles of the major compounds found in a plant leaf extract and examined its anti-leukemic potential against acute monocytic leukemia (AML) THP-1 cells. They showed that the mTOR pathway may be involved in the cell cycle inhibition and apoptosis induction caused by saponins. Furthermore, the data obtained here confirm the predictions from our previous studies that monodesmosidic saponins exhibit parallel cytotoxicity 9,11. The structures of compounds 1 and 2a carry the same galactose and rhamnose units attached to the third carbon of the aglycone moiety, but their aglycones are hederagenin and oleanane, respectively. This structural difference may explain the distinct activities: cytotoxicity against the A549, MDA-MB-231 and HeLa cell lines (8.59 ± 0.19, 15.09 ± 1.02 and 48.11 ± 4.56 µM) for compound 1, and against A549 and MDA-MB-231 (20.77 ± 0.46 and 24.29 ± 2.65 µM) for compound 2a. These results suggest that the free -CH2OH of the hederagenin aglycone moiety of compound 1 increases its cytotoxicity. This study also aimed to examine the effect of the saponins and their prosapogenins from C. speciosa on the polarization of THP-1 monocyte-derived macrophages. THP-1-derived macrophage cells were used to determine the effect of the saponins on macrophage polarization, as the THP-1 cell line is a suggested model for studying the functions and responses of monocytes and macrophages, macrophage differentiation, and the potential effects of environmental stimuli 22. Genin et al. 32 likewise recommended the THP-1 cell line for macrophage differentiation and established a practical model of human macrophage polarization to study how macrophages modulate tumor cells, specifically the tumor cells' response to chemotherapeutic agents. The Mean Fluorescence Intensity (MFI) values from the flow cytometry results (Table 5) show that the MFI of CD11b in all THP-1-derived macrophage cells is close to 30,000, indicating that all treated groups induced M0 polarization in THP-1 macrophages (Fig. 2). According to Fig. 2, in saponin-treated cells both with and without antigen, the MFI values of CD163, the M2 marker, are higher than those of CD80, the M1 marker, which shows the potential of the saponins for M2 macrophage polarization. It is also worth noting that the MFI values of cells treated with both antigen and saponin are significantly higher than those of cells without antigen (Fig. 2a,b). M1/M2 describes the two major and opposing activities of macrophages 33, and in the present study all samples showed a similar inverse relationship between the MFI values of CD80 and CD163 (Table 5). Based on our results, the saponins enhanced M2 polarization.
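As an illustration of how the polarization readout follows from the cytometry data, the short Python sketch below compares CD80 (M1 marker) and CD163 (M2 marker) MFI values; the numbers used are hypothetical and are not taken from Table 5.

def polarization_tendency(mfi_cd80, mfi_cd163):
    """Classify the polarization tendency of a sample from the mean fluorescence
    intensities of CD80 (M1 marker) and CD163 (M2 marker)."""
    ratio = mfi_cd163 / mfi_cd80
    return ("M2-skewed" if ratio > 1.0 else "M1-skewed"), ratio

# Hypothetical MFI readings for one saponin-treated sample
print(polarization_tendency(mfi_cd80=4200.0, mfi_cd163=9800.0))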
Regarding saponins, studies suggest that they can trigger a Th2 response in M0 macrophages, which in turn stimulates M2 macrophage polarization 34,35. Furthermore, treating cells with both antigens and saponins can enhance macrophage polarization, leading to an improved immune response. Macrophages, derived from monocytes, play a critical role in inflammation, host defense and tissue healing. M2-polarized macrophages have potential as adjuvants for anticancer therapies, and recent approaches focus on M2 polarization. Zhao et al. 36 demonstrated that Panax saponins promote M2 macrophage polarization, so that, owing to their anti-inflammatory effects, saponins play an important role in the treatment of vascular disease. Similarly, our findings demonstrate that saponins play an effective role in M2 macrophage polarization and may thereby induce a wound-healing effect. Conclusion This study evaluated the cytotoxic activities and macrophage polarization potential of four new oleanolic acid and hederagenin type triterpene saponins (1-4), named speciosides A-D, and their new prosapogenins (2a-4a). The 4 new saponins and 3 prosapogenins were tested for their cytotoxicity. Compounds 1, 2a and 4a exhibited significant cytotoxicity against A549 and MDA-MB-231 cells. All isolated new saponins (1-4) and their prosapogenins (2a-4a) induced M2 macrophage polarization. These results provide valuable information for the literature on the structure, cytotoxicity and immunomodulatory relationships of the studied saponins. Further studies should explore the activity-structure relationships to establish a model for activity-targeted synthesis and semi-synthesis. Moreover, additional research on the biological activities of these saponins for pharmaceutical and industrial applications is recommended. Table 1. 1H-NMR data for compounds 1-4. 1H-NMR data (δ) were measured in DMSO-d6 at 600 MHz. Coupling constants (J) in Hz are given in parentheses. The assignments are based on DEPT, COSY, HMQC and HMBC experiments. nd: not detected. Table 2. 13C-NMR data for compounds 1-4. 13C-NMR data were measured in DMSO-d6 at 150 MHz. The assignments are based on HSQC, COSY and HMBC experiments. nd: not detected.
3,754.8
2023-10-08T00:00:00.000
[ "Medicine", "Chemistry" ]
Distributed Consensus-Based Estimation with Random Delays In this chapter we investigate the distributed estimation of linear time-invariant systems with network-induced delays and packet dropouts. The methodology is based on local Luenberger-like observers combined with consensus strategies. Only neighbors are allowed to communicate, and the random network-induced delays are modeled as Markov chains. Then, sufficient and necessary conditions for the stochastic stability of the observation error system are established. Furthermore, the design problem is solved via an iterative linear matrix inequality approach. Simulation examples illustrate the effectiveness of the proposed method. Introduction The convergence of sensing, computing, and communication in low-cost, low-power devices is enabling a revolution in the way we interact with the physical world. The technological advances in wireless communication make possible the integration of many devices, allowing flexible, robust, and easily configurable wireless sensor networks (WSNs). This chapter is devoted to the estimation problem in such networks. Since sensor networks are usually large-scale systems, centralization is difficult and costly due to large communication costs. Therefore, one must employ distributed or decentralized estimation techniques. Conventional decentralized estimation schemes involve all-to-all communication [1]. Distributed schemes seem to fit better. In this class of schemes, the system is divided into several smaller subsystems, each governed by a different agent, which may or may not share information with the rest. There exists a vast literature studying distributed estimation for sensor networks in which the dynamics induced by the communication network (mainly time-varying delays and data losses) are taken into account [2][3][4][5][6][7][8][9][10]. Millan et al. [6] have studied the distributed state estimation problem for a class of linear time-invariant systems over sensor networks subject to network-induced delays, which are assumed to take values in [0, τ_M]. One of the constraints is the network-induced time delays, which can degrade the performance or even cause instability. Various methodologies have been proposed for modeling and stability analysis of networked systems in the presence of network-induced time delays and packet dropouts. The Markov chain can be effectively used to model the network-induced time delays in sensor networks. In Ref. [11], the time delays of networked control systems are modeled using Markov chains, and an output feedback controller design method is proposed. The rest of the chapter is organized as follows. In Section 2, we analyze the available delay information and formulate the observer design problem. In Section 3, the sufficient and necessary conditions that guarantee stochastic stability are presented first, and the equivalent LMI conditions with constraints are derived. Simulation examples are given to illustrate the effectiveness of the proposed method in Section 4. Notation: Consider a network with p sensors. Let υ = {1, 2, ⋯, p} be the index set of the p sensor nodes and ε ⊂ υ × υ be the link set of paired sensor nodes. Then the directed graph G = (υ, ε) represents the sensing topology. The link (i, j) implies that node i receives information from node j. The cardinality of ε is equal to l. Define q = g(i, j) as the link index.
N_i = {j ∈ υ | (i, j) ∈ ε} denotes the subset of nodes communicating with node i. Problem formulation Assume a sensor network is intended to collectively estimate the state of a linear plant in a distributed way. Every observer computes a local estimate of the plant's state based on local measurements and the information received from neighboring nodes. Observers periodically collect some outputs of the plant and broadcast some information about their own estimates. The information is transmitted through the network, so network-induced time delays and dropouts may occur. In this work, the system to be observed is assumed to be an autonomous linear time-invariant plant given by x(k + 1) = A x(k), y_i(k) = C_i x(k), i = 1, ⋯, p, where x(k) ∈ R^n is the state of the plant, y_i(k) ∈ R^{m_i} are the system's outputs, and p is the number of observers. Assume (A, C) is observable, where C = [C_1, ⋯, C_p]. Besides the system's output y_i(k), observer i receives some estimated outputs ŷ_ij(k) = C_ij x̂_j(k) from each neighbor j ∈ N_i. The matrix C_ij is assumed to be known to both nodes. Define C̄_i as a matrix stacking the matrix C_i and the matrices C_ij for all j ∈ N_i. It is assumed that (A, C̄_i) is observable for all i. Delays modeled by Markov chains The communication links between neighbors may be affected by delays and/or packet dropouts. The equivalent delay τ_ij(k) ∈ N (or τ_q(k), with q = g(i, j) ∈ {1, ⋯, l}) represents the time difference between the current time instant k and the instant when the last packet sent by j was received at node i. The delay includes the effects of sampling, communication delay, and packet dropouts. The number of consecutive packet dropouts and the network-induced delays are assumed to be bounded, so τ_ij(k) is also bounded. A Markov chain is a discrete-time stochastic process with the Markov property. One way to model the delays is to use finite-state Markov chains, as in Refs. [7][8][9]. The main advantage of the Markov model is that the dependencies between delays are taken into account, since in real networks the current time delays are usually related to the previous ones [8]. In this note, the delays τ_ij(k) (for all i and j ∈ N_i) are modeled as l different Markov chains taking values in W = {0, 1, ⋯, τ_M}, with transition probability matrices Λ_q = [λ_qrs], q = 1, 2, ⋯, l. That is, τ_ij(k) jumps from mode r to mode s with probability λ_qrs, where λ_qrs ≥ 0 and Σ_{s=0}^{τ_M} λ_qrs = 1 for all r, s ∈ W. Remark 1: In a real network, the network-induced delays are difficult to measure, so using a stochastic process to model them is more practical. For sensor networks, the communication links between different pairs of nodes are also different, so the data may experience different time delays. It is therefore more reasonable to model the delays with different Markov chains. Observation error system The structure of the observers described in the following is inspired by that given in Ref. [6]. To estimate the state of the plant, every node runs an estimator of the plant's state combining a local Luenberger-like correction, weighted by the matrices M_i, with consensus terms, weighted by the matrices N_ij, that take into account the information received from the neighboring nodes. The observation error of observer i is defined as e_i(k) = x̂_i(k) − x(k). From Eqs.
(1)-(5), the dynamics of the observation errors can be derived. Define e(k) = [e_1^T(k), e_2^T(k), ⋯, e_p^T(k)]^T and X(k) = [e^T(k), e^T(k−1), ⋯, e^T(k−τ_M)]^T; then we obtain the observation error system (7). The Π_q are block matrices corresponding to each of the links q connecting observer i with observer j, in which the only nonzero blocks are −N_ij C_ij and N_ij C_ij in the (i, i) and (i, j) positions, respectively. M = {M_i, i ∈ υ} and N = {N_ij, i ∈ υ, j ∈ N_i} are the observer matrices to be designed. Remark 2: The observation error system (Eq. (7)) depends on the delays τ_1(k), ⋯, τ_l(k). This makes the analysis and design more challenging. The objective of this note is to design the observers so as to guarantee the stochastic stability of Eq. (7). Numerical example Consider a two-node network with two links, one from node 1 to node 2 and the other from node 2 to node 1, observing a plant with given dynamics. The random delays are assumed to be τ_q(k) ∈ {0, 1} (q = 1, 2), with given transition probability matrices. Figure 1 shows part of a simulation run of the delay τ_2(k) governed by its transition probability matrix Λ_2. Conclusion This chapter addresses the problem of distributed estimation considering random network-induced delays and packet dropouts. The delays are modeled by Markov chains. The observers are based on local Luenberger-like observers and consensus terms that weight the information received from neighboring nodes. The resulting observation error system is a special discrete-time jump linear system. The sufficient and necessary conditions for the stochastic stability of the observation error system are derived in the form of a set of LMIs with nonconvex constraints. Simulation examples verify the effectiveness of the approach.
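To make the delay model concrete, the following Python sketch samples a delay sequence τ_q(k) from a finite-state Markov chain; the two-state transition matrix is a hypothetical illustration, since the matrices used in the chapter's example are not reproduced above.

import numpy as np

def simulate_markov_delays(transition, tau0, steps, seed=0):
    """Sample a delay sequence tau(k) from a finite-state Markov chain.
    transition[r, s] = P(tau(k+1) = s | tau(k) = r); each row must sum to 1."""
    rng = np.random.default_rng(seed)
    states = np.arange(transition.shape[0])  # delay values 0, 1, ..., tau_M
    tau = [tau0]
    for _ in range(steps - 1):
        tau.append(rng.choice(states, p=transition[tau[-1]]))
    return np.array(tau)

# Hypothetical two-state chain for tau_q(k) in {0, 1}
Lambda_q = np.array([[0.7, 0.3],
                     [0.4, 0.6]])
print(simulate_markov_delays(Lambda_q, tau0=0, steps=20))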
2,046.8
2017-02-01T00:00:00.000
[ "Mathematics" ]
In Vitro Glycoengineering of IgG1 and Its Effect on Fc Receptor Binding and ADCC Activity The importance and effect of Fc glycosylation of monoclonal antibodies with regard to biological activity are widely discussed and have been investigated in numerous studies. Fc glycosylation of monoclonal antibodies from current production systems is subject to batch-to-batch variability. If there are glycosylation changes between different batches, these changes are observed not only for one but for multiple glycan species. Therefore, studying the effect of distinct Fc glycan species such as galactosylated and sialylated structures is challenging due to the lack of well-defined differences in the glycan patterns of the samples used. In this study, the influence of IgG1 Fc galactosylation and sialylation on its effector functions has been investigated using five different samples which were produced from one single drug substance batch by in vitro glycoengineering. This sample set comprises preparations with minimal and maximal galactosylation and different levels of sialylation of fully galactosylated Fc glycans. Among other enzymes, Roche developed the glycosyltransferase sialyltransferase, which was used for the in vitro glycoengineering activities at medium scale. A variety of analytical assays, including Surface Plasmon Resonance and recently developed FcγR affinity chromatography, as well as an optimized cell-based ADCC assay, were applied to investigate the effect of Fc galactosylation and sialylation on the in vitro FcγRI, IIa, and IIIa receptor binding and ADCC activity of IgG1. The results of our studies show no impact, neither positive nor negative, of sialic acid-containing Fc glycans of IgG1 on ADCC activity or on binding to the FcγRI and FcγRIIIa receptors, but a slightly improved binding to FcγRIIa. Furthermore, we demonstrate a galactosylation-induced positive impact on the binding activity of the IgG1 to the FcγRIIa and FcγRIIIa receptors and on ADCC activity. Introduction Glycosylation of therapeutic proteins is crucial for their biological activity, as has been identified previously [1]. Glycosylation profiles vary depending on, for example, the production cell type used, the fermentation process, or even the production scale [2,3]. Variability in glycan patterns based on manufacturing variability has also been described for marketed antibody products [4,5]. This variability might be even more pronounced during the development of monoclonal antibodies owing to the multiple changes implemented during process optimization. The impact of non-fucosylated complex-type Fc glycans on the effector function of monoclonal antibodies has been shown in different publications [6][7][8][9]. For galactose, the effects are discussed controversially on the basis of the different studies available. Several reports conclude that different galactosylation levels do not influence ADCC activity [10][11][12]. However, a positive correlation between galactosylation and FcγRIIIa binding has also been observed in multiple studies [13,14]. Terminal sialic acid has been shown to influence Fcγ receptor binding and anti-inflammatory activity [15] or antibody-dependent cellular cytotoxicity [16,17] through reduced binding of sialylated antibody to FcγRIIIa. However, there are also studies showing no influence of sialic acid on the FcγR interactions [18,19]. Investigation of glycan structure-function relationships is highly dependent on well-defined differences between samples. Optimally, there should be variation in the level of only one glycan species (e.g.
galactose) between the investigated samples, whereas the levels of all other glycan species should remain constant (e.g. afucose, mannose). This might be one reason for the contradictory results of previous studies, where samples have been used from different batches or after fractionation. In this study we started with one single batch of IgG1 and modified the glycan structures using glycoenzymes, the so-called in vitro glycoengineering (IVGE) approach. Using IVGE, a sample itself might still exhibit glycan heterogeneity, but selective changes can be introduced, e.g. conversion from low levels to high levels of galactose. Different groups have already employed this technique, which emerged in recent years and is still under development. Different approaches are possible using specific enzymes called glycosyltransferases. One strategy is to transfer an entire glycan structure to the antibody backbone. In this case, the glycan tree has to be available as an oxazoline and the receiving protein needs to host the core N-acetyl glucosamine (GlcNAc) at the respective N-glycan site [20]. However, this technique is not very common since neither oxazoline-derivatized sugars nor the specific enzymes are easily available. Another strategy is treatment of glycan structures from their terminal ends. Cleavage of terminal glycans can easily be achieved by use of glycosidases such as sialidase or galactosidase. More difficult is the addition of terminal sugar moieties such as sialic acid or galactose. Prerequisites are the availability of activated sugars (e.g. CMP-NANA, UDP-Gal) and specific enzymes (e.g. sialyl- or galactosyltransferase); such ingredients have not been reliably available in the past. This might be one reason why the in vitro glycoengineering approach was not broadly applied in the pharmaceutical industry, even though this technique has been used for more than a decade for different, mostly analytical, purposes [21][22][23][24][25]. One advantage of in vitro glycoengineering is its independence from the production cell line and the production process. Thus, glycan variants of a therapeutic protein can easily be produced at milligram or gram scale for analytical assays, or even at kilogram scale for commercial application, in a relatively short time and with relatively low development effort. To better understand the influence of the most common terminal sugar moieties of monoclonal antibody Fc glycans, galactose and sialic acid, on Fc functionality, we decided to investigate this for a monoclonal IgG1 expressed in CHO cells in-house at Roche. The chosen IgG1, directed against a receptor of the EGFR family, has fully active effector function and ADCC as part of its mode of action. We applied the in vitro glycoengineering technique to selectively change the content of terminal galactose and/or sialic acid of Fc glycans. A comprehensive set of analytical methods was used to confirm changes in the glycan pattern, to show molecule integrity, and to study the impact of the different glycan variants on the Fc functionality of the antibody. Glycan analysis was performed using 2-aminobenzamide (2-AB) labeling of released glycans with HPLC separation and fluorescence detection. Molecule integrity was checked by size exclusion chromatography and LCMS peptide mapping [26]. Several binding properties of the molecule were analyzed with state-of-the-art and newly developed characterization assays. Binding to FcγRI, IIa, and IIIa was assessed by Surface Plasmon Resonance (SPR) technology.
As an orthogonal method, recently developed FcγRIIa and IIIa affinity chromatography was applied. Additionally, a fully quantitative NK-cell-based in vitro ADCC assay [27] was included in the study. Preparation and analysis of molecular integrity of different glycan variants Starting from one single batch of IgG1, different glycan variants were produced by IVGE as described in the Materials and Methods section. Fig 1 depicts the workflow for the stepwise approach of glycan variant sample preparation. Four glycan variants were produced from the starting material: a hypo-galactosylated variant, i.e. IgG1 comprising predominantly G0F Fc glycan species, a hyper-galactosylated variant (predominantly G2F), a mono-sialylated variant (predominantly G2S1F), and a di-sialylated variant (increased level of G2S2F) (see also Table 1 for an illustration of the glycan structures). The glycosylation profile was determined by HPLC analysis after cleavage of the glycans from the protein backbone and subsequent 2-AB labeling. As shown in Table 1, the starting material (or "bulk", the material obtained from the production process after regular fermentation and purification steps) consists mainly of fucosylated glycan species with no or one single terminal galactose unit. By application of the in vitro glycoengineering technique, about 85% purity was achieved for the hypo- and hyper-galactosylated variants without influencing the levels of high-mannose or afucose (Table 1). Taking into account that some glycan species cannot be modified with the enzymes used (e.g. mannose 5), this represents the maximal possible change in galactosylation. For the mono-sialylated variant, a high level of 46.7% of the G2S1F glycan species was achieved. It can be noted that the sialic acid was primarily added to the α1,3-arm using a Roche in-house human α2,6-sialyltransferase (ST6) variant (data not shown). This is in accordance with a previous publication where preferential sialylation of the α1,3-arm was observed after treatment with human ST6 [28]. For the di-sialylated variant, an additional 30% of the G2S2F glycan species was added on top of the existing level of G2S1F. Potential amino acid degradation sites were monitored by LCMS tryptic peptide mapping in order to assess the impact of glycan variant sample preparation on molecular properties such as target or receptor binding. As shown in Table 1, the different sample preparation steps have a minor impact on asparagine deamidation and isoaspartic acid formation in the target binding regions; the observed differences in degradation are low (<10%). Molecular integrity of the samples was verified by size-exclusion chromatography, as depicted in Table 1. In general, longer sample preparation times resulted in slightly reduced monomer content. As a consequence, slightly increased amounts of high- and/or low-molecular-weight species were observed, resulting in a monomer content varying between 95.6% and 99.3% in the different samples. Fcγ receptor I, IIa, and IIIa interaction by SPR analysis The binding activities of the IgG1 glycovariants to the respective Fc-gamma receptors were determined by surface plasmon resonance (SPR). Here, the His-tagged Fc-gamma receptors (FcγRI, FcγRIIa_R131, and FcγRIIIa_V158) were captured via an anti-His antibody and immobilized on the chip surface. Subsequently, glycovariants in equal concentrations (300 nM) were injected as analytes. Deglycosylated IgG1 was used as a negative control in all SPR measurements.
This variant failed to bind to all tested FcγRs except the high-affinity FcγRI (data not shown), which is consistent with previous reports. Comparison of binding to the high-affinity FcγRI showed similar interactions for all glycovariants tested (Fig 2A), whereas hypo-galactosylated material binds to FcγRIIa and FcγRIIIa with binding efficiencies between 65 and 75% (Fig 2B and 2C) compared to the bulk material. In contrast, di-galactosylated glycovariants (the hyper-galactosylated, mono- and di-sialylated variants) appear to have increased binding to FcγRIIa and FcγRIIIa (106-140%). Sialylation does not seem to impair binding to FcγRs; the FcγRIIa interaction even appears to be slightly improved for the sialic acid-bearing variants. Interestingly, in contrast to previously reported results [15,17], our data do not demonstrate a decreased interaction between FcγRIIIa and the sialic acid variants of Fc glycans of the IgG1. Fcγ receptor IIa and IIIa affinity chromatography In addition to SPR analysis, the interaction of the glycovariants with FcγRIIa and FcγRIIIa was addressed employing recently developed FcγR affinity chromatography. This analytical method is used as an orthogonal method to SPR analysis, allowing assessment of antibody-receptor interaction in a qualitative manner. For these assays, human FcγRIIa_R131 or FcγRIIIa_V158 is immobilized on the column (similar to FcRn affinity chromatography [29]). Antibody samples are loaded onto the column and eluted by a linear pH gradient from pH 8.0 to 4.0 or pH 6.0 to 3.0, respectively. Comparison of retention times enables analysis of the interaction between the applied antibodies and the immobilized receptors, where improved binding correlates with longer retention times and vice versa. In addition, FcγRIIIa affinity chromatography enables separation of fucosylated and afucosylated antibody species. The latter were shown to have up to 50-fold higher affinity to FcγRIIIa [30], which in this case is reflected by longer retention times. Similar to the results obtained by SPR, di-galactosylated variants show stronger interaction with the FcγRIIa on the affinity column. The effect was further enhanced by the presence of sialic acid, as demonstrated by increased retention times for these variants compared to bulk material, whereas binding of the hypo-galactosylated variant appeared to be weaker (Fig 3). However, results from sialylated samples may only reflect a tendency, since we currently have only limited experience with this assay. Deglycosylated antibody used as a negative control eluted immediately upon loading, before the pH gradient was started (data not shown). As already mentioned, all tested glycan variants have similar levels of afucosylation in the range of 7-9%. Chromatographic profiles of the tested glycovariants show two peaks (Fig 4A), corresponding to two antibody species: fucosylated (main peak with retention times of 19-22 minutes) and afucosylated (small peak with retention times between 28 and 32 minutes). Comparison of the retention times of the fucosylated glycovariants demonstrated a binding pattern similar to that obtained with the SPR analysis. The hypo-galactosylated variant showed the weakest receptor interaction, revealing lower binding efficiency compared to the bulk material (Fig 4B and 4C). Di-galactosylated variants, independent of the presence of terminal sialic acid, demonstrated a shift to longer retention times and thus stronger binding.
Consistent with the reported findings, afucosylated antibodies show delayed retention times and thus improved binding affinities. Interestingly, the retention patterns of afucosylated antibodies and fucosylated variants show the same tendency: di-galactosylated antibodies show stronger binding compared to bulk material, and the opposite is true for the remaining glycovariants. These results indicate that, despite the prominent effect caused by the absence of fucose, the remaining sugar residues might still contribute to the antibody-receptor interaction. The newly developed FcγRIIIa columns are a powerful tool to separate fucosylated and non-fucosylated antibodies by their binding affinity to FcγRIIIa. In addition to the effect of non-fucosylated antibodies, the FcγRIIIa column also separates the glycan variants prepared in this study with a clear correlation to the ADCC and SPR data. ADCC assay ADCC activity was measured using an improved cell-based assay employing Natural Killer cell lines engineered to express the high-affinity variant of FcγRIIIa (V158) [27]. This assay addressed both binding to target cells via the CDR region and activation of FcγRIIIa-positive effector cells via the Fc region of the antibody. ADCC activities were determined relative to a reference material using full-curve parallel line analysis [31]. The hypo-galactosylated variant showed a decreased relative ADCC activity of 66%. A relative ADCC activity of 118% was obtained for the hyper-galactosylated variant (Fig 5). Comparison of the bulk material to the hypo- and hyper-galactosylated variants reveals that the absence of galactose has a negative impact on ADCC, whereas the presence of two galactose units has a positive effect on ADCC activity. This is in accordance with the observations made for the FcγRIIIa analysis by SPR technology and by the Fcγ receptor column assay. ADCC mediated by the glycan species G2S1F (104% relative ADCC) and G2S2F (115%) is not significantly different from G2F (118%). Thus, the addition of sialic acid has no impact on ADCC activity. Summary of results Using the IVGE approach, five samples were generated, differing substantially in Fc glycan composition. The impact of the different glycan variants on Fc functionality was investigated and is summarized in Table 2. Galactose and sialic acid have no influence on FcγRI binding. Independently of the analytical method used (SPR or affinity chromatography), galactosylation as well as sialylation tend to increase the binding to FcγRIIa_R131. In all three methods (SPR, column assay, and ADCC), the presence of terminal galactose increases FcγRIIIa_V158 binding and consequently ADCC activity, whereas a change in terminal sialic acid content has no influence. Discussion To our knowledge, this is the first time such a glycan variant sample set has been generated for a monoclonal antibody. Employing the in vitro glycoengineering technique, defined differences in the Fc glycosylation of the samples were achieved. In the past, the limited availability of glycosyltransferases such as α2,6-sialyltransferase (ST6) and β1,4-galactosyltransferase (GalT), and of activated sugars (UDP-Gal, CMP-NANA), might have been the main challenge for the IVGE approach. Meanwhile, activated sugars, ST6, and GalT are commercially available at Roche; α2,3-sialyltransferase (ST3) is currently in development. The IVGE approach offers new opportunities in sample preparation.
However, as shown in this study, the generated samples might not be as completely homogeneous as might be achievable with engineered cell lines. Nonetheless, IVGE allows the preparation of well-defined samples in sufficient amounts in a relatively short period of time compared to cell line engineering. Furthermore, IVGE is independent of the production cell line and the production process. The mAb glycovariants in this study were investigated with regard to Fc functionality by a broad panel of analytical assays (Table 2). The obtained results are partly in accordance with and partly in contrast to previous studies. With regard to galactosylation, the results of our study are in agreement with those from Houde et al. and Kumpel et al. [13,14], who also observed a positive correlation of galactose with FcγRIIIa binding and ADCC activity. Houde et al. [13] evaluated the effect of differential galactosylation and afucosylation of IgG1 on FcγRIIIa binding. In their studies, the level of galactose was varied while keeping the afucose level constant. In those experiments, galactosylation has, next to afucosylation, a clear impact on FcγRIIIa binding. Kumpel et al. [14] also observed a positive contribution of increased galactosylation to ADCC using a sample set with a difference of about 26% in the glycan species G0F. Nevertheless, our results are in contrast to those obtained by Boyd et al., Hodoniczky et al., and Shinkawa et al. [10][11][12], where no impact of galactosylation on ADCC activity was reported. A possible explanation might reside in the use of Peripheral Blood Mononuclear Cells (PBMCs), which might less reliably detect and quantify the small differences induced by changes in galactose levels compared to the cell-line-based assay used in our study. Furthermore, changes in afucose levels might have overlapped and thus superimposed the actual impact of galactose, especially since a less pronounced effect of galactose compared to afucose has been observed in the past [30]. Possibly, the difference in galactose levels between hypo- and hyper-galactosylated samples was also too small to measure an effect with the functional assays used. We observed neither a positive nor a negative impact of sialic acid on FcγRI or FcγRIIIa_V158 binding, or on FcγRIIIa_V158-based ADCC activity, and only a small impact on FcγRIIa_R131 binding (Figs 2 and 5), even though we were able to change the level of sialylation by about, or even more than, 47%. The direct involvement of IgG sialylation in the FcγRII interaction remains unresolved. The inhibitory receptor FcγRIIb is structurally closely related to the activating receptors FcγRI-III [32]. Therefore, a differentiation towards FcγRIIb based on sialylation is very unlikely, and a direct involvement in an anti-inflammatory response is probably not correlated with FcγRIIb binding but rather with an alternative receptor [33]. The theoretical mechanisms of an anti-inflammatory response are intensely discussed [34]. For instance, in humans, DC-SIGN-positive dendritic cells or macrophages were reported to detect sialic acid-rich IgGs in IVIG preparations [35]. Boyd et al. [10] also observed no impact of sialylation on ADCC activity. However, the difference in sialic acid levels between the samples tested was lower than 47% in their case. In contrast, other publications describe a negative impact of increased levels of sialylation on FcγRIIIa binding and ADCC activity [15][16][17].
It is not always clear whether the glycan patterns of the samples used in previous studies differed in glycan species other than sialic acid, since complete glycan analysis has not been shown after sample preparation steps such as lectin fractionation [15]. In such cases, it is not clear whether the levels of afucose or high-mannose, which contribute to FcγRIIIa binding and ADCC activity, might also have changed and therefore might have affected the results and conclusions of those studies. Scallon et al. [17] observed reduced ADCC activity, but no change in FcγRIIIa binding, when the level of sialylation in monoclonal antibodies (mAbs) produced by SP2/0 cells was increased by up to 62%. The question arose whether the location of the sialic acid residue could be of significance (α1,3 arm vs. α1,6 arm). Furthermore, SP2/0 cells produce N-glycolyl-neuraminic acid (NGNA) instead of N-acetyl-neuraminic acid (NANA, SA), which might also have an influence on binding activity and account for the observations in their study. Overall, in our study, we were able to produce samples with significant differences in the galactose and sialic acid levels of the Fc glycans of an IgG1. This was achieved using the in vitro glycoengineering approach, a technique which has not been used extensively in the past. We carefully monitored changes in glycosylation patterns as well as molecular integrity before and after in vitro glycoengineering of an individual IgG1 batch prior to functional analysis. With regard to galactosylation, we observed an impact on Fcγ receptor binding and ADCC activity. However, this impact is of much lower magnitude than the contribution from afucose. Furthermore, our sample contained primarily di-galactosylated (G2F) Fc glycans. The effect of mono-galactosylated (G1F) glycans remains to be clarified. Concerning sialic acid, numerous questions remain open when comparing the results of our study with previous ones, where contrary results were obtained. Those conflicting observations might need further clarification and might be the subject of future investigations. For example, is there a different impact of NGNA vs. NANA, does the location of sialic acid residues play a role (α1,3 arm vs. α1,6 arm), or is there a difference between terminal α2,3- and α2,6-linked sialic acid units? Galactosyl- as well as sialyltransferase are now reliably available at Roche Custom Biotech and can easily be applied to further investigate the impact of these Fc glycans using additional IgG antibodies in future studies. Materials and Methods Hyper-galactosylated variant. One gram of IgG1 (c = 25 mg/ml) was mixed with 157 ml of reaction buffer (10 mM UDP-Gal, 20 mM MnCl2, 100 mM MES pH 6.5). The β(1-4)-galactosyltransferase (G5507-25U, Sigma) was diluted with dH2O to a concentration of 10 U/ml. 4.6 ml of galactosyltransferase was added to the sample at the start of the incubation, and an additional 2.3 ml was added after 2 and 3 days. The total incubation time was 4 days at 32°C. The sample was purified by Protein A chromatography. The final concentration was 8.1 mg/mL. Di-sialylated variant. To further increase the degree of sialylation, 100 mg of the mono-sialylated sample was mixed with 3 ml of the aqueous CMP-NANA solution and 10 mg of ST6-Variant2 (Roche, Cat. No. 07012250103; preferentially di-sialylated glycans are obtained). The sample was incubated for 7 hours at 37°C with subsequent Protein A purification. The final concentration was 3.8 mg/mL.
Size exclusion chromatography Size exclusion chromatography was performed under isocratic conditions for 40 minutes at a flow rate of 0.5 ml/min using running buffer (200 mmol/l K3PO4, 250 mmol/l KCl, pH 7.0). One hundred fifty microliters of sample at a concentration of 1 mg/ml were injected and the UV chromatograms recorded at a wavelength of 280 nm. LCMS peptide mapping Proteolytic pH 6 digest. For the detection and quantification of Asn deamidation, Asp isomerization, and Met oxidation at the peptide level, the samples were denatured in 0.2 M His-HCl, 8 M Gua-HCl, pH 6.0 by diluting 350 μg of mAb in a total volume of 300 μl. For reduction, 10 μl of 0.1 g/ml dithiothreitol was added, followed by incubation at 50°C for 1 hour. Next, the buffer solution was exchanged for digestion buffer (0.02 M His-HCl, pH 6.0) using a NAP5 gel filtration column (GE Healthcare, Buckinghamshire, UK). Subsequently, the NAP5 eluate (500 μl) was mixed with 10 μl of a 0.25 mg/ml trypsin solution (Trypsin Proteomics grade, Roche, Penzberg, Germany) in 10 mM HCl and incubated at 37°C for 18 hours. Analysis of proteolytic peptides by liquid chromatography-mass spectrometry (LC-MS). The tryptic peptide mixture was separated by RP-UPLC (ACQUITY, Waters, Manchester, UK) on a C18 column (BEH C18 1.7 μm 2.1 x 150 mm; Waters, Manchester, UK), and the eluate analyzed online with a Synapt G2 electrospray mass spectrometer (Waters, Manchester, UK). The mobile phases consisted of 0.1% formic acid in water (solvent A) and 0.1% formic acid in acetonitrile (solvent B). The chromatography was carried out using a gradient from 1 to 35% solvent B in 45 minutes and finally from 35 to 80% solvent B in 3 minutes, using a flow rate of 300 μl/min. UV absorption was measured at a wavelength of 220 nm. 3.5 μg of digested protein was applied. The UPLC system and mass spectrometer were connected by PEEK capillary tubes. Data acquisition was controlled by MassLynx V4.1 software (Waters, Manchester, UK). Parameters for MS detection were adjusted according to general experience available from peptide analysis of recombinant antibodies. Data analysis for the quantification of deamidation/isomerization/oxidation levels and the level of afucosylation. Peptides of interest were identified manually by searching their m/z values within the experimental mass spectrum. For the quantification, specific ion current (SIC) chromatograms of peptides of interest were generated on the basis of their monoisotopic mass and detected charge states using GRAMS AI software (Thermo Fisher Scientific, Dreieich, Germany). Relative amounts of Asn deamidation, Asp isomerization, and Met oxidation were calculated by manual integration of modified and unmodified peptide peaks. The level of afucosylation was calculated by manual integration of afucosylated and fucosylated glycan species. Expression and Purification of Fcγ Receptors The human FcγRI, FcγRIIa_R131 and FcγRIIIa_V158 receptors used in this work were expressed in-house, in HEK293F cells for FcγRI and FcγRIIa_R131 (transient expression) and in CHO cells for FcγRIIIa_V158. The CHO DG44 cell line was stably transfected [36].
Purification of the receptors was achieved by affinity chromatography using Ni Sepharose High Performance material (GE Healthcare, Munich, Germany), followed by elution with 300 mM imidazole (Sigma, Munich, Germany) in the case of FcγRI and FcγRIIa_R131 and 100 mM imidazole in the case of FcγRIIIa_V158, and a size exclusion chromatography on a Superdex 200 26/60 column (GE Healthcare, Munich, Germany) with PBS pH 7.4 (FcγRI, FcγRIIa_R131) or 2 mM MOPS, 150 mM NaCl, 0.02% Tween 20 pH 7.0 buffer (FcγRIIIa_V158). Fcγ receptors were biotinylated using the biotinylation kit from Avidity according to the manufacturer's instructions (Bulk BIRA, Avidity LLC, Denver, CO, USA) and dialyzed at 4°C overnight to remove excess biotin. The product quality was characterized by standard methods. The glycosylation pattern of FcγRI and FcγRIIa is not relevant for the antibody interaction and was not addressed in detail. Analysis of the glycosylation pattern of FcγRIIIa was described previously [36]. Surface plasmon resonance (SPR) for FcγRI, FcγRIIa, and FcγRIIIa binding analysis SPR interaction analysis was performed on a Biacore T200 system (GE Healthcare). For interaction analysis of the FcγRs and IgG1 glycovariants, an anti-His capturing antibody (GE Healthcare) was injected to achieve a level of 12,000 resonance units (RU). Immobilization of the capturing antibody was performed on a CM5 chip using the standard amine coupling kit (GE Healthcare) at pH 4.5. FcγRIa, FcγRIIa, and FcγRIIIa were captured at a concentration of 200 nM with a pulse of 60 seconds at a flow rate of 10 μl/min. Subsequently, the IgG1 glycovariants were applied at a concentration of 300 nM and a flow rate of 30 μl/min for 60 seconds. The dissociation phase was monitored for 180 seconds. The surface was regenerated by a 60-second washing step with 10 mM Glycine, pH 1.5 at a flow rate of 30 μl/min. All experiments were carried out in HBS-N buffer (10 mM HEPES, pH 7.4, 150 mM NaCl). The Biacore T100 evaluation software 2.0.3 was used for data evaluation. The SPR IgG-Fcγ receptor data were evaluated in a non-kinetic analysis mode. The interaction of the IgG samples was monitored at one fixed concentration with n = 3. For the evaluation, a report point was set at the end of the association phase, and the values of this report point were used for the evaluation of relative differences. The determination of the relative activity is, from our point of view, more relevant for the IgG-FcγR interaction, because this interaction does not follow a clear 1:1 Langmuir binding model and therefore the binding differences are not adequately assessed by that kind of evaluation. FcγRIIa and IIIa affinity chromatography Biotinylated human FcγRIIa_R131 or FcγRIIIa_V158 was incubated with streptavidin sepharose for 2 hours with mild shaking. The receptor-derivatized sepharose was packed in a Tricorn 5/50 column housing (inner diameter 5 mm, length 50 mm, GE Healthcare), and the affinity column was equilibrated with 20 mM Tris, 150 mM NaCl pH 8.0 (FcγRIIa) or 20 mM Sodium Citrate, 150 mM NaCl pH 6.0 (FcγRIIIa) at a flow rate of 0.5 ml/min using an Äkta explorer or Dionex Summit system. The antibody samples, containing 50 to 100 μg in equilibration buffer, were applied to the respective column. Subsequently, the columns were washed with 5 column volumes of equilibration buffer. The samples were eluted with a linear pH gradient of 15 column volumes using 20 mM Citrate, 150 mM NaCl, pH 4.0 (FcγRIIa) or pH 3.0 (FcγRIIIa), respectively.
The experiments were carried out at room temperature. The elution profile was obtained by continuous measurement of the absorbance at 280 nm. Cell-based ADCC activity assay Cells. Recombinant Natural Killer effector cells expressing human FcγRIIIa_V158 were generated and cultured as previously described [27]. Target cells were purchased from the American Type Culture Collection. Antibody-dependent cellular cytotoxicity (ADCC). Target cells, expressing a receptor of the EGFR family, were labeled with an acetoxymethyl ester of a fluorescence-enhancing ligand (BATDA ligand, Perkin Elmer) according to the recommendations of the supplier. Effector cells and labeled target cells were mixed in growth medium at a ratio of 5:1 and distributed in 96-well microplates. Antibody samples were diluted in growth medium, and 100 μl were added to 100 μl of the effector-target cell mix. Assay plates were incubated in a humidified incubator at 37°C / 5% CO2 for three hours. After incubation, the plates were centrifuged and 20 μl of supernatant from each well were transferred to a white 96-well plate. A highly fluorescent and stable chelate was obtained by complexing the labeling agent released from the lysed target cells: 200 μl of europium solution (Perkin Elmer) were added to each well, followed by a short incubation on a plate shaker. Controls for spontaneous and maximum release were prepared following the instructions of the supplier of the BATDA ligand (Perkin Elmer). Time-resolved fluorescence, which correlates with the amount of lysed cells, was measured in RFU (Relative Fluorescence Units) using excitation at 345 nm and emission at 615 nm. Readings were performed on a Spectramax M5 plate reader (Molecular Devices). Specific toxicity values were calculated as follows: 100 × (specific release − spontaneous release) / (maximum release − spontaneous release). For the relative ADCC calculation, % specific toxicity was plotted against antibody concentration, and relative ADCC was determined by full-curve parallel line analysis [31].
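As a minimal sketch of the calculation described above, the Python snippet below converts raw fluorescence readings into percent specific toxicity; the RFU values shown are hypothetical, and the full-curve parallel line analysis step is not reproduced.

import numpy as np

def specific_toxicity(release, spontaneous, maximum):
    """Percent specific lysis from time-resolved fluorescence readings (RFU):
    100 x (specific release - spontaneous release) / (maximum release - spontaneous release)."""
    return 100.0 * (np.asarray(release) - spontaneous) / (maximum - spontaneous)

# Hypothetical RFU readings across an antibody dilution series
rfu = [5200.0, 8100.0, 12400.0, 15900.0]
print(specific_toxicity(rfu, spontaneous=3000.0, maximum=18000.0))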
6,918.8
2015-08-12T00:00:00.000
[ "Biology" ]
The curious case of (caffeine)·(benzoic acid): how heteronuclear seeding allowed the formation of an elusive cocrystal † Electronic supplementary information (ESI) available: computational studies, cocrystal-screening, crystallographic, thermal and spectroscopic data; crystallographic data in CIF format. See DOI: 10.1039/c3sc51419f. Cite this: Chem. Sci., 2013, 4, 4417. Introduction In many areas of applied solid-state research, including the pharmaceutical field, it is crucial that a compound in development be prepared and characterized in the largest possible number of solid forms (e.g. polymorphs, salts, cocrystals). 1 The identification of a wide range of solid forms improves the likelihood of identifying solids with optimal physicochemical properties, 2,3 and simultaneously maximizes patent protection opportunities. 4,5 For those industries focused on crystalline solids, this approach may alleviate the risk of encountering new solid forms of a compound after substantial investments in product development and marketing. 6 Consequently, extensive screening for alternative solid forms is seen as a vital step in any solid materials development program. The risks inherent in the development and marketing of organic solids in the pharmaceutical arena are well illustrated by the widely known cases of the HIV protease inhibitor Ritonavir 7 (Norvir) and the dopamine agonist Rotigotine 8 (Neupro). A second polymorph of Ritonavir was found two years after Norvir was placed on the market. It transpired that this unexpected second polymorph was thermodynamically more stable than the marketed one, 9 rendering the initial polymorph unobtainable for a substantial period and thus necessitating its temporary removal from the market. 7 Rotigotine, which was used for the treatment of Parkinson's disease, was administered through a skin patch to minimize the unpleasant side effects of the drug. 8 Soon after Neupro was released on the market in 2006, a previously unknown and thermodynamically more stable polymorph started to emerge in the Neupro skin patches. 10 The unexpected appearance of this new polymorph drastically reduced the efficacy of the patches, and led to a temporary withdrawal of the drug from the market. The appearance of a previously unknown polymorph, accompanied by the complete disappearance of the initially observed form, has been reported before. 11 Such disappearing polymorphs may be recovered by the tedious determination of the precise crystallisation conditions that led to their formation in the first place. 12 In addition, computational crystal structure predictions, 13 as well as knowledge-based hydrogen-bond prediction, 14 can be utilized to assess the likelihood that additional polymorphs may exist, and to evaluate relative polymorph stability. Despite these possible solutions, the prospect of encountering new and unanticipated solid forms that might lead to the withdrawal of a product from the market, even for a short period of time, still remains daunting.
While the inability to constrain the formation of new polymorphs can have disastrous consequences for a marketed drug, unsuccessful attempts to prepare a multi-component solid (e.g. salt or cocrystal15) can raise the question of whether the solid is actually obtainable in the first place. In particular, since the crystallization of such a solid competes with the crystallization of the pure components or their solvates/hydrates, it can be quite difficult to estimate whether the formation of the crystallized solid is even thermodynamically favorable. Unobtainable (or at least apparently "hidden"16) solid forms with potentially desirable properties are problematic for multiple reasons: for example, an initial failure to obtain a cocrystal might give competing interests a market entry if the synthesis of this "hidden" solid was subsequently accomplished.6 Crystal engineering and cocrystallization (i.e. the formation of multi-component crystalline solids) have recently emerged as important tools for the development of functional organic materials17-29 and for maximum patent protection.4,5 A burgeoning area in this field is the development of efficient and rapid cocrystal screening techniques. Techniques are sought to assure cocrystal formation, assuming that a cocrystal phase exists under the desired conditions (e.g. standard ambient conditions). Numerous screening methods (mechanochemical,30,31 thermal,32,33 and solution-based34) and crystallization techniques (such as heteronuclear seeding35,36) have been applied to cocrystal screening. These methods are generally used in an integrated manner to prepare the maximum number of crystal forms of a given molecule. The field, however, still lacks a general methodology that would allow and ensure the crystallization of all designed solid forms predicted to be attainable.

Here, we present an approach that uses pre-designed heteronuclear seeds to facilitate the synthesis of the elusive cocrystal 1 based on caffeine (caf) and benzoic acid (BA) in a 1 : 1 ratio. Despite the components' apparent propensity for self-assembly,37,38 1 has failed to form in the past,39,40 nor was it prepared by our persistent cocrystallization attempts in four laboratories that involved the use of various established cocrystallization methods. Following a computational study using crystal structure prediction methods, which showed that formation of 1 is thermodynamically favorable, the predicted cocrystal was ultimately obtained by heteronuclear seeding, using structurally related cocrystals, consisting of caf and fluorobenzoic acids (FBA), as seeds (Fig. 1). The adoption of such a seeding strategy41 was motivated by the successful use of crystal-structure prediction in the synthesis of a previously unobserved polymorph of a pharmaceutical compound.42 It is particularly noteworthy that, after our initial success with seeding, 1 then remained attainable for several months in all four laboratories where these experiments were conducted, without the need for the deliberate introduction of the seeds.
The ndings presented here have signicant implications for solid-state and materials science, as well as for pharmaceutical and biomedical researchers.Specically, this work suggests that elusive cocrystals, sometimes rashly regarded as nearly unobtainable (or even nonexistent under desired conditions) because of their presumed unfavorable lattice energies, might actually be readily available using designed heteronuclear seeds.Our ndings also highlight the failings in our current understanding of the nucleation process of cocrystals, and the consequences of such decient knowledge for progress in materials development. The work presented here describes the efforts of four laboratories to prepare 1, a solid of pharmaceutical interest whose synthesis has been previously unsuccessfully attempted by others during the last sixty years. 39,40Our interest in 1 is threefold.First, it arises from the sunscreen effect of physical mixtures of caf and sodium benzoate (SB). 43Second, equimolar solutions of caf and SB were found to facilitate electroconvulsive therapy, 44 and used to treat postdural puncture headaches and migraine attacks. 45In such solutions, SB (or alternatively, citric acid) is added to enhance the solubility of caf and to maintain the solution as sterile.Notably, the presence of undissociated BA in the caf solution might be more desirable (as BA is a more effective antimicrobial agent for preservation purposes than the dissociated SB 46 ), further driving efforts to prepare 1.Third, our interest in 1 was also triggered by our previous ndings in related studies that caf invariably cocrystallizes with a broad range of carboxylic acids, and that cocrystal 1 was the only cocrystal that remained elusive. 37,38,47,48perimental section Crystal structure prediction sample the degrees of freedom that dene the crystal structure (unit cell parameters, molecular positions and orientations).All trial structures were built from one caf and one BA molecule in the asymmetric unit.Molecular structures were kept rigid at this stage, at geometries obtained from isolated molecule optimization at the B3LYP/6-311G** level of theory, using Gaussian03. 50Intermolecular interactions were modeled using an exp-6 repulsion-dispersion model and an atomic partial charge electrostatic model.The FIT parameter 51 set was used for repulsion-dispersion parameters and atomic partial charges were calculated by tting to the B3LYP/6-311G** molecular electrostatic potential, using the CHelpG scheme of tting points.A total of 1.6 million trial structures were generated and energy minimized during the initial search.In the second step, rigid-molecule lattice energy minimizations were performed on approximately 13 000 of the lowest energy structures with an improved intermolecular model potential, using the same exp-6 repulsion-dispersion parameters as in the structure generation step, now coupled with an atomic multipole model for electrostatic interactions, deriving multipoles up to hexadecapole on each atom from a distributed multipole analysis (DMA) 52 of the B3LYP/6-311G** charge densities.These crystal structure calculations were performed using DMACRYS. 
All intermolecular interactions were summed to a 30 Å cutoff, apart from charge-charge, charge-dipole and dipole-dipole interactions, for which Ewald summation was applied. The resulting structures were then clustered using the COMPACK algorithm to remove duplicate structures, leaving 83 distinct crystal structures in a 10 kJ mol⁻¹ energy range from the global energy minimum. In the third step, the 83 crystal structures were re-optimized, allowing the optimization of the caf methyl group orientations and of the dihedral angle between the phenyl ring and the COOH plane in BA. These optimizations were performed using the Crystal Optimizer code,53 which finds a minimum in the total energy, calculated as the sum of intermolecular (FIT + DMA) and intramolecular (B3LYP/6-311G**) energies, calling DMACRYS51 for crystal structure calculations and Gaussian03 (ref. 50) for molecular calculations. Finally, all structures obtained in the third step were subjected to a final rigid-molecule lattice energy minimization using the same exp-6 repulsion-dispersion parameters, but with the DMA of each molecule calculated from a charge density computed with B3LYP/6-311G** within a PCM54 representation of its solid-state environment, which has been shown to effectively model the influence of charge density polarization on relative lattice energies.55 The PCM dielectric was chosen as ε = 3.0, which is a typical value for the organic solid state. The same lattice energy minimization steps described in the third and fourth steps were applied to ordered models of caf and BA, as described in the main text.

Cocrystal screens

The cocrystal screens were conducted in four laboratories under various conditions and using several screening methods, including neat grinding31 (NG), liquid-assisted grinding31 (LAG), sonic slurry30 (SS) and solution-mediated phase transformation34 (SMPT). The obtained solids were characterized using powder X-ray diffraction and/or Raman spectroscopy. Experimental details and summaries of results obtained from each laboratory are provided in the ESI.†

Cocrystal characterisation

The prepared cocrystals were characterised using powder X-ray diffraction (PXRD), infrared spectroscopy, thermogravimetric analysis, differential scanning calorimetry, Raman spectroscopy and, in some instances, 15N cross-polarisation magic-angle spinning nuclear magnetic resonance spectroscopy (CP-MAS NMR). Experimental details are given in the ESI.† The crystal structures of the cocrystals were determined by ab initio crystal structure solution by simulated annealing and Rietveld refinement using powder X-ray data obtained from a laboratory powder diffractometer. Experimental details and Rietveld plots for all cocrystal structures are also available in the ESI.†

Results and discussion

The preparation of cocrystal 1 was attempted in independent trials by groups from the University of Cambridge, AbbVie Inc., the University of Zagreb and the University of Iowa, using several highly reliable cocrystal screening methods, viz. LAG,31 NG,31 SS30 and SMPT.34 These well-established screening methods are trusted to facilitate cocrystal formation, provided that such a cocrystal phase exists. In the SMPT and SS methods, for example, the potential for cocrystal formation is maximized once the activities of both components are kept at values higher than their respective critical values, which is certainly accomplished in suspensions of physical mixtures of the cocrystal components.34
The use of ultrasound in the SS method is also known to facilitate cocrystal nucleation. The mechanochemical methods (i.e. LAG and NG), on the other hand, enable the formation of amorphous phases or eutectic mixtures through the local heating induced by colliding mill balls.31 Despite the use of reliable screening methods, however, all cocrystallization attempts failed in all four laboratories over several years, and the negative outcome was largely attributed to "crystal packing effects", i.e. the inability of the two components to pack together in a sufficiently stable crystal lattice.56 Considering that caf and BA form complexes in solution,57 as well as the fact that caf is known to readily form cocrystals with a broad variety of benzoic acid derivatives,58 it was intriguing that such complex formation did not occur in the crystalline solid state.

If it were able to form a solid-state complex, the complex between caf and BA was felt more likely to be a cocrystal, rather than a salt. This assumption was based primarily on the ΔpKa value of the caf : BA pair (i.e., ΔpKa < −3.5, see Table S2 in ESI†), which clearly suggests the formation of a cocrystal.59 To assess the probability that such a cocrystal is stable, global lattice energy minimization calculations were performed to generate the possible low-energy structures of 1. A 1 : 1 stoichiometry was assumed and crystal structures were generated using quasi-random structure generation in 15 commonly observed space groups, each with Z′ = 1 (i.e. one caf and one BA per asymmetric unit). The most promising trial cocrystal structures generated from an initial search were further optimized in several stages, allowing for molecular flexibility within each structure, applying anisotropic models for interatomic interactions and treating polarization of the molecular electron density by the solid-state environment. Energies and volumes of the resulting predicted cocrystal structures are summarized in Fig. 2a.

To a first approximation, the thermodynamic driving force for cocrystallization can be assessed by comparing the calculated energies of the putative cocrystal structures with those of the pure crystalline components; this approach has been previously applied to predicting cocrystal formation, stoichiometry and structure.60,61 In the present study, the known crystal structure of BA, and an ordered version of the thermodynamically stable form of caf at room temperature (Lehmann and Stowasser's Z′ = 5 ordered structure of β-caf62), were lattice energy minimized using the same methods as used in the cocrystal predictions. The calculated energy of this ordered model of β-caf is an approximation of the lattice energy of the orientationally disordered crystal structure, and we estimated an uncertainty in the lattice energy of ±1 kJ mol⁻¹ from the spread in calculated lattice energies of the 512 unique possible orientational configurations reported by Habgood.63 Disorder in the crystal structures of pure caf and BA provides an entropic contribution to the stability of the pure forms, which, for β-caf, was taken from Habgood63 as −TS ≈ −1 kJ mol⁻¹ at room temperature, and was estimated as S_disorder ≈ R ln 2 for BA, due to the two occupied configurations of the carboxylic acid group in the hydrogen-bonded dimers in crystalline BA.
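To make the bookkeeping in this comparison concrete, the following Python sketch evaluates the driving force for cocrystallization under the stated entropy corrections. The lattice energies are placeholders (the paper reports its computed values only graphically in Fig. 2a); only the −TS terms follow the text.

```python
import numpy as np

R, T = 8.314462618e-3, 298.15   # gas constant (kJ mol^-1 K^-1), room temperature (K)

# placeholder lattice energies in kJ mol^-1 (illustrative values only)
E_cocrystal = -215.0            # per caf:BA pair in the predicted cocrystal
E_caf, E_BA = -110.0, -92.0     # ordered beta-caf model and crystalline BA

# entropic stabilization of the disordered pure forms, as in the text:
TS_caf = 1.0                    # ~1 kJ mol^-1 at room temperature (Habgood, ref. 63)
TS_BA = R * T * np.log(2)       # S ~ R ln 2 for the two COOH configurations

# driving force: negative values favor cocrystallization
dE = E_cocrystal - (E_caf + E_BA) - (TS_caf + TS_BA)
print(f"-TS(BA) ~ {TS_BA:.2f} kJ/mol; driving force ~ {dE:.1f} kJ/mol")
```

With these placeholder energies, the BA disorder term is about 1.7 kJ mol⁻¹ and the driving force comes out near −16 kJ mol⁻¹, of the same order as the stability margin quoted in the text.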
Even aer accounting for uncertainties in the calculated lattice energies, and for the entropic contributions to the stability of the pure forms, we found about 70 computergenerated cocrystal structures that were predicted to be energetically preferable to the pure single-component crystal structures (Fig. 2a).The lowest energy predicted cocrystal structure was shown to be more than 10 kJ mol À1 more stable than the pure components.We concluded from these computational studies that caf and BA are able to form a stable cocrystal that, on thermodynamic grounds, should be realizable.Furthermore, we were able to predict the likely primary interaction: all but one of the 70 lowest-energy predicted cocrystal structures exhibit carboxylic-acid : imidazole O-H/N hydrogen bonding (the lowest-energy predicted cocrystal structures are available as ESI †).The hydrogen-bonded molecules are nearly co-planar in the lowest-energy cocrystal structure, which is sustained by C-H/O and p/p interactions (Fig. 2b). Parallel to the computational study, the Cambridge group commenced a structural study of related cocrystals, in order to identify heteronuclear seeds that could potentially facilitate the growth of 1.This study attempted the cocrystallization of caf with structural isomers of mono-, di-, tri-, tetra-and penta-uorobenzoic acids (FBAs).Heteronuclear seeding has previously been used to control polymorphism in metal-organic and organic compounds, as well as in cocrystals. 35,36,42,64Our work, however, applied this technique to facilitate the synthesis of multicomponent solids that could not be obtained otherwise. The working hypothesis for this study was that the formation of 1 is presumably hindered by a high kinetic barrier 65 and that such a high barrier may be overcome by introducing a heteronuclear seed that matches the target cocrystal structurally or epitaxially. 66,67Specically, it was proposed that uorinated benzoic acids (FBAs) would likely form cocrystals based on molecular assemblies that are similar in size and shape to those present in 1.Such an analogous assembly could be rationalized by the relatively small size difference between hydrogen and uorine atoms (van der Waals radii: 120 pm vs. 147 pm, 68 respectively). 69ndeed, it was found that 17 (out of 19) isomers of FBAs form cocrystals with caf. 70All the experimentally obtained cocrystal structures were solved using powder diffraction data collected on a laboratory diffractometer.The structural studies showed that all (caf)$(FBA) cocrystals are based on caf : FBA complexes being sustained by the same hydrogen-bonding motifs that are found in the predicted structure of 1, namely imidazole-carboxylic-acid heterosynthons utilizing O-H/N hydrogen bonds (Fig. 3).Out of these cocrystals, four (i.e.cocrystals involving 2-uorobenzoic (2FBA), 2,3-diuorobenzoic (23diFBA), 2,5-diuorobenzoic (25diFBA) and 2,6-diuorobenzoic (26diFBA)) were found to be isomorphous with the lowest-energy predicted crystal structure of 1 (i.e. the structures display the same space group and unit-cell dimensions, as well as the same atom types and the positions, except for an interchange of hydrogen and uorine within the different cocrystals). 
Interestingly, once the (caf)·(FBA) cocrystals were synthesized in the Cambridge laboratory, and before the actual seeding experiments began there, the Cambridge group became unable to replicate the previously negative results concerning the cocrystallization of caf and BA. Specifically, a new crystal phase unexpectedly became achievable in every crystallization attempt, as evidenced by powder diffraction studies and thermal analyses (see Fig. S1, ESI†). This new crystal phase was subsequently identified as the lowest-energy predicted form of 1 (Fig. 4a-c) using crystal structure solution from powder X-ray diffraction, natural abundance 15N CP-MAS NMR, and thermogravimetric analysis.

The cocrystal was found to crystallize in the monoclinic P2₁/c space group with one molecule of caf and BA in the asymmetric unit interacting via the imidazole-carboxylic-acid heterosynthon [d(O···N) = 2.663(7) Å] (Fig. 4a). Assemblies of caf : BA form head-to-tail-oriented pairs that further stack in an offset manner (offset: ~7 Å) held together by π···π interactions. The stacks interact via weak C-H···π forces to form molecular sheets exhibiting a herringbone motif (Fig. 4b). The sheets form a 3D lattice held together by C-H···π interactions. Furthermore, 15N CP-MAS NMR was utilized to determine whether the basic N(imidazole) atom of caf was protonated by BA. Specifically, it was shown that the solid-state 15N NMR spectrum exhibits a signal at a high frequency (i.e. 194 ppm; Fig. S16 in ESI†) that is consistent with the presence of an unprotonated N(imidazole) atom in the studied solid.71 Considering that protonated N(imidazolate) atoms exhibit peaks at lower-frequency chemical shifts (i.e. 120-140 ppm),48 it was concluded that caf and BA crystallize as a cocrystal (rather than a salt), as expected. Thermogravimetric analysis was utilized to determine that the investigated caf : BA phase consists solely of caf and BA (Fig. S18, ESI†).

One possible explanation for the unexpected formation of 1 was that the crystallization vessels, either batch of caf or BA, or the environment in general, were contaminated with the synthesized (caf)·(FBA) cocrystals, thus seeding the growth of the target cocrystal. Surprisingly, 1 was found to form even after numerous thorough laboratory and equipment cleanups, and the use of newly purchased glassware and batches of caf and BA. More specifically, the laboratory benches were thoroughly scrubbed using bleach and ethanol, whereas the milling jars were cleaned using concentrated solutions of strong inorganic acids and bases (i.e. HCl and NaOH, respectively). New crystallization vials and milling balls were used in each experiment, and all solvents used were heated and subsequently filtered through syringe filters with polyvinylidene fluoride membranes displaying 0.2 μm pores. The very strong seeding effect of either (caf)·(FBA) crystals or crystals of 1, however, persisted even one year after the seeding experiments were conducted, thus preventing comprehensive studies of the nucleation process and the seeding mechanisms in the Cambridge laboratories.

Fig. 3 X-ray crystal structures of (caf)·(FBA) cocrystals used as heteronuclear seeds for the crystallization of 1 (black labels). Structures shown in the first row are isomorphous with the lowest-energy predicted crystal structure of 1 (red label), unlike the structures shown in the second row. Each cocrystal system is represented with a depiction of the caf : FBA (or caf : BA) assembly (top) and their crystal packing diagrams (bottom). Colour scheme for the crystal packing diagrams: caf - blue, BA - red.
Seeding experiments were also performed at AbbVie Inc. after earlier attempts to prepare 1 without the use of heteronuclear seeds had failed. The initial "seedless" cocrystal screens were based on the NG, LAG and SMPT techniques, whereby ethanol, acetonitrile and nitromethane were used as liquids/solvents. After attempts to prepare the target cocrystal failed, four cocrystals that were isomorphous with the lowest-energy predicted target cocrystal (i.e. (caf)·(2FBA), (caf)·(23diFBA), (caf)·(25diFBA) and (caf)·(26diFBA)) were used as heteronuclear seeds, to successfully accomplish the synthesis of 1 (Fig. 4d).

The AbbVie group subsequently performed another series of screening experiments, this time without the use of seeds, in order to determine the occurrence of possible laboratory contamination and unintentional seeding, as seen in the Cambridge laboratory. Indeed, a LAG experiment confirmed that 1 could now be obtained without the deliberate use of seeds. Another SMPT experiment was conducted using a physical mixture of caf and BA that had been stored in a rubber-septum-sealed vial before any heteronuclear cocrystal seeds were introduced to the AbbVie laboratory. The seedless SMPT cocrystal screen was initialized by injecting acetonitrile through the rubber septum into the solid mixture. In situ Raman spectroscopy measurements showed the presence of a physical mixture in the sealed vial even after an appreciable amount of time (i.e. 19 h). During the isolation of the physical mixture by ultracentrifugation, however, it was observed that the physical mixture converted almost immediately into 1 (as evident from a visible change in the particle morphology, Raman spectroscopy data (Fig. 4e) and PXRD data (Fig. S4†)), thus demonstrating that seemingly minuscule amounts of contaminants of either 1 or a (caf)·(FBA) cocrystal in the laboratory are capable of inducing a phase transformation in extremely short periods of time. It is to be further noted that additional seeding experiments involving two cocrystal seeds that are not isomorphous with the target cocrystal, namely (caf)·(3FBA) and (caf)·(2345tetraFBA), have also successfully led to the formation of the cocrystal (Fig. 4d).
Interestingly, the (caf)·(BA) cocrystal was obtainable without the use of seeds even more than one year after the initial seeding experiments were conducted.

Original efforts to prepare 1 at the University of Zagreb also failed. Both FBA and (caf)·(FBA) seeds were thus utilised to facilitate nucleation and growth of the target cocrystal (see Fig. S7, ESI†). The use of crystals of pure FBAs as seeds was attempted to determine whether a heteronuclear seed could be formed in situ through addition to a caf : BA physical mixture, while the non-isomorphous (caf)·(FBA) cocrystals were selected as seeds to explore the role of epitaxy in the formation of 1.67 Specifically, 3FBA, 2345tetraFBA, (caf)·(2345tetraFBA) and (caf)·(3FBA) were initially used to attempt the formation of the target cocrystal, using LAG as the crystallization method. PXRD studies showed that all four experiments yielded the target cocrystal. These findings indicate that: (a) the cocrystal seed and the cocrystal target do not have to be isomorphous, again pointing to the possible role of epitaxy in the nucleation of 1, and (b) the heteronuclear seed could be generated in situ by addition of the cocrystal former (i.e. FBAs) to the reaction vessel. Crystallization experiments wherein 2FBA, (caf)·(2FBA), 23diFBA, (caf)·(23diFBA), 25diFBA, (caf)·(25diFBA), 26diFBA and (caf)·(26diFBA) were utilized as seeds also resulted in the formation of the target cocrystal, as expected. Notably, the target cocrystal was obtainable without the use of seeds for up to six weeks after the earlier seeding experiments were performed; thereafter, cocrystallization attempts involving caf and BA alone began to fail again.

All results obtained by the Iowa group were consistent with the ones gathered in Cambridge, Zagreb and at AbbVie: cocrystal 1 could only be obtained after the crystallization experiments were aided by the use of (caf)·(FBA) seeds (see Fig. S8, ESI†). Specifically, (caf)·(2FBA) was found to facilitate the quantitative formation of the target cocrystal in an SMPT experiment using acetonitrile as solvent. While 2FBA, 23diFBA, 25diFBA and 26diFBA led to cocrystal formation during LAG experiments performed by the Zagreb group, the same FBAs were not found to seed cocrystal formation during SMPT-based screens at Iowa. This was attributed to the inability of the FBA to form (caf)·(FBA) seeds under the screening conditions, owing to the complete solubilisation of the FBA in the solvent.

Finally, given the remarkable sensitivity of the formation of 1 to the presence of minuscule amounts of contaminants in the laboratory environment, it was unclear whether the testing laboratories were suitable for the evaluation of the seeding ability of any (caf)·(FBA) cocrystal seed. To determine whether each of the studied (caf)·(FBA) cocrystals indeed acts as a seed for the target cocrystal, the AbbVie group performed a series of control experiments.
Physical mixtures of caf and BA (stored in rubber-septum-sealed vials before the initial heteronuclear cocrystal seeding experiments were conducted) were subjected to an SMPT cocrystal screen by injecting a small amount of acetonitrile through the rubber septum into the vial. In situ Raman spectroscopy measurements showed that no cocrystal formation occurred even after 112 h of slurrying. Suspensions containing small amounts of one of the cocrystals that are isomorphous with 1, namely (caf)·(2FBA), (caf)·(23diFBA), (caf)·(25diFBA) and (caf)·(26diFBA), were then injected into the suspension of the physical mixture. In situ Raman spectroscopy measurements revealed that the physical mixture converted to the cocrystal within a few minutes post-injection, thus proving that each of the utilised (caf)·(FBA) cocrystals can seed the formation of the target cocrystal. The outcomes of control experiments involving the non-isomorphous (caf)·(3FBA) and (caf)·(2345tetraFBA) were not conclusive and could not be used to determine whether these cocrystals can or cannot seed the formation of the target cocrystal.

To verify that FBA cocrystal formers can also be utilised to facilitate the formation of 1, a suspension of 25diFBA crystals was injected into the vial with the physical mixture of caf and BA. The formation of 1 was observed only after 45 minutes. The longer period required to achieve the crystallisation of the target compound is consistent with our hypothesis that the addition of the FBA cocrystal former first leads to the in situ formation of (caf)·(FBA) cocrystal seeds, which then facilitate the formation of 1.

Conclusion and outlook

That the cocrystal 1 failed to form, even though a stable cocrystal form was predicted to exist, is possibly attributable to a kinetic barrier that hinders the nucleation and growth of the thermodynamically stable target cocrystal. The results presented herein illustrate that elusive multi-component crystal forms can be obtained using cocrystals based on structurally similar cocrystal formers. The results also demonstrate the utility of crystal structure prediction calculations in assessing the likelihood of cocrystal formation. In addition, the results clearly demonstrate that current cocrystal-screening methods need to be improved in order to eliminate the occurrence of false-negative results in cocrystal screens, which can seriously impede the development of medicines and functional materials (e.g. cocrystals and coordination compounds with relevant electronic or catalytic properties). Further studies will focus on elucidating the cause of the problematic nucleation of 1, as well as on the development of a thorough understanding of the presumed epitaxial growth of 1 on the surfaces of the (caf)·(FBA) cocrystals.

Fig. 2 (a) Calculated lattice energies of the predicted cocrystal structures of 1, compared to the sum of the pure-component lattice energies (horizontal black dashed line) and the sum of lattice energies and entropy resulting from orientational disorder in pure caf (blue dashed line) and pure BA (the shaded areas represent uncertainties in calculated energies due to the disorder); (b) crystal packing in the P2₁/c lowest-energy predicted cocrystal structure, viewed approximately down the ab diagonal.

Fig. 4
(a) X-ray crystal structure of a caf : BA assembly in 1; (b) X-ray crystal structure of a molecular sheet in 1 sustained by π···π and C-H···π interactions (inset depicts two sheets stacked in an offset manner); (c) overlay of the predicted (red) and observed (blue) crystal structures of 1; (d) PXRD traces of solids obtained in cocrystal screens with and without seeds at AbbVie Inc. [black: β-caf; grey: BA; red, from bottom to top: physical mixtures of caf and BA obtained in cocrystallization attempts via LAG using nitromethane as liquid, LAG using ethanol as liquid, LAG using acetonitrile as liquid, NG, SMPT using nitromethane as solvent, SMPT using ethanol as solvent, SMPT using acetonitrile as solvent; green, from bottom to top: 1 obtained via LAG using nitromethane as liquid and (caf)·(2FBA) as seed, LAG using nitromethane as liquid and (caf)·(23diFBA) as seed, LAG using nitromethane as liquid and (caf)·(26diFBA) as seed, LAG using nitromethane as liquid and (caf)·(3FBA) as seed, LAG using nitromethane as liquid and (caf)·(2345tetraFBA) as seed; blue: 1 obtained via LAG without the deliberate use of a seed in the (caf)·(FBA)-contaminated laboratory (utilising nitromethane as liquid)]; (e) Raman spectra of a caf : BA physical mixture before and after being exposed to a seed-contaminated atmosphere [from bottom to top: β-caf (black), BA (black), physical mixture of caf and BA sonicated for 5 min (red), physical mixture of caf and BA sonicated for 5 min and subsequently slurried for 19 hours (red), caf : BA physical mixture from a sealed screening vial that converted to 1 upon exposure to the seed-contaminated atmosphere (green), 1 (black)].
Extensions of the SVM Method to the Non-Linearly Separable Data

The main aim of the paper is to briefly investigate the most significant topics of the currently used methodologies for solving and implementing SVM-based classifiers. Following a brief introductory part, the basics of the linear SVM and non-linear SVM models are exposed in the next two sections. The problem of the soft margin SVM is exposed in the fourth section of the paper. The currently used methods for solving the resulting QP problem require access to all labeled samples at once, and the computation of an optimal solution is of complexity O(N²). Several approaches have been proposed to reduce the computational complexity, such as the interior point (IP) methods, decomposition methods such as Sequential Minimal Optimization (SMO), and gradient-based methods for solving the primal SVM problem. Several approaches based on genetic search for solving the more general problem of identifying the optimal type of kernel from a pre-specified set of kernel types (linear, polynomial, RBF, Gaussian, Fourier, B-spline, spline, sigmoid) have also been recently proposed. The fifth section of the paper is a brief survey of the most outstanding new techniques reported so far in this respect.

Introduction

Support Vector Machines (SVMs) belong to the class of most effective and popular classification learning tools [1], [2]. The learning problem for SVMs can be briefly described as follows. Let us denote by S a system of unknown input-output dependency, the unknown dependency being of deterministic/non-deterministic, linear/non-linear type. Besides, it is possible that the output is influenced by the observable input as well as by a series of unobservable latent factors. Given the lack of information about the input-output dependency of S, the most reasonable modeling should be in probabilistic terms. Unfortunately, in real-world problems, there is no information about the underlying joint probability distribution corresponding to the (possibly) non-linear dependency y = f(x) between the high-dimensional space of inputs x and the output space of S.
The estimates of the unknown input-output dependency are obtained by a supervised distribution-free method, on the basis of a finite-size training set consisting of input-output pairs of observations. The SVM methods belong to the classification class in the sense that the output space is a two-valued domain, conventionally denoted by {−1, 1}. Accordingly, an SVM can be viewed as a classifier discriminating between inputs coming from two classes, and the training set corresponds to a sequence of labeled inputs. In spite of the fact that the statistical machine learning community initially believed that the SVM approaches were mostly of theoretical value, the developments based on SVMs proved significant qualities from an applicative perspective. So far, a tremendous volume of effort has been invested in research concerning SVMs, leading to a long list of publications in this area. From a mathematical point of view, the core problem of learning an SVM is a quadratic programming (QP) problem [1], [3]. The research in the SVM area has focused mainly on designing fast algorithms for solving the QP optimization problem, refining the concepts aiming to extend SVMs to discriminating between non-separable classes, and developing mixture models resulting from combining the SVM with boosting-type techniques [4].
The Basic Linear SVM Model

Let us assume that the inputs of S are represented by the values of n pre-specified attributes, that is, the input space can be taken as Rⁿ; therefore the sequence of observations on the input-output dependency of S can be represented as S = {(x_i, y_i), 1 ≤ i ≤ N}, where for each pair, y_i ∈ {−1, 1} is the output of S as the response to the input x_i. We say that S is linearly separable if there exists a hyperplane that correctly separates the positive inputs from the negative ones. Obviously, since S is a finite set, if it is linearly separable, then the family of correctly separating hyperplanes is infinite. From an intuitive point of view, given that the only information concerning the unknown input-output dependency of S is represented by the finite set S, in order to assure good generalization capacities the hyperplane should be as equidistant as possible to the positive and negative examples. The linear SVM implements a linearly parameterized classification decision rule, corresponding to a hyperplane almost equidistant to the subsamples labeled 1 and −1 respectively. The classification decision rule is given by h_{w,b}(x) = sign(⟨w, x⟩ + b), where the parameters (w, b) should be such that the hyperplane of equation ⟨w, x⟩ + b = 0 separates the positive and the negative training examples from S with the largest "gap" between them (optimal margin linear classifier). From a mathematical point of view, an optimal margin classifier is a solution of the quadratic programming (QP) problem [1]

(1) min_{w,b} (1/2)‖w‖², subject to y_i(⟨w, x_i⟩ + b) ≥ 1, 1 ≤ i ≤ N.

The dual problem of (1) is a QP problem in the multipliers α = (α_1, ..., α_N):

(2) max_α Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j ⟨x_i, x_j⟩, subject to α_i ≥ 0 and Σ_i α_i y_i = 0.

If α* is a solution of (2), then the optimal value of the parameter w is w* = Σ_i α_i* y_i x_i. If α_i* > 0 for some i, then x_i is called a support vector. The bias term b cannot be determined by solving the SVM problem (1) alone; a convenient choice of b is expressed in terms of the support vectors and w*, since, according to the Karush-Kuhn-Tucker (KKT) complementarity conditions (3), y_i(⟨w*, x_i⟩ + b) = 1 holds for every support vector x_i.

The Non-Linear SVM

Usually, in real-world problems, there is not enough evidence to set suitable models for the classes of interest, the whole information concerning them being contained in the set of samples, and it is either very difficult or even impossible to check whether S is linearly separable. Moreover, even when S happens to be linearly separable, there is no reason to assume that the provenance classes are also linearly separable. Consequently, in case the provenance classes are not linearly separable, the use of any classification decision rule based on a linear-type approach would lead to poor results when classifying new test data.
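A minimal sketch of how the dual problem (2) can be solved numerically with an off-the-shelf QP solver is given below; it assumes the cvxopt package and linearly separable data, and is an illustration rather than the paper's own implementation:

```python
import numpy as np
from cvxopt import matrix, solvers

def train_hard_margin_svm(X, y):
    """Solve the hard-margin dual (2) as a QP: min (1/2) a'Qa - 1'a,
    subject to a >= 0 and y'a = 0, where Q_ij = y_i y_j <x_i, x_j>."""
    N = X.shape[0]
    Yx = y[:, None] * X
    P = matrix(Yx @ Yx.T)                        # Q
    q = matrix(-np.ones(N))
    G = matrix(-np.eye(N))                       # -a <= 0, i.e. a >= 0
    h = matrix(np.zeros(N))
    A = matrix(y.reshape(1, -1).astype(float))   # sum_i a_i y_i = 0
    b = matrix(0.0)
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])
    w = (alpha * y) @ X                          # w* = sum_i a_i* y_i x_i
    sv = alpha > 1e-6                            # support vectors: a_i* > 0
    b_opt = np.mean(y[sv] - X[sv] @ w)           # KKT: y_i(<w*, x_i> + b) = 1 on SVs
    return w, b_opt, sv
```

The bias is recovered from the KKT conditions exactly as described in the text, averaged over the support vectors for numerical stability.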
In order to cope with such a possibility, a non-linear transform of the given data onto a new space is hoped to provide more information about the provenance classes; the parameters of a classification decision rule would then be better tuned to separate the data coming from these classes, the ideal case being to find a non-linear transform such that in the new space the classes are linearly separable. Obviously, given the finite-type description of the classes represented by S, it is impossible to guarantee that the classes are indeed linearly separable in the new space; at most we could hope that S becomes linearly separable. In such a case, the main problem is to formulate an option concerning the functional expression of a particular non-linear transform without significantly increasing the computational complexity. From a mathematical point of view, the non-linear transform is a vector-valued function g: Rⁿ → H, the image of S in the space H being given by the set of new representations of the given data, g(S) = {(g(x_i), y_i), 1 ≤ i ≤ N}. The transform g is referred to as a feature extractor, and H is called the feature space, its dimension being not necessarily finite.

Assuming that g(S) is at least "almost linearly separable", it appears quite natural to use a linear classifier in the feature space, the separating surface between the images of the provenance classes being a hyperplane of equation ⟨w, z⟩ + b = 0, z ∈ H. Consequently, we get a non-linear classifier of a particular type, where the decision rule combines a parameterized expression of linear type with a non-linear dependency on the values of the initial attributes, of the form h_{w,b}(x) = sign(⟨w, g(x)⟩ + b).

The performance of the resulting classifier is essentially determined by the quality of the feature extractor g, the main problem becoming the design of a particular informational feature extractor. Another problem is related to the computational complexity involved in the estimation of the classifier parameters and the classification of new data. The "kernel trick" provides a solution to these problems. It consists in selecting a function K that "covers" the explicit functional expression of g, so that the evaluation of h_{w,b} is performed exclusively in terms of K. Since g is "hidden" by K, the resulting feature space cannot be explicitly known, and its dimension may even be infinite. The core result in approaches of this type is the celebrated theorem due to Mercer [3], [5]. According to this result, if K is a continuous symmetric function satisfying a set of quite general additional conditions, then the existence of a function g such that K(x, x′) = ⟨g(x), g(x′)⟩ holds for any x, x′ is guaranteed. Some of the most frequently used kernels are presented in Table 1.
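The kernel trick can be illustrated directly: the Gram matrix of a Mercer kernel on any finite sample must be symmetric positive semi-definite, while the induced feature map g never needs to be built. A small numpy sketch (illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    """Gaussian (RBF) kernel: K(x, x') = exp(-gamma * ||x - x'||^2).
    The feature space it induces is infinite-dimensional, yet evaluating
    K requires only the original inputs."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Mercer's condition in practice: Gram matrices are PSD on any finite sample
X = np.random.default_rng(0).normal(size=(20, 3))
K = rbf_kernel(X, X)
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10   # PSD up to numerical noise
```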
According to the developments presented in the previous section, if α* is a solution of (5), then the optimal parameter is w* = Σ_i α_i* y_i g(x_i). Note that although apparently the parameters depend on the hidden feature extractor g, the resulting classifier is based exclusively on the values of the particular selected kernel: h(x) = sign(Σ_i α_i* y_i K(x_i, x) + b*).

Soft Margin SVM

The aim of the developments presented in this section is to present a modified approach for cases when the particular kernel fails to extract enough information from the data to discriminate without errors between the positive and negative examples, that is, when g(S) is not linearly separable in H. In such a case, we could search for a classifier h_{w,b} that classifies the data at least "as correctly as possible". This idea can be formulated in mathematical terms as follows. Let g be a particular feature extractor, and let the classification errors be measured through slack variables ξ_i ≥ 0, penalized through tF(ξ_i), where F is a convex and monotonically increasing function and t > 0 is a weight parameter. An optimality criterion can be expressed in terms of an objective function that additively combines ‖w‖² with the overall effect of the errors, for instance (1/2)‖w‖² + t Σ_i F(ξ_i). In this case an optimal classifier (w*, b*) is a solution of the constrained QP problem

(8) min_{w,b,ξ} (1/2)‖w‖² + C Σ_i F(ξ_i), subject to y_i(⟨w, g(x_i)⟩ + b) ≥ 1 − ξ_i, ξ_i ≥ 0,

where C is a conventionally selected constant. Unfortunately, stated in this general form, problem (8) cannot be solved; however, for particular functional expressions of F and the weight parameter t, its solution can be computed explicitly. The simplest model uses F(u) = u and t = 1, problem (8) becoming the constrained QP problem

(9) min_{w,b,ξ} (1/2)‖w‖² + C Σ_i ξ_i, subject to y_i(⟨w, g(x_i)⟩ + b) ≥ 1 − ξ_i, ξ_i ≥ 0.

Using similar arguments as in the case of (1), the dual QP problem of (9) is

(10) max_α Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j), subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0.

The parameters of an optimal hyperplane are obtained as before: if α* is a solution of (10), then w* = Σ_i α_i* y_i g(x_i), and the expression of the decision function of the classifier is given by (7).
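In practice, the soft-margin problem (9)-(10) is rarely solved by hand; a hedged sketch using scikit-learn's SVC (whose C parameter plays the role of the constant C in (9)) on synthetic data:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small C tolerates margin violations (large slack xi_i, many support
# vectors); large C approaches the hard-margin classifier.
for C in (0.1, 1.0, 100.0):
    clf = SVC(kernel="rbf", C=C, gamma=1.0).fit(X_tr, y_tr)
    print(f"C={C:6.1f}  support vectors={clf.n_support_.sum():3d}  "
          f"test accuracy={clf.score(X_te, y_te):.3f}")
```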
Methods of Learning Parameters of SVMs

As pointed out in the previous sections, support vector machines are a class of linear or kernel-based binary classifiers that attempt to maximize the minimal distance between each member of the class and the separating surface. In most cases, the task of learning a support vector machine is cast as a constrained quadratic programming problem. The currently used methods for solving the resulting QP problem require access to all labeled samples at once, and the computation of an optimal solution is of complexity O(N²). Several approaches have been proposed to reduce the computational complexity, such as the interior point (IP) methods [6], decomposition methods such as Sequential Minimal Optimization (SMO) [7], and gradient-based methods for solving the primal SVM problem. These methods exhibit a convergence rate independent of the number of samples, which is particularly useful in the case of large datasets. A long series of generalizations and improvements has also been proposed recently by many authors. For instance, in [8] a parallel version of SMO is proposed to accelerate SVM training. Also, boosting algorithms were proved to be closely related to the primal formulation of the SVM [9]. IP methods cast SVM learning, formulated as a QP problem subject to linear constraints, by replacing the constraints with a barrier function, yielding a sequence of unconstrained problems that can be optimized efficiently using Newton or quasi-Newton methods. To overcome the quadratic memory requirement of IP methods, several decomposition methods such as SMO [7], SVMlight [10] and SVM-Perf [11] switch to the dual representation of the SVM QP optimization problem and employ an active set of constraints, thus working on a subset of dual variables. The algorithms belonging to this family are fairly simple to implement and enjoy good asymptotic convergence properties, but the time complexity is typically superlinear in the training set size N. Moreover, since decomposition methods aim to maximize the dual objective function, they often result in a rather slow convergence rate to the optimum of the primal objective function. The SMO algorithm [7] allows solving the SVM-QP dual problem without extra matrix storage. The idea is to use Osuna's theorem [12] for decomposing the overall QP problem into smaller QP sub-problems, the smallest-size optimization problem being solved at each step. Unconstrained gradient methods were very common in solving optimization problems until the emergence of the ultra-fast IP methods. While gradient-based methods are known to exhibit slow convergence rates, the computational demands imposed by large-scale classification and regression problems with high-dimensional feature spaces revived the theoretical and applied interest in gradient methods. A refined method combining a gradient ascent algorithm with a decomposition scheme including heuristic parameters for solving the dual problem of the non-linear SVM was introduced in [13], [14]. The proposed refinement consists of the use of heuristically established weights in correcting the search direction at each step of the learning algorithm that evolves in the feature space. The use of weights is justified by the idea of getting a better tuning effect for the particular training sequence. The tests pointed out good convergence properties and, moreover, the proposed modified variant proved higher convergence rates as compared to Platt's SMO algorithm. The main
objectives of the research were to evaluate the influence of the magnitude of the exponential RBF kernel parameter on the number of iterations required to obtain significant accuracy, as well as on the magnitude of the inter-sample distance and on the sample variability and separability degrees. The experimental analysis also aimed to derive conclusions on the recognition rate as well as on the generalization capacities. All linear classifiers proved almost equal recognition rates and generalization capacities, the difference being given by the number of iterations required for learning the separating hyperplanes.

The tests pointed out that the variation of the recognition rates also depends on the inner structure of the classes from which the learning data come, as well as on their separability degree. Consequently, the results are encouraging and entail future work toward extending these refinements to multi-class classification problems and approaches in a fuzzy-based framework. The Pegasos (Primal Estimated sub-GrAdient SOlver for SVM) algorithm [15] is an improved stochastic sub-gradient method that uses fixed-size subsamples of the training set to compute an approximate sub-gradient; two concrete algorithms closely related to the Pegasos algorithm are the NORMA algorithm [16] and a stochastic gradient algorithm proposed by Zhang [17] (a minimal sketch of the Pegasos update is given after the survey of genetic approaches below). As reported in [15], on the basis of a large series of tests, the Pegasos algorithm is substantially faster than SVM-Perf. Boosting is a meta-algorithm for supervised learning that combines several weak classifiers, which can label examples only slightly better than random guessing, into a single strong classifier with far better classification accuracy. Some of the most successful boosting methods in problems such as text recognition, filtering, feature selection and face recognition are AdaBoost and its variants [18], [19]. Recently, a new boosting-type algorithm based on Pegasos and a stochastic gradient descent-based SVM training method was proposed, and its performance was experimentally evaluated for both the linear and the kernel-based case [4]. The algorithm is a two-phase SVM allowing the use of gradient descent-based methods without the need to fine-tune the kernel parameters. A long series of tests proved that the algorithm is much more efficient than the kernel-based SVM algorithms, both in terms of computing and storage requirements, due to the fact that each weak classifier requires only a single inner product calculation, while the evaluations of the kernel expansion terms involved by the use of the NORMA and Pegasos algorithms are substantially more computationally expensive at the same accuracy levels. Also, the combination of boosting and online SVM training has the potential to create efficient algorithms that outperform standard training algorithms when the kernel parameters are not known. Moreover, one of the core problems in improving the efficiency of the classifier is to identify the optimal types of kernels and, for each type of kernel, its optimal parameters, and then apply the standard techniques for solving the resulting QP problem. In other words, in these approaches, the problem is to tune the type of kernel together with its parameters to the particular problem at hand. In [14] an experimental analysis of the parameter of RBF-type kernels is reported, the tests being performed on simulated data. Several approaches based on genetic search for solving the more general problem of identifying the optimal type of
kernel from a pre-specified set of kernel types (linear, polynomial, RBF, Gaussian, Fourier, B-spline, spline, sigmoid) were reported in [20] and [21]. A new class of approaches contains algorithms referred to as Genetic Algorithms-SVM (GA-SVM or GSVM) and Hybrid Genetic Algorithms SVM (HGA-SVM). In the novel HGA-SVM model [20], the type of kernel and the parameters of the SVM are dynamically optimized by implementing an evolutionary process, the approach simultaneously determining the appropriate type of kernel function and the optimal kernel parameter values for optimizing the SVM model to fit various datasets. The types of kernel functions (RBF kernel, polynomial kernel and linear kernel) together with all the values of the parameters are directly encoded into the chromosomes, using integers and real-valued numbers respectively. Therefore each chromosome is represented by a triple whose entries are the particular type of kernel function and the first and second parameter values of that chromosome in the population, the type of kernel being represented by an integer number and the second and third entries coded as real-valued numbers. The proposed model can implement either the roulette-wheel or the tournament method for chromosome selection. The chromosomes are modified using the crossover operator and the boundary mutation method introduced by Adewuya [22], the method being of elitist type in the sense that only the one best chromosome in each generation is allowed to survive into the succeeding generation. In [21] the GSVM algorithm was applied for the effective detection of Doppler heart sounds. The GSVM algorithm is a genetic algorithm-based SVM classification technique defined in terms of a kernel function type, the kernel function parameters, and the soft margin constant C that represents the penalty parameter of the support vector machine classifier. The proposed model uses a 28-bit chromosome, grouped as follows. The genes of the first group are the kernel function type (3 bits), the value of the C parameter (3 bits), and the value of the Gaussian kernel parameter (7 bits). The genes belonging to the second group represent the value of the polynomial kernel parameter (2 bits), the value of the sigmoid kernel parameter (2 bits), and the value of the B-spline kernel parameter (2 bits). The next group of genes encodes the value of the RBF parameter (7 bits) and the value of the Fourier kernel parameter (2 bits). The fitness function used in GSVM is based on the classification accuracy of the trained SVM classifier, the initial population consisting of 30 chromosomes, each of 28 bits. New populations are generated in the search initiated by the GSVM algorithm using the crossover and mutation operations. The 15 most important chromosomes in the population are saved for composing the next population, the chromosomes that have low fitness values being eliminated. A subsample of 40% of the optimum chromosomes is randomly selected and subjected to the crossover operator: 10 chromosomes are subjected to crossover, 5 bits of each of 2 randomly selected chromosomes are randomly selected and exchanged, yielding 10 new chromosomes. The bit-inversion method is used as a mutation operator and is applied to 0.4% of the total number of bits of another 5 chromosomes.
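For contrast with the genetic approaches above, the referenced Pegasos algorithm [15] admits a very compact implementation. The sketch below is a simplified linear variant (no bias term and no optional projection step), included for illustration only:

```python
import numpy as np

def pegasos(X, y, lam=0.1, T=10_000, seed=0):
    """Simplified Pegasos: stochastic sub-gradient descent on the primal
    objective (lam/2)||w||^2 + (1/N) sum_i max(0, 1 - y_i <w, x_i>),
    processing one randomly drawn sample per iteration."""
    rng = np.random.default_rng(seed)
    N, n = X.shape
    w = np.zeros(n)
    for t in range(1, T + 1):
        i = rng.integers(N)
        eta = 1.0 / (lam * t)              # decaying step size 1/(lam*t)
        if y[i] * (w @ X[i]) < 1.0:        # hinge loss active: full sub-gradient
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:                              # only the regularizer contributes
            w = (1.0 - eta * lam) * w
    return w
```

The per-iteration cost is a single inner product, which is the property the boosting-based method of [4] exploits.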
Conclusions

Support Vector Machines are perhaps the most effective and popular classification learning tool, the task of learning an SVM being cast as a constrained QP problem. The respective dual problem is also a constrained QP problem whose solution can be approximated by an adaptive learning scheme assuring the maximization of the objective function. One of the main benefits of SVMs is their ability to incorporate and construct non-linear predictors using kernels that satisfy Mercer's conditions, the common approach for solving the SVM optimization problem when kernels are employed being to switch to the dual problem and find the optimal set of dual variables. The performance of the resulting classifier is essentially conditioned by the quality of the feature extractor induced by the selected kernel. The most frequently used kernels belong to the polynomial class, or are of exponential type, as for instance Gaussian kernels. Some of the trends in optimizing the learning process of SVM-based classifiers aim to design hybrid architectures and to develop methods "tuned" to the particular problem by including specially tailored genetic algorithms.
Machine Learning-Based Figure of Merit Model of SIPOS Modulated Drift Region for U-MOSFET

This paper presents a machine learning-based figure of merit model for a superjunction (SJ) U-MOSFET (SSJ-UMOS) with a modulated drift region utilizing semi-insulating poly-crystalline silicon (SIPOS) pillars. The SJ drift region modulation is achieved through SIPOS pillars beneath the trench gate, focusing on optimizing the tradeoff between breakdown voltage (BV) and specific ON-resistance (R_ON,sp). The analytical model considers the effects of electric field modulation, charge-coupling, and majority carrier accumulation due to the additional SIPOS pillars. Gaussian process regression is employed for figure of merit (FOM = BV²/R_ON,sp) prediction and hyperparameter optimization, ensuring a reasonable and accurate model. A methodology is devised to determine the optimal BV-R_ON,sp tradeoff, surpassing the SJ silicon limit. The paper also discusses the optimal structural parameters for the drift region, the oxide thickness, and the electric field modulation coefficients within the analytical model. The validity of the proposed model is robustly confirmed through comprehensive verification against TCAD simulation results.

Introduction

Power MOSFETs play a crucial role in power management and energy conversion systems. The superjunction (SJ) theory, utilizing a vertical P-N junction in the drift region, has been widely adopted in the design of vertical discrete power MOSFETs rated from 300 V to 1000 V. This approach achieves notably low specific ON-resistance (R_ON,sp) and high breakdown voltage (BV), surpassing the conventional MOSFET silicon limit defined by R_ON,sp = 8.3 × 10⁻⁹ × BV^2.5 [1]. To further optimize performance, integrating a deep trench and an extended gate offers a potential R_ON,sp reduction by minimizing the device pitch and inducing an accumulation layer [2-4]. However, this improvement is hindered by the diminishing electric field (E-field) beneath the trench, and the blocking voltage faces limitations due to charge balance issues [5,6].

Several strategies have been proposed to tackle this issue. One approach suggests the use of high-K (HK) dielectrics in the drift region, as seen in prior studies [6-8]. However, the distribution of the E-field in the drift region is significantly affected by the presence of HK dielectric materials, making the complete optimization of the device's overall E-field challenging. Another method involves enhancing the BV in UMOS by combining high-aspect-ratio trenches with high-resistance semi-insulating poly-crystalline silicon (SIPOS) structures [9]. This innovative combination offers UMOS the potential to achieve high BV while maintaining an ultra-low R_ON,sp.
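For orientation, the cited silicon limit is easy to evaluate numerically. A short Python sketch (the BV values are chosen for illustration):

```python
def si_limit_ron_sp(bv_volts):
    """Conventional MOSFET silicon limit quoted in the text:
    R_ON,sp = 8.3e-9 * BV^2.5, in ohm*cm^2 for BV in volts."""
    return 8.3e-9 * bv_volts ** 2.5

for bv in (300, 600, 1000):
    print(f"BV = {bv:4d} V  ->  R_ON,sp limit ~ {si_limit_ron_sp(bv):.4f} ohm*cm^2")
# -> about 0.0129, 0.0732 and 0.2625 ohm*cm^2; SJ and SSJ designs aim below these
```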
Recent research has seen a surge in innovative approaches using machine learning techniques for device modeling and optimization [10-16]. For example, Klemme [12] developed a machine learning method for accurately predicting the transfer characteristics of negative-capacitance FinFET devices. Wang [13] improved an artificial neural network (ANN) model for general transistors by enhancing data pre-processing. Xu [14] introduced a machine-learning regression approach for single-electron transistors (SETs), training a neural network to effectively model SET pulse currents. Zhang [15] proposed a concise modeling method for collaborative optimization and path searching in advanced design techniques using machine learning. Mehta [16] demonstrated the possibility of predicting full transistor current-voltage (IV) and capacitance-voltage (CV) curves using machines trained on technology computer-aided design (TCAD) generated data. These studies collectively illuminate the diverse applications of machine learning in device modeling and performance optimization.

This paper presents a physics-informed and machine learning-based model of the SIPOS (S) pillar-modulated structure in superjunction (SJ) UMOS (SSJ-UMOS), as depicted in Figure 1a,b. The explicit analytical model, grounded in Poisson's solution, includes the E-field modulation effect, potential distributions, and charge-coupling effects. The model is constructed through a two-step process. Initially, it is derived using closed-form analytical expressions, incorporating Poisson's solution to capture the basic physical mechanism of the device. Subsequently, machine learning techniques, such as Gaussian process regression (GPR), are employed for the figure of merit (FOM = BV²/R_ON,sp) prediction and hyperparameter optimization, fine-tuning the model parameters for optimal performance. This combined approach ensures an accurate representation of device behavior, refining predictions of characteristics like the optimal BV-R_ON,sp tradeoff and surpassing the SJ silicon limit [17-19]. This hybrid modeling strategy synergizes analytical and machine learning methodologies, yielding a robust and precise device model. The analytical approach of this model can guide the optimization design for MOSFET devices with SIPOS E-field modulation.

Charge-Coupling Effect of SIPOS Modulated Drift Region
Ref. [19] provides the revised optimum doping concentration (N_D,SJ) for the N-pillar of the conventional SJ, where W_N is the width of the N-pillar and E_CU is the critical E-field for breakdown with a uniform distribution. In contrast, the doping concentration in the drift region of the SSJ-UMOS (N_D,SSJ) is determined by the two-dimensional charge coupling of the SJ structure and the MIS structure SIPOS/oxide/Si. To achieve an effective charge-coupling effect, the highly doped N-pillar region with the total charge Q_N,SSJ,total must be completely depleted by the P-pillar of the SJ structure with the charge Q_SJ,P and by the MIS structure SIPOS/oxide/Si with the charge Q_SIPOS,C as the drain bias approaches the BV. When both the N-pillar and the P-pillar are simultaneously depleted, the E-field at junction J_SJ reaches E_CU, indicating breakdown in the case of a uniform E-field distribution; here N′_D,SJ represents the equivalent doping concentration for depleting the P-pillar within the N-pillar. Q_SIPOS,C denotes the charge of the equivalent plate capacitor of the Si/oxide/SIPOS structure, equating to the partial charge with the equivalent doping concentration N″_D,SJ in the N-pillar, where ∆V represents the potential difference across the thin oxide layer between the SIPOS pillar and the N-pillar, ε_OX and ε_Si denote the permittivities of the oxide and of silicon, and t_OX is the oxide thickness. The total doping concentration of the N-pillar due to the SIPOS modulation of the SSJ-UMOS structure can then be obtained.

Electric Field of SIPOS Modulated Drift Region

Assuming a reverse bias V_R is applied and the drift region is fully depleted, the electrostatic potential ϕ must satisfy the Poisson equation with appropriate boundary conditions [20]. Considering strong coupling and electric displacement continuity at the semiconductor-dielectric interface, appropriate boundary conditions in the y-direction can be established, where E(0, y) represents the vertical E-field along the dotted line A-A′ (x = 0, Figure 2), where the lateral E-field component is zero.
E_L represents the vertical E-field component generated under the drain bias V_R; at position x = W_N/2, the E-field comprises the vertical component E_L,SSJ and the lateral plate-capacitive component E_Si-OX,SSJ. Here ∆V is the potential difference across the thin oxide layer between the voltage on the N-pillar ϕ_Si(y) and the voltage on the SIPOS pillar ψ_SIPOS(y), and α and β are coefficients of the SSJ-UMOS with values between 0 and 1. The potential in the SIPOS layer is assumed to be linearly distributed along the drift region, based on the ohmic behavior of the SIPOS layer.

The potential function is approximated by a second-order Taylor expansion. By solving the 2-D Poisson's equation with the boundary conditions (6)-(8), a general differential equation for the potential distribution function in the N-pillar drift region is obtained, where T_s is defined by the model and N_eff is the effective doping concentration of the N-drift region. Solving (11) with constraints (8)-(10) gives the potential distributions in the N-pillar. In the scenario where the E-field extends through the entire length of the drift region, the magnitude of the E-field in the y-direction, E(y), along the middle line of the N-drift region follows; for the SSJ-UMOS structure with N_eff = N_D,SSJ from (5), E_SSJ(y) is obtained accordingly. Combining with (9), an optimum expression for T_S can be derived under the criterion that the E-field at the junction J_SJ and at the bottom of the trench are equal at breakdown. The BV of SSJ-UMOS is then expressed with a coefficient λ taking values between 0 and 1. Combined with the solutions of (11), (14), and (16), the optimum T_S,OP is given, and utilizing Equations (5), (12), and (18), we determine the optimal oxide thickness t_OX,OP for SIPOS SJ-UMOS.

Figure of Merit BV-R_ON,sp Model for SSJ-UMOS

Combining the SJ and MIS structures enables the SSJ-UMOS to achieve ultra-low R_ON,sp. The total drift region resistance is analyzed in two components: one from the highly doped N-pillar drift region and the other from the carrier accumulation layer induced by positive gate bias on the MIS structure SIPOS/oxide/Si. The R_SJ,sp contributed by the N-pillar drift region depends on W_Cell, half the width of the cell (W_N + W_P + W_I), ρ, the resistivity of the N-pillar drift region, and µ_N, the electron mobility. When (5) and (20) are combined, the R_SJ,sp contributed by the N-pillar in SSJ-UMOS follows. The schematic cross-section illustrates the SIPOS pillar-modulated SJ drift region and the carrier accumulation layer along the trench surface in the N-pillar drift region. Due to the uniform resistivity of the SIPOS layer, the voltage across the SIPOS at position y can be written down, and the specific resistance R_A,sp of the accumulation layer is obtained by integrating dR_A,sp. In the ON state, the threshold voltage (V_th) signifies the initiation of the accumulation layer formation.
Substituting (22) into (23), we obtain the integrated result for R_A,sp. As the total R_ON,sp contributed by the drift region and the accumulation layer is in parallel, the overall R_ON,sp,SSJ for the SSJ-UMOS comprises two components, R_SJ,sp and R_A,sp. Combining (21), (24), and (25), the R_ON,sp,SSJ is obtained. When applying Baliga's formula for the impact ionization coefficient α_Si to a two-dimensional charge-coupling silicon device, as referenced in [19], we derive an expression for the critical electric field in scenarios characterized by a uniform electric field. When (9), (17), (26), and (27) are combined, the R_ON,sp,SSJ is given by (28), which scales as BV^(7/6)/(5.53 × 10⁵ λ^(7/6)). The mobility µ_N is influenced by the silicon-oxide interface properties. In practical processes, the SSJ-UMOS resistance is increased due to side-wall mobility degradation. The R_ON,sp,SSJ surpasses the superjunction UMOS silicon limit mentioned in Ref. [19].

Hyperparameter Optimization Based on Gaussian Process Regression Model

Figure 1b illustrates a phased approach for optimizing the hyperparameters (α, β, λ) using Gaussian processes, and Figure 2 shows the schematic representation of the GPR. Following the establishment of the device model, we analyzed the electrical mechanism and conducted Sentaurus TCAD simulations to generate a dataset containing 1000 samples. Subsequently, GPR is applied to construct a FOM = BV²/R_ON,sp prediction model and to identify the optimal hyperparameters. Structural parameters such as L_D, N_D, t_OX, and W_N, closely linked to the FOM, are considered during hyperparameter optimization. GPR, a non-parametric Bayesian regression method, assumes the target variable FOM follows a multivariate Gaussian distribution, avoiding specific assumptions about the fitting function F and treating the FOM at any data point x as a random variable. Combining (16), (27), and (28), the FOM calculation formula is the target to be optimized by the GPR model. After establishing the device model, TCAD simulations are employed to generate device data for different combinations of L_D, N_D,SSJ, t_OX, and W_N. Subsequent data processing leads to the dataset, where L_Di, N_Di, t_OXi, and W_Ni denote the features of the i-th data point, corresponding to the target value FOM_i, the FOM of the i-th device.

The mean function m(x) represents the average behavior of the target value FOM given the features L_D, N_D,SSJ, t_OX, and W_N. The covariance kernel function k(x, x′) represents the correlation between different data points x and x′ in the feature space. We then define the likelihood function based on the derived (32) to express the probability of observing the data given the parameters α, β, and λ. In GPR, the likelihood function is represented using a Gaussian distribution. For each data point, we employ a multivariate Gaussian distribution as the probability distribution, calculating the mean and variance from the dataset. The likelihood function is obtained through maximum likelihood estimation, and a gradient descent optimization algorithm is applied to optimize the three hyperparameters α, β, and λ, resulting in the final values α_best, β_best, and λ_best.
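A minimal sketch of what the GPR stage described above could look like, assuming scikit-learn and a synthetic stand-in for the 1000-sample TCAD dataset; the feature ranges, the toy FOM law, and the kernel choice are assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the TCAD dataset: features (L_D, N_D, t_OX, W_N)
# and target FOM = BV^2 / R_ON,sp.  Ranges and the toy FOM law are assumptions.
n = 1000
X = np.column_stack([
    rng.uniform(20, 60, n),      # L_D  [um]
    rng.uniform(1, 8, n),        # N_D  [1e15 cm^-3]
    rng.uniform(0.02, 0.2, n),   # t_OX [um]
    rng.uniform(0.5, 2.0, n),    # W_N  [um]
])
fom = (X[:, 0] ** 1.2 * X[:, 1]) / (1 + 30 * (X[:, 2] - 0.05) ** 2) / X[:, 3]
fom += rng.normal(0, 0.5, n)     # simulation noise

X_tr, X_te, y_tr, y_te = train_test_split(X, fom, random_state=0)

# Kernel hyperparameters (per-feature length scales, noise level) are tuned
# internally by maximizing the log marginal likelihood, mirroring the paper's
# maximum likelihood estimation step.
kernel = ConstantKernel() * RBF(length_scale=[1.0] * 4) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_tr, y_tr)

mean, std = gpr.predict(X_te, return_std=True)   # prediction + uncertainty
print("fitted kernel:", gpr.kernel_)
print("median 95% CI half-width:", np.median(1.96 * std))
```

The returned standard deviation is what underlies confidence-interval plots like Figure 5a; searching the fitted surrogate over the feature box then yields the FOM-optimal structural parameters.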
Off-State Characteristics

Numerical results obtained through TCAD simulations and analytical results from the model are compared. To validate the model, simulation results are calibrated to breakdown characteristic (I_D-V_D) data extracted from a fabricated SJ-VDMOS [21], as depicted in Figure 3a. The TCAD simulation results, with a single set of self-consistent parameters, align well with the experimental data. Additionally, the OFF-state characteristics of SJ-UMOS and SSJ-UMOS are illustrated in Figure 3a. As the resistivity of the SIPOS layer equals 1.0 × 10¹⁰ Ω·cm, the leakage current of SSJ-UMOS increases from 10⁻¹² to 10⁻¹⁰ A due to the SIPOS field plate acting as a high-value resistor in parallel with the drift region. In the OFF state, there is a uniform potential difference (∆V) between the SIPOS layer and the vertical surface of the N-drift region of the SSJ-UMOS, as shown in Figure 3b.
Figure 4a shows the optimum effective doping concentration (N_eff) predicted by expressions (5), (18), and (19) as a function of W_N with the BV as a parameter. Notably, the optimum dose decreases with increasing W_N. SSJ-UMOS exhibits a higher optimum N_eff than SJ-UMOS, attributed to the enhanced charge-coupling effect of the SIPOS pillars. In Figure 4b, the dependence of BV and R_ON,sp on N_D for SSJ-UMOS and SJ-UMOS is illustrated. In SSJ-UMOS, SIPOS-assisted depletion of the N-pillars reduces R_ON,sp and increases BV. Compared to SJ-UMOS, the BV of SSJ-UMOS decreases gradually when the doping concentration is imbalanced, owing to the E-field modulation of the SIPOS pillars.

Gaussian Process Regression

The Gaussian process regression model exhibits exceptional performance in this study. Key evaluation metrics, as shown in Table 1, include a mean squared error (MSE) of 953.56, a root mean squared error (RMSE) of 30.88, and a mean absolute percentage error (MAPE) of only 4.5%. These metrics attest to the model's predictive accuracy and highlight its reliability in fitting and prediction, underscoring the crucial role of parameter optimization in enhancing model performance. We utilized visual representations to showcase the model's performance. In Figure 5a, a confidence interval plot illustrates the model's precision in predicting the target variable and the associated uncertainty. The model demonstrates low uncertainty, indicating high reliability in predictions, especially near the forecasted values. Figure 5b presents the results of the parameter sensitivity analysis, revealing the optimal hyperparameters: α = 0.8503, β = 0.5261, and λ = 0.7837. Notably, α significantly influences the fitting results, highlighting its sensitivity. This insight provides valuable guidance for further parameter optimization, with the potential to improve both fitting quality and predictive accuracy.
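For readers reproducing the evaluation, the three reported metrics follow directly from their definitions; the snippet below (with hypothetical FOM arrays, not the paper's data) shows the computation:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, and MAPE (in percent), as reported in Table 1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100.0
    return mse, rmse, mape

# Hypothetical FOM values for illustration only.
y_true = np.array([650.0, 720.0, 810.0, 905.0])
y_pred = np.array([641.0, 735.0, 798.0, 930.0])
print("MSE=%.2f RMSE=%.2f MAPE=%.2f%%" % regression_metrics(y_true, y_pred))
```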
Analytical expression (15) is applicable to SSJ-UMOS with modifications to N_eff based on Equations (5) and (7). In Figure 6a,b, numerical and analytical profiles of the vertical E-field and potential for SSJ-UMOS and SJ-UMOS in the middle of the N-pillar along the y-direction (A-A′, Figure 1a) are presented. In comparison to SJ-UMOS without the SIPOS layer, the high E-field peak (E_PK) at the gate trench bottom is reduced and the BV is improved from 607 V to 725 V. Analytical results for SSJ-UMOS align with numerical results for various T_OX values. Optimizing the oxide layer thickness (T_OX = 0.05 µm) in SSJ-UMOS effectively enhances device performance.
Figure 7a,b present the optimum oxide thicknesses and N-pillar width for SSJ-UMOS with various breakdown voltages, as predicted by the analytical model (18) and (19). The trench oxide thickness increases for devices with larger blocking voltages, staying within practical limits for device processing and fabrication. For a breakdown voltage of 1000 V, the optimal trench oxide thickness is 0.05 µm with a mesa width of 1.0 µm for SSJ-UMOS, aligning with the obtained numerical results.

ON-State and Dynamic Characteristics

Figure 8a displays the electron current density distributions in the drift region and the output characteristics of SSJ-UMOS and SJ-UMOS. The threshold voltage V_th of the two devices is about 1.2 V. In SSJ-UMOS, the drift region resistance (R_D) is in parallel with the accumulation layer resistance (R_A). The maximum electron current density of SSJ-UMOS reaches 8.23 × 10⁴ A/cm², significantly higher than that of SJ-UMOS. At a high drain voltage, the second term M in (26) becomes dominant, leading to a strong dependence on ∆V_G = V_D − V_G, as shown in Figure 8b. Additionally, reducing the pitch W_Cell can decrease R_ON,sp,SSJ.
Figure 9a presents a dynamic performance comparison between SIPOS SJ-UMOS and the conventional SJ-UMOS. The SIPOS pillars increase the gate capacitance, generating a surface deep-depletion layer in the drift region in the OFF state, which leads to switching delays in SIPOS SJ-UMOS. The turn-on speed is comparable between the two devices, while the turn-off speed of SIPOS SJ-UMOS is slower than that of the conventional device. Nonetheless, MOSFETs with SIPOS terminations have demonstrated resilience under harsh conditions, such as a gradient of 10 kV/µs. In Figure 9b, a comparison of the R_ON,sp and BV relationships is presented for the three structures, including references [2,3,17,21-25]. Optimum t_OX, W_N, L_D, and N_D values for SSJ-UMOS and S-UMOS are chosen for this analysis. For the R_ON,sp analysis of SSJ-UMOS, V_DS is set to 10 V at a V_GS of 5 V. The plot in Figure 9b clearly indicates that the SSJ-UMOS structure outperforms the other structures, surpassing the SJ silicon limit [19].

Conclusions

This paper introduces a machine learning-based figure of merit model of SSJ-UMOS featuring a modulated drift region utilizing SIPOS pillars. The tradeoff characteristics between BV and R_ON,sp have been theoretically derived, breaking the SJ silicon limit by simultaneously applying three methods for the additional E-field modulation effect, the charge-coupling effect, and majority carrier accumulation. The optimal structural parameters of the drift region, the oxide thickness, and the E-field modulation coefficients are also discussed in the analytical model. GPR is employed for accurate figure of merit prediction and hyperparameter optimization, which can guide the design of power MOSFETs with SIPOS. The proposed model's validity is robustly confirmed through comprehensive verification against TCAD simulation results.
Figure 2. Schematic representation of the Gaussian process regression model.
Figure 3. (a) Reverse leakage current versus V_DS of SSJ-UMOS, SJ-UMOS, and simulation results calibrated to breakdown characteristics (I_DS-V_DS) data from the fabricated SJ-VDMOS [21]. (b) Electric potential difference (∆V) between the drift region and the SIPOS pillar of SSJ-UMOS.
Figure 4. (a) Optimum doping concentration for two-dimensional charge-coupling, and (b) dependence of BV and R_ON,sp on N_D for SSJ-UMOS and SJ-UMOS.
Figure 6. Simulated and analytical (a) E-field and (b) potential distributions of SIPOS SJ-UMOS and SIPOS UMOS (along the line A-A′).
Figure 7. Optimum (a) oxide thickness t_OX and (b) N-pillar width W_N for SSJ-UMOS.
Figure 8. (a) Electron current density distribution and output characteristics for SSJ-UMOS and SJ-UMOS. (b) Simulated and analytical R_ON,sp,SSJ at different ∆V_G and W_Cell for the SSJ-UMOS.
Figure 9. (a) Switching waves of SIPOS SJ-UMOS and the conventional SJ-UMOS at the same V_DD = 100 V. (b) Comparison of theoretical predictions of the R_ON,sp versus BV relationship of SSJ-UMOS, SJ-UMOS, and other published devices with the ideal silicon limit and the SJ silicon limit line in the BV range of 10-1000 V.
7,207.2
2024-03-01T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Motor Symptom Lateralization Influences Cortico-Striatal Functional Connectivity in Parkinson's Disease

Objective: The striatum is unevenly impaired bilaterally in Parkinson's disease (PD). Because the striatum plays a key role in cortico-striatal circuits, we hypothesized that lateralization affects cortico-striatal functional connectivity in PD. The present study sought to evaluate the effect of lateralization on various cortico-striatal circuits through resting-state functional magnetic resonance imaging (fMRI). Methods: Thirty left-onset Parkinson's disease (LPD) patients, 27 right-onset Parkinson's disease (RPD) patients, and 32 normal controls with satisfactory data were recruited. Their demographic, clinical, and neuropsychological information was collected. Resting-state fMRI was performed, and functional connectivity changes of seven subdivisions of the striatum were explored in the two PD groups. In addition, the associations between altered functional connectivity and various clinical and neuropsychological characteristics were analyzed by Pearson's or Spearman's correlation. Results: Directly comparing the LPD and RPD patients demonstrated that the LPD patients had lower FC between the left dorsal rostral putamen and the left orbitofrontal cortex than the RPD patients. In addition, the LPD patients showed aberrant functional connectivity involving several striatal subdivisions in the right hemisphere. The right dorsal caudate, ventral rostral putamen, and superior ventral striatum had decreased functional connectivity with the cerebellum and the parietal and occipital lobes relative to the normal control group. The comparison between the RPD patients and the controls did not reveal significant differences in functional connectivity. The functional connectivity between the left dorsal rostral putamen and the left orbitofrontal cortex was associated with contralateral motor symptom severity in PD patients. Conclusions: Our findings provide new insights into the distinct characteristics of cortico-striatal circuits in LPD and RPD patients. Lateralization of motor symptoms is associated with lateralized striatal functional connectivity.

INTRODUCTION

Parkinson's disease (PD) is a neurodegenerative disorder commonly seen in the elderly, which manifests as classical motor symptoms such as bradykinesia, rigidity, and resting tremor, together with multiple non-motor symptoms (1). Dopamine deficiency in the striatum is a pathophysiological hallmark of PD and underlies motor and several neuropsychiatric symptoms. The striatum modulates motor activity, cognition, and behavior through multiple cortico-striatal circuits, which involve several striatal subregions (2,3). Lateralization is characteristic of PD. Motor symptoms usually present initially on one side of the body, and this asymmetry persists long after both sides show motor dysfunction (4,5). Lateralization is distinctive and a clue for differential diagnosis from other neurological disorders presenting as parkinsonism (6). Uneven bilateral deficiency of dopamine in the striatum can explain this motor asymmetry (7-9), but this lateralization affects different cortico-striatal circuits simultaneously and is also related to various non-motor symptoms. The interaction between cerebral hemisphere dominance and asymmetric brain impairment leads to different neuropsychological profiles in left-onset (LPD) and right-onset (RPD) PD patients.
Studies evaluating cognitive function, anxiety, psychosis, and apathy symptoms showed a series of differences between LPD and RPD patients (10-13). Lateralization not only affects the clinical profile in PD but also modulates therapeutic responses. In a study by Hanna-Pladdy et al., LPD and RPD patients had different responses to levodopa in attention and even paradoxical responses in verbal memory function (14). Due to the different severities of dopamine deficiency in the more affected and less affected hemispheres, levodopa may have an ameliorating or an overdosing effect on different cortico-striatal circuits (14). Therefore, a better understanding of the effect of lateralization on various cortico-striatal circuits can shed light on more precise treatment in PD.

Functional magnetic resonance imaging (fMRI) is increasingly used to assess cerebral activity based on the blood oxygen level-dependent (BOLD) effect, which can reflect cerebral blood flow and energy use (15). fMRI can be conducted while the subject is performing a specific task (task-based fMRI) or while the subject lies relaxed [resting-state fMRI (rs-fMRI)] (15). Due to its convenience, rs-fMRI is increasingly used in neurological research. Functional connectivity (FC) is defined as the temporal dependency between different brain regions and is an important approach to analyzing rs-fMRI data (15). FC is an ideal technique to explore the impaired cortico-striatal circuits in PD. Several studies have shown altered FC between the striatum and various brain regions in PD patients, but the seeds used in previous studies varied, and the influence of laterality has rarely been investigated. Some researchers used the nuclei of the basal ganglia, such as the putamen and caudate, as the seeds (16-22); some divided the putamen and caudate into anterior and posterior parts as the seeds (23-29). Others chose representative seeds of the subregions of the striatum (30-35). Most of the studies merged LPD and RPD patients into a single group and compared fMRI data of PD patients with the controls (16,17,19,20,22,23,25,28,29,31,33-35); some studies only focused on the more severely involved striatum or combined bilateral striatal seeds (27,30). These approaches cannot discern whether the changed FC was mainly contributed by the LPD or the RPD patients or reflects a common impairment shared by LPD and RPD patients.

In the last century, anatomical labeling techniques demonstrated the existence of parallel cortico-striatal circuits, which are related to motor, cognitive, and limbic functions. In addition, these circuits display rostrocaudal and dorsoventral patterns (36-38). With the advent of functional imaging, studies on the striatum using rs-fMRI have been rapidly increasing. Postuma and Dagher conducted a meta-analysis of positron emission tomography (PET) and fMRI studies. They revealed that functional imaging can disclose different parallel cortico-striatal circuits and suggested the boundaries between the dorsal and ventral caudate and putamen, as well as the boundary between the rostral and caudal putamen (39). Furthermore, Di Martino et al. carried out an rs-fMRI study. They integrated the results of the study by Postuma and Dagher and the anatomical characteristics of the striatal subregions and defined six seeds in each side of the brain for the rs-fMRI study (3). The seeds chosen by Di Martino et al.
can reflect the divergence of these striatal subdivisions and their corresponding FC profiles; these definitions performed well in subsequent studies (30-35). To date, how lateralization affects different cortico-striatal circuits remains unclear. The present study aimed to utilize rs-fMRI to comprehensively explore the changes in FC of distinct striatal subregions in LPD and RPD patients, in order to reveal the influence of asymmetry on cortico-striatal circuits in PD. The definitions of the seeds are consistent with the studies by Di Martino et al. (3).

Participants

Between 2012 and 2014, we enrolled 63 PD patients and 33 age- and sex-matched control subjects without a history of neurological or psychiatric disorders. All the participants were right-handed and recruited from Beijing Hospital. A movement disorder specialist (W.S. or H.B.C) made the diagnosis based on the UK PD Society Brain Bank diagnostic criteria (6). We collected demographic and clinical data, including medical history and physical and neurological examinations, from all the subjects. The side of disease onset was identified through retrospective medical records review and patients' reports and supported by neurological examination. The sum of the Unified Parkinson's Disease Rating Scale (UPDRS) part III scores (including tremor-, rigidity-, and bradykinesia-related items) of the right and left limbs was calculated as the right and left motor subscores; we then calculated the laterality index by subtracting the left motor subscore from the right motor subscore. Usually, RPD patients had a positive laterality index, and LPD patients had a negative laterality index (40). Patients whose side of onset could not be confirmed concordantly or who had bilateral onset were not included. PD patients with dementia, severe head tremor, deep-brain stimulation, substance abuse, head trauma, or other neurological or psychiatric diseases were also excluded. The MRI scans and the clinical and neuropsychological evaluations were performed in a practically defined "off" state, in which the patients had stopped all antiparkinson agents for ∼12 h (overnight). The Hoehn-Yahr staging, UPDRS, Mini-Mental State Examination (MMSE), Hamilton Depression Rating Scale (HAMD), Hamilton Anxiety Rating Scale (HAMA), and Non-Motor Symptoms Questionnaire (NMSQ) were used to measure motor and non-motor symptoms. The MMSE was employed to assess the cognitive function of the control subjects. The study was approved by the Ethics Committee of Beijing Hospital, and we conducted the study in keeping with the Declaration of Helsinki. All the subjects signed informed consent prior to participation.

Image Acquisition

An Achieva 3.0T MRI scanner (Philips Medical Systems, Best, Netherlands) was used for data acquisition. Foam pads were utilized to reduce head motion, and headphones were employed to decrease the scanning noise. The participants were required to lie still with eyes closed, relaxed, and to stay awake. A high-resolution T1-weighted anatomical image was also acquired.

rs-fMRI Data Preprocessing

Images were preprocessed using RESTPlus version 1.2 (42), which was based on SPM 12 (http://www.fil.ion.ucl.ac.uk/spm).
The preprocessing steps included removing the first 10 volumes to allow for magnetization stabilization, slice timing to correct for interleaved acquisition, realignment for 3D motion correction, spatial normalization to the Montreal Neurological Institute (MNI) standard space using the co-registered T1 images (43), resampling to 3 × 3 × 3 mm³, smoothing with a Gaussian kernel (full width at half maximum = 6 mm), time course detrending, nuisance covariate regression (cerebrospinal fluid and white matter signals), and band-pass filtering (0.01 < f < 0.1 Hz). We excluded subjects whose head movement exceeded 2 mm of displacement or 2° of rotation.

The seeds were defined following previous studies (35). The coordinates of the seeds are shown in Table 1, and the positions of the seeds are illustrated in Figure 1. The mean time series of each seed was extracted; voxel-wise FC analyses were then conducted by calculating the temporal correlation between the time series of each seed and those of each voxel within the whole brain. Correlation coefficients were further transformed to z-values via Fisher's z-transformation.

Statistical Analysis

We used SPSS (version 23.0, IBM Corp, Armonk, NY) to analyze the demographic and clinical information, as well as the extracted FC values. The continuous variables are shown as mean ± standard deviation. Data normality was assessed by the Kolmogorov-Smirnov test. One-way ANOVA, the Kruskal-Wallis test, the t-test, or the Mann-Whitney U-test was employed for between-group comparisons on continuous data when applicable. Fisher's exact test or a chi-square test was used for analyses of categorical variables. P < 0.05 was considered statistically significant. FC analyses were performed using DPABI version 4.2 (47). Analysis of covariance (ANCOVA) was employed to analyze between-group (LPD, RPD, and control groups) differences in the FC of the 14 seeds, with age and gray matter density as covariates. The gray matter mask in DPABI version 4.2 was used in the analyses. Post hoc pairwise analyses were performed using the least significant difference (LSD) method. Multiple comparisons were corrected according to Gaussian random field (GRF) theory (voxel level P < 0.001; cluster level P < 0.05; two-tailed) (48,49). Cohen's f², given by DPABI, was used to evaluate the effect sizes. Pearson's correlation or Spearman's rank correlation was used to investigate the association between the average FC values of significant clusters and the clinical and neuropsychological data.
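A minimal sketch of the seed-to-voxel FC computation described in the methods, assuming the nilearn library, a placeholder functional image, and one illustrative seed coordinate; it is not the authors' RESTPlus/DPABI pipeline:

```python
import numpy as np
from nilearn.maskers import NiftiSpheresMasker, NiftiMasker

# One striatal seed in MNI space (illustrative coordinate, not from Table 1).
seed_coord = [(28, 1, 3)]

# Both maskers apply the band-pass filter used in the paper (0.01-0.1 Hz).
# The sphere radius and t_r are assumptions; the TR is not given in this excerpt.
seed_masker = NiftiSpheresMasker(seed_coord, radius=3.5, detrend=True,
                                 standardize=True,
                                 low_pass=0.1, high_pass=0.01, t_r=2.0)
brain_masker = NiftiMasker(detrend=True, standardize=True, smoothing_fwhm=6,
                           low_pass=0.1, high_pass=0.01, t_r=2.0)

func_img = "func.nii.gz"                          # preprocessed 4D image (placeholder)
seed_ts = seed_masker.fit_transform(func_img)     # (n_volumes, 1)
brain_ts = brain_masker.fit_transform(func_img)   # (n_volumes, n_voxels)

# Voxel-wise Pearson correlation with the seed, then Fisher z-transform.
r = (brain_ts.T @ seed_ts) / seed_ts.shape[0]
z_map = brain_masker.inverse_transform(np.arctanh(r).ravel())
z_map.to_filename("seed_fc_zmap.nii.gz")
```

The resulting z-maps are what enter the group-level ANCOVA described above, one map per seed per subject.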
Demographic and Clinical Characteristics

In total, 57 PD patients and 32 controls were enrolled in the analysis; seven subjects were excluded for the following reasons: five PD patients and one control participant because of excessive head motion, and one PD patient due to unsatisfactory image quality. Thirty PD patients were in the LPD group, and 27 PD patients were in the RPD group. Table 2 presents the demographic and clinical information. The laterality index differed significantly between the two PD groups. Age, sex, and MMSE scores were comparable between the three groups. The LPD and RPD patients had similar mean disease duration, UPDRS scores, Hoehn-Yahr staging, and HAMD, HAMA, and NMSQ scores.

Group Differences in FC

ANCOVA and the subsequent post hoc pairwise analyses disclosed significant differences in FC between the two PD groups, as well as between the LPD patients and the controls. In the comparison between the LPD patients and the RPD patients, only one seed showed a significant difference in FC between the two groups. The LPD patients had lower FC between the left DRP and the left orbitofrontal cortex than the RPD patients (Figure 2 and Table 3). Compared with the controls, LPD patients showed altered FC in three seeds, the right DC, the right VRP, and the right VSs, all in the right hemisphere. The aberrant FCs in LPD patients were as follows: (1) decreased FC between the right DC and the cerebellum posterior lobe, the left occipital lobe, the left inferior parietal lobe, and the left superior parietal lobe (Figure 2 and Table 3); (2) decreased FC between the right VRP and the right parietal lobe (Figure 2 and Table 3); (3) decreased FC between the right VSs and the cerebellum posterior lobe, the left occipital lobe, and the right occipital lobe (Figure 2 and Table 3).

Correlation Analysis

Pearson's correlation or Spearman's rank correlation was used to investigate the relationship between the left DRP-orbitofrontal cortex FC and the Hoehn-Yahr staging, the contralateral motor subscore of UPDRS part III, the laterality index, and the MMSE, HAMD, HAMA, and NMSQ scores. FC between the left DRP and the left orbitofrontal cortex was significantly associated with the right motor subscore of UPDRS part III and with the laterality index in PD patients (r = 0.387 and 0.418; p = 0.003 and 0.001, respectively) (Figure 3).
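The correlation tests reported above are standard; a brief scipy sketch (with hypothetical FC values and motor subscores, not the study data) illustrates both:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical per-patient values for illustration only.
fc = rng.normal(0.3, 0.1, 57)                        # left DRP-OFC FC (z-values)
updrs_right = 10 + 25 * fc + rng.normal(0, 1.5, 57)  # contralateral motor subscore

r, p = stats.pearsonr(fc, updrs_right)        # for normally distributed data
rho, p_s = stats.spearmanr(fc, updrs_right)   # rank-based alternative
print(f"Pearson r={r:.3f} (p={p:.3g}); Spearman rho={rho:.3f} (p={p_s:.3g})")
```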
DISCUSSION

To the best of our knowledge, this is the first study to systematically explore FC related to striatal subregions in LPD and RPD patients separately. We demonstrated that FC between the left DRP and the left orbitofrontal cortex differed between LPD and RPD patients and that LPD patients had a series of differences in FC between various brain regions and the right DC, the right VRP, and the right VSs compared with the controls. The changed FC between the left dorsal rostral putamen and the left orbitofrontal cortex was associated with contralateral motor symptom severity and with the laterality index.

In healthy subjects, the activity of the DRP is predominantly associated with sensorimotor areas (3), but in PD, the specificity of its connectivity is reduced and the FC of the DRP extends to the ventromedial prefrontal cortex (3,31). Our results showed that LPD and RPD patients differed in the FC between the left DRP and the left orbitofrontal cortex. In addition, this FC was significantly associated with the severity of contralateral motor symptoms; the higher the FC, the more severe the contralateral motor symptoms. These results confirm the role of the DRP in regulating movement and indicate that the altered left DRP-orbitofrontal cortex FC might be a pathological change in PD. The significant association between the left DRP-orbitofrontal cortex FC and the laterality index affirms our hypothesis that motor asymmetry can influence cortico-striatal circuits.

It is noteworthy that several aberrant FCs were identified only in LPD patients compared with the controls, and these abnormal FCs all involved the striatal seeds of the more severely impaired hemisphere. This finding corroborates our hypothesis that uneven impairment of bilateral nigrostriatal function leads to lateralized FC changes in PD. On the other hand, the comparison between the RPD patients and the controls yielded no significant differences. These two comparisons indicate that LPD patients might have more severe FC impairments than RPD patients, especially in the right hemisphere.

Some clinical observations demonstrated that LPD and RPD patients might have different disease severities and risks of future motor complications and that RPD might be a slightly more benign subtype than LPD (50,51). A study by Lee et al. compared gray matter volume across controls and LPD and RPD patients. They found several abnormalities of gray matter volume, also in the right hemisphere, in LPD patients, but they did not identify any significant difference between the two PD groups or between the controls and the RPD patients (52). Two additional MRI studies using structural and functional imaging techniques also showed more impairments in LPD patients than in RPD patients (41,53). Our findings are consistent with the above studies; LPD patients may have more severe neurodegeneration or less compensation than RPD patients. A larger sample might better discriminate impaired FC in RPD patients. Additionally, we need to be aware that some controversy exists regarding which type is more susceptible; a study by Baumann et al. showed that RPD patients had a more rapid decline (54). Nevertheless, more clinical and imaging research is needed to clarify the role of laterality in PD.

On the whole, only one different FC was identified between the two PD groups; however, there were many more significant differences in the comparison between the LPD patients and the controls. This phenomenon is not uncommon. Some previous studies using structural imaging and fMRI techniques failed to identify significant differences in the direct comparison between LPD and RPD patients, although the two groups showed different patterns of abnormalities compared with the controls (41,52,53,55). In the present study, both PD groups had an average disease duration of more than 6 years and an average Hoehn-Yahr stage higher than 2. At the time of examination, most of the PD patients had bilateral striatal impairments. Although the laterality index showed that there was still obvious asymmetry in the PD patients, the impairments and compensation mechanisms are complicated at this stage. The effects of asymmetry might be minor and difficult to detect sufficiently with a relatively small sample size and stringent multiple comparison corrections. Additionally, conflicting results exist on the persistence of laterality in PD; some studies showed a decreased degree of asymmetry with disease progression (5,56). The laterality of FC might also decrease with disease progression. Future studies recruiting PD patients at an earlier stage may better demonstrate the influence of lateralization on striatal FC.

A variety of abnormal FCs have been reported in PD, from the early to the late stages, but our comparison between the RPD patients and the controls attained no significant findings. We need to take the methodological details into consideration. First of all, most of the previous studies combined LPD and RPD patients into a single group. This approach could increase the sensitivity of discovering impaired cortico-striatal FC in PD, particularly those impairments shared by LPD and RPD patients. Dividing the two subgroups according to the side of onset decreases the sample size of each group; this might partially contribute to our negative results in the comparison between the RPD patients and the controls. Second, a large number of previous FC studies on PD used less strict multiple comparison corrections. To some extent, this might account for the large number of positive findings.
This issue was raised in a widely discussed article by Eklund et al. (57), in which several popular multiple comparison correction approaches had an unsatisfactory performance. For instance, the AlphaSim correction was popular (16,17,22,25,31,58) but is not recommended by recent methodological studies (49,57). Based on these methodological studies, we corrected for multiple comparisons based on GRF theory, with stringent thresholds (voxel level P < 0.001; cluster level P < 0.05; two-tailed). The stringent thresholds and small sample size may limit the sensitivity to disclose aberrant cortico-striatal FC in RPD patients. Future studies with a larger sample size and strict control for multiple comparisons may better reveal FC impairments in RPD patients. Finally, as we have mentioned, RPD patients may have a better neural reserve and/or greater neural plasticity than LPD patients. The impairment of FC in RPD patients may be milder than that in LPD patients and may need a larger sample size to be detected.

Some limitations should be noted. First, the number of participants in this study is relatively small, and only right-handed PD patients were enrolled. Future studies recruiting more subjects and including left-handed PD patients can provide new insights on the topic of lateralization in PD. Second, the enrolled patients were on chronic dopaminergic medication, and the medications might interfere with the rs-fMRI results. To control for pharmacological effects, we evaluated the PD patients during the off period. Although the influence of these medications cannot be completely eliminated, this is a commonly used approach and facilitates comparison with similar studies from other researchers. Furthermore, similar alterations of rs-fMRI results have been reported in de novo PD patients and off-medication patients (59). Therefore, the influence of dopaminergic drugs should not be a major concern, and future studies using drug-naïve PD patients can better address this issue. Third, cognitive function was evaluated with the MMSE, which was not fully recommended by the Movement Disorder Society (MDS) task force (60). The MMSE has limited coverage of executive function. This is a limitation of the present study. The study was designed in 2011 and conducted between 2012 and 2014. In a review article published in 2007 (61), the MMSE was proposed as level 1 testing for the diagnosis of PD dementia. Therefore, the MMSE was used as a screening instrument for cognitive dysfunction in the study. In future studies, we will use the Montreal Cognitive Assessment (MoCA) instead of the MMSE. In addition, apathy is an important non-motor symptom in PD, but we did not assess apathy in this study. This insufficiency prevents us from analyzing the relationship between changed FC and apathy.

In conclusion, we found different cortico-striatal FC profiles between LPD and RPD patients and between LPD patients and controls. Lateralization of motor symptoms is associated with lateralized striatal FC. These results emphasize the necessity of separate investigations of the characteristics of brain activities of LPD and RPD patients in future studies using functional imaging modalities.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Beijing Hospital.
The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS WS, H-BC, and C-ZY conceived and designed the experiments. KL analyzed the fMRI data. MC, C-ML, RW, and B-HL were responsible for the fMRI scans and helped fMRI data analyses. WS, H-BC, and S-HL recruited the subjects. X-XM, HZ, KL, and S-HL collected the demographic, clinical, and neuropsychological information of the subjects. KL and WS wrote the manuscript. All the authors have read, revised, and approved the final manuscript.
5,111.4
2021-05-14T00:00:00.000
[ "Biology", "Psychology", "Medicine" ]
The influence of the drugs "Brovermectin-granulate TM" and "Avesstim TM" on indicators of non-specific resistance of one-year-old carp fish infested with monogeneans

Introduction

Ichthyopathological control of the inland waters of Ukraine and the development of ecologically safe methods of preventing fish diseases are components of the general ecological monitoring of natural ecosystems and of measures aimed at preserving biodiversity and the rational use of biological resources (Bozhyk & Pukalo, 2012; Kofonov et al., 2020; Prychepa et al., 2021). The epizootic state of reservoirs significantly affects their fish productivity. Studying the patterns of the occurrence and spread of fish diseases, as well as their prevention, is an important problem of modern fish farming, since the efficiency of breeding aquaculture species and the preservation of fish products depend on its solution (Davydov et al., 2013; Dzhmil', 2013). A number of factors contribute to the spread of fish diseases, the main ones being insufficient veterinary supervision during the transportation of fish, violations of quarantine measures, low technological culture and quality of artificial feed, and the deterioration of the ecological situation in reservoirs and growing conditions. For the early diagnosis of pathological changes in the fish body, immunological, hematological, biochemical, and histological studies are indicative (Zurawski et al., 2001; Jevtushenko et al., 2015; Fedorovych & Gutyj, 2019).

It is known from the literature that parasitic diseases cause disorders of the immune system in fish. Accordingly, this increases the sensitivity of fish to conditionally pathogenic microflora and other biotic and abiotic environmental factors. To eliminate the pathogenic effect of ectoparasites on the fish body, it is advisable to use immunomodulators in combination with antiparasitic drugs. This will not only increase the effectiveness of therapy but also raise the immune status and resistance of the fish body. Loboyko Yu. V. and others (Lobojko, 2012; Loboiko et al., 2017) established that the complex use of the drug "Brovermectin-granulate TM" and the immunomodulator "Avesstim TM" for carp lerneosis promotes the activation of the immune system, the acceleration of the recovery of damaged tissues and cells, and the improvement of hemodynamics, metabolism, and the general state of the body. In view of this, we studied the influence of the drugs "Brovermectin-granulate TM" and "Avesstim TM" on the indicators of non-specific resistance of one-year-old white carp and crucian carp infested with Dactylogyrus and Gyrodactylus, and of scaly carp infested with diplozoans.

The purpose of the work

The purpose of the work is to study the influence of the drugs "Brovermectin-granulate TM" and "Avesstim TM" on the indicators of non-specific resistance of one-year-old carp fishes infested with monogeneans (Dactylogyrus lamellatus, Gyrodactylus hypophthalmichtidis, Eudiplozoon nipponicum).

Research material and methods

For the experiments, 14 experimental groups were formed (two groups of each species of fish affected by each of the above parasites). The fish of the first experimental group were administered "Brovermectin-granulate TM"; the second experimental group were administered a complex of the drugs "Brovermectin-granulate TM" and "Avesstim TM". Fish infected with the various ectoparasites served as controls.
Fish of each group were kept in separate aquariums with a capacity of 40 dm³ with artificial aeration at a temperature of 20-22 °C. Their care and feeding were carried out according to the relevant norms and rations. During the entire period of research, the behavior and clinical condition of the fish were observed. The pre-experimental acclimatization period of the yearlings was 7 days. Treatment of the infested fish with the drug "Brovermectin-granulate™" (at the rate of 60 mg/kg of fish weight) or with the complex of the drugs "Brovermectin-granulate™" (60 mg/kg of fish weight) and "Avesstim™" (1 mg/kg of fish weight) was carried out on two consecutive days by introducing them orally with the help of a probe into the anterior part of the intestine. Before use, the drugs in the specified doses were mixed with 1 ml of 2% starch paste. Only 1 ml of 2% paste was administered to the fish of the control groups. On the 14th day after the use of the drugs, a parasitological examination of the fish was performed and blood was taken for research. In the blood of the fish, the same indicators were studied as in the first stage of the experiment. The natural resistance of the experimental fish was studied by a complex of humoral blood factors. Lysozyme activity of blood serum was determined by the nephelometric method described by V. G. Dorofeychuk (1986), bactericidal activity of blood serum by the photocolorimetric method described by L. V. Novikova et al. (1981), and phagocytic activity of blood neutrophils according to V. E. Chumachenko (1990); the phagocytic index is the number of phagocytosed microbial bodies per active neutrophil and characterizes the absorption capacity of phagocytes, while the phagocytic number is the number of phagocytosed microbial bodies per 100 counted neutrophils (Vlizlo, 2012). The analysis of the research results was carried out using the Statistica 6.0 software package. The probability of differences was assessed by Student's t-test. The results were considered reliable at P ≤ 0.05. Results and their discussion It was established that the use of the above-mentioned drugs in the treatment of one-year-old carp fish infested with monogeneans contributed to an increase in non-specific resistance indicators. Thus, in one-year-old white carp infested with Dactylogyrus lamellatus that were treated with the drug "Brovermectin-granulate™", the lysozyme activity of the blood serum increased by 1.12% compared to the control group, the bactericidal activity by 1.58%, the phagocytic activity of blood neutrophils by 0.87%, the phagocytic index by 0.26 and the phagocytic number by 0.18 units (Table 1; note: * P < 0.05, *** P < 0.001 compared to the control group). A significantly greater stimulating effect of "Brovermectin-granulate™" on the indicators of non-specific resistance of fish affected by parasites was observed when it was used together with the immunomodulator "Avesstim™". This is evidenced by the probable growth of all the investigated indicators (the exception being the phagocytic index). Thus, the lysozyme activity of blood serum in one-year-old white carp treated simultaneously with both of the above-mentioned drugs increased by 2.21% (P < 0.05), the bactericidal activity by 2.31% (P < 0.05), the phagocytic activity of blood neutrophils by 2.27% (P < 0.05), the phagocytic index by 0.73, and the phagocytic number by 0.49 units (P < 0.001).
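The study reports these comparisons as Student's t-tests at P ≤ 0.05. As a minimal sketch of that comparison (the indicator values below are illustrative placeholders, not the study's measurements; the group size n = 6 follows the tables):

```python
# Independent two-sample Student's t-test on a non-specific resistance indicator,
# as performed in the study with Statistica 6.0. Values are hypothetical.
import numpy as np
from scipy import stats

control = np.array([3.21, 3.30, 3.25, 3.18, 3.28, 3.27])  # hypothetical phagocytic numbers, control group
treated = np.array([3.70, 3.78, 3.74, 3.69, 3.76, 3.77])  # hypothetical values, two-drug group

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference: {treated.mean() - control.mean():.2f} units")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 would be reported as significant
```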
According to these indicators, a difference was also observed between the fish of the first and second research groups, amounting to 1.09, 0.73 and 1.40% and 0.47 and 0.31 units (P < 0.01). Similar changes in the indicators of non-specific resistance were noted in the blood of one-year-old white carp affected by gyrodactylus (Table 2: Indicators of non-specific resistance in the blood of one-year-old white carp infested with Gyrodactylus ctenopharyngodonis, before and after the use of drugs, M ± m (n = 6); note: * P < 0.05, ** P < 0.01, *** P < 0.001 compared to the control group). After treatment with the drug "Brovermectin-granulate™", the lysozyme activity of their blood serum increased by 1.17%, the bactericidal activity by 1.68%, the phagocytic activity of blood neutrophils by 0.82%, the phagocytic index by 0.17, and the phagocytic number by 0.14 units. The condition of the humoral link of their immune system improved even more when the two drugs were used at the same time. The above indicators in the fish of the second experimental group, compared to the control group, increased by 2.30 (P < 0.05), 2.70 (P < 0.01) and 2.07% (P < 0.05) and by 0.49 and 0.38 units (P < 0.001), and in individuals of the second experimental group compared to the first experimental group by 1.13, 1.02 and 1.25% and by 0.32 and 0.24 units (P < 0.01). Under conditions of mixed infestation, a positive effect of "Brovermectin-granulate™" and "Avesstim™" on the immune system of sick fish was also observed (Table 3: Indicators of non-specific resistance in the blood of one-year-old white carp infested with Dactylogyrus lamellatus and Gyrodactylus ctenopharyngodonis, before and after the use of drugs, M ± m (n = 6); note: * P < 0.05, ** P < 0.01, *** P < 0.001 compared to the control group). The stimulating effect of these drugs on the natural resistance of the body of fish affected simultaneously by dactylogyrus and gyrodactylus is evidenced by the increase in the indicators of the humoral link of immunity, namely the lysozyme and bactericidal activity of blood serum, the phagocytic activity of blood neutrophils, and the phagocytic index and phagocytic number: in the fish of the first test groups these increased, compared to the control group, by 0.78 and 2.03 (P < 0.05) and 1.12%, and by 0.07 and 0.13 units (P < 0.05), respectively, and in individuals of the second research group by 2.17 (P < 0.05), 3.91 (P < 0.001) and 2.57% (P < 0.01), and by 0.52 and 0.44 units (P < 0.001). According to the investigated indicators of natural resistance, there was also a difference between the yearlings of white carp of the first and second research groups, but it was reliable only in terms of the bactericidal activity of blood serum, the phagocytic activity of blood neutrophils and the phagocytic number. According to these indicators, the fish of the second experimental group prevailed over the individuals of the first experimental group by 1.88 (P < 0.05) and 1.45% (P < 0.05) and by 0.31 units (P < 0.001). During the study of the pathogenic effect of ectoparasites on the fish organism, we established that the indicators of non-specific resistance differed significantly between one-year-old carp affected by Dactylogyrus hypophthalmichtidis and fish treated with the drugs "Brovermectin-granulate™" and "Avesstim™" (Table 4).
The indicators of the humoral link of protection improved significantly in fish treated with the drug "Brovermectin-granulate™" compared to the control group, but a significant increase was noted only in the bactericidal activity of blood serum and the phagocytic activity of blood neutrophils, by 1.02 (P < 0.01) and 1.34% (P < 0.05), respectively. According to most of the investigated indicators, a significant difference was also observed between the fish of the first and second experimental groups, namely: the bactericidal activity of blood serum in individuals treated simultaneously with the two drugs, compared to fish that received only "Brovermectin-granulate™", increased by 1.45% (P < 0.01), the phagocytic activity of blood neutrophils by 2.89% (P < 0.001) and the phagocytic number by 0.30 units (P < 0.05). A positive effect of the studied drugs on the indicators of the humoral defense of the body was also observed in one-year-old carp simultaneously affected by Dactylogyrus hypophthalmichtidis and Gyrodactylus hypophthalmichtidis, although it should be noted that this effect was smaller than in fish affected by only one parasite (Table 6; note: * P < 0.05, ** P < 0.01, *** P < 0.001 compared to the control group). It was established that in the fish treated with "Brovermectin-granulate™", all the above-mentioned indicators increased; however, this increase was probable only for the bactericidal activity of blood serum, by 1.47% (P < 0.05), and the phagocytic activity of blood neutrophils, by 1.61% (P < 0.01). The simultaneous use of "Brovermectin-granulate™" and "Avesstim™" had a much better effect on the indicators of non-specific resistance of the infested fish. In fish treated simultaneously with the two drugs, the lysozyme activity of the blood serum, compared to the control and to individuals of the first experimental group, increased by 2.90 (P < 0.01) and 1.74% (P < 0.05), respectively, the bactericidal activity by 3.20 (P < 0.001) and 1.73% (P < 0.05), the phagocytic activity of blood neutrophils by 4.55 (P < 0.001) and 2.94% (P < 0.001), the phagocytic index by 0.17 and 0.20, and the phagocytic number by 0.36 (P < 0.01) and 0.20 units. A diplozoic invasion also has a significant negative impact on the immune status of fish. The results of our research show that the indicators of non-specific resistance in one-year-old scaly carp affected by Eudiplozoon nipponicum significantly worsened. However, the use of the antiparasitic drug "Brovermectin-granulate™" and the immunomodulator "Avesstim™" in the infested fish had a positive effect on the humoral link of their non-specific resistance (Table 7: Indicators of non-specific resistance in the blood of one-year-old carp infested with Eudiplozoon nipponicum, before and after the use of drugs, M ± m (n = 6); note: ** P < 0.01 compared to the control group). It should be noted that in the one-year-old carp treated only with "Brovermectin-granulate™", although there was an increase in the investigated indicators of natural resistance, it was unreliable in all cases. With the simultaneous use of the two drugs, the increase in the studied indicators was probable in all cases.
Thus, the lysozyme activity of blood serum in the fish of the second experimental group, compared to the control, increased by 1.45 (P < 0.01), the bactericidal activity by 1.23 (P < 0.01), the phagocytic activity of blood neutrophils by 1.22% (P < 0.01), the phagocytic index by 0.73 (P < 0.05) and the phagocytic number by 0.40 units (P < 0.01). A difference in the above-mentioned indicators was also found between the yearlings of the carp of the first and second experimental groups; however, in all cases it was unreliable. Conclusions The use of the drug "Brovermectin-granulate™" in one-year-old white carp, silver carp and scaly carp affected by monogeneans had a stimulating effect on the resistance of their organism. At the same time, the simultaneous use of the specified drug with the immunomodulator "Avesstim™" contributed to a better activation of the humoral link of non-specific immunity in the sick fish.
3,263.4
2022-10-21T00:00:00.000
[ "Biology" ]
Ti-40Al-10Nb-10Cr Porous Microfiltration Membrane with Hierarchical Pore Structure for Particulate Matter Capturing from High-Temperature Flue Gas TiAl-based porous microfiltration membranes are expected to be the next-generation filtration materials for potential applications in high-temperature flue gas separation in corrosive environments. Unfortunately, their insufficient high-temperature oxidation resistance severely limits their industrial applications. To tackle this issue, a Ti-40Al-10Nb-10Cr porous alloy was fabricated for highly effective high-temperature flue gas purification. Benefiting from microstructural changes and the formation of two new phases, the Ti-40Al-10Nb-10Cr porous alloy demonstrated favorable high-temperature anti-oxidation performance with the incorporation of the Nb and Cr high-temperature alloying elements. In the separation of a simulated high-temperature flue gas, we achieved an ultra-high PM-removal efficiency (62.242% for PM<2.5 and 98.563% for PM>2.5). These features, combined with our experimental design strategy, provide new insight into designing high-temperature TiAl-based porous materials with enhanced performance and durability. Introduction The immense potential in energy conversion and storage, adsorption and separation applications has generated significant interest in the design and synthesis of hierarchically porous materials [1][2][3][4][5]. Hierarchically porous materials have many unique features, such as tunable porous structures, controllable macroscopic morphologies, a large surface area and an easily functionalizable surface, making them some of the most promising engineering structural materials [6][7][8][9][10][11][12][13]. High-temperature flue gases discharged from electric power, petroleum, chemical and metallurgical operations are characterized by high temperature (above 800 °C), high oxygen content, high sulfur content, high nitrogen content and a large dust content [7][8][9]. Dust removal from these high-temperature flue gases remains a major challenge due to blockage caused by particle-laden high-temperature flue gases and the corrosion of dust removal equipment. Porous metals [14,15] and porous ceramics [16][17][18][19] are widely utilized where high-temperature flue gas is initially released to take full advantage of the filtration efficiency. Unfortunately, the poor oxidation resistance, poor corrosion resistance and intolerance of elevated temperatures of porous metals, and the severe brittleness, poor thermal vibration resistance and unworkability of porous ceramics, have severely restricted their potential applications in high-temperature flue gas purification. As such, it is of considerable significance to develop functional porous materials for high-temperature flue gas purification with a simple, highly efficient and scalable approach. TiAl-based porous materials are very promising candidates as high-temperature structural materials for high-temperature flue gas purification, because they contain a mixture of metallic and covalent bonds that provides sound mechanical properties together with outstanding corrosion resistance and excellent oxidation resistance above 600 °C [20][21][22][23]. However, TiAl-based porous materials still need to be improved, owing to their insufficient oxidation resistance in the envisioned application temperature range of 800 °C-1000 °C.
The main reason for the inadequate performance is the formation of both TiO2 and Al2O3, rather than of continuous Al2O3, during long-term high-temperature oxidation [19,24-26]. For example, the oxidation products of a binary γ-TiAl isothermally oxidized at 1000 °C for 48 h include not only Al2O3 (α-alumina), but also TiO2 (rutile), TiN, Ti2AlN and α2-Ti3Al [27]. More specifically, the oxide scale of TiAl alloys generally consists of three layers: an outer layer of TiO2, an intermediate layer of Al2O3 and a porous inner layer consisting of TiO2 and Al2O3 grains. A great deal of research has already been conducted to enhance the oxidation resistance of TiAl-based alloys above 800 °C through the addition of ternary and quaternary alloying elements, such as Nb [28], Ta [29], Ni [30], Y [31], B [16], Si [32], W [33] and Mo [34], either to form a protective scale or to slow down the oxygen diffusion rate. However, these studies are still at the experimental stage and are based on bulk TiAl-based alloys. It is still unclear whether the benefits would apply to TiAl-based porous alloys, particularly in controlling the surface morphology and pore parameters. The alloy design should be highly effective in improving high-temperature corrosion resistance without compromising the pore parameters. As such, the development of a simple and effective method to substantially improve the high-temperature oxidation resistance of TiAl-based materials while maintaining the desired porous structures is much needed. The main objective of this study is to fabricate a novel TiAl-based porous material with the addition of Nb and Cr elements for high-temperature applications. The formation mechanism of the new porous material was investigated, and the effects of Nb and Cr doping of the TiAl porous alloy were also demonstrated. Furthermore, the enhancement of oxidation resistance was studied through characterization analyses before and after high-temperature oxidation at 900 °C for 100 h. This research resulted in a new TiAl-based material with good high-temperature oxidation resistance and excellent structural stability for high-temperature PM capturing. Materials All samples were prepared from commercial Ti, Al, Nb and Cr powders with a purity of 99.9% and an average particle size of less than 50 µm. All these powders were supplied by DK nano technology Co. Ltd., Beijing, China. Instruments The morphological features of the Ti-48Al, Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr porous materials were observed with a field emission scanning electron microscope (FESEM, ZEISS SUPRA 55, Carl Zeiss, Germany), while the compositional analyses of the samples were performed using energy dispersive spectrometry (EDS). X-ray diffraction (XRD; Multipurpose X-ray Diffractometer TTR III, Rigaku Co., Tokyo, Japan) was used for phase analysis. The pore structure was examined by FESEM and the pore parameters were measured by mercury intrusion porosimetry (MIP; Quantachrome AUTOSCAN-33, Boynton Beach, Florida, USA).
Preparation Process of TiAl-Based Porous Materials In the preparation process of TiAl-based porous materials, as shown in Figure 1, commercial Ti, Al, Nb and Cr powders with the molar ratios of 52:48, 46:48:6, 48:48:2:2 and 40:40:10:10 were mixed, followed by ball milling at 120 rpm in a ball crusher for 24 h (ball-to-powder weight ratio of 4:1); the mixtures were subsequently pressed into green pellets with a diameter of 30 mm under a pressure of 230 MPa. A four-step heat treatment in a vacuum was then conducted to fabricate the TiAl-based porous materials. More specifically, the pellets were heated at 120 °C/1 h for vapor evaporation, at 600 °C/3 h and 900 °C/3 h for the Al-Ti and Al-Nb reactions and phase transformation and, finally, at 1350 °C/3 h to form the TiAl-based porous materials. High-Temperature Oxidation of TiAl-Based Porous Materials To study the thermal cycling oxidation behavior, the Ti-48Al, Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr porous alloys were treated at 900 °C for a total oxidation duration of 100 h. In detail, all the TiAl-based porous material samples were kept at 900 °C for high-temperature oxidation and removed from the furnace at various oxidation intervals. High-Temperature Filtration Performance Tests of the Ti-40Al-10Nb-10Cr Porous Alloy A home-made high-temperature PM filtration apparatus was used to evaluate the filtration performance of the Ti-40Al-10Nb-10Cr porous alloys. The PM in the high-temperature PM filtration apparatus was generated by burning incense [35], while the size and concentration of PM before and after filtration were measured by two laser PM sensors (DT9881, CEM). The simulated high-temperature pollutant gas flowed through the Ti-40Al-10Nb-10Cr porous alloy samples (sample specifications: Φ30 × 0.6 mm, with an effective area of about 706.5 mm²), which were placed inside a quartz tube in a furnace (900 °C, 3000 Pa). Two PM counters were placed downstream and upstream of the testing samples, respectively, to measure the PM number before and after filtration. During the experiment, high-temperature PM-containing air flowed at a constant rate of 2 L/min through the samples. Three Ti-40Al-10Nb-10Cr porous alloy samples were tested to ensure filtration measurement accuracy. The removal efficiency was calculated by Equation (1), η = (ξ2 − ξ1)/ξ2 × 100%, where ξ1 and ξ2 represent the concentrations of incense PM downstream and upstream of the filter, respectively. Figure 1 depicts the typical TiAl binary phase diagram (dashed line) and a preliminary phase diagram of TiAl with 10Nb (solid line). It can be observed that, with a 10 at% Nb addition, the phase diagram of TiAl changed: the melting point of the TiAl alloy increased by about 80~100 °C; the β/(β+α) phase transition point was reduced by about 50~80 °C and the phase region was enlarged and extended to the high-Al region; the α/(α+γ) transition point decreased by about 30 °C; the (β+α)/α transition temperature decreased by about 50~100 °C; the α single-phase region was compressed and moved to high Al content; the γ phase region extended to low Al content; and the maximum solubility of Nb in the α2 and γ phases was about 9.5 at% [36]. In addition, Tang and Shemet suggested that whether a Cr addition benefits or harms a TiAl alloy depends on the amount used: with less than 4 at% Cr, Cr occupies the position of Ti in TiO2 in the +3 valence state, which is harmful to the oxidation resistance.
However, with 8 at%~10 at% Cr additions, Cr could promote the formation of an Al2O3 film and was beneficial to the improvement of anti-oxidation properties [37]. Therefore, Ti-40Al-10Nb-10Cr was selected as the composition, with consideration of the effects of both Nb and Cr on the oxidation resistance for high-temperature applications. Phase Composition and Microstructure of TiAl-Based Porous Materials The XRD patterns obtained from the surfaces of the Ti-48Al, Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr specimens are shown in Figure 2. The Ti-48Al porous alloys were mainly composed of Ti3Al and TiAl. Besides Ti3Al and TiAl, two new phases, NbAl3 and Nb2Al, were detected in the Ti-48Al-6Nb and Ti-48Al-2Nb-2Cr samples. In addition, the new B2 phase was also found in the Ti-40Al-10Nb-10Cr porous alloys. These findings are similar to the results of our previous studies [23,38]. Figure 3a depicts typical FESEM images of the Ti-48Al porous materials. The results show that the Ti-48Al porous materials mainly consisted of irregular spherical particles, leading to a hierarchically porous skeleton (funnel-shaped, with a big pore mouth and a small pore throat). There existed many pores with different pore diameters among the Ti-48Al skeletons. Compared with the Ti-48Al porous materials, a larger number of white irregular particles appeared on the surfaces of the Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr porous alloys, due to the presence of the high-Nb phases (NbAl3 and Nb2Al) and the B2 phase, as shown in Figure 3b-d. In particular, a larger number of irregular particles appeared around the pores of the Ti-40Al-10Nb-10Cr samples, resulting in the formation of an increasing number of small pores on the hierarchically porous skeleton. Pore Parameters of TiAl-Based Porous Materials Although the SEM images clearly showed the presence of pore mouths with, predominantly, a size of 20-50 µm, the size of the pore throats was less than 10 µm, which kept the MIP method applicable [39], and the pore diameter distributions of the TiAl-based porous alloys were analyzed accordingly, as depicted in Figure 4. For the Ti-48Al porous alloy, an average pore diameter of 8.319 µm was observed from a Gaussian distribution, while the peak position of the pore diameter occurred around 9.185 µm, 9.376 µm and 10.248 µm for the Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr samples, respectively. Smaller peaks of pore diameters at 2-5 µm for the Ti-48Al, Ti-48Al-6Nb and Ti-48Al-2Nb-2Cr porous alloys and at 0-2 µm for the Ti-40Al-10Nb-10Cr porous alloy were also observed, demonstrating a wide pore diameter distribution for the TiAl-based porous alloys. In addition, detailed pore parameters of the TiAl-based porous alloys are given in Table 1.
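As a sketch of how a peak pore diameter such as those quoted above can be extracted from an MIP pore-diameter distribution, the following fits a Gaussian to synthetic data; the noise level and peak location are illustrative, not the measured distributions of Figure 4:

```python
# Fit a Gaussian to a pore-diameter distribution to recover the peak diameter.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, amplitude, mu, sigma):
    return amplitude * np.exp(-((d - mu) ** 2) / (2 * sigma ** 2))

diameters = np.linspace(0.5, 20.0, 80)                 # pore diameter grid [um]
rng = np.random.default_rng(0)
volume = gaussian(diameters, 1.0, 10.2, 2.0) + 0.02 * rng.normal(size=diameters.size)

popt, _ = curve_fit(gaussian, diameters, volume, p0=[1.0, 9.0, 2.0])
print(f"fitted peak pore diameter: {popt[1]:.3f} um")  # ~10.2 um for this synthetic example
```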
Compared to the Ti-48Al porous alloy, the pore area and pore volume of the Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr porous alloys all decreased to different extents, due to their wider pore diameter distributions. After the Nb and Cr additions, the porosity increased, except for the Ti-48Al-2Nb-2Cr porous alloy. In addition, the porous skeletons increased due to an increase in the apparent skeletal density; the appearance of the high-Nb phases (NbAl3 and Nb2Al) and the B2 phase indicated an increase in the skeletons of the Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr porous alloys. High-Temperature Oxidation Performance Figure 5 shows the effect of the Nb and Cr elemental additions on the high-temperature oxidation behavior of the TiAl-based porous alloys. As measured, the Ti-48Al porous alloys showed a weight gain of 27.39 g/m² after a thermal cycling treatment at 900 °C for 100 h.
Compared to the Ti-48Al porous alloys, all the TiAl-based porous alloys with Nb and Cr additions showed a lower oxidation rate, especially the Ti-40Al-10Nb-10Cr sample, with a minimum weight gain of 6.55 g/m². (Figure 5: high-temperature oxidation behavior of the TiAl-based porous materials: Ti-48Al (black), Ti-48Al-6Nb (red), Ti-48Al-2Nb-2Cr (blue) and Ti-40Al-10Nb-10Cr (green).) The great improvement in high-temperature oxidation resistance achieved through the addition of Nb and Cr powders can be explained by the barriers against the formation of TiO2 due to the presence of the high-Nb phases (NbAl3 and Nb2Al) and the B2 phase. More specifically, these phases can act as diffusion barriers, inhibiting the inward diffusion of O and the outward diffusion of Ti. Figure 6 shows the micro-pore changes in the TiAl-based porous alloys after the 900 °C/100 h isothermal treatment. The Kirkendall voids were quickly removed due to the formation of irregular white TiO2 on the surface of the treated Ti-48Al porous alloy, as shown in Figure 6a. Moreover, microcracks could be seen on the surface of the treated Ti-48Al-6Nb porous alloy, possibly attributable to the stress of the high degree of oxidation, as shown in Figure 6b. As for the treated Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr samples, little change was observable on their surfaces after the 900 °C/100 h thermal cycling treatment, as shown in Figure 6c,d. The surface compositions of the TiAl-based porous alloys after the thermal cycling treatment are listed in Table 2 (EDS composition analysis of the surface scans of the TiAl-based porous alloys after the thermal cycling treatment shown in Figure 5). In comparison with the treated Ti-48Al porous alloys, the content of O was greatly reduced, while Ti, Al and Nb increased. The results further suggest that, for the Ti-48Al-6Nb, Ti-48Al-2Nb-2Cr and Ti-40Al-10Nb-10Cr porous alloys, the formation of the high-Nb phases (NbAl3 and Nb2Al) and the B2 phase prevented the further formation of TiO2 and Al2O3 during the thermal cycling treatment. These results are consistent with the high-temperature oxidation results shown in Figure 5.
The high-temperature PM filtration performance of the Ti-40Al-10Nb-10Cr porous alloys was tested in the device shown in Figure 7a, with high-temperature PM (including PM<2.5 and PM>2.5) filtered through the Ti-40Al-10Nb-10Cr porous alloy. The PM<2.5 and PM>2.5 concentrations after filtration with the Ti-40Al-10Nb-10Cr porous alloys were much lower than the concentrations before filtration (Figure 7b,c). The results shown in Figure 7d confirm that both high-temperature PM<2.5µm and PM>2.5µm could be filtered through the Ti-40Al-10Nb-10Cr membrane, with separation efficiencies of 62.242% (SD: ±1.099%) and 98.563% (SD: ±0.449%), respectively. Furthermore, a comparison between the Ti-40Al-10Nb-10Cr porous alloys and various porous materials in previous studies [2,19,40] shows that the Ti-40Al-10Nb-10Cr sample exhibited a relatively higher PM>2.5µm removal efficiency at a much higher pressure. It is of note that the as-prepared Ti-40Al-10Nb-10Cr porous alloys could survive service temperatures of up to 900 °C. These results indicate that our Ti-40Al-10Nb-10Cr porous alloys can achieve flow-through filtration with high removal efficiency, showing great commercialization prospects for high-temperature PM filtration.
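Equation (1) above reduces to a one-line computation; the sketch below applies it to hypothetical upstream/downstream PM concentrations for three samples, mirroring the triplicate tests described earlier (the values are illustrative, not the measured data):

```python
# Removal efficiency as in Equation (1): fraction of PM removed between the
# upstream (xi_2) and downstream (xi_1) counters, averaged over three samples.
import numpy as np

upstream   = np.array([1520.0, 1498.0, 1543.0])  # xi_2: PM concentration before the filter (hypothetical)
downstream = np.array([574.0, 561.0, 589.0])     # xi_1: PM concentration after the filter (hypothetical)

efficiency = (upstream - downstream) / upstream * 100.0
print(f"removal efficiency: {efficiency.mean():.3f}% (SD: +/-{efficiency.std(ddof=1):.3f}%)")
```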
Conclusions In conclusion, a simple and effective strategy is proposed for enhancing the high-temperature oxidation resistance and corrosion resistance of TiAl-based porous materials. A novel Ti-40Al-10Nb-10Cr porous alloy with a controlled lamellar microstructure was fabricated by manipulating a chemical reaction of Ti and Al with Nb and Cr powders. Compared with the Ti-48Al, Ti-48Al-6Nb and Ti-48Al-2Nb-2Cr porous materials, the Ti-40Al-10Nb-10Cr porous alloy exhibited improved high-temperature oxidation resistance (only 6.55 g/m² weight gain after the thermal cycling treatment at 900 °C/100 h). This indicates that the Ti-40Al-10Nb-10Cr porous alloy could provide expanded opportunities for higher-temperature PM capturing with ultra-high PM removal efficiencies (98.563% for PM>2.5, 62.242% for PM<2.5). These findings represent an important step toward fabricating TiAl-based porous alloys using powder metallurgy, achieving excellent high-temperature oxidation resistance due to the unique structure, and demonstrating great potential for applications in environment-related fields where highly effective and robust high-temperature filtration is required.
5,238.8
2022-01-18T00:00:00.000
[ "Environmental Science", "Engineering", "Materials Science" ]
Modeling and Simulation for Nonlinear Pressurized Water Reactor Cores Using Gap Metric and Fuzzy Model with Transfer Function This investigation deals with the modeling issue for nonlinear pressurized water reactor cores. A nonlinearity measure based on the gap metric and T-S fuzzy modeling are exploited to build a fuzzy model that approximates the nonlinear core model. The gap metric of the core is proposed to quantify the core nonlinearity. The curve over the whole range of core power levels obtained with the gap metric is the core nonlinearity measure. In terms of this measure, six linearized core models at six power levels are selected as local models of the nonlinear core model. Based on the local models and the introduction of the triangular membership function, the core fuzzy model is obtained. The core fuzzy model and the nonlinear core model are simulated. Simulation results show that the core fuzzy model can approximate and substitute for the nonlinear core model. Introduction Nuclear energy, as a kind of clean energy, provides an incentive for the sustainable development of nuclear power plants (NPPs) for electricity generation. Meanwhile, among the research fields for NPPs, the modeling of plant components such as reactor cores has always been popular. The pressurized water reactor (PWR) cores in NPPs possess nonlinear characteristics, and their parameters vary at different operating conditions. Hence, cores are essentially time-varying complex systems, so that building their models is difficult. It is necessary and meaningful to research the modeling issue of nonlinear cores. Many researchers have utilized the point reactor core modeling method to construct PWR core models at the full power level. Kerlin et al. [1] modeled the PWR core of the H. B. Robinson NPP at the full power level, which was also verified to be correct. PWR cores at the full power level were modeled by Edwards et al. [2][3][4][5] to design core power control systems. However, such modeling of cores based on one power level cannot represent the dynamics of a nonlinear core over the global range of power levels. Stimulated by these considerations, the work in this paper is to model a nonlinear PWR core over the global range of power levels by the use of the gap metric, to quantify the core nonlinearity measure, and the T-S fuzzy principle. Finally, the T-S fuzzy model and the nonlinear model of the core are simulated, and conclusions are drawn. PWR core model The point reactor core modeling, a traditional lumped-parameter modeling method, is adopted to model a PWR core. For this modeling, a core is regarded as one point without any spatial profile, and the parameters of the core only vary with time and have nothing to do with spatial positions. According to the point reactor core modeling [1][2][3], the nonlinear PWR core is modeled adopting the point kinetics equations with six groups of delayed neutrons and reactivity feedbacks due to control rod movement and variations in fuel temperature and coolant temperature. The main model parameters are given in Table 1. The nonlinear core model is shown as Eqs. (1)-(6). The small perturbation linearization methodology is utilized to linearize this nonlinear core model, and the linearized core model is expressed by Eqs. (7)-(11).
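As a concrete illustration of the point-kinetics core model described above, the following minimal sketch integrates a one-delayed-group version of the kinetics equations for a small step in rod reactivity; the kinetics constants and the reactivity value are illustrative assumptions, not the parameters of Table 1, and the temperature feedbacks of the full model are omitted:

```python
# One-group point kinetics: dn/dt = ((rho - beta)/Lambda) n + lam c,
#                           dc/dt = (beta/Lambda) n - lam c.
import numpy as np
from scipy.integrate import solve_ivp

beta, Lambda, lam = 0.0065, 1.0e-4, 0.08  # delayed fraction, generation time [s], precursor decay [1/s]
rho_rod = 0.001                            # hypothetical small positive step reactivity

def point_kinetics(t, y):
    n, c = y                               # normalized neutron density, precursor concentration
    dn = (rho_rod - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

# start from equilibrium: c0 = beta/(Lambda*lam) * n0
n0 = 1.0
sol = solve_ivp(point_kinetics, (0.0, 20.0), [n0, beta / (Lambda * lam) * n0], max_step=0.01)
print(f"relative power after 20 s: {sol.y[0, -1]:.3f}")  # prompt jump, then slow rise on a stable period
```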
A one-group delayed neutron model is utilized and the coolant inlet temperature is treated as a constant [2][3][4]. The transfer function and the state equation of the core are calculated separately, with the input u = δrod and the output y = δPr. The transfer function of the core at a power level 10*i% is denoted by G10*i% (i = 1, ..., 10). These transfer functions are calculated using parameters from Ref. [6], in which the total primary heat output is 2200 MW, the primary coolant inlet temperature is 285 °C, the primary coolant outlet temperature is 317 °C, the primary coolant average pressure is 15.5 MPa, and the primary coolant mass flow is 12861.1 kg/s. Nonlinearity measure with gap metric for core The nonlinearity measure based on the gap metric for the core is proposed to quantify the core nonlinearity. One group of linearized core models can be selected to approximate the nonlinear core in terms of the distribution of the core nonlinearity measure over the whole range of power levels, and each linear model is regarded as a local model of the core. Following [7,8], the gap metric of the core is developed as follows to quantify the core nonlinearity: δ(G1, G2) = Δ(gr(G1), gr(G2)), where Δ(•) is a metric on graphs. Linear systems such as G of the core are viewed as mappings in a Hilbert space; hence, G is treated as an operator. Let H∞ and H2 be the standard Hardy spaces of functions, and let RH∞ be the subspace of real rational functions in H∞. G10*i% is a 1×1 rational transfer matrix and has a normalized right coprime factorization G10*i% = N M⁻¹; δ'(•,•) denotes a directed gap. Fuzzy modeling The T-S fuzzy method [9] can be used to handle the modeling of nonlinear plants such as the core at varying operating conditions. According to this modeling, the power level operating range of the nonlinear core is partitioned into separate subregions, and the local dynamics in each region are represented by one linear model as a local model of this core; fuzzy rules are proposed adopting membership functions that are utilized to develop fuzzy sets, of which the intersection of two adjacent ones is nonempty; the T-S fuzzy model, the overall model that approximates the nonlinear core, is then achieved via "blending" of all the linear local models based on the fuzzy rules. Core local models based on nonlinearity measure G10% is chosen as a reference model and the gap metric between G10% and GPr for Pr ∈ [0.1, 1] is calculated by means of Section 3. This calculated gap metric is the core nonlinearity measure between 10% and Pr. The nonlinearity measure curves of the core are obtained and shown in Fig. 1, where the curve approximates a parabola. Comparing the two power level ranges [0.1, 0.6] and (0.6, 1.0], the curve changes relatively rapidly in [0.1, 0.6] and varies relatively gently in (0.6, 1.0] in the form of an approximated straight line. These indicate that the core has strong nonlinearity in [0.1, 0.6] and weak nonlinearity in (0.6, 1.0]. Therefore, four linear models at 10%, 20%, 40% and 60% are chosen to substitute for the nonlinear core in [0.1, 0.6] with the strong nonlinearity, and two linear models at 80% and 100% are selected to substitute for the nonlinear core in (0.6, 1.0] with the weak nonlinearity. Finally, the linearized core models at these six levels are selected as local models of the core to substitute for the nonlinear core in (0, 100%]. Core fuzzy model The triangular membership function is utilized to set up the fuzzy rules. The membership function of the core fuzzy model at all power levels is shown in Fig. 2.
The fuzzy logic system is of the following form. Rule i: IF Pr is Mi, THEN the core model is Gi, where Rule i denotes the ith fuzzy rule; M1, M2, M3, M4, M5 and M6 respectively represent the fuzzy sets corresponding to the power levels 10%, 20%, 40%, 60%, 80% and 100%; and μMi(Pr) represents the membership of a power Pr in the fuzzy set Mi, which is calculated in the light of the relationship between the membership and the core power level. In the light of Fig. 2, the weight values qi for the "blending" of Gi (i = 1, ..., 6) are calculated as Eq. (20), qi = μMi(Pr) / Σj μMj(Pr), where Pr ∈ (0, 1] and qi ∈ [0, 1]. The value qi indicates to what degree the ith local model at a power level belongs to the core fuzzy model at this power level. Finally, the core fuzzy model is expressed by Eq. (21), G(Pr) = Σi qi Gi. Simulation To verify the effectiveness and correctness of this fuzzy modeling for the nonlinear core, the core fuzzy model (21) and the nonlinear core model are compared via simulations. When the input δrod is taken as a 0.01 step, the fuzzy model and the nonlinear model at ten typical power levels 10×i% (i = 1, ..., 10) are simulated, and the output responses of the fuzzy model and the nonlinear model at these levels, namely the responses of δPr, are shown in Fig. 3. In Fig. 3, the curve FMi (i = 1, ..., 10) represents the output response of the fuzzy model at 10×i%, and the curve NLi (i = 1, ..., 10) is the output response of the nonlinear model at 10×i%. From FMi and NLi, it can be observed that FMi approaches NLi. Similar simulation results can be achieved for other power levels. Conclusions This work handles the modeling problem of nonlinear PWR cores. On the basis of modeling a nonlinear core and calculating the core nonlinearity measure with the introduction of the gap metric, one group of linear local models of the core is defined. In terms of the local models and the triangular membership function, the T-S fuzzy model for the core is developed. From numerical simulations, it is found that the core fuzzy model can substitute for the core nonlinear model. The proposed modeling principle can be applied as a reference method to model other reactor cores and nonlinear plants, and is also applicable to stability analysis and control system design of nonlinear systems. Lemma 3.2 gives the gap between G10*i% and G10*j%. Figure 1: Nonlinearity measure of the core. Figure 2: Membership function of the core fuzzy model at all power levels. Figure 3: Output responses of the fuzzy model and the nonlinear model for the core at ten power levels; hence, the core fuzzy model can approximate the core nonlinear model according to the simulation results. Table 1: Main model parameters. Definition 3.1: Let Oi (i = 1, 2) be two closed linear operators in a Hilbert space Hs; Hs×Hs is also a Hilbert space with the inner product derived from Hs, and, as Oi is linear, its graph gr(Oi) is a subspace of the product Hilbert space Hs×Hs. The gap between the two closed linear operators O1 and O2 is defined by the gap between gr(O1) and gr(O2). In the coprime factorizations, Mi and Ni belong to RH∞ for i = 1 or 2, with Mi(s)* = Mi(−s)T, Ni(s)* = Ni(−s)T and Mi*Mi + Ni*Ni = I; gr(G10*i%) (or gr(G10*j%)) is a closed subspace of H2×H2 that consists of all pairs (δrod, δPr) such that δPr = G10*i%·δrod (or δPr = G10*j%·δrod).
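A minimal sketch of the T-S "blending" just described: triangular memberships centred at the six local-model power levels yield the weights qi of Eq. (20), and the fuzzy model output is their weighted combination of the local models as in Eq. (21). The local-model gains below are placeholders, not the paper's transfer functions Gi:

```python
# Triangular memberships over six power levels and the resulting T-S blend.
import numpy as np

levels = np.array([0.10, 0.20, 0.40, 0.60, 0.80, 1.00])  # power levels of the six local models

def memberships(p):
    """Triangular memberships over adjacent level pairs; they sum to 1 on [0.1, 1]."""
    mu = np.zeros(levels.size)
    p = np.clip(p, levels[0], levels[-1])
    i = np.searchsorted(levels, p) - 1 if p > levels[0] else 0
    i = min(i, levels.size - 2)
    left, right = levels[i], levels[i + 1]
    mu[i] = (right - p) / (right - left)
    mu[i + 1] = (p - left) / (right - left)
    return mu

local_gains = np.array([2.1, 1.8, 1.4, 1.1, 0.9, 0.8])  # hypothetical DC gains of G_1..G_6

p_r = 0.50                                               # operating power level
q = memberships(p_r)                                     # q_i, already normalized (sum to 1)
print(f"weights: {np.round(q, 2)}")                      # only the 40% and 60% models are active
print(f"blended gain at P_r = {p_r:.2f}: {q @ local_gains:.3f}")
```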
2,532.6
2016-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Evaluation of 3D seismic survey design parameters through ray-trace modeling and seismic illumination studies: a case study This paper describes a case study of wavefront construction-based ray-trace modeling to assess the 3D seismic exploration parameters that significantly impact achieving the exploration target in seismic data acquisition. Traditional assessment methods are based on the horizontal reflector concept, which does not consider the subsurface's inhomogeneities. This case study provides a methodology that considers the effect of subsurface variations on the estimation of seismic survey parameters. As the first step in this methodology, an elastic earth model is created to propagate theoretical seismic rays from a 3D seismic survey and generate seismic ray properties. Then illumination maps and synthetic seismic sections are generated from these seismic ray attributes to evaluate the seismic survey's performance in imaging the seismic targets. The results proved that the method helps estimate seismic survey efficiency in seismic target imaging. Therefore, the ray-trace modeling methodology can help obtain the target fold coverage in complex geological settings by designing and verifying the seismic survey parameters. Introduction Seismic data are one of the essential tools for identifying hydrocarbon reserves and developing reservoirs because of their spatially continuous nature (Nwaezeapu et al. 2019). A seismic reflection survey is required to obtain the seismic data needed to understand the subsurface's structure and stratigraphy (Hart 1999). For decades, the oil and gas industry has been using this seismic reflection technology to identify different prospects and ideal places for drilling production wells (Yao et al. 2018). The seismic survey begins with a geoscientist's quest to image subsurface features. These requirements will provide the idea of the required survey parameters, fold, shooting direction, etc. A seismic survey can be either two-dimensional (2D) or three-dimensional (3D). Generally, 2D or 3D surveys can be chosen based on the subsurface information requirements to image the target depth (Le et al. 2019). For a few decades, the seismic method has been used to explore reservoirs by mapping structural and stratigraphic information. The seismic reflection technique uses a receiver network to record refraction/reflection arrivals caused by the impedance variation distribution as elastic waves propagate through the ground. Over the last few decades, 3D seismic surveys have become an essential tool in exploring and exploiting hydrocarbons. The first 3D seismic surveys were carried out in the late 1970s but became widespread in the early 1990s (Stone 1994). The 3D seismic survey design mainly concentrated on bin analysis, fold, offset distribution, and azimuth distribution (Vermeer 1998). In seismic acquisition activities, the main agenda of seismic exploration is to image the seismic target with proper resolution. A network of receivers (geophones/hydrophones) is placed in land data acquisition to record the ground vibrations generated by active seismic sources such as explosives and vibrators. These geophones are deployed using predefined survey parameters such as the spacing between receivers and the number of geophones, in a single line for a 2D seismic survey and in more than one line for a 3D seismic survey.
A usual seismic survey design starts by analyzing critical subsurface information for the acquisition area, such as the maximum target depth, target thickness, and maximum geological dip (if not a virgin area) (Galbraith 1994). Based on the requirements, the minimum fold requirements are calculated, along with the critical acquisition parameters such as the receiver group interval, source interval, maximum offset, bin size, active channels, number of receiver lines (for 3D), number of shots in a salvo (for 3D), and type of spread (symmetric or asymmetric). The survey design concentrates on achieving regular offsets and good azimuthal coverage at each bin using the selected acquisition parameters. In the conventional approach, quality control is qualitative only, and it relies just on the offset distribution and azimuthal distribution of the surrounding CMP bins (Stork 2011); the offset and azimuthal distribution of a bin at the target surface/depth is ignored. The conventional acquisition parameters are based on the common-midpoint method, which uses the horizontal layer concept without considering lateral and vertical subsurface variations. However, none of these assumptions is valid over the entire field of acquisition. As a result, the acquired data may show significant footprints, and there is no guarantee of uniformity in the illumination of the target surface/zone (Mahgoub et al. 2012). The general practice of designing seismic survey parameters assumes that the subsurface comprises horizontal layers of constant velocity and density. Based on this concept, a collection of source-receiver geometries has been used for understanding the coverage in the acquisition area (Xia et al. 2004). Only the range of target depths and dips, the maximum and lowest propagation velocities, and the desired fold of coverage are included in the design process. The fundamental assumption of flat horizontal layers ignores the complexities frequently present in subsurface layers in regions of high oil exploration or production interest (Evans 1997). However, the conventional survey design process ignores these complexities. In conventional geometry analysis, seismic survey parameters are evaluated only on surface attributes, providing uniformity in fold coverage and regularity in offset distribution (Shukla et al. 2014); geometry parameters generated using traditional methods may therefore be subject to optimization issues (Li and Dong 2006). With geoscientists' growing interest in high-quality subsurface images, seismic data acquisition planning should be a clear focus, and the geometry preparation should meet all acquisition targets. The success of every oil or gas exploration hinges on the design of the 3D seismic survey (Mondol 2010). In complex geological settings, studies related to optimizing and assessing seismic survey parameters for the target depth play a key role in petroleum exploration before the actual acquisition (Liu et al. 2005). When dealing with complex structures, traditional methods for defining 3D seismic survey parameters are insufficient. For the acquisition parameters of a seismic survey to be effective, appropriate illumination of the target zone should be reached. However, an ideal seismic survey design not only fulfills this condition but also requires comfortable acquisition logistics at the lowest possible cost. These issues may be addressed by evaluating each geometry and comparing the results on the target surface.
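The conventional bin/fold quality control referred to above amounts to accumulating source-receiver midpoints into CMP bins and counting traces per bin; a minimal sketch with a hypothetical random geometry (not a real survey design) follows:

```python
# CMP fold map: bin the midpoints of all source-receiver pairs and count per bin.
import numpy as np

rng = np.random.default_rng(1)
sources = rng.uniform(0.0, 4000.0, size=(200, 2))    # hypothetical source x,y positions [m]
receivers = rng.uniform(0.0, 4000.0, size=(400, 2))  # hypothetical receiver x,y positions [m]
bin_size = 25.0                                      # CMP bin size [m]

# midpoint of every source-receiver pair, shape (n_src * n_rcv, 2)
mid = (sources[:, None, :] + receivers[None, :, :]).reshape(-1, 2) / 2.0

ix = (mid[:, 0] // bin_size).astype(int)
iy = (mid[:, 1] // bin_size).astype(int)
fold = np.zeros((ix.max() + 1, iy.max() + 1), dtype=int)
np.add.at(fold, (ix, iy), 1)                         # accumulate trace count per bin

print(f"max fold: {fold.max()}, mean fold: {fold.mean():.1f}")
```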
Illumination maps can be created at the target reflectors for each of a few competing geometry templates, with the best one chosen by a qualitative evaluation of those maps. Over the years, 3D seismic ray modeling has become an operational tool for studying subsurface illumination in seismic studies (Saffarzadeh et al. 2018). A preferable procedure would be to begin with a subsurface structural and velocity model and utilize it to prepare/finalize the acquisition settings. However, we do not have a complete subsurface model at the time of acquisition; otherwise, we might not have needed to acquire the data at all. The information required for model preparation may come from earlier seismic data studies, from well logs, from geological interpretation, from any other conceptual geological model, or from a combination of all of these. Ray-trace modeling studies are one way to model the target zones and the illumination of the target surface. These ray-trace methods provide the theoretical travel times and amplitudes between source and receiver through the model. Ray-trace modeling studies can serve a comprehensive scope of applications, such as survey planning and assessment, synthetic seismogram generation, velocity inversion, wave propagation studies, and 4D analysis. Rahimi Dalkhani et al. (2018) used perfectly matched layers and absorbing boundaries to evaluate finite- and spectral-element modeling techniques. The advantage of ray-trace modeling is that it allows geoscientists to assess different scenarios and analyze the impact of various parameters on the quality of seismic illumination. The acquisition geophysicist can optimize the survey parameters subject to technical and financial constraints (Lines and Newrick 2004). Many researchers have contributed various studies on assessing acquisition parameters using modeling studies before the actual acquisition. Suarez (2004) employed seismic ray-trace modeling studies to assess 3D seismic survey designs, to give a better image and to optimize acquisition parameters for various surveys. de Oliveira et al. (2009) applied ray-trace modeling techniques to assess marine acquisition parameters via illumination maps. Zühlsdorff et al. (2020) explained the practical benefits of ray-trace modeling studies in geometry evaluation. Zühlsdorff and Drottning (2013) planned a survey design for VSP using ray-trace approaches. A comprehensive discussion of illumination analysis and measurements at different levels was given by Xie et al. (2006). Lecomte et al. (2009) applied modeling studies for illuminating target horizons to assess the designed seismic surveys. Paraschivoiu (2016) worked on the crooked-line effect on subsurface illumination after data acquisition. In the present study, we utilized ray-trace modeling to evaluate the seismic acquisition parameters in the KG Basin, India. For this study, we considered the stratigraphic heterogeneity of the subsurface geology in terms of velocity and density, and the structural complexity defined by the horizons. Illumination maps were generated on the target surface and compared. Through this comparison, seismic ray-trace modeling studies can test the acquisition parameters of a geometry against the target objective, especially in complex reservoirs. In this paper, we have assessed a 3D seismic acquisition survey design using ray-trace modeling techniques through illumination maps. Methodology This study used ray-trace modeling to estimate seismic acquisition parameters in a heterogeneous medium using NORSAR modeling software.
As part of this study, an elastic earth model was generated using stratigraphic and structural interpretation information, i.e., horizons, interval velocities, and densities. Theoretical seismic waves were propagated into the elastic earth model from a 3D seismic acquisition design using the wavefront construction (WFC) technique (Chambers and Kendall 2008). From these theoretical wavefronts, we generated ray attributes (travel times and amplitudes). Illumination maps were created on the target horizon/depth from the ray attributes to analyze the effect of the seismic acquisition parameters. Synthetic seismic shot gathers were generated for every shot in the survey using the seismic ray attributes, and a general processing sequence was applied to the shot gathers to obtain a migrated post-stack section. This synthetic post-stack migrated section was compared to the previous survey's post-stack migrated section and to the brute stack section of the selected seismic survey. Ray-tracing modeling In seismic acquisition, ray shooting and ray bending are the traditional methods for simulating seismic acquisition parameters. Nevertheless, these methods are time-consuming because they calculate only a single ray between shot and receiver. In this application, the WFC method generates the theoretical seismic rays used to build the illumination maps and synthetic data for evaluating the survey design and parameters on the target horizons in a heterogeneous subsurface (Cerveny 2001). The WFC method can flexibly estimate seismic modeling attributes while maintaining a uniform ray density along the propagating wavefront. The technique is based on the asymptotic high-frequency solution of the elastodynamic equation (Babich and Kiselev 1989); ray-trace (WFC) modeling is therefore a high-frequency approximation of seismic wave propagation. Model preparation The important part of seismic modeling studies is preparing the most realistic model analogous to the subsurface inhomogeneities. Previously interpreted information, such as the structural horizons and the vertically varying velocity and density, is used to build a subsurface model for seismic wave propagation (Astebøl 1994; Vinje et al. 1999). The model contains different blocks with specific material properties (density and velocity) separated by interfaces. Each interface structurally follows the pre-interpreted horizons and is internally represented by a triangular network with many nodes. The triangular representation of the interfaces provides the smoothness required for the approximate WFC ray-tracing solution, which needs first- and second-derivative smoothness at interfaces. The explicit discontinuity between the blocks in the subsurface model requires applying Snell's law during seismic wave propagation (Vinje et al. 1993b). Figure 1 shows the triangular framework of an interface (black mesh) and the theoretical ray network of the wavefront from a source computed by WFC (white mesh) (Vinje et al. 1999). Wavefront construction method The WFC method was first applied to 2D models (Vinje et al. 1993a) and later adopted for 3D models (Vinje et al. 1993b, 1999). In the WFC method, the ray field is generated by propagating the wavefront step by step, rather than ray by ray as in conventional methods. The wavefront is advanced step by step in time so as to maintain a uniform density of rays (Coman and Gajewski 2001); that is, the entire wavefront is moved one time step forward through the model to create a new wavefront.
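The model-preparation step above notes that Snell's law must be applied at the explicit block boundaries during wave propagation. As a minimal sketch of that interface step, the function below computes the transmitted ray direction in vector form; the function name and velocity inputs are illustrative assumptions, not NORSAR's API.

```python
import numpy as np

def refract(d, n, v1, v2):
    """Transmitted unit ray direction across an interface via Snell's law.

    d  : unit incident ray direction
    n  : unit interface normal, pointing toward the incident medium
    v1 : velocity on the incident side, v2 : velocity on the transmitted side
    Returns None for post-critical incidence (no transmitted ray).
    """
    eta = v2 / v1                       # sin(theta_t) = eta * sin(theta_i)
    cos_i = -np.dot(d, n)               # cosine of the incidence angle
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:                         # beyond the critical angle
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Example: downgoing ray hitting a flat interface, 2000 m/s over 3000 m/s
t = refract(np.array([0.6, 0.0, 0.8]), np.array([0.0, 0.0, -1.0]), 2000.0, 3000.0)
```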
A uniform ray field cannot be maintained by the ray shooting and ray bending methods, so the WFC method explicitly controls the divergence between rays (Vinje et al. 1999). The essential steps of the WFC method are: creating a primary wavefront (WF); propagating the WF through the model by one time step; maintaining a uniform density of rays on the WF by creating new rays wherever predefined distance and angle limits are exceeded; and estimating attributes such as travel time and amplitude at the receivers. The WF topology consists of the connection points between WFs and the ray field (the nodes) and the interior connecting lines. The triangular framework of the WF topology is shown in Fig. 2. These triangular frameworks can stretch and twist while propagating through the model, which keeps propagation, interpolation, and approximation of the receiver attributes relatively simple. Each node of the WF topology at time t is defined by its position, direction from the source, wave type, normal, and the dynamic properties of its ray. Using these node parameters of the WF at time t, a new WF can be created one time step Δt later, at t + Δt. The information at the two times (t, t + Δt) is used to create new rays and calculate the ray attributes at the receiver locations. The creation of WFs in successive time intervals, with new triangles generated on all sides, is shown in Fig. 3. New triangles are created and new rays interpolated whenever the separation between rays exceeds the limits on maximum distance (DSmax) and angular distance (DAmax). The wavefront rays can be reflected and transmitted at interfaces in the model where the same triangular network characteristics are available (Rueger 1993). Gjøystdal et al. (2002) explain the procedure for estimating the arrivals at each receiver: the attributes carried by the propagating WF must be adapted to find the parameter information at the receivers. Ray cells are generated from the volume between the old WF and the new WF; each ray cell connects the old and new WFs through their triangles and the three rays that bound it. A box enclosing the ray cell is created from these six boundaries. A ray cell together with a receiver is shown in Fig. 3b, which illustrates the concept of identifying the information at a receiver bounded by a ray cell: the rays (r1, r2, and r3) are interpolated to estimate the travel time and amplitude (Vinje et al. 1993a). The coordinates of the ray cell, namely the barycentric coordinates (u, v) and the travel time coordinate (t), locate the arrival information, such as amplitude and travel time, for a receiver lying within the ray cell. The travel time and amplitude information at the receiver can then be used to compute a synthetic seismogram. In this study, synthetic seismograms are generated with the 1D convolutional modeling equation: the time-domain seismic trace is the convolution of the subsurface reflectivity with a wavelet or time series. The ray-tracing method produces synthetic seismograms and simulates time-migrated sections using the image-ray concept (Gjøystdal et al. 2007; Lecomte et al. 2015).
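The 1D convolutional model mentioned above lends itself to a compact sketch: the synthetic trace is the convolution of a reflectivity series with a wavelet. The Ricker wavelet, the peak frequency, and the sample interval below are assumed choices for illustration; the paper does not specify which wavelet was used.

```python
import numpy as np

def ricker(f_peak, dt, length=0.128):
    """Zero-phase Ricker wavelet (an assumed wavelet choice for illustration)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(reflectivity, f_peak=30.0, dt=0.002):
    """1D convolutional model: trace(t) = reflectivity(t) * wavelet(t)."""
    return np.convolve(reflectivity, ricker(f_peak, dt), mode="same")

# Illustrative reflectivity series with two reflectors at samples 300 and 700
r = np.zeros(1000)
r[300], r[700] = 0.2, -0.15
trace = synthetic_trace(r)
```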
Illumination study In this study, the efficiency of the seismic survey design has been assessed with illumination techniques for imaging the acquisition objective (Moldoveanu et al. 2003). These illumination techniques can also be used in seismic data processing to understand acquisition-related amplitude changes or shadow zones (Laurain and Vinje 2001; VerWest et al. 2001). The general definition of illumination in optics and light theory is the "surface intensity of light"; the geophysical definition is the seismic wave energy falling on, and reflected from, a reflector (Sheriff 2002). Illumination studies are primarily used to generate illumination maps (hit maps) to understand the feasibility of a seismic acquisition design (Sassolas et al. 1999). These studies provide a quantitative analysis that reduces the risk of seismic exploration: with the illumination of the target horizon in hand, the acquisition parameters can be assessed, modified, or simply rejected. Designing and implementing a survey in the field without such prior analysis carries a high chance of missing the objective, along with a high cost (Cain et al. 1998). Binning methods are the most common illumination studies in the industry. Ray-based binning methods are fast, flexible, and valid for high-frequency waves. The ray-tracing results on the target surface are used to generate maps for analysis. These analyses help acquisition geophysicists identify the regions where many reflection points fall and the regions where few fall (shadow areas) on the subsurface target depth for a specific acquisition geometry. The illumination maps (hit maps) give the number of hits (reflection points) in each cell of the target reflector (Bear et al. 2000). In conventional techniques, a grid is defined on a flat surface to create the CMP fold maps, and the earth's geometry and elastic properties are assumed uniform; in reality, this is no longer valid. In highly complex areas, the reflection hits produce uneven illumination of the target reflector. Because the earth's subsurface is not uniform, conventional survey design methods lead in such cases to an inaccurate picture of the subsurface illumination (Hoffmann 2001; Campbell et al. 2002). Instead of the conventional CMP fold method, planning and designing seismic acquisition parameters based on the illumination of the target surfaces is the more appropriate approach. Figure 4 explains how information is collected from the target surface to create illumination maps.
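A hit map of the kind described above can be sketched as a simple 2D binning of the ray reflection points on the target horizon. The function below is a minimal illustration, assuming hit coordinates are already available from the ray tracer; the names and bin parameters are hypothetical.

```python
import numpy as np

def hit_map(x, y, bin_size, extent):
    """Count ray reflection points per bin on the target horizon.

    x, y     : map coordinates of the ray hit points on the target surface
    bin_size : (dx, dy) bin dimensions in metres
    extent   : (xmin, xmax, ymin, ymax) of the target area
    """
    xmin, xmax, ymin, ymax = extent
    xedges = np.arange(xmin, xmax + bin_size[0], bin_size[0])
    yedges = np.arange(ymin, ymax + bin_size[1], bin_size[1])
    counts, _, _ = np.histogram2d(x, y, bins=[xedges, yedges])
    return counts  # cells with low counts flag potential shadow zones
```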
Application The wavefront construction, a ray-tracing modeling method, is used to assess the seismic acquisition parameters and verify whether an accurate delineation of the subsurface is possible. The ultimate aim of seismic data exploration is to image the subsurface targets by providing high-quality data within all surface, subsurface, and financial limitations. Here, illumination maps (hit maps) and post-stack migrated sections are used to assess the seismic acquisition parameters: the illumination maps of two acquisition geometries are compared, and post-stack migrated sections of the synthetic seismic data and of the previous seismic data are compared with a brute stack of the present seismic data to verify the seismic acquisition design (the illumination strategy of the ray-trace method is sketched in Fig. 4). In this study, the methodology consists of a workflow to assess the seismic survey parameters: design the seismic survey; prepare a subsurface elastic model; generate ray attributes in the model for the seismic acquisition parameters; create and compare the illumination maps of the different surveys; generate synthetic seismograms; apply seismic processing to the synthetic shot gathers; and compare the synthetic results with the post-stack migrated sections. We used two geometries for this study. Geometry 1 was acquired a few years back in this area and did not provide sufficient resolution at the target depth (Horizon-5) for interpretation. Geometry 2 is the presently proposed one, intended to illuminate the exploratory objective. The seismic acquisition parameters of the two surveys are shown in Table 1, and the CMP fold of the two geometries is shown in Fig. 5a and b. The first stage of the methodology is creating a subsurface elastic model from the provided horizons and the block/layer properties, namely interval velocity and density. The subsurface model is divided into blocks, each with its own isotropic properties. The parameters used for preparing the model are listed in Table 2, and Figure 6 shows the block view and interface view of the model. The model was finalized based on the significant undulations observed on Horizon-4 and Horizon-5; structurally, this is an essential portion of the acquisition area to illuminate. Much of the wavefield energy is dispersed at Horizon-5, and many shadow zones (low-illumination areas) were identified, so improving the illumination of Horizon-5 is the primary goal for the new acquisition parameters. The next step in the methodology is to define a set of instructions at the interfaces, termed ray codes. These ray codes tell the wavefield which type of ray to generate at the source position, the type of reflection/transmission, and the specific type of conversion at each interface of the model. Generally, a ray path is defined from the source to the target and from the target to the receiver. A P-wave to P-wave ray code is used in this study at all five interfaces of the model. Once all instructions for ray-field propagation at the interfaces were defined, the seismic wavefield was propagated using the common-shot wavefront tracer of the NORSAR modeling software (Lecomte et al. 2015). The theoretical WF propagates with a time step of 100 ms, creating a new WF and developing new triangles in the WF network, with a maximum interpolation distance (DSmax) of 300 m and a maximum angular distance (DAmax) of 5° between rays. Figure 7 shows the seismic rays transmitting through the interfaces of the model. The ray-trace modeling produces the ray striking points on the target surface, together with amplitude and travel time, as ray attributes. These attributes are used to generate the illumination maps and the synthetic shot gathers. The number of hits per bin is represented as hit maps, which are used for the preliminary assessment of the seismic survey. The reflection coefficients of the model are then used to generate the synthetic shot gathers, to which basic seismic processing steps are applied to produce the post-stack migrated section.
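As an aside, the DSmax = 300 m and DAmax = 5° limits quoted above act as an interpolation criterion on neighbouring wavefront nodes. A minimal sketch of such a check is given below; the exact rule used inside the NORSAR tracer is not spelled out in the text, so this is an assumed reading of the criterion.

```python
import numpy as np

def needs_new_ray(p1, p2, d1, d2, ds_max=300.0, da_max_deg=5.0):
    """Decide whether a new ray must be interpolated between two wavefront
    nodes, using the limits quoted above (DSmax = 300 m, DAmax = 5 deg).

    p1, p2 : node positions (metres), d1, d2 : unit ray directions.
    """
    ds = np.linalg.norm(np.asarray(p2) - np.asarray(p1))
    cosang = np.clip(np.dot(d1, d2), -1.0, 1.0)
    da = np.degrees(np.arccos(cosang))
    return ds > ds_max or da > da_max_deg
```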
Results and discussion A seismic acquisition geophysicist aims to deliver quality data within the surface, subsurface, and financial constraints (Saffarzadeh et al. 2018). This study evaluated the seismic geometry parameters through illumination maps and through comparison of the processed migrated sections of the synthetic data and the real seismic data. Figure 8 shows the illumination maps (hit maps) of the target horizon produced by the two sets of seismic parameters. Horizon-5 was considered the target of interest in the model (Fig. 6), and its illumination maps for Geometry 1 and Geometry 2 are shown in Fig. 8 (a, hit map generated by Geometry 1; b, hit map generated by Geometry 2). An illumination map is based on the reflection points on the target surface. Figure 8a shows the illumination map generated by Geometry 1, which leaves many areas with poor illumination, especially in the undulation zone. The undulation zone shows improved coverage with the new survey parameters: Fig. 8b shows that the new seismic acquisition parameters fill the illumination holes on Horizon-5. The difference between the illumination maps is observed mainly in the undulation area marked by an arrow (Fig. 8). The basic processing steps were applied to the synthetic shot gathers to generate migrated seismic sections. Figure 9 shows the post-stack migrated seismic section of the synthetic seismic data generated by Geometry 2 using the NORSAR modeling software and processed with the PARADIGM Echos processing software; this section shows better illumination down to the basement. From this section, we understand that the present survey can illuminate the target depth (Horizon-5) very clearly. Figure 9 is compared with Fig. 10b, the stacked section of the real field data; it is clear that the new acquisition parameters deliver seismic energy to Horizon-5 in both the theoretical and the real acquisition mode. Figure 10a and b show the original processed section acquired with Geometry 1 (the old parameters) and the brute stack of the present survey data (the new parameters). This comparison shows that the new seismic data acquisition illuminates the subsurface considerably better than the old geometry, especially the seismic events after 1.7 ms. The above results (Figs. 8 and 9) substantiate that ray-tracing modeling can help choose and verify seismic survey designs with respect to their objectives. Some observations from the present study on the selected acquisition parameters: • The target surface area is covered by the new acquisition parameters (Table 1), providing reasonably uniform coverage within the full-fold boundary (black). • The post-stack migrated section of the synthetic seismic data gives additional confidence in the new survey parameters. • The preliminary processing section (brute stack) of the real field data (Fig. 10b) validates the results of the present methodology (illumination maps and the processed synthetic section). Conclusions The main goal of any seismic acquisition is to reach the target depth with excellent illumination. The seismic parameters play a crucial role in achieving this goal, especially in a complex subsurface. In this paper, we assessed the acquisition parameters to understand the effectiveness of the geometries. Conventional geometry analysis methods are valid only under ideal conditions, such as a horizontal, isotropic medium; here, the inhomogeneity of the subsurface was considered while preparing the geometries and performing the validation. In the present work, we used a ray-based modeling tool, built on the WF construction method in the NORSAR modeling software, to assess the parameters of the geometries.
A 3D subsurface model was constructed to generate the illumination maps and the synthetic seismic shot gathers. The illumination maps (hit maps) were compared and analyzed to understand the coverage of each geometry. The final migrated section of the previous acquisition parameters (Geometry 1) was then compared with the brute stack of Geometry 2 to understand the imaging ability of Geometry 2. Ray-trace modeling studies thus have the ability to evaluate seismic acquisition parameters both for better imaging and for cost-driven decisions. Comparing the illumination maps of the geometries gives a quick understanding of geometry effectiveness, and the comparison of the processed sections of the synthetic data provides further confidence in that assessment. Therefore, seismic acquisition modeling methods help in finalizing a geometry that illuminates the target depth. Moreover, these ray-trace methods can also
6,020
2022-04-13T00:00:00.000
[ "Geology" ]
Colorectal Cancer Aggressiveness is Related to Fibronectin Overexpression, Driving the Activation of the SDF-1:CXCR4 Axis Introduction Cancer hallmarks result from dynamic interactions between tumor cells and the tumor microenvironment, including normal cells, growth factors and extracellular matrix (ECM) components [1]. Fibronectin (FN) is a high molecular weight, multidomain glycoprotein [2,3], present in a soluble form in body fluids such as plasma (plasma FN), and in an insoluble form in the ECM (cellular FN) [2,4,5]. FN generally exists as a dimer composed of two nearly identical polypeptide chains covalently linked by a pair of C-terminal disulfide bonds. This glycoprotein has been implicated in a wide variety of cell functions, particularly those involving cell adhesion and migration. For expression analyses, RNA was prepared (CatNo-19503-Qiagen) and cDNA was synthesized from 1 µg of RNA, using SuperScript II® (CatNo-18064-014-Invitrogen), according to the manufacturer's protocols. Construction of pcDNA3.1-full-length fibronectin (pcDNA3-flFN) Full-length FN (embryonic) was amplified from cDNA synthesized from SW480 cell (ATCC, CCL-228™) RNA. cDNA synthesis was performed as described above, but a panel of 23 reverse primers of 15-mer (Supplementary Table 1) was used instead of random primers, to ensure the specificity and enrichment of FN cDNA; cDNA synthesis proceeded for 4 hours. PCR was performed using an Expand Long Template PCR System (CatNo-11681834001-Roche), according to the manufacturer's instructions. Primers were designed as instructed by the pcDNA3.1 Directional TOPO Expression kit protocol (CatNo-K4900-40-Invitrogen) (Supplementary Table 1). The amplified product was visualized by 1.2% (w/v) agarose gel electrophoresis with 0.005% (v/v) ethidium bromide. The DNA fragment of the expected size was extracted using a QIAquick Gel Extraction kit (CatNo-2870-Qiagen), according to the manufacturer's protocol. The pcDNA3-flFN expression vector was constructed using the pcDNA™3.1 Directional TOPO Expression kit (CatNo-K4900-40-Invitrogen) and cloned by transforming One Shot TOP10 Chemically Competent Escherichia coli (Invitrogen), according to the manufacturer's protocol. Plasmids were isolated using a Plasmid DNA MiniPreps kit (EasySpin), according to the manufacturer's instructions. Sequencing reactions were performed with the BigDye® Terminator v3.1 Cycle Sequencing kit (CatNo-4337456-Applied Biosystems). Automated sequencing was performed on an ABI Prism™ 310 Genetic Analyzer (Applied Biosystems) and analyzed with Sequencing Analysis 3.4.1 software.
Real-time PCR and human metastasis array Real-time PCR was performed using 1 µl of cDNA, synthesized as described above, and SYBR® Green Master Mix (CatNo-4309155-Applied Biosystems), according to the manufacturer's instructions. 18S rRNA was used as the housekeeping gene. Samples were analyzed in triplicate. Reactions were run on an ABI Prism® 7900HT (Applied Biosystems). For apoptosis analysis, cells were rinsed with PBS 1X and resuspended in 0.1% (w/v) BSA in PBS 1X. Cells were incubated with FITC-annexin V (CatNo-640906-BioLegend) (diluted 1:40 in annexin binding buffer) for 15 minutes at room temperature. Cells were rinsed with 0.1% (w/v) BSA in PBS 1X and resuspended in annexin V binding buffer. Acquisitions were performed on a FACSCalibur (BD Biosciences) and analyzed with CellQuest Pro software. The cell aggregation assay was performed in triplicate by coating the wells of a 48-well plate with 150 µl of soft-agar solution and subsequently seeding cells (4 × 10⁴ cells per well). Aggregation was evaluated at 24 h under an Olympus CK2 inverted optical microscope (Olympus). For the in vitro wound healing assay, cells were grown in 3.8 cm² tissue culture wells to a confluent monolayer. In each well, a scratch was made with a P20 pipette tip along the length of the well. After the scratch, the culture medium was replaced to remove detached cells. A time-lapse experiment was performed and followed under an Olympus CK2 inverted optical microscope (Olympus). Phase images were acquired at 0, 8, 24 and 36 hours. Statistics Data were analyzed using two-tailed t-tests in Excel (Microsoft) and GraphPad Prism 5. Statistically significant changes were determined at p < 0.05. Generation of the HCT15-FN colorectal cancer cell line To study the role of FN in modulating tumor onset and progression, a colon adenocarcinoma cell line (HCT15) overexpressing FN was generated by stable transfection with the human embryonic FN gene (full-length FN, pcDNA3-flFN). This cell group was compared to its wild-type counterpart (HCT15 control) in in vitro and in vivo assays. As shown in Figure 1A, FN-transfected cells overexpressed FN at the mRNA level; this was further confirmed at the protein level (Figure 1B). Interestingly, HCT15-FN cells also overexpressed integrins alpha3 and beta3 when compared to HCT15 control cells. FN overexpression results in increased cell proliferation, larger cell aggregates and reduced apoptosis To evaluate the role of FN overexpression in cell proliferation, growth curves were measured at different time points (0, 3, 6, 24 and 30 hours) (Figure 1C). The data showed that HCT15-FN had a higher growth rate than HCT15 control cells. The effect of FN overexpression on cell-cell adhesion was also evaluated by the slow aggregation assay (Figure 1D). An increase in cell-cell adhesion was observed for cells overexpressing FN, along with the formation of larger aggregates than in HCT15 control cells. Apoptosis was assessed by Bax/Bcl-2 ratios (Figure 2A) and annexin V staining (Figure 2B). Bax (pro-apoptotic) and Bcl-2 (anti-apoptotic) expression was determined by Western blotting. The results showed a lower Bax/Bcl-2 ratio for HCT15-FN in comparison to HCT15 control cells, suggesting a lower activation of apoptosis. Annexin V staining was used to quantify the number of cells undergoing apoptosis within each cell group (Figure 2B). The data showed that HCT15-FN cells undergo apoptosis at a statistically significantly (p < 0.05) lower rate than HCT15 control cells at the 3 and 30 hour time points.
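The relative quantification against the 18S housekeeping gene reported above is commonly computed with the Livak 2^-ΔΔCt method; the paper does not spell out the formula, so the sketch below assumes it. All Ct values are illustrative.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative quantification (a standard method assumed here;
    the paper states relative quantification against 18S but does not give
    the formula). Inputs are mean Ct values of triplicate reactions."""
    d_ct_sample = ct_target - ct_ref              # normalize to 18S in the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize to 18S in the control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g., fold change of CXCR4 in HCT15-FN vs HCT15 control (illustrative Ct values)
fold = relative_expression(24.1, 12.0, 26.3, 12.1)
```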
Zymography Culture supernatants were concentrated using Amicon® Ultra-4 Centrifugal Filter Units (Millipore). SDS loading buffer 5X was added to each sample, and the mixtures were loaded on a 12% polyacrylamide gel containing 0.1% (w/v) gelatin. Electrophoresis was performed with a Mini-PROTEAN Tetra Electrophoresis System (Bio-Rad) at 150 V for 90 minutes in TGS buffer 1X (Bio-Rad). The gel was incubated in renaturing buffer (25% (v/v) Triton X-100) for 1 hour with agitation and then incubated overnight at 37 °C in collagenase buffer. Staining was performed with 0.5% (w/v) Coomassie Blue R-250 (CatNo-20278-Thermo Scientific) for 30 minutes, followed by destaining with distilled water, with agitation. The gel was scanned and bands were quantified using ImageJ software. After 2 months, mice were euthanized in a CO₂ chamber. Tumor volume was determined as (minor axis² × major axis)/2. Tumors and the main target organs for metastasis were collected for histological analysis. SDF-1:CXCR4 axis blockade reduces tumor progression and metastasis in vivo Considering the relevance of the SDF-1:CXCR4 axis in cancer progression [22,23], we sought to investigate whether this chemokine axis could be responsible for the overall effects of FN overexpression. For this purpose, and after confirming the differential expression of CXCR4 at the protein level (Figure 3C), in vivo experiments were performed to assess the effect of FN overexpression on tumor growth and to determine the effect of CXCR4 blockade on the FN-mediated effects. First, the tumorigenic capacity of HCT15 control and HCT15-FN cells was assessed by generating orthotopic xenograft tumors in 6-week-old BALB/c-SCID mice. As shown in Figure 4A, survival was longer in mice inoculated with HCT15 control cells, with all animals alive at the end of the experimental period; in contrast, for mice inoculated with HCT15-FN, overall survival was reduced by 75% by the end of the experiment and the mice had to be euthanized. In parallel, we treated mice inoculated with the HCT15 control or HCT15-FN cell lines with the CXCR4 antagonist AMD3100. Notably, 100% of the AMD3100-treated mice, from both the HCT15 control and HCT15-FN groups, survived until the end of the experiment. Necropsy revealed larger primary tumors in mice inoculated with FN-overexpressing HCT15 cells than in mice inoculated with HCT15 control cells (Figure 4B and Figure 4C). Tumors from mice treated with AMD3100 were, however, smaller than those of the respective control group (Figure 4C). Concerning metastasis, histological analysis of liver and lungs revealed more micrometastases in mice injected with HCT15-FN cells. In sharp contrast, mice treated with AMD3100, regardless of being inoculated with HCT15 control or HCT15-FN cells, did not develop metastases (Figure 4D). FN overexpression is associated with higher proteolytic activity and higher migratory capacity of HCT15 cells Another attribute of cancer is the ability of cells to migrate from their original location to invade the surrounding tissues and, ultimately, metastasize to distant organs [21]. During these stages, ECM proteolysis is crucial, and MMPs assume an important role in this context [5,7]. The ability of FN to interfere with MMP production/secretion and/or activity was assessed by gelatinolytic zymography of cell culture conditioned media (Figure 2C). The results revealed higher enzymatic activity of MMP-9 and MMP-2 in HCT15-FN cells than in HCT15 control cells.
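The tumor volume formula and the two-tailed t-test at p < 0.05 described in the Statistics subsection can be expressed compactly as below. The volumes in the example are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

def tumor_volume(minor_axis_mm, major_axis_mm):
    """Tumor volume as defined above: (minor axis^2 x major axis) / 2."""
    return (minor_axis_mm ** 2 * major_axis_mm) / 2.0

# Two-tailed t-test at alpha = 0.05, as in the Statistics subsection
# (illustrative tumor volumes, mm^3).
control = np.array([180.0, 210.0, 195.0, 225.0])
fn_cells = np.array([320.0, 290.0, 350.0, 310.0])
t, p = stats.ttest_ind(fn_cells, control)  # two-tailed by default
significant = p < 0.05
```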
In vitro wound healing assays were performed to assess the role of FN in the migratory rate of cells, by comparing HCT15 control and HCT15-FN (Figure 2D and Figure 2E). HCT15-FN showed a higher migratory capacity than HCT15 control cells. FN overexpression induces CXCR4 and FLT4 expression We explored the mechanisms underlying the phenotypes described above, related to FN overexpression. For this purpose, an array of genes related to human metastasis was run for HCT15 control and HCT15-FN. The data were validated by a second real-time PCR (Figure 3A). Statistically significant (p < 0.05) higher expression of CXCR4, FLT4 and MMP13, and lower expression of MTA1 and TIMP3, was found for HCT15-FN in comparison to HCT15 control cells. Given these results, another real-time PCR was carried out targeting the CXCR4 and FLT4 ligands, SDF-1 and VEGF-C, respectively (Figure 3B). A statistically significant (p < 0.05) increase in SDF-1 and VEGF-C expression was also seen in HCT15-FN in comparison to HCT15 control cells. A link between FN and MMPs has been suggested by some studies using different cancer models [14,15,31-33]. The correlation between FN and MMP expression was further supported by the PCR metastasis array, which showed increased expression of MMP-13 and decreased expression of tissue inhibitor of metalloproteinases-3 (TIMP-3), which targets MMPs including MMP-2, -9 and -13 [7]. As mentioned, ECM proteolysis, mainly by MMPs, is one of the most important events accompanying cell migration [34]; the data reported here, showing increased migration and MMP activity in HCT15-FN cells, are therefore in agreement. Next, we explored the mechanisms underlying the correlation between FN and further tumor progression by performing a screening of human metastasis targets. The results revealed increased expression of CXCR4 and FLT4 and of their respective ligands, SDF-1 and VEGF-C, in HCT15-FN in comparison to HCT15 control cells. Considering the wide spectrum of roles of the SDF-1:CXCR4 axis [22,23], we hypothesized that this could be an important causal link between increased FN amounts and a more aggressive phenotype in colorectal cancer. To corroborate the in vitro findings and to validate this assumption, two in vivo experiments were developed in parallel. First, the survival of mice inoculated with HCT15-FN was shown to be lower than that of mice injected with parental/control cells. Tumors formed by HCT15-FN cells were also larger than those formed by HCT15 control cells (Figure 4B and Figure 4C), and mice inoculated with HCT15-FN showed more metastases than HCT15 control mice (Figure 4D). These observations are consistent with the in vitro results. Second, CXCR4 blockade was shown to promote survival by reducing tumor growth and metastasis formation, both in mice inoculated with HCT15 control cells and, more markedly, in those inoculated with HCT15-FN cells.
Taken together, the SDF-1:CXCR4 axis emerges as a causal link between increased FN amounts and a more aggressive phenotype in colorectal cancer; a role for SDF-1:CXCR4 has already been described in several malignancies, as discussed below. Discussion The ECM represents a critical player in oncogenic transformation. The several steps of cancer are largely dependent on the permissive nature of the microenvironment, and ECM proteolysis assumes great importance in this context [24-26]. FN is one of the most abundant components of the ECM [2], and several studies have related FN levels to tumor progression: in cancer patients, an increase in the levels of both FN and FN fragments was observed [6,10,11,27], and an equivalent correlation has also been observed in several reports of functional studies on cancer cell lines [11,13-16]. In the context of solid tumor growth, colorectal cancer assumes a relevant position as one of the most common and lethal malignancies worldwide [28]. In order to explore the role of FN in tumor development and progression, colon cancer was chosen as the model. For this purpose, a stably FN-overexpressing HCT15 cell line was generated, whose features were compared to its wild-type counterpart (HCT15 control) in in vitro and in vivo assays. Interestingly, the increase in FN in HCT15-FN cells was accompanied by an increase in α3 and β3 integrin expression, which may suggest a cross-regulatory mechanism, since these integrins are FN receptors [29]. Subsequent detailed characterization of the FN-transfected cell line versus its parental counterpart revealed that the former had a higher proliferative rate, was able to form larger cell aggregates, showed lower levels of apoptosis, and had an increased migratory capacity. MMP-9 and MMP-2 enzymatic activities were also increased in HCT15-FN, accounting for the increased invasion capacity. Beyond the evidence that MMP activity is strongly associated with tumorigenesis [7], MMP-9 and MMP-2 are thought to be crucial for metastasis formation [5,8,9,30]. These increases in MMP activity might be related to the higher amounts of FN in the supernatants of HCT15-FN cells, which is cleaved by MMPs [7,11], and/or FN may activate signaling pathways responsible for MMP regulation, as has been suggested in other cancers, including breast [35], lung [36], prostate [37] and pancreatic cancer [38]. CXCR4 expression has also been shown to be associated with poor prognosis in colorectal cancer patients [39,40] and with the presence of distant metastases in patients with colorectal cancer, as well as with tumor cell migration in vitro [41,42]. Studies on prostate cancer cell lines have also linked CXCR4 to increased cell migration and metastasis, through its regulation of MMPs [43,44] and VEGF [43].
On the other hand, the inhibition of either the expression or the function of CXCR4 has produced beneficial (clinical) effects in different neoplasms [35,45-49]. Moreover, the SDF-1:CXCR4 axis has been related to ECM components, namely FN, in the context of malignancy. A study on breast cancer showed the involvement of chemokines in tumor cell migration and motility, attributing an important role to CXCR4 in guiding breast cancer cells to target organs through cross-signaling with integrins in an ECM-dependent way [35]. SDF-1-induced integrin activation through CXCR4 in small cell lung cancer also appeared to cooperate in mediating adhesion, namely to FN, and chemoresistance [49]. SDF-1 was likewise shown to affect the metastasis-related behavior of colorectal cancer cells expressing CXCR4, including an increase in cell proliferation, cell adhesion to FN and cell migration [42]. Conclusions In conclusion, our findings suggest an active role for FN in promoting colorectal cancer progression and metastasis formation, via integrin modulation and by promoting the SDF-1:CXCR4 interaction. These studies shed light on the mechanisms whereby an ECM component perturbs and favors neoplastic progression, and they reveal novel targets for therapeutic intervention in the setting of invasive colorectal cancer. Figure 2: FN overexpression reduces apoptosis and is associated with a higher cell migration rate. A) Assessment of Bax/Bcl-2 ratios, by Western blotting, for control and FN-transfected cells. β-actin was used as the housekeeping protein. One hundred µg of protein were loaded in each lane. Bands were quantified using ImageJ software. B) Apoptosis analysis by annexin V-7AAD staining in control and HCT15-FN cells at 0, 3, 6, 24 and 30 hours (*p < 0.05). C) Zymography of cell media supernatants. D) Wound healing assay for control and transfected cells at 0, 8, 24 and 30 hours. Phase microscopy (original magnification: 200x). E) Quantification of wound closure at 0, 8, 24 and 30 hours. Bands were quantified using ImageJ software. Error bars represent standard deviation. Figure 3: Screening for metastasis-related targets. A) Validation of the PCR array for human tumor metastasis by real-time PCR. Relative quantification results were compared between control and HCT15-FN (*p < 0.05). B) Real-time PCR for SDF-1 and VEGF-C expression. Relative quantification of mRNA levels in control and FN-transfected cells (*p < 0.05). C) Immunoreactivity for CXCR4 in control and FN-transfected cells. Fluorescence microscopy (original magnification: 200x). Nuclei were stained with DAPI (blue). Scale bar: 100 µm. Error bars represent standard deviation. Figure 4: Orthotopic xenograft tumor induction with control and HCT15-FN cells under standard conditions and upon treatment with the CXCR4 antagonist AMD3100. A) Kaplan-Meier survival curves for mice inoculated with control and HCT15-FN cells, over 8 weeks. B) Macroscopic observation of orthotopic xenografts in BALB/c-SCID mice; representative images of animals inoculated with control and FN-transfected cells. C) Primary tumor volumes (*p < 0.05). D) H&E and immunofluorescence (anti-CK19; FITC); representative images of the lungs of animals inoculated with control and HCT15-FN cells, injected or not with AMD3100; in H&E, black arrows indicate metastasis foci. Light and fluorescence microscopy (original magnification: 100x). Scale bar: 50 µm. Error bars represent standard deviation.
3,934
2016-12-31T00:00:00.000
[ "Biology", "Medicine" ]
Optically Transparent Metamaterial Absorber Using Inkjet Printing Technology An optically transparent metamaterial absorber fabricated using inkjet printing technology is proposed. In order to make the metamaterial absorber optically transparent, an inkjet printer was used to fabricate a thin conductive loop pattern. The loop pattern had a width of 0.2 mm and was located on the top surface of the metamaterial absorber, and polyethylene terephthalate (PET) films were used for the substrate. An optically transparent conductive indium tin oxide (ITO) film was introduced as the bottom ground plane; the proposed metamaterial absorber is therefore optically transparent. The metamaterial absorber was evaluated by full-wave electromagnetic simulation and measured in free space. In the simulation, the 90% absorption bandwidth ranged from 26.6 to 28.8 GHz, while the measured 90% absorption bandwidth was 26.8-28.2 GHz; the design is thus successfully demonstrated by both simulation and measurement. Conventionally, high-loss materials have been used to fabricate electromagnetic wave absorbers. For example, a wedge-tapered absorber, based on ferrite or a composite material, can absorb electromagnetic waves very effectively [28-30]. However, this absorber is bulky and costly. The Jaumann absorber was proposed in 1994 to overcome these drawbacks of the wedge-tapered absorber [31]. The Jaumann absorber is based on a resistive sheet and has a resonance structure; moreover, it has a small size and high absorptivity. However, the material thickness should be a quarter wavelength (λ/4), so at low frequencies the absorber becomes bulky. Recently, the development of transparent conductive materials has led to research on optically transparent electromagnetic devices [41-43]; this futuristic topic elicits interest in optically transparent metamaterial absorbers. Numerical Simulations A numerical full-wave simulation using the ANSYS High-Frequency Structure Simulator (HFSS, ANSYS, Washington, PA, USA) was used to design the proposed metamaterial absorber. Figure 1 shows the geometry of the unit cell of the proposed metamaterial absorber. To achieve optical transparency, apart from using two PET films (dielectric constant εr = 3 and loss tangent = 0.12) for the substrate and an ITO film for the bottom layer, as shown in Figure 1a, a thin square conductive loop of width 0.2 mm was introduced on the top layer of the PET substrate. The dimensions of the substrate and conductive pattern were a = 3 mm, w = 0.2 mm, and l = 2 mm, where a is the side length of the substrate and w and l are the width and length of the conductive pattern, respectively. Adhesive tape of thickness 0.05 mm (t2) (εr = 3 and loss tangent = 0.05) was used to bind the two PET substrates, as shown in Figure 1c. The thicknesses of the upper and lower PET substrates were t1 = 0.25 mm and t3 = 0.2 mm, respectively. The bottom layer was fully covered with a 5 Ω (Rs) ITO conductive sheet to prevent wave transmission. In order to achieve the best performance and quantify the sharpness of the resonance, the parameter values were determined through a parametric simulation study. The sharpness (quality factor) of the resonance can be defined as Sharpness (Q) = fr/(f2 − f1), where fr is the resonance frequency and f1 and f2 are the lower and upper frequencies of the 3 dB bandwidth, respectively. Figure 2 shows the simulated S-parameters for different values of the parameters.
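The sharpness definition above reduces to the usual quality-factor computation. The snippet below evaluates it; the 3 dB bandwidth in the example is chosen to reproduce the Q of 413 quoted in the parametric study and is therefore an inferred, illustrative value.

```python
def sharpness(f_r, f_1, f_2):
    """Quality factor from the definition above: Q = f_r / (f_2 - f_1)."""
    return f_r / (f_2 - f_1)

# e.g., a Q of ~413 near 27.7 GHz corresponds to a ~67 MHz 3 dB bandwidth
q = sharpness(27.7e9, 27.7e9 - 33.5e6, 27.7e9 + 33.5e6)  # ~413
```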
When the conductive loop width was varied over 0.1, 0.2, and 0.3 mm, the resonance frequency moved from 26.5 to 29 GHz and the sharpness took values of 35, 413, and 48, respectively, as shown in Figure 2a. The width was set at 0.2 mm on the basis of the proposed concept and the capability of the fabrication process. Next, when the length was varied over 1.8, 2.0, and 2.2 mm, the resonance frequency moved from 31.5 to 24 GHz and the sharpness took values of 64, 413, and 95, respectively, as shown in Figure 2b. The length was set to 2.0 mm to match the target resonance frequency of 28 GHz. The thickness of the unit cell was then varied to analyze the correlation between the substrate thickness (t) and the simulation result: for thicknesses of 0.4, 0.5, and 0.6 mm, the resonance frequency changed slightly from 28 to 27 GHz and the sharpness took values of 27, 413, and 95, respectively, as shown in Figure 2c. Lastly, when the sheet resistance of the bottom ground plane was varied from 3 to 7 Ω, the transmission coefficient varied from −31 to −27 dB and then to −24 dB, as shown in Figure 2d.
The absorptivity A(ω) can be defined as A(ω) = 1 − |Γ(ω)|² − |T(ω)|², where Γ(ω) and T(ω) are the reflection coefficient and transmission coefficient, respectively. Since the proposed metamaterial absorber is fully covered with a conductive ground plane on the bottom layer, the transmission coefficient is essentially zero. Therefore, the highest absorptivity is achieved by minimizing the reflection coefficient, which can be written as Γ(ω) = (ZM − Z0)/(ZM + Z0), where Z0 is the free-space impedance (377 Ω) and ZM is the metamaterial absorber impedance. Thus, the highest absorptivity is achieved when Z0 and ZM are equal, since the reflection coefficient is then minimized. To verify this, the normalized impedance and the reflection coefficient were simulated; these parameters are shown in Figure 3. Figure 3a shows the normalized impedance: the proposed metamaterial absorber has a real impedance of 365.2 Ω and an imaginary impedance of −3.99 Ω at 27.7 GHz, corresponding to a normalized real impedance of 0.95 and a normalized imaginary impedance of 0. The matched impedance corresponds to the minimized reflection coefficient shown in Figure 3b: the proposed metamaterial absorber has a reflection coefficient of −35.1 dB at 27.7 GHz, yielding 99.9% absorptivity at this frequency.
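The absorptivity relation above can be evaluated directly from simulated or measured S-parameters in dB, as sketched below; treating the transmission as zero reflects the grounded structure described in the text.

```python
def absorptivity(s11_db, s21_db=None):
    """A(w) = 1 - |Gamma(w)|^2 - |T(w)|^2; for the grounded absorber T ~ 0."""
    refl = 10.0 ** (s11_db / 20.0)                       # linear |Gamma|
    trans = 0.0 if s21_db is None else 10.0 ** (s21_db / 20.0)
    return 1.0 - refl**2 - trans**2

# The quoted -35.1 dB reflection at 27.7 GHz gives ~99.97% absorptivity,
# consistent with the 99.9% figure reported above.
a = absorptivity(-35.1)
```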
Figure 4 shows the simulated electric field distribution and vector current density at 27.7 GHz. In Figure 4a, the electric field is strongly concentrated at both edges of the square ring, implying that the width and length of the ring are the main factors determining the resonance frequency; this can be verified from Figure 2a,b. Similarly, the vector current density shows strong flows at the top and bottom sides of the square ring (Figure 4b). Additionally, antiparallel flows were observed, which form part of a circulating loop, as shown in Figure 4c. Figure 5 shows the simulated reflection coefficient of the proposed metamaterial absorber for different incident angles. The proposed metamaterial absorber has a 10 dB bandwidth from 26.6 to 28.8 GHz under normal incidence. Under oblique incidence, the 10 dB bandwidth of 26.6-28.8 GHz is maintained up to 10° in the transverse electric (TE) mode and up to 20° in the transverse magnetic (TM) mode. However, as the incident angle is varied from 0° to 90°, the 10 dB bandwidth shifts to 26.4-28.5 GHz at 20° in the TE mode and to 26.8-29 GHz at 30° in the TM mode.
Nevertheless, the angular stability of the proposed absorber is not competitive with metamaterial absorbers designed for wider incidence angles [54-56], because this work focused on optical transparency; an optically transparent metamaterial absorber with angular stability will be the subject of future work. Experimental Measurements To experimentally verify the simulation results, we fabricated a prototype of the metamaterial absorber. The conductive square ring was printed using a FUJIFILM Dimatix materials printer (DMP-2831, FUJIFILM, Minato, Tokyo, Japan) with a 1 pl cartridge (DMC-11601, FUJIFILM, Minato, Tokyo, Japan) and ANP silver nanoparticle ink (DGP 40LT-15C, ANP, Bugang-myeon, Sejong, Korea). To realize the 200 µm line width, the printed line width was set to 150 µm, because the printed line spreads slightly. The vertical and horizontal lines were inspected at different drop spacings, as shown in Table 1. At a drop spacing of 15 µm, the vertical and horizontal lines are unstable because the ink spills out of the line. When the drop spacing is increased from 15 to 25 µm, Table 1 shows that the vertical line becomes stable, but the horizontal line is still not clear. When the drop spacing is increased from 25 to 35 µm, both vertical and horizontal lines are clearly printed. However, when the drop spacing is increased further, to 45 and 55 µm, the vertical and horizontal lines become discontinuous because the drop spacing is too large. Therefore, the drop spacing was set to 35 µm to realize the best line shape. Furthermore, at a drop spacing of 35 µm, the vertical and horizontal line widths show tolerances of only 2 and 4.5 percent, respectively, as shown in Figure 6. Finally, the cartridge head was set at 0° and the drop spacing fixed at 35 µm, and three nozzles at 100 dpi resolution were used to print the designed pattern.
The measured line widths are shown in Figure 7, and they were very close to the 0.2 mm value used in the simulation. Figure 8 shows a schematic and a photograph of the measurement setup. For the prototype measurements, the distance between the prototype sample and the horn antenna was set to 0.5 m to satisfy far-field conditions. To avoid unwanted reflected signals, the measurement was performed in an anechoic chamber, and a Salisbury screen absorber was placed behind the sample. An Agilent E8361A programmable network analyzer (AGILENT, Santa Clara, CA, USA) and two horn antennas (frequency range: about 26.5-33 GHz) were used for the measurement, as shown in Figure 8a. The reflected signals were measured from the metamaterial plane and from the reverse side of the ground plane, and the two were compared to obtain the reflection coefficient, referenced to the reverse side of the ground plane.
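The referencing step described above, where the absorber's reflected signal is normalized to the reflection from the reverse (ground-plane) side, can be sketched as a simple complex ratio. The function below is an assumed reading of that normalization, not the authors' processing code.

```python
import numpy as np

def referenced_reflection_db(s11_sample, s11_reference):
    """Free-space reflection coefficient referenced to a near-perfect
    reflector: complex S11 measured off the absorber divided by S11
    measured off the reverse (ground-plane) side of the sample."""
    gamma = np.asarray(s11_sample) / np.asarray(s11_reference)
    return 20.0 * np.log10(np.abs(gamma))
```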
Hence, from the numerical simulations and experimental measurements, one can infer that the proposed metamaterial absorber achieves high absorptivity together with polarization insensitivity.

Conclusions

This paper proposes an optically transparent metamaterial absorber fabricated using inkjet printing technology. To make the metamaterial absorber optically transparent, a thin conductive loop pattern was introduced on the top surface and PET films were used for the substrate. The thin conductive pattern was prepared using an inkjet printer, and its width was 0.2 mm. Thus, an optically transparent metamaterial absorber was realized with a narrow conductive pattern and an optically transparent substrate. The metamaterial absorber was simulated using a full-wave electromagnetic simulator and measured with a free-space measurement setup. The numerical simulation indicated that the 90% absorption bandwidth of the metamaterial absorber ranged from 26.6 to 28.8 GHz, while experimental measurements yielded a range from 26.8 to 28.2 GHz. Furthermore, the proposed metamaterial absorber has a polarization-insensitive characteristic. Both the numerical simulation and the measurement results successfully demonstrate these properties.
5,499.2
2019-10-01T00:00:00.000
[ "Materials Science" ]
Ultra-precise optical to radio frequency based chip-scale refractive index and temperature sensor

Chip-scale high-precision measurements of physical quantities such as temperature, pressure, refractive index, and analytes have become common with nanophotonic and nanoplasmonic resonance cavities. Despite several important accomplishments, such optical sensors are still limited in their performance in the short and, in particular, the long time regimes. Two major limitations are environmental fluctuations, which are imprinted on the measured signal, and the lack of miniaturized, scalable, robust and precise methods of measuring optical frequencies directly. Here, by utilizing a frequency-locked loop combined with a reference resonator, we overcome these limitations and convert the measured signal from the optical domain to the radio-frequency domain. By doing so, we realize a highly precise on-chip sensing device with sensing precision approaching 10⁻⁸ in effective refractive index units, and 90 μK in temperature. Such an approach paves the way for single particle detection and high-precision chip-scale thermometry.

INTRODUCTION

Resonance cavities are excellent transducers that convert small variations in the local refractive index into measurable spectral shifts. As such, these cavities are used extensively in a variety of disciplines ranging from, e.g., bio-sensing [1][2][3][4][5], chemical sensing [6], temperature sensing [7], and pressure gauges [8][9][10] to atomic and molecular spectroscopy [11]. Specifically, chip-scale microring and microdisk resonators (MRRs) are widely used for these purposes [12][13][14] owing to their miniaturized size, relative ease of design and fabrication, high quality factor, and versatility in the optimization of their transfer function.

The principle of operation of such resonant sensors is based on monitoring the wavelength dependence of the resonator subject to minute variations in its surroundings (e.g., different types of atoms and molecules, gases, pressure, temperature). Traditionally, wavelength monitoring has been achieved either by comparing the spectra prior to and after the sensing event, or by monitoring the resonator's temporal intensity variations at a fixed frequency. Yet both techniques are susceptible to thermal drifts and other noise sources of both the MRR and the interrogating laser, which limit the sensitivity and accuracy of such measurements in both the long and the short terms. Thus, in order to monitor such minute perturbations to the refractive index over time (representing, for example, the temporal changes in the concentration of a molecule), one needs to have both the MRR and the laser fully stabilized. The level of such stabilization will dictate the sensitivity limit of the system. Considering a silicon photonic chip operating as a refractive index sensor, with a target refractive index sensitivity of 10⁻⁸, which is beyond the current state of the art, one needs to stabilize the MRR to the ∼100 μK regime, while the laser needs to be stabilized to the megahertz level. Stabilization to such values is highly challenging. In order to overcome frequency uncertainties and enable real-time and precise sensing, it is desirable to implement a differential sensing scheme. Indeed, such schemes have been explored using either external reference systems or, more recently, by the use of a reference MRR on-chip [14,15].
While the concept of a reference resonator provides a significant advance, the implementation of highly precise sensing is still limited by the quality of the local oscillator, e.g., the laser that is being used, and by the ability to precisely define the resonance frequency using conventional spectroscopic measurements. A promising approach for overcoming these bottlenecks is the implementation of active frequency stabilization schemes. Frequency modulation (FM) spectroscopy, wavelength modulation (WM), and the similar Pound-Drever-Hall technique are widely used to lock radio and optical frequencies to a desired resonance frequency and to measure the dispersive properties of resonant phenomena [16][17][18]. In these methods, a signal proportional to the frequency difference between the local oscillator and the resonator (coined an error signal) is fed back to the local oscillator, and consequently aligns the laser to the desired resonance. Indeed, by using such schemes the stabilization of different sources has been made possible [19,20]. Moreover, one can simultaneously monitor the resonator frequency variations with high precision [21]. It should be noted that the precision of such frequency measurements can substantially exceed what the Q-factor alone would suggest. For example, in an atomic clock a frequency uncertainty (Δf/f) of 10⁻¹¹ at a time constant of 1 s is achieved [22], often 5 orders of magnitude smaller than the inverse Q-factor of the atomic line.

Here, taking advantage of concepts borrowed from the well-established field of frequency metrology, and exploiting the differential immunity of our cascaded MRR system to environmental perturbations, we demonstrate a sensing platform enabling highly precise sensing at both short and long times. Specifically, we frequency lock two independent lasers to two MRRs, which are situated in the vicinity of each other on the same chip. As the frequency difference between the MRRs is in the radio-frequency (RF) regime, our system has the capability to transduce minute environmental perturbations (e.g., in the form of pressure variations, temperature variations, or the presence of analytes and particles) into an RF signal. By doing so, and considering the ability to measure RFs exceptionally precisely, a conceptual breakthrough in nanophotonics-based sensing is achieved. To demonstrate the usefulness of our approach, we measure the difference between these two frequency-locked lasers by beating them upon a photodetector, achieving a highly sensitive sensor capable of measuring a refractive index uncertainty of 1.5 × 10⁻⁷ RIU · τ^(−1/2) (τ being the averaging time constant), with an unprecedented noise floor of 1.5 × 10⁻⁸ RIU, equivalent to a temperature uncertainty of about 90 μK at ∼200 s. Typical drifts were measured to be 10⁻⁷ RIU/h, enabling long-time stability at the state-of-the-art level and even beyond. One direct application of our system is the measurement of local temperatures and temperature gradients on a chip. This latter issue is becoming increasingly important in the modern era, when the local heating of central processing units (CPUs) is one of the major bottlenecks preventing the improvement of computer performance beyond the current state of the art. Having a CMOS-compatible platform that can perform such precise temperature measurements without being sensitive to electromagnetic noise is greatly needed. We address this issue by inducing a temperature gradient between the two MRRs. By doing so, we are able to distinctly demonstrate an on-chip local thermometer capable of
measuring temperature differences as low as 0.8 mK over the course of minutes, corresponding to a measurement uncertainty of 1.2 mK · τ^(−1/2).

A. Fabrication and Concept of Operation

A schematic representation of our differential sensing apparatus is sketched in Fig. 1(a), where we present a chip consisting of two cascaded MRRs coupled to a bus waveguide. The first (right MRR) serves as the reference MRR, while the second (left MRR) is the sensing unit, which is subject to the perturbation to be measured (analytes, temperature, pressure, etc.). We seek to monitor the refractive index variations of this sensing MRR, which manifest as a resonance frequency shift, as illustrated in Fig. 1(b). Here, two adjacent resonance lines (green line) originating from each of the cascaded MRRs are illustrated. The frequency difference between these two lines is 10 GHz, and is assumed to be relatively constant, as both MRRs are subject to the same environment. By applying a perturbation to the sensing MRR, this frequency difference will change, as illustrated by the blue line in Fig. 1(b). Thus, monitoring the frequency difference yields a precise and accurate method to measure small changes in refractive index, temperature, or pressure. Moreover, as the ability to measure RFs accurately (orders of magnitude better than what is reported in this paper) is readily available, relatively cheap, and miniaturized, the proposed method offers a prominent advantage with respect to optical differential schemes.

In Fig. 2(a) we present a scanning electron micrograph (SEM) of the two cascaded MRRs. The MRR chips presented here are fabricated using low-loss waveguides based on the concept of local oxidization of silicon (LOCOS) [23][24][25][26]. The waveguide dimensions were 450 nm width and 220 nm height, whereas the MRR radius was 30 μm. Next, we characterize the transmission spectrum of the cascaded MRRs. In Fig. 2(b), we plot the transmission of the MRRs as a function of wavelength. We observe a few distinct absorption dips separated by the free spectral range (FSR) of the MRR (corresponding to ∼3 nm). By closely examining the spectrum [Fig. 3(c)], we observe that each dip within an FSR consists of two distinct dips, with a separation of 78.5 pm corresponding to 10.03 GHz. Such a frequency separation complies with the abovementioned requirement for an easily detectable separation in the RF regime. Finally, the resonance of each dip is ∼30 pm wide, corresponding to a Q-factor of about 50,000, with an extinction of about 15 dB.

Next, in Fig. 3 we illustrate a schematic representation of the dual locking scheme. Each locking scheme relies on the acquisition of a signal proportional to the difference between the laser frequency and the MRR frequency. This error signal is fed back to the laser's frequency actuator. Specifically, two lasers, labeled "source A" and "source B," are combined and coupled to our cascaded MRR chip. Both lasers are wavelength modulated at two different frequencies, f1 and f2, in the few-100 Hz regime, and with a modulation depth corresponding to a fraction of the width of the resonators. The signal of both lasers is coupled through the bus waveguide to a lensed fiber connected to an InGaAs detector. The generated photocurrent feeds two lock-in amplifiers (LIAs), referenced to the two frequencies f1 and f2. The demodulated signals provide us with error signals that are redirected to sources A and B.
Such error signals, which can be viewed as the derivative of the MRR lineshape, provide the laser with the needed "correction" in order to be aligned with the MRR. An additional integrator serves as a "memory" for each individual servo-loop. This technique is well established, and further details and implementations can be found in, for instance, Refs. [27][28][29][30]. Finally, to monitor the frequency difference between the two MRRs, sources A and B are combined to illuminate a fast photodetector. The frequency beat signal is monitored using an RF frequency counter.

Finally, we add an electrical switch connected to the outputs of the modulations f1 and f2. We do so because using wavelength modulation, i.e., operating at low frequencies and relatively high modulation depths, yields a dithered beat spectrum. Such a spectrum is difficult to interpret. By switching off the modulation (and "freezing" the integrator's output), we are able to obtain clear beat signals. In future implementations, this switching can be avoided using either frequency modulation [16], where the modulation frequency has to exceed the resonator's width, or the previously demonstrated dither cancelation techniques [28].

B. Frequency-Locked Loop Results

Next, we apply the locking scheme presented earlier, and characterize the system's capabilities. To do so, we tune both lasers to the vicinity of the resonance dips and close the two servo-loops. Now, the two laser frequencies are aligned with two adjacent resonant frequencies, each originating from a different MRR. To verify that the difference in resonance frequencies between the two MRRs is indeed relatively stable over time, we monitor both integrators' outputs (representing the error signals) as a function of time. In Fig. 4(b) we plot typical error signals of both MRRs. One can clearly observe that the two error signals follow each other; thus the lasers are tracking each MRR individually in a correlated fashion. We attribute this correlation in the error signals mainly to temperature fluctuations in the room. Such temperature fluctuations affect both MRRs almost identically, as both MRRs are subjected to a very similar thermal environment: they are situated on the same chip, in close vicinity to each other, and are of the same dimensions and materials. Using the thermo-optic coefficient of silicon [31] (1.8 × 10⁻⁴ RIU/K), and by calibrating the frequency modulation transfer function of the lasers, we estimate room temperature fluctuations of about 0.1 K, which is a typical value measured in our laboratory over a time scale of 1 h. Finally, in Fig. 4(b), we plot the error signal difference. It can be readily seen that the deviations of the error signal difference are much smaller than the deviations of each individual error signal. The deviations of the error signals are of the order of a few gigahertz, whereas those of the error signal difference are 1.5 orders of magnitude lower. We note that error signals do not represent frequency adequately because they may incorporate laser drifts, as well as piezoelectric effects (such as hysteresis and creep). Indeed, a better choice is to analyze the beat signal directly, because the beat frequency can be fully attributed to the MRR frequency separation and is decoupled from the laser fluctuations.
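The following is a minimal simulation sketch of the wavelength-modulation locking loop described above: a Lorentzian resonance dip, a sinusoidal dither, lock-in demodulation yielding a dispersive error signal, and an integrator steering the laser onto resonance. All parameter values are illustrative, not the experimental ones.

```python
import numpy as np

# Illustrative parameters: resonance at f0 with half width gamma,
# laser started slightly off resonance (arbitrary frequency units).
f0, gamma = 0.0, 1.0
f_laser = 0.4
f_mod, depth = 400.0, 0.1      # dither frequency (Hz) and depth (fraction of gamma)
fs, n = 40_000, 4_000          # sample rate and samples per servo update
gain = 0.5                     # integrator gain

def transmission(f):
    """Lorentzian resonance dip of the microring."""
    return 1.0 - 0.9 / (1.0 + ((f - f0) / gamma) ** 2)

t = np.arange(n) / fs
for step in range(50):
    # Wavelength modulation and detected photocurrent.
    f_inst = f_laser + depth * gamma * np.sin(2 * np.pi * f_mod * t)
    signal = transmission(f_inst)
    # Lock-in demodulation at f_mod yields a dispersive error signal
    # proportional to the derivative of the lineshape at the laser frequency.
    error = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_mod * t))
    # The integrator ("memory") feeds the correction back to the laser.
    f_laser -= gain * error

print(f"residual detuning after locking: {f_laser:.4f}")
```

Because the demodulated error signal changes sign across the resonance, the integrator drives the laser detuning toward zero, which is the alignment mechanism described in the text.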
Next, following the above discussion, we analyze the beat frequency obtained by tapping about 10% of the signal emerging from each of the two lasers, combining them into a single fiber and detecting the combined signal using a fast photodetector connected to a frequency counter, as illustrated in the left portion of Fig. 3. For this purpose, a wafer-scale set of cascaded MRRs was fabricated by UV lithography, and a specific set of MRRs separated by ∼4 GHz was selected. We once again stress that this frequency separation is most likely a consequence of fabrication tolerances. In Fig. 5 we plot the normalized frequency difference (Δf/f) as a function of time, while switching among three modes of operation of the servo-loops to reveal the different stability characteristics of the system components. In Fig. 5(a) we present the case where, initially (denoted as t = 0), each of the two lasers is locked to its dedicated MRR, followed by the mode in which one laser is free running while the other is still locked, and finally once again both lasers are locked. As can be seen, when the lasers are locked to both MRRs we obtain a relatively constant frequency difference. This is in contrast to the case where only one laser is locked to an MRR and the system drifts. Next, in Fig. 5(b) we plot the measured normalized frequency difference for the case where initially each of the two lasers is locked to its dedicated MRR, and subsequently both lasers are free running. Once again, the system drifts significantly when the lasers are free running, representing the relative instability of the two lasers. The magnitude of this drift is appreciably lower than that presented in Fig. 5(a). We conclude that when operating with a single servo-loop, considering the relatively long time constants, we are most likely tracking the MRR drift, which is dominant with respect to the laser drift. Indeed, as the MRR temperature is not stabilized in any manner, such drifts, which correspond to a relative frequency drift of ∼10⁻⁵ over the course of ∼40 min, are highly likely, as they correspond to a temperature drift of ∼0.1 K (considering a thermo-optic coefficient of the order of ∼10⁻⁴/K).

Next, we turn to analyze the instability of the beat frequency, representing the frequency difference instabilities between the two MRRs. To comply with the widespread and conventional frequency stability analysis, we apply an overlapping Allan deviation to the measured normalized frequency presented in Fig. 5. The Allan variance is a highly common time-domain measure of frequency stability. Similar to the standard variance, it is a measure of the fractional frequency fluctuations, and yet it has the advantage of being convergent for most types of noise. For a discrete series of N measurements, the Allan variance can be defined as follows [32]:

σ_y²(τ) = 1 / [2(N − 1)] · Σ_{k=1}^{N−1} (ȳ_{k+1} − ȳ_k)²,

where ȳ_k is the fractional frequency of sample k, averaged over the time interval τ. Here, we use the overlapping Allan deviation, an implementation of the Allan deviation utilizing all possible combinations of the measured dataset. The normalized frequency is translated to refractive index units by multiplying the normalized frequency by the effective group index of the guided mode.
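A compact implementation of the overlapping Allan deviation defined above might look as follows; this is a sketch assuming evenly spaced fractional-frequency samples, with synthetic white-frequency noise used only to illustrate the expected 1/√τ averaging behavior.

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y for an
    averaging factor m (tau = m * tau0, with tau0 the sampling interval)."""
    y = np.asarray(y, dtype=float)
    # Averages of y over windows of length m, at every possible offset.
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # length N - m + 1
    d = ybar[m:] - ybar[:-m]                              # all overlapping pairs
    return np.sqrt(0.5 * np.mean(d ** 2))

# Example: white frequency noise averages down as ~1/sqrt(tau).
rng = np.random.default_rng(1)
y = 1e-7 * rng.standard_normal(20_000)
for m in (1, 10, 100):
    print(f"tau = {m:>4d} * tau0 : sigma_y = {overlapping_adev(y, m):.2e}")
```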
In Fig. 6 we plot the overlapping Allan deviation as a function of integration time τ for three different scenarios. The first, represented in blue, corresponds to the case where each of the lasers is locked to its dedicated MRR. The significance of applying this approach is twofold. First, it quantifies the sensor metrics in the time domain, i.e., it reveals the instability of the sensor at different time constants. Second, the Allan deviation is a very powerful tool to discern different noise sources by examining the slope at different time constants. For instance, as one can see, the instability of the locked system averages down at a rate of ∼1/√τ, revealing our system to be white-frequency-noise limited at these time constants (3-100 s) [33]. At a time constant of ∼200 s, the slope levels out, which is typical of frequency flicker noise. The sensor reaches a floor of ∼1.5 × 10⁻⁸, representing the ability to measure variations in the index of refraction with unprecedented precision approaching 10⁻⁸ at these time constants. Even if we take into account a realistic biosensing or gas-sensing scenario, in which the optical mode interacts with the analyte only partially due to the limited mode confinement in the cladding, we still maintain precision beyond the state of the art.

Next, we compare our result to the free-running case (green line) and the single-lock case (red line). Clearly, one can see the trend illustrated in Fig. 5 revealed in this analysis. The single-lock case, as well as the free-running lasers, exhibit significantly unstable operation at long times when compared to the double-MRR lock scheme. Such plots exemplify the long-time stability advantages of our system. A single MRR shows excellent short-time stability (∼3 s), comparable to the two rings. Yet, as laser drifts (green line) and temperature drifts of the MRRs (red line) become dominant, the system loses its ability to precisely measure refractive index changes. We note that even when we remove the relatively deterministic linear drift from our analysis (not shown here), both the free-running lasers and the single-MRR lock schemes still exhibit significant instability in comparison with our two-MRR system. We note that although both MRRs are subjected to the same thermal environment, varying temperature gradients across the chip (resulting from the absence of thermal management and/or isolation of our chip in the current demonstration) might induce slight fluctuations and drifts in the fractional frequency. Another mechanism that may induce such drifts is related to imperfections in the servo-loop, e.g., in the form of a parasitic (and drifting) input to the integrator. Indeed, we witness a small frequency drift of the order of 10⁻⁷ RIU/h. Investigation of the exact mechanism of these small and yet important drifts will be pursued in the future.

Fig. 6. Overlapping Allan deviation (based on the measurements in Fig. 5) of the refractive index, presented for three cases: a single laser locked to an MRR with the second laser free running (red line), two lasers free running (green line), and each of the two lasers locked to its dedicated MRR (blue line).

Generally, we further note that our laser's linewidth and overall performance affect the servo-loop performance. Indeed, our laser's linewidth is narrower than the linewidth of the ring resonances. In principle, lasers with larger phase noise would require longer integration times in order to achieve the same performance.

C.
Chip-Scale Thermometry

When translating the normalized frequency uncertainty of our double-locked MRR system into temperature sensitivity (right axis in Fig. 6), one reveals an unprecedented ability to implement precise on-chip thermometry. Here, a temperature difference precision of 1.2 mK · τ^(−1/2) with a floor of ∼90 μK at 200 s is predicted. To demonstrate such capabilities explicitly, we introduce a localized source of light illuminating one of the MRRs [see Fig. 7(a)], and thus, by direct absorption, create a deliberate temperature gradient. To create such localized illumination, we use a near-field optical microscope (NSOM), in which a fiber-coupled probe with an aperture of 300 nm is used in illumination mode. The NSOM probe is set to be in contact with the surface, and positioned at the center of one of the MRRs [Fig. 7(a)]. Operating at the wavelength of 980 nm, we expect the light to diffract into the silicon dioxide layer (having a thickness of 2 μm), and then be absorbed at the silicon substrate beneath it. Such a process generates a lateral heat gradient across the chip, which can be measured precisely using our cascaded double-locked MRR apparatus. In Fig. 7(b) we plot the temperature difference between the two MRRs as a function of time, calculated using the relation ΔT = n_g (Δf/f)/α, where α is the thermo-optic coefficient of silicon and n_g is the effective group index of refraction (see the numeric sketch below). The temperature gradient is controlled by varying the power coupled to the NSOM probe. To maintain a reference baseline, we turn off the laser between each illumination sequence, and compensate for a small linear drift. As we decrease the power level, we also intentionally increase the measurement time of the temperature difference in order to obtain a better signal-to-noise level.

As can be seen in Fig. 7(b) (inset), we apply optical power in the range of a few hundreds of nanowatts (calibrated separately using a photodetector) and obtain a linear temperature offset of a few millikelvin. For the last illumination sequences in Fig. 7(b) (corresponding to power levels of ∼90 and ∼70 nW; see dashed horizontal lines) we measure corresponding average temperatures of 3.09 mK and 2.32 mK, i.e., a difference of 800 μK between the two measurements. From the Allan deviation curve we estimate the uncertainty to be ∼90 μK at the measurement time constant (∼300 s). It is thus not surprising that we can easily differentiate between these two temperatures.

In recent years, there has been significant effort to construct optical chip-scale thermometers, with designs exploiting both photonic crystal cavities and MRRs. Such devices offer prominent advantages with respect to other temperature measurement techniques, as they offer high sensitivity, a large temperature range, and immunity to electromagnetic disturbance. The cascaded double-locking scheme presented here not only compares favorably in its sensitivity, but also allows one to keep this high degree of precision over long times, without the need to stabilize both the chip and the interrogating system. As such, our approach offers excellent prospects in terms of integration with relatively cheap and compact lasers, such as vertical-cavity surface-emitting lasers (VCSELs).
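A minimal numeric sketch of the conversion ΔT = n_g (Δf/f)/α follows. The thermo-optic coefficient is the value quoted in the text, while the group index, the optical carrier frequency and the beat-frequency shift are assumed illustrative values, not the paper's parameters.

```python
# Temperature inferred from a beat-frequency shift: Delta_T = n_g * (Delta_f / f) / alpha.
alpha = 1.8e-4     # thermo-optic coefficient of silicon [RIU/K], quoted in the text
n_g = 4.0          # assumed effective group index of the guided mode
f_opt = 193.4e12   # assumed optical carrier frequency [Hz] (~1550 nm)

delta_f = 1.0e5    # hypothetical beat-frequency shift [Hz]
delta_T = n_g * (delta_f / f_opt) / alpha
print(f"temperature difference: {delta_T * 1e3:.3f} mK")
```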
DISCUSSION

We have demonstrated an on-chip sensor capable of detecting unprecedentedly small frequency changes, which can be traced back to minute perturbations in refractive index or temperature. Our approach consists of two cascaded microring resonators; one serves as the sensing device, while the other plays the role of a reference, eliminating environmental and system fluctuations (temperature, laser frequency, etc.). By utilizing a servo-loop locking scheme, we are able to translate the measured effects from the optical domain to the radio frequency domain. By doing so, we can quantify our system capabilities utilizing well-established RF technologies, such as frequency counters, spectrum analyzers, and atomic standards.

Experimentally, we have locked two lasers to the two cascaded MRRs such that each laser is aligned to its respective MRR. By tracking the error signals of the servo-loops we note that, while each error signal drifts significantly (∼a few gigahertz), the difference between the error signals drifts about 2 orders of magnitude less (∼50 MHz). Generally, error signals cannot be directly mapped to frequency deviations (due to laser frequency drift, laser piezo hysteresis, etc.). Yet this result exemplifies the advantage of using a reference resonator to address environmental perturbations.

To fully exploit the system's capability to convert optical frequencies to the RF domain, we directly measured the beat frequency between the two lasers. First, by directly observing the frequency over time, we found that the laser drift is significantly smaller than the drifts of the MRRs in an uncontrolled environment. Hence, MRRs are shown to be unstable frequency references over long times, which limits their sensing capabilities. In contrast, by analyzing the beat signal of the locked lasers, we could observe a stable frequency, up to the level of ∼10 MHz/h. Utilizing the well-established techniques borrowed from the discipline of frequency metrology, we calculate the Allan deviation and find our system to be capable of observing extremely small perturbations in refractive index, down to the level of 7 × 10⁻⁹ at 200 s. The abovementioned value can be used for the purpose of temperature sensing down to the 50 μK level. Indeed, by taking advantage of the interband transition, we used a near-field light probe as a local heat source, and we explicitly demonstrated the usefulness of our approach for temperature-sensing applications. We could easily observe a temperature difference of 500 μK with a high signal-to-noise ratio.
While having an on-chip thermometer with 50 μK temperature precision is, to the best of our knowledge, the state of the art in the field, it is tempting to consider further advancements. The precision scales with the Q-factor and the signal-to-noise ratio. As we have recently reported, silicon MRRs with Q-factors of ∼5 × 10⁶ can be achieved [23]. Thus, assuming the same signal-to-noise ratio as in our current demonstration, one may expect temperature sensing precision below 1 μK and refractive index sensing in the 10⁻¹⁰ regime. Obviously, achieving such results would require additional system efforts, such as thermal management and mechanical stabilization. An additional important consideration for implementing a temperature sensor, in contrast to a refractive index sensor, is the need to spatially separate the temperature-sensing MRR from the reference MRR. For instance, to probe the variation in temperature across the chip, one would ideally deploy several MRRs at different locations across the chip. Another important issue is the ability to fully calibrate such sensors. Indeed, introducing a highly calibrated refractive index change is challenging, as a fully traceable refractive index standard does not exist. The challenge of obtaining a fully traceable sensor will probably be a topic of interest in the years to come.

Next, we discuss our system from the engineering perspective of implementing the above-demonstrated platform as a refractive index sensor. To implement a refractive index sensor while maintaining the same principle of operation mentioned above, one would like both MRRs to be subjected to the same cladding environment (apart from the sensing analytes). One approach, recently demonstrated by Kim et al., is to construct a liquid-based refractive index sensor having a common flow cell above both MRRs that is able to maintain the flow of two solutions above each MRR separately [15]. A second approach is to construct a gas-based sensor, where both MRRs have hollow chambers above them. Here, the reference MRR is encapsulated and the sensing MRR is exposed to the environment. Similar designs have been reported in the context of atomic spectroscopy, with rubidium atoms integrated above an MRR [34]. Finally, we address the prospects of fully integrating the above-demonstrated system, including sources, detectors, and servo-loops, into a chip-scale sensor. To do so, we propose to exploit the impressive achievements in miniaturized components such as vertical-cavity surface-emitting lasers (VCSELs), microprocessors, and voltage-controlled oscillators. Inspired by the revolution in chip-scale atomic clocks [35], which share similar frequency-locked loop architectures, we believe that a fully integrated, low-cost and low-power-consumption frequency-locked cascaded sensing system is feasible.

Fig. 1. (a) Schematic representation of cascaded MRRs. (b) Illustration of the spectrum of the reference MRR (in blue) and the sensing MRR (solid green and dashed green). The sensing MRR curve is illustrated with and without the refractive index change.

Fig. 3. Schematic illustration of the dual locking scheme. Two lasers are locked simultaneously to the cascaded MRRs. The beat signal of these two lasers is measured using an oscilloscope or a spectrum analyzer.

Fig. 4. (a) Calibrated integrator outputs as a function of time for both servo-loops. (b) Difference between the integrator outputs as a function of time.

Fig. 5.
Normalized frequency difference as a function of time for two different operation regimes: (a) one laser locked to an MRR and the second laser free running, and subsequently both lasers locked to both MRRs; and (b) two lasers locked to both MRRs and subsequently both lasers free running.

Fig. 7. (a) Schematic illustration of our NSOM tip illuminating the left MRR, thus creating a heat gradient via optical absorption in the silicon bottom layer. (b) Temperature difference between the two MRRs, as inferred from the measured beat frequency, as a function of time, while changing the optical power illuminated by the NSOM probe. Inset: temperature as a function of illuminating power.
6,542.2
2017-01-20T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Deep learning driven segmentation of maxillary impacted canine on cone beam computed tomography images

The process of creating virtual models of dentomaxillofacial structures through three-dimensional segmentation is a crucial component of most digital dental workflows. This process is typically performed using manual or semi-automated approaches, which can be time-consuming and subject to observer bias. The aim of this study was to train and assess the performance of a convolutional neural network (CNN)-based online cloud platform for automated segmentation of maxillary impacted canines on CBCT images. A total of 100 CBCT images with maxillary canine impactions were randomly allocated into two groups: a training set (n = 50) and a testing set (n = 50). The training set was used to train the CNN model and the testing set was employed to evaluate the model performance. Both tasks were performed on an online cloud-based platform, 'Virtual patient creator' (Relu, Leuven, Belgium). The performance was assessed using voxel- and surface-based comparisons between automated and semi-automated ground truth segmentations. In addition, the time required for segmentation was also calculated. The automated tool showed high performance for segmenting impacted canines, with a Dice similarity coefficient of 0.99 ± 0.02. Moreover, it was 24 times faster than the semi-automated approach. The proposed CNN model achieved fast, consistent, and precise segmentation of maxillary impacted canines.

The maxillary canine is the second most frequently impacted tooth, characterized by the failure of a canine to emerge through the gingiva and assume its correct position following the anticipated eruption time. This is because it is often the last tooth to erupt and has a long pathway from its developmental position deep within the maxilla to its final location in the oral cavity 1. Maxillary canine impaction occurs in approximately 2% of the population (range from 1.7 to 4.7%), with a higher prevalence in females than in males 2. Several etiological factors might contribute to its impaction, including genetic factors, lack of space, tooth root developmental abnormalities, trauma or injury, and the presence of oral pathological lesions 3,4.

The proper positioning of the maxillary canine in the dental arch is critical for functional occlusion 5 and aesthetics 1,6. A delayed diagnosis or lack of treatment can result in complications such as midline shift, tooth displacement, arch length defect, ankylosis, follicular cyst development, internal tooth resorption, pain, caries, and recurrent infection 7. Hence, early detection and intervention are crucial. The diagnosis of canine impaction and the determination of the appropriate treatment plan necessitate the utilization of radiographic imaging in conjunction with patient history and clinical examination. In this context, cone beam computed tomography (CBCT) is the most optimal radiographic imaging tool due to its ability to accurately determine the tooth's three-dimensional (3D) position and assess its relationship with neighboring teeth and other anatomical structures. This enables clinicians to accurately assess potential treatment options and plan the most effective course of action [8][9][10][11].
Recently, the field of oral healthcare has witnessed a shift towards the utilization of digital workflows for diagnostic and treatment planning purposes. These workflows have addressed the shortcomings of conventional methods by offering enhanced precision, time-efficiency, and improved patient care 12,13. The implementation of such workflows has facilitated patient-specific virtual planning, orthodontic treatment simulation, treatment progress monitoring, and 3D printing of orthodontic appliances [14][15][16]. This could prove particularly advantageous for complex treatment procedures such as those involving impacted canines.

In digital dental workflows involving impacted canines, CBCT image segmentation is a crucial initial step for creating an accurate 3D model of the tooth for diagnosis, planning or outcome assessment. Any error at this stage can adversely affect the final result 17. Both manual and semi-automated segmentation (SS) have been applied as clinical standards for creating virtual impacted canine models, where manual segmentation is time-consuming and operator dependent 18,19. Meanwhile, SS relies on threshold selection and often requires manual adjustments, which also makes it prone to human error 20,21. Recent applications of deep convolutional neural networks (CNNs) have demonstrated improved performance over conventional segmentation methods for modeling of the dentomaxillofacial region, with promising results for automated segmentation (AS) of teeth, the upper airway, the inferior alveolar nerve canal, the mandible, and the maxillary sinus on CBCT images [22][23][24][25][26][27][28][29][30]. However, there is a lack of evidence regarding the application of CNNs for the AS of impacted canines.

Therefore, the aim of the present study was to train and assess the performance of a CNN-based tool for AS of maxillary impacted canines on CBCT images.

Material and methods

This retrospective study was conducted in compliance with the World Medical Association Declaration of Helsinki on medical research. Ethical approval was obtained from the Ethical Review Board of the University Hospitals Leuven (reference number: B322201525552).

Dataset

A total of 200 CBCT scans (46 males and 54 females; age range: 8-54 years) with uni- or bilateral maxillary impacted canines were collected during the period 2015-2022 from the radiological database of UZ Leuven Hospital, Leuven, Belgium. Inclusion criteria consisted of previously clinically and radiologically diagnosed unilateral/bilateral, horizontal/oblique/vertical and complete/partial maxillary canine impactions. Teeth with both complete and partially formed roots were included. The majority of cases in these datasets had orthodontic brackets. Exclusion criteria involved scans with motion artifacts and poor image quality, where the margins of the canine could not be optimally delineated. The CBCT images were obtained utilizing two devices, NewTom VGi Evo (Cefla, Imola, Italy) and 3D Accuitomo 170 (J Morita, Kyoto, Japan), with variable scanning parameters of 90-110 kV, a voxel size between 0.125 and 0.300 mm³ and a field of view between 8 × 8 and 24 × 19 cm.
All images were exported in Digital Imaging and Communications in Medicine (DICOM) format. Thereafter, the DICOM datasets were uploaded to a CNN-based online cloud platform known as the 'Virtual patient creator' (Relu, Leuven, Belgium), to assess whether the tool would be able to segment impacted canines, as it had previously been trained for the segmentation of permanent erupted teeth 24,28. Based on the visual assessment by two observers (A.S, B.E), 100 images from the total dataset of 200 images could not be segmented automatically by the platform. Hence, these failed cases were randomly divided into two subsets: a training set (n = 50), to train and better fit the CNN model for impacted canines using semi-automatically segmented ground truth data; and a testing set (n = 50), to test the model performance for AS compared to the ground truth data. Figure 1 illustrates the data distribution for the training and testing subsets.

Data labelling

The ground truth for the training and testing sets was obtained through SS of impacted canines on the online platform using cloud tools such as the contour tool and smart brush function 26. The contour tool interpolates the inter-slice region between selected contours, while the smart brush function groups voxels based on their intensities. The operator adjusted the segmentation until satisfied with the result, and all contours were verified in the axial, coronal, and sagittal planes. The segmentation was performed by one observer (A.S) and subsequently reassessed by two additional observers (NM & RJ) with 10 and 25 years of experience, respectively. The canines were then exported as standard tessellation language (STL) files for further processing in the CNN pipeline.

AI architecture

The training of the CNN model involved the utilization of two 3D U-Net architectures (Fig. 2), each comprising four encoding and three decoding blocks. The architecture included two convolutional layers with a kernel size of 3 × 3 × 3, a ReLU activation function, and group normalization with eight feature maps. Max pooling with a kernel size of 2 × 2 × 2 and a stride of two was applied to reduce the resolution by a factor of two across all dimensions 25,27. Binary classifier training (0 or 1) was performed on both networks. All scans were resampled to a uniform voxel size. To circumvent GPU memory limitations, the entire scan was down-sampled to a fixed size. A low-resolution segmentation was produced by the first 3D U-Net to propose 3D patches, and only the segments corresponding to the impacted canines were extracted. A second 3D U-Net was employed to segment and fuse the relevant patches, which were subsequently used to construct a full-resolution segmentation map. The resulting map was binarized, retaining only the largest connected component, and a marching cubes algorithm was applied. The resulting mesh was smoothed to generate a 3D model. The optimization of the model parameters was performed using the ADAM optimizer 31, with an initial learning rate set to 1.25 × 10⁻⁴. During the training process, random spatial augmentations such as rotation, scaling, and elastic deformation were applied.
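To make the architecture description concrete, the following PyTorch sketch implements one encoding block as described above (two 3 × 3 × 3 convolutions with ReLU and group normalization, followed by 2 × 2 × 2 max pooling). It is an illustration only, not the platform's actual implementation; in particular, the choice of 8 normalization groups and the channel counts are assumptions.

```python
import torch
from torch import nn

class EncodingBlock(nn.Module):
    """One 3D U-Net encoding block: two 3x3x3 convolutions with group
    normalization and ReLU, followed by 2x2x2 max pooling with stride 2."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.GroupNorm(8, out_ch),   # 8 groups is an assumption
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.GroupNorm(8, out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool3d(kernel_size=2, stride=2)

    def forward(self, x):
        features = self.conv(x)            # kept for the decoder skip connection
        return self.pool(features), features

# Example: a 64^3 patch with one input channel and 16 feature maps.
block = EncodingBlock(1, 16)
pooled, skip = block(torch.randn(1, 1, 64, 64, 64))
print(pooled.shape, skip.shape)  # halved resolution, plus full-resolution skip
```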
Model testing and consistency of refined segmentations

The performance of the CNN model was evaluated using the testing set and compared to the ground truth obtained through SS performed by observer B.E. The images were uploaded to the online tool and the resulting AS was downloaded in STL file format. Moreover, a visual evaluation of the segmented testing set was performed by two observers (A.S, B.E) to determine whether any refinements were necessary (Fig. 3). If required, these refinements were implemented using the brush function on the online tool to add or remove voxels from the selection. The refined segmentation was also downloaded in STL file format. The intra- and inter-observer repeatability of the refined segmentations was confirmed by both observers performing the refinements twice at a two-week interval.

CNN performance evaluation

The CNN model's performance was evaluated based on time duration and on voxel- and surface-based metrics.

Time analysis

The duration of testing set segmentation with the SS approach was recorded using a digital stopwatch, starting from the import of the CBCT data until the generation of the canine model. On the other hand, the online platform automatically provided the time needed to obtain the final segmentation map.

Performance metrics

The performance of the CNN model was assessed by utilizing a confusion matrix for voxel-wise comparison of the SS ground truth and AS maps according to the following metrics: Dice similarity coefficient (DSC), intersection over union (IoU) and 95% Hausdorff distance (HD); a minimal sketch of the voxel-wise overlap metrics is given after this section. In addition, the surface-based analysis involved importing superimposed STL files of SS and AS into 3-matic software (Materialise NV, Leuven, Belgium), followed by an automated part comparison analysis to calculate the root mean square (RMS) difference between the two segmented models.

Statistical analysis

Data were analyzed using GraphPad Prism, Version 9.0 (GraphPad Software, La Jolla, CA). A paired sample t-test was used to compare the time between SS and AS. The performance metrics were represented by mean and standard deviation values. An IoU score of < 0.5 or an HD value of > 0.2 mm would indicate poor performance. The intraclass correlation coefficient (r) was applied to assess the intra- and inter-observer consistency of the refined segmentations. A p value of less than 0.05 was considered significant.

Informed consent

Since data were evaluated retrospectively, pseudonymously, and were solely obtained for treatment purposes, the requirement of informed consent was waived by the Ethical Review Board of the University Hospitals Leuven (reference number: B322201525552).

Results

Upon visual inspection of the testing dataset, it was determined that 20% (n = 10) of the cases required minor refinements. The mean values for intra-observer consistency of refinements were 92% for IoU and 96% for DSC. Inter-observer consistency yielded IoU and DSC values of 87% and 93%, respectively (Table 1). Intra-observer repeatability was determined to be 0.992, while inter-observer repeatability was 0.986. The CNN model required an average of 21 s to perform the AS of impacted canines, while the SS took 582 s. This indicates that the CNN model was approximately 24 times faster than the SS method, a statistically significant difference (p < 0.0001) (Fig. 4). The performance metrics of AS demonstrated high values of IoU (0.99 ± 0.04) and DSC (0.99 ± 0.02) when compared to SS. A mean HD value of 0.01 ± 0.03 mm was detected, with an RMS difference of 0.05 ± 0.25 mm between SS and AS (Table 2 and Fig. 5), indicating an almost perfect overlap between the semi-automatically and fully automatically segmented canine surfaces.
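The voxel-wise overlap metrics referenced above can be sketched in a few lines of Python; this is an illustration of the standard DSC and IoU definitions on binary masks, not the platform's internal code.

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Voxel-wise Dice similarity coefficient and intersection over union
    for two binary segmentation masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    iou = intersection / union
    return dice, iou

# Tiny synthetic example: two nearly identical masks.
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.8
pred = truth.copy()
pred[0, 0, :10] = ~pred[0, 0, :10]   # flip a few voxels to mimic small errors
print("DSC = %.4f, IoU = %.4f" % dice_and_iou(pred, truth))
```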
Discussion

A precise 3D segmentation of the impacted canine is essential mainly for digital orthodontic treatment planning workflows [33][34][35]. Despite being a challenging and time-consuming task with manual and semi-automated approaches, CNN-based automation has the ability to produce highly accurate 3D virtual models in a time-efficient manner 22,24,28. Hence, the goal of this study was to introduce and assess the performance of a CNN model for the segmentation of maxillary impacted canines. In this study, we utilized a pre-existing cloud-based platform that had been previously trained to segment multiple dentomaxillofacial structures (permanent teeth, maxillary sinus, inferior alveolar nerve, and jaw bones) and to apply automated CBCT-intraoral scan registration. The performance of the model was comparable to that of SS performed by clinical experts. It is noteworthy that the model showed 100% consistency without the issue of human variability: it was able to produce identical results when segmenting the same case multiple times. Moreover, only minor refinements were required, which confirmed the high similarity between AS and SS. The presented CNN model was able to automatically segment impacted canines in 21 s, which was 24 times faster than the SS approach, demonstrating the benefits of incorporating automation into the digital workflow to increase clinical efficiency. A comparison with existing studies regarding time-efficiency was challenging due to a lack of reported time data. Time is a crucial factor in clinical dentistry and is integral to an optimal digital workflow; hence, it should be reported in studies incorporating artificial intelligence (AI) based solutions.

Limited research has been conducted on the application of deep learning-based CNNs for either classification or segmentation of impacted teeth. Specifically, no studies have focused on impacted canine segmentation on CBCT images; hence, comparison with existing evidence was deemed difficult. Kuwada et al. 36 evaluated the performance of three CNN systems (DetectNet, VGG-16, AlexNet) for detecting and classifying maxillary impacted supernumerary teeth on panoramic images. They found that DetectNet had the highest detection performance, with a recall and precision of 1.0. Celik et al. 37 proposed a deep learning-based tool for detecting impacted mandibular third molars. They compared a two-stage technique (Faster R-CNN with ResNet50, AlexNet, and VGG16 as backbones) with a one-stage technique (YOLOv3) and found that YOLOv3 had the highest detection efficacy, with an average precision of 0.96. Imak et al. 38 used ResMIBCU-Net to segment impacted teeth (including impacted canines) on panoramic images and achieved an accuracy of 99.82%. Orhan et al. 39 evaluated the diagnostic performance of a U-Net CNN model for detecting impacted third molar teeth on CBCT images and showed an accuracy of 86.2%.
Meanwhile, the findings of the present study suggested a high score of 0.99 based on both DSC and IoU. It is noteworthy that the use of accuracy as an evaluation metric for 3D AS tasks can result in misleading conclusions due to the inclusion of true negatives in the calculation. This phenomenon, known as the accuracy paradox, can result in a high accuracy value despite poor model performance 40. This is particularly evident in imbalanced datasets, where the over-representation of one class can lead to an overestimation of accuracy. Alternative evaluation metrics, such as DSC, 95% HD and IoU, should provide a more faithful representation of model performance.

The study's main strength was its ability to accurately and rapidly segment impacted canines with various angulations (horizontal, oblique, vertical) on CBCT images. The inclusion of scans from two CBCT devices with different acquisition parameters and metal artifacts from brackets could enhance the tool's practicality and robustness. Moreover, the segmentation and refinements could be performed on an easily accessible online platform without the need for third-party software, making it more convenient for clinical use.

The study also had certain limitations. Firstly, the training was limited to maxillary impacted canines, without the inclusion of any other impactions. Secondly, the online tool only provided the segmentation map as an outcome, without any additional tools for dimensional and morphometric measurements. Thirdly, the CNN training was based on two CBCT devices. Further studies are warranted to train the model on datasets from multiple CBCT devices with different scanning parameters and qualities, as well as on images acquired from different institutions, to justify its applicability for routine clinical tasks.

Conclusion

The proposed CNN model facilitated rapid, consistent, and precise segmentation of maxillary impacted canines on CBCT images, which might aid in diagnosis and in the planning of orthodontic and oral surgical interventions. The integration of impacted canine segmentation into the online tool could be considered a significant leap towards achieving a fully AI-assisted virtual workflow for planning, surgical guide design, and follow-up assessment for various dentomaxillofacial procedures.

Figure 1. Dataset used for training and validation.

Figure 4. Time comparison between automated and semi-automated segmentation.

Figure 5. Comparison of automated and semi-automated maxillary impacted canine segmentation. (A) Three-dimensional surface model. (B) Axial view. (C) Sagittal view. Green corresponds to no difference between automated and semi-automated segmentation surfaces, red corresponds to overestimation by the automated segmentation, and blue corresponds to underestimation by the automated segmentation.

Table 1. Intra- and inter-observer consistency of refinements.

Table 2. Performance metrics based on comparison between automated and semi-automated segmentation.
3,906
2024-01-03T00:00:00.000
[ "Medicine", "Computer Science" ]
MOA-2019-BLG-008Lb: a new microlensing detection of an object at the planet/brown dwarf boundary

We report on the observations, analysis and interpretation of the microlensing event MOA-2019-BLG-008. The observed anomaly in the photometric light curve is best described by a binary lens model. In this model, the source did not cross the caustics and no finite source effects were observed. Therefore the angular Einstein ring radius cannot be measured from the light curve alone. However, the long event duration, t_E ≈ 80 days, allows a precise measurement of the microlensing parallax. In addition to the constraints on the angular radius and the apparent brightness I_S of the source, we employ the Besançon and GalMod galactic models to estimate the physical properties of the lens. We find excellent agreement between the predictions of the two Galactic models: the companion is likely a resident of the brown dwarf desert with a mass M_p ≈ 30 M_Jup, and the host is a main sequence dwarf star. The lens lies along the line of sight to the Galactic Bulge, at a distance of less than 4 kpc. We estimate that in about 10 years, the lens and source will be separated by 55 mas, and it will be possible to confirm the exact nature of the lensing system by using high-resolution imaging from ground- or space-based observatories.

INTRODUCTION

During the last 20 years, thousands of planets 1 have been detected and it is now clear that planets are abundant in the Milky Way (Cassan et al. 2012; Bonfils et al. 2013; Clanton & Gaudi 2016; Fulton et al. 2021). Conversely, the various methods of detection agree that brown dwarf companions (with a mass ∼13-80 M_Jup) seem much rarer (Grether & Lineweaver 2006; Lafrenière et al. 2007; Kraus et al. 2008; Metchev & Hillenbrand 2009; Kiefer et al. 2019; Nielsen et al. 2019; Carmichael et al. 2020), inspiring the idea of a "brown dwarf desert" (Marcy & Butler 2000), and this disparity raises questions about formation scenarios. Core accretion, disc instability, migration and disc evolution mechanisms are capable of producing planets up to 40 M_Jup (Pollack et al. 1996; Boss 1997; Alibert et al. 2005; Mordasini et al. 2009), explaining the formation of some brown dwarf companions. Brown dwarfs can also form via gas collapse (Béjar et al. 2001; Bate et al. 2002), and several processes have been proposed to explain the cessation of gas accretion, such as ejection (see Luhman (2012) and references therein for a more complete review). However, the formation of low-mass binaries remains difficult to explain (Bate et al. 2002; Marks et al. 2017) and more detections are needed to place meaningful constraints on formation models, especially around the brown dwarf desert.

Several objects at the planet/brown dwarf mass boundary have been discovered with the microlensing technique, both in binary and single lens events (Bachelet et al. 2012a; Bozza et al. 2012; Ranc et al. 2015; Han et al. 2016; Zhu et al. 2016; Poleski et al. 2017; Shvartzvald et al. 2019; Bachelet et al. 2019; Miyazaki et al. 2020). Microlensing is particularly sensitive to exoplanets and brown dwarfs at or beyond the snow line of their host stars, which is the region beyond which it is cold enough for water to turn to ice. Planets in this region typically have orbital periods of many years and, as such, are mostly inaccessible to other planet detection methods (Gould et al. 2010; Tsapras et al. 2016).
The location of the snow line plays an important role during planet formation, as the prevalence of ice grains beyond that point is believed to facilitate the formation of sufficiently large planetary cores, able to trigger runaway growth and form giant planets (Ida & Lin 2004; Kley & Nelson 2012).

The lensing geometry is typically expressed in terms of the angular Einstein radius of the lens (Einstein 1936),

θ_E = [ (4 G M_L / c²) · D_LS / (D_L D_S) ]^(1/2),

where D_L, D_S are the distances from the observer to the lens and source respectively, D_LS is the lens-source distance, and M_L is the mass of the lens. The key observable in microlensing events that provides any connection to the physical properties of the lens is the event timescale

t_E = θ_E / μ_rel,

where μ_rel is the relative proper motion between lens and source. Equivalently, θ_E = (κ M_L π_rel)^(1/2), where π_rel is the lens-source relative parallax and κ ≡ 4G/(c² au) ≈ 8.1 mas/M_⊙ (Gould 2000). These two equations reveal a well-known degeneracy in microlensing event parameters. Indeed, the mass of the lens, its distance and the distance to the source are degenerate parameters when only t_E is measured, and at least two extra pieces of information are required to disentangle them.

For binary lenses, θ_E = θ_*/ρ is often measured from the detection of finite source effects in the event light curve, typically parametrized with ρ. This occurs when an extended source of angular radius θ_* closely approaches regions of strong magnification gradients, i.e. around caustics (Witt 1990; Tsapras 2018). Using a color-radius relation (Boyajian et al. 2014), it is then possible to estimate θ_E. For sufficiently long events (i.e., t_E ≥ 30 days), the microlensing parallax π_E = π_rel/θ_E due to Earth's revolution around the Sun can be measured. This is referred to as the annual parallax (Gould 1992, 2004). In addition, if simultaneous observations can be performed from space, as well as from the ground, it is possible to measure the space-based parallax (Refsdal 1966; Calchi Novati et al. 2015; Yee et al. 2015a). Ultimately, by obtaining high-resolution imaging several years after the event has expired, additional constraints on the relative lens-source proper motion μ_rel and the lens brightness may be obtained, provided that the lens and source can be resolved (Alcock et al. 2001). See for example Beaulieu (2018) for a complete review of this technique.

It is not rare, however, that only t_E and a single other parameter (θ_E or π_E) are measured, leaving the physical parameters of the lens system only loosely constrained. As underlined by Penny et al. (2016), this is the case for about 50% of all published microlensing planetary events. To obtain stronger constraints on these events, Galactic models may be employed to derive the probability densities of lens mass and distance along the line of sight that reproduce the fitted microlensing event parameters. Originally used by Han & Gould (1995), Galactic models are now commonly relied upon to estimate the properties of microlensing planets when no additional information is available to constrain the parameter space (Penny et al. 2016). While they generally come with large uncertainties (10% is a lower limit), Galactic model predictions have proven to be in excellent agreement with results obtained from follow-up studies using high-resolution imaging, especially for OGLE-2005-BLG-169Lb (Gould et al. 2006; Batista et al. 2015; Bennett et al. 2015), MOA-2011-BLG-293Lb (Yee et al. 2012; Batista et al. 2014) and OGLE-2014-BLG-0124Lb (Beaulieu et al. 2018).
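A short numerical sketch of these relations is given below; the lens mass, distances and proper motion are illustrative values, not the fitted parameters of MOA-2019-BLG-008.

```python
import numpy as np

# Physical constants in SI units.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m s^-1
M_SUN = 1.989e30         # kg
KPC = 3.086e19           # m
MAS = np.deg2rad(1.0 / 3.6e6)   # one milliarcsecond in radians

def theta_E_mas(M_lens_msun, D_L_kpc, D_S_kpc):
    """Angular Einstein radius theta_E = sqrt(4GM/c^2 * D_LS / (D_L * D_S))."""
    D_L, D_S = D_L_kpc * KPC, D_S_kpc * KPC
    theta = np.sqrt(4 * G * M_lens_msun * M_SUN / c**2 * (D_S - D_L) / (D_L * D_S))
    return theta / MAS

# Illustrative values only: a 0.5 M_sun lens at 4 kpc, source in the Bulge at 8 kpc.
theta_e = theta_E_mas(0.5, 4.0, 8.0)
mu_rel = 5.0                              # assumed relative proper motion [mas/yr]
t_E_days = theta_e / mu_rel * 365.25      # t_E = theta_E / mu_rel
print(f"theta_E = {theta_e:.2f} mas, t_E = {t_E_days:.1f} days")
```

This makes the degeneracy explicit: very different combinations of lens mass and distances can yield the same t_E, which is why extra observables such as π_E or θ_E, or Galactic-model priors, are needed.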
Galactic models developed for microlensing analysis are employed to generate distributions of stellar densities and velocities across the Galactic Disk and Bulge (Han & Gould 1995, 2003; Dominik 2006; Bennett et al. 2014) and to reproduce the microlensing observables (i.e. t_E, θ_E and π_E). These are then compared to the fitted event parameters in order to estimate the probability densities of the lens distance D_L and mass M_L. In addition to these models, there exist synthetic stellar population models of the Milky Way that have been explicitly developed to reproduce observable Galactic properties with great accuracy. Specifically, the Besançon (Robin et al. 2003) and GalMod (Pasetto et al. 2018) models have been used in many different studies to explore the structure, kinematics and formation history of the Milky Way (Czekaj et al. 2014; Robin et al. 2017). They have also been used to simulate astronomical sky surveys (Rauer et al. 2014; Penny et al. 2013, 2019; Kauffmann et al. 2020), and their predictions have been tested against real observations (Bochanski et al. 2007; Pietrukowicz et al. 2012; Schmidt et al. 2020; Terry et al. 2020). For the first time, in this study we employ both the Besançon and GalMod Galactic models to estimate the properties of a binary lens with a companion likely located in the brown dwarf desert.

The microlensing event MOA-2019-BLG-008 was observed independently by several microlensing teams, and we present the different data sets, as well as the data reduction procedures, in Section 2. The modeling of the photometric light curve and the model selection are discussed in Section 3. Section 4 presents the analysis of the properties of the source and of the blend contaminant. The methodology used to derive the physical properties of the lens system is detailed in Section 5, and we conclude in Section 6.

Survey and follow-up observations

The microlensing event MOA-2019-BLG-008 was first announced on 4 Feb 2019 by the MOA collaboration (Sumi et al. 2003), which operates the 1.8-m MOA survey telescope at Mount John Observatory in New Zealand, at equatorial coordinates α = 17h51m55.89s, δ = −29°59'23.03" (J2000) (l, b = 359.8049°, −1.7203°). The event was also independently identified by the Early Warning System (EWS) of the Optical Gravitational Lensing Experiment (OGLE) survey (Udalski 2003; Udalski et al. 2015) as OGLE-2019-BLG-0011. OGLE observations were carried out with the 1.3-m Warsaw telescope at Las Campanas Observatory in Chile, with the 32-chip mosaic CCD camera. The event occurred in OGLE bulge field BLG501, which was imaged about once per hour when not interrupted by weather or the full Moon, providing good coverage of the light curve when the bulge was visible from Chile. Additional observations were obtained by the ROME/REA survey using six 1-m telescopes from the southern ring of the global robotic telescope network of the Las Cumbres Observatory (LCO) (Brown et al. 2013). The LCO telescopes are located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile, the South African Astronomical Observatory (SAAO) in South Africa and Siding Spring Observatory (SSO) in Australia, and they provided good coverage of the light curve, although the event occurred early in the 2019 ROME/REA microlensing season (i.e. ∼March to September of each year, when the Galactic Bulge is observable). Observations were acquired in survey mode.
The event lies in fields BLG02 and BLG42 of the Korea Microlensing Telescope Network (KMTNet) (Kim et al. 2016) and so was intensely monitored by that survey, although KMTNet did not independently discover the event. Observations were also obtained from the Spitzer satellite as part of an effort to constrain the parallax (Yee et al. 2015b). The Spitzer observations will be presented in a companion paper (Han et al. 2022, in prep.).

Data reduction procedure

This analysis uses all available ground-based observations of MOA-2019-BLG-008. The list of contributing telescopes is given in Table 1. Most data were obtained in the I band (or SDSS-i'), but we note that MOA observations were performed with the MOA wide-band red filter, which is specific to that survey (Sako et al. 2008). ROME/REA obtained observations in three different bands (SDSS-i', SDSS-r' and SDSS-g'). The KMTNet survey observations were carried out in the I band, with a complementary V-band observation every ten I exposures.

The photometric analysis of crowded-field observations is a challenging task. Images of the Galactic bulge contain hundreds of thousands of stars whose point-spread functions (PSFs) often overlap; aperture and PSF-fitting photometry therefore offer very limited sensitivity to the photometric deviations generated by the presence of low-mass planetary companions. For this reason, observers of microlensing events routinely perform difference image analysis (DIA) (Tomaney & Crotts 1996; Alard & Lupton 1998; Bramich 2008a; Bramich et al. 2013a), which offers superior photometric precision under such crowded conditions. Most microlensing teams have developed custom DIA pipelines to reduce their observations. OGLE, MOA and KMT images were reduced using the photometric pipelines described in Udalski (2003), Bond et al. (2001) and Albrow et al. (2009), respectively. The LCO observations were processed using the pyDANDIA pipeline (ROME/REA, in prep.), a customized re-implementation of the DanDIA pipeline (Bramich 2008b; Bramich et al. 2013b) in Python. The data sets presented in this paper have been carefully reprocessed to achieve greater photometric accuracy, and it is these data that we used as input when modelling the microlensing event. They are available for download from the online version of the paper.

We note the presence of a very long-term baseline trend spanning several observing seasons in the OGLE and MOA photometry, which can be seen in Figure 1. As described later in this work, we determined that the source star is a red giant. Many red giants exhibit variability at the ∼10% level (Wray et al. 2004; Percy et al. 2008; Wyrzykowski et al. 2006; Soszyński et al. 2013; Arnold et al. 2020), and it is possible that this is also the case for this source, despite the apparently very long period, P ≥ 1000 days. Because this trend manifests over very long time scales (several years), much longer than the duration of the actual microlensing event (weeks), it does not have any noticeable effect on the determination of the parameters of this event, which are primarily derived from the detailed morphology of the microlensing light curve. The baseline over the duration of the microlensing event is effectively flat. Therefore, to increase the speed of the modeling process, we only used observations with JD ≥ 2457800 and included data sets with more than 10 measurements in total during the course of the microlensing event. The latter constraint applies only to the LCO data and is limited to the observations acquired by the reactive REA mode on a different target in the same field. We verified that our data selection does not impact the overall results by exploring models with the full baseline.
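As an illustration of the principle behind DIA only, and not of any of the pipelines cited above, the toy sketch below matches a reference image to a target frame with a single Gaussian blur plus a photometric scale and background, then subtracts. Production pipelines such as DanDIA instead fit a spatially varying convolution kernel, so the function name and the one-kernel simplification here are assumptions made for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_difference_image(ref, target, blur_sigma=1.0):
    """Fit target ~ a * blur(ref) + b by linear least squares, then subtract.
    Constant stars cancel out, while a microlensed star leaves a clean
    PSF-shaped residual at its position."""
    blurred = gaussian_filter(ref, blur_sigma)
    A = np.column_stack([blurred.ravel(), np.ones(blurred.size)])
    (a, b), *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return target - (a * blurred + b)
```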
MODELING THE EVENT LIGHT CURVE

This event displays a clear anomaly around HJD ∼ 2458580, implying that it is most likely due to a binary lens (2L1S) or a binary source (1L2S) (Dominik et al. 2019). It is morphologically similar to the event MACHO 99-BLG-47 (Albrow et al. 2002), despite a different lensing geometry, as detailed below. In addition, because the event lasts for ∼300 days, the effect of the motion of Earth around the Sun, referred to as the annual parallax (Gould 1992; Alcock et al. 1995), needs to be taken into account. The classical approach to modeling is to first search for static binary models and subsequently introduce additional second-order effects, such as parallax or the orbital motion of the lens (Dominik 1999). To model the event we use the pyLIMA software (Bachelet et al. 2017), which employs the VBBinaryLensing code (Bozza 2010; Bozza et al. 2018) to estimate the binary-lens model magnification. We search for a general solution including the annual parallax, but we also explore the static case for completeness. The first step of modeling involves identifying potential multiple minima in the parameter space, and for this we employ the differential evolution algorithm (Storn & Price 1997; Bachelet et al. 2017). During the modeling process, we rescale the data uncertainties using the same method presented in Bachelet et al. (2019), which introduces the parameters k and e_min for each data set:

$$\sigma' = k \sqrt{\sigma^2 + e_{\rm min}^2},$$

where σ and σ' are the original and rescaled uncertainties (in magnitude units), respectively. The coefficients for each data set are given in Table 1. Finally, we explore the posterior distribution of the parameters of each minimum that we identify using the emcee software (Foreman-Mackey et al. 2013).

(No) finite-source effects

In principle, the normalized angular source radius ρ = θ*/θ_E has to be considered (Witt & Mao 1994), but preliminary models indicated that this parameter is loosely constrained. This is because the source trajectory does not pass close to the caustics, as can be inferred from Figure 2. However, there exists an upper limit ρ_m above which finite-source effects would start to be significantly visible in the models. Because θ_E ≥ θ*/ρ_m, this limit introduces constraints on the mass and distance of the lens that can be used for the analysis presented in Section 5. Indeed, it is straightforward to show (Gould 2000) that

$$\pi_{\rm rel} = \pi_L - \pi_S = \theta_E \pi_E,$$

where π_L and π_S are the parallaxes of the lens and source, respectively. The constraint on the mass can then be written as

$$M_L = \frac{\theta_E}{\kappa \pi_E} \ge \frac{\theta_*}{\rho_m \kappa \pi_E}.$$

Therefore, we sampled the distribution of ρ around the best model, found ρ_m ≤ 0.01 (with a conservative 10σ limit), and consider the source as a point source for the rest of the modeling presented in this analysis.

Table 1. Summary of telescopes and observations used for modeling the event. The number of data points per telescope represents the points used for the modeling step, i.e. JD ≥ 2457800. Lines marked with '*' indicate that the data set was not used during the modeling process, as described in the text. In cases in which the rescaling parameters were not constrained, they were fixed to k = 1.0 and e_min = 0.0.
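A minimal numerical sketch of the two ingredients just introduced: the error-bar rescaling, assuming the σ' = k√(σ² + e_min²) form written above, and the lens-mass lower limit implied by the non-detection of finite-source effects. The numerical inputs are illustrative only.

```python
import numpy as np

KAPPA = 8.1  # mas / M_sun

def rescale_uncertainties(sigma, k, e_min):
    """sigma' = k * sqrt(sigma**2 + e_min**2), in magnitudes."""
    return k * np.sqrt(np.asarray(sigma) ** 2 + e_min ** 2)

def lens_mass_lower_limit(theta_star_mas, rho_m, pi_E):
    """theta_E >= theta*/rho_m combined with M_L = theta_E / (kappa * pi_E)
    turns the finite-source non-detection into a mass lower limit."""
    return (theta_star_mas / rho_m) / (KAPPA * pi_E)

# theta* ~ 10 uas = 0.01 mas and rho_m <= 0.01 force theta_E >= 1 mas, so
# for an assumed pi_E ~ 0.2 the lens must be heavier than ~0.6 M_sun:
print(lens_mass_lower_limit(0.01, 0.01, 0.2))
```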
Binary lens

A binary lens model involves seven parameters. t_0 is the time when the angular distance u_0 (scaled to θ_E) between the source and the center of mass of the binary lens is minimal. The event duration is characterised by the angular Einstein ring radius crossing time t_E = θ_E/µ_rel, where µ_rel is the lens-source relative proper motion (in the geocentric frame, because pyLIMA follows the geocentric formalism of Gould (2004)). The binary separation projected on the plane of the lens is defined as s, and the mass ratio between the two components as q. The angle between the trajectory and the binary axis (fixed along the x-axis) is defined as α. We also consider a source flux f_s and a blend flux f_b for each of the data sets, adding 2n parameters, where n is the number of data sets (i.e., 29 in this study). As discussed previously, we neglect the last parameter, ρ, and fit a point-source binary-lens model. Following Gould (2004), we define the parallax vector π_E by its North (π_E,N) and East (π_E,E) components. We set the parallax reference time to t_0,par = HJD 2458570 (Skowron et al. 2011) for all models considered in this analysis. This coincides with the time of the anomaly peak and corresponds to the calendar date 28 March 2019. At this time, Earth's acceleration vector was nearly parallel to the East direction; therefore, we expect the π_E,E component to be the better constrained of the two.

We found that the light-curve morphology can only be explained by a single geometry, in agreement with previous results from real-time modeling conducted by V. Bozza and C. Han. However, a second solution exists, a consequence of the binary ecliptic degeneracy (Skowron et al. 2011), with (u_0, α, π_E,N) ⇐⇒ −(u_0, α, π_E,N). Because the magnification pattern is symmetric relative to the binary axis, there exist two source trajectories that produce identical light curves for static binaries, i.e. (u_0, α) ⇐⇒ −(u_0, α). Moreover, the projected Earth acceleration can be considered almost constant during the event, leading to the π_E,N degeneracy for events located towards the Galactic Bulge (Smith et al. 2003; Jiang et al. 2004; Gould 2004; Poindexter et al. 2005; Skowron et al. 2011). This degeneracy is especially severe for events occurring near the equinoxes, because the projected Earth acceleration varies slowly (Skowron et al. 2011). We therefore explore both solutions in the following analysis. Note that pyLIMA uses the formalism of Gould (2004), and therefore u_0 ≥ 0 if the lens passes the source on its right.

Given the relatively long timescale of the event, we also explored the possibility of orbital motion of the lens (Albrow et al. 2000; Bachelet et al. 2012b) and considered the simplest linear model, parametrized with ds/dt and dα/dt, the rates of change of the separation and of the trajectory angle at time t_0. For completeness, we explored the parameter space for a static model (i.e., without the annual parallax) and found that the best model converges to a similar geometry. However, the residuals display systematic trends around the event peak that are the clear signature of annual parallax, which is reflected in the high χ² value presented in Table 2.
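The ecliptic degeneracy described above amounts to a sign flip of three parameters; a trivial sketch of the mapping is given below, with purely illustrative parameter values (these are not the fitted values reported in Table 2).

```python
def ecliptic_degenerate_twin(params):
    """Return the degenerate partner solution: (u0, alpha, piEN) change sign,
    all other parameters are unchanged (Skowron et al. 2011)."""
    twin = dict(params)
    for key in ("u0", "alpha", "piEN"):
        twin[key] = -twin[key]
    return twin

solution = {"t0": 2458570.0, "u0": 0.1, "tE": 80.0, "s": 1.0, "q": 0.04,
            "alpha": 1.2, "piEN": 0.05, "piEE": 0.2}  # illustrative only
print(ecliptic_degenerate_twin(solution))
```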
Binary source

We explored the possibility that the observed light curve was due to a binary source (Gaudi 1998). Following the approach described in Hwang et al. (2013), we added to the single-lens model the extra parameters ∆t_0 and ∆u_0, respectively the shifts in the time of peak and impact parameter of the second source relative to the first. Finally, we also added the flux ratio of the two sources in each observed band, q_λ. We report our results in Table 2. We also explored models with two different source angular sizes but did not find any significant improvement in the model likelihood.

Table 2. Model parameters are quoted as the 16th, 50th and 84th percentiles of the MCMC chains, except for the χ², which is reported as the minimum value (i.e., the best model in each case). The angular source radius θ* for each model is also presented; see Section 4. The two static models are denoted 2L1S, the two models with parallax are denoted 2L1SP, and the models with orbital motion of the lens are denoted 2L1SPOM. '+' and '−' indicate positive or negative u_0.

ANALYSIS OF THE SOURCE AND THE BLEND

In the analysis of microlensing events, the color-magnitude diagram (CMD) is used to estimate the angular source radius θ*, and ultimately the angular Einstein ring radius θ_E = θ*/ρ (Yoo et al. 2004). Unfortunately, ρ is not measurable in the present case. Nonetheless, the CMD analysis provides useful information about the source and the lens that can be used to place additional constraints on the analysis presented in Section 5. The CMD constraints can also be used to inform future observing decisions with complementary high-resolution imaging. We conducted our CMD analysis using different, independently obtained sets of observations from our pool of available data sets. The estimation of θ* below is for the 2L1S model with parallax and u_0 < 0. The source and blend magnitudes for all models are presented in Table 3, and the angular source radius θ* derived for all models is presented in Table 2.

ROME/REA Color-Magnitude Diagram Analysis

The ROME strategy consists of regular monitoring of 20 fields in the Galactic bulge in SDSS-g', SDSS-r' and SDSS-i', as described in Tsapras et al. (2019), and is designed to improve our understanding of the source and blend properties. The photometry is obtained using the pyDANDIA algorithm (Street et al., in prep.) and calibrated to the VPHAS+ catalog (Drew et al. 2016). For this event, we investigated all combinations of filters and telescope sites and selected LSC A (i.e., LCO dome A in Chile) for the ROME CMD analysis, as it provided the deepest catalog. Figure 4 presents the CMD for stars in a 2'×2' square centered on the target, while Figure 3 presents a composite image of the ROME observations from LSC A. The latter shows the variable extinction in the field of view, as well as the two clusters NGC 6451 and Basel 5.

The first step is to estimate the centroid of the Red Giant Clump (RGC). In Figure 4, stars located within 2' of the event location from the ROME/REA and VPHAS catalogs are displayed in (r−i, i) and (g−i, i) CMDs. We note that the location of the RGC is quite uncertain in the g band for the ROME/REA data; this is due to the high extinction along this line of sight and leads to an inaccurate g-band calibration of the ROME data. We use the VPHAS magnitudes of these stars to estimate the centroid positions of the RGC in the three bands, from which we measure a reddening E(r−i) = 1.41 ± 0.1 mag and an extinction A_i = 2.3 ± 0.1 mag. The best model returns a source magnitude of (g, r, i)_s = (24.2 ± 0.8, 19.47 ± 0.05, 17.31 ± 0.03) mag. Because the event occurred at the beginning of the season, it was poorly covered by the ROME data in the g band and the source brightness in that band is not well constrained. However, we can use the (r−i) color and i magnitude to estimate θ*.
As described in Appendix A, we used the same catalog as Boyajian et al. (2014) to construct new color-radius relations. The (r−i) relation returns θ* = 9.8 ± 1.2 µas, while the second relation, using the g band, returns θ* = 22.9 ± 8.4 µas. The latter value is unreliable due to the large uncertainty in the g-band source brightness.

MOA Color-Magnitude Diagram

The MOA magnitude system can be transformed to the OGLE-III magnitude system (i.e., the Johnson-Cousins system) using the relation presented in Appendix B. Using the intrinsic color and magnitude of the RGC, ((V−I)_0, I_0) = (1.06, 14.32) mag (Nataf et al. 2013; Bensby et al. 2013), and subtracting them from the measured position of the RGC centroid in Figure 5, we estimate (E(V−I), A_I) = (2.3 ± 0.1, 2.4 ± 0.1) mag, in good agreement with the previous estimate. Knowing that the transformed magnitudes of the source are (V, I)_s = (20.8 ± 0.1, 16.94 ± 0.08) mag, we find (V, I)_0,s = (16.1 ± 0.1, 14.54 ± 0.08) mag and ultimately estimate θ* = 8.9 ± 1.3 µas (Adams et al. 2018), in reasonable agreement with the previous estimate. In Figure 5, we also display the source and blend positions using the I measurements from OGLE-IV and the transformed MOA V band. We estimate the source to be (V, I)_s = (20.8 ± 0.1, 16.88 ± 0.01) mag and derive θ* = 10.4 ± 1.6 µas. This estimate is likely the most accurate, because it relies on a single color transformation (with the highest color term in Equation B4).
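The RGC-based dereddening used in this subsection reduces to two subtractions; the short sketch below reproduces the numbers quoted in the text, under the standard assumption that the source lies behind the same dust column as the clump.

```python
VI_RGC_0, I_RGC_0 = 1.06, 14.32   # intrinsic RGC color and magnitude

def deredden(VI_s, I_s, VI_rgc_obs, I_rgc_obs):
    """E(V-I) and A_I follow from the observed RGC centroid offset; the
    source color and magnitude are corrected by the same amounts."""
    E_VI = VI_rgc_obs - VI_RGC_0
    A_I = I_rgc_obs - I_RGC_0
    return VI_s - E_VI, I_s - A_I

# (V, I)_s = (20.8, 16.94), i.e. (V-I)_s = 3.86, with E(V-I) = 2.3, A_I = 2.4:
print(deredden(20.8 - 16.94, 16.94, VI_RGC_0 + 2.3, I_RGC_0 + 2.4))
# -> (1.56, 14.54), i.e. (V, I)_{0,s} = (16.1, 14.54) as quoted above
```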
Gaia EDR3

The Gaia mission (Gaia Collaboration et al. 2016) recently released its "Early Data Release 3" data set (EDR3; Gaia Collaboration et al. 2020), which significantly increases the volume and precision of the Gaia catalog. We queried the Gaia catalog for all stars within a 3' radius around the coordinates of the event to generate a Gaia CMD, which we present in Figure 6. We limit our study to stars with a Re-normalised Unit Weight Error (RUWE, a statistical criterion of the data quality) better than 1.4.

Figure 3. Color composite of the g, r, i reference images of the ROME survey. The inset is a 2'×2' zoom around MOA-2019-BLG-008. The center of the NGC 6451 cluster is visible, while some of the stars of the Basel 5 cluster are also visible (its centre lies to the left, outside of the image; see Kharchenko et al. (2013)). As indicated by the white cross, North is up and East is left.

MOA-2019-BLG-008 is in the catalog (at 63 mas; Gaia EDR3 4056394717636682112) and appears in a sparse region of the CMD. The reported parallax is p = 0.39 ± 0.12 mas, which corresponds to a distance of D = 2.56 ± 0.79 kpc. This object is also significantly redder and brighter than the blend discussed in the previous section. Indeed, using the magnitude transformation from Gaia to the Johnson-Cousins system (Bachelet et al. 2019), we find this object to be ((V−I), I) = (2.25 ± 0.07, 16.17 ± 0.05) mag, very likely the sum of the source and the blend previously discussed, ((V−I), I)_tot = (2.30, 16.19) mag. This is confirmed by several useful metrics available in the catalog. First, we compute the corrected BP and RP excess flux factor (Evans et al. 2018; Riello et al. 2020) and find 0.28, which corresponds to a blend probability of ∼0.3 (see Figure 19 of Riello et al. (2020)). Secondly, we note that the fraction R of visits indicating a significant blend (defined by 'phot_bp_n_blended_transits' and 'phot_rp_n_blended_transits' for the BP and RP bands, respectively) divided by the number of visits used for the astrometric solution ('astrometric_matched_transits') is very high for both bands: R = 36/39 ∼ 90%.

Following Mróz et al. (2020), we plot in Figure 6 the distributions of Galactic proper motions of the Disk (gray) and Bulge (red) populations. Note that Gaia EDR3 provides proper motions in equatorial coordinates, which we transformed using the same method as Bachelet et al. (2019). The Disk population, approximated by the main-sequence population, is estimated from the Gaia CMD using all stars with G_BP − G_RP ≤ 2.5. The Bulge population is estimated from the RGC population of the CMD (i.e., 2.8 < G_BP − G_RP ≤ 3.5 and 17.8 ≤ G ≤ 19). Because this object is blended, it is difficult to extract meaningful constraints from the proper-motion distribution. However, it will be possible to do so once the source and lens are sufficiently separated, as discussed later.
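The claim that the Gaia object is the sum of the source and blend light can be checked with simple flux addition; the sketch below combines the source and blend magnitudes quoted in this section and recovers a color and magnitude close to the Gaia-transformed values above.

```python
import math

def combine_mags(m1, m2):
    """Magnitude of the summed flux of two stars."""
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

V_s, I_s = 20.8, 16.94   # source (calibrated to the OGLE-III system)
V_b, I_b = 18.5, 16.88   # blend
V_tot, I_tot = combine_mags(V_s, V_b), combine_mags(I_s, I_b)
print(round(V_tot - I_tot, 2), round(I_tot, 2))
# ~ (2.2, 16.2): consistent with the Gaia object at ((V-I), I) = (2.25, 16.17)
```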
Analysis of the blend

Figure 4. Color-magnitude diagrams from the ROME/REA survey. The blue dots represent all stars within 2' of the event location from the VPHAS catalog (Drew et al. 2016), while the aligned ROME/REA stars are in orange. The source and blend are represented in magenta and blue, respectively. The dashed square in the left plot marks the stars used to estimate the RGC centroid for both CMDs.

As can be seen in the different CMDs, there is a significant blend flux in the data sets, which likely belongs to the population of foreground stars of the Galactic Disk. The measurements from MOA/OGLE are (V, I)_b = (18.5 ± 0.1, 16.88 ± 0.01) mag, consistent with a late F dwarf located at ∼2.5 kpc (Bessell & Brett 1988; Pecaut & Mamajek 2013), assuming half of the total extinction. Measurements from the ROME survey indicate blend brightnesses of (g, r, i)_b = (19.43 ± 0.01, 17.91 ± 0.01, 16.97 ± 0.02) mag, consistent with a G dwarf at ∼2.0 kpc (Finlator et al. 2000; Schlafly et al. 2010). These results are in agreement with the lens properties derived in the next section. The source and blend have similar brightnesses in the Cousins I band, but the former is much redder. Therefore, the object identified at this location by Gaia is dominated by light from the blend. At epoch J2016, Gaia measures a total offset of 60 ± 15 mas with respect to the event location measured in 2019 (i.e., during peak magnification). The reported error on the offset has been computed from the North and East component errors from the ground surveys (of the order of ∼15 mas), neglecting the Gaia errors (of the order of ∼0.1 mas). Similarly, the magnified source and the baseline object in the KMTNet images are separated by ∼0.068 pixels, equivalent to 27 mas. Because the blend and source have similar brightnesses, this indicates a separation between the blend and the source of ∼60 mas. Therefore, the hypothesis that the blend, or a potential companion to it, is the lens is plausible. Assuming that the blend is the lens, with π_E ∼ 0.2, a source distance D_S ∼ 8 kpc and the light detected by Gaia due solely to the blend, we can estimate θ_E ∼ (0.39 − 0.125)/0.2 ∼ 1.3 mas and M_L ∼ 0.8 M_⊙. The astrometric solution is therefore also compatible with a G dwarf at ∼2.5 kpc, with the notable exception of the relative proper motion. Indeed, we can estimate the geocentric relative proper motion to be µ_geo = θ_E/t_E ∼ 1.3/80 ∼ 6 mas/yr. Because this event peaked in early March, the heliocentric correction v_⊕ π_rel/au ∼ (0.2, −0.4) mas/yr is small, and we can therefore assume µ_geo ≈ µ_hel (Dong et al. 2009). The separation between the lens and the source at the Gaia epoch (J2016) is therefore expected to be ∼18 mas, significantly smaller than the measured ∼60 mas. However, this argument cannot, by itself, rule out the hypothesis that the blend is the lens, owing to the relatively large errors. Therefore, the photometric and astrometric arguments together provide evidence that the lens represents a significant fraction of the blended light, but only high-resolution imaging in the near future will provide a conclusive answer to this puzzle.

Figure 5. Color-magnitude diagram from the MOA survey, calibrated to the OGLE-III system, for stars located in a 2'×2' square around the event. We also display the positions of the source and the blend using the I measurements from the OGLE light curve.

Table 3. Source and blend magnitudes for the three 2L1S models (results are almost identical for u_0 < 0 and u_0 > 0). Numbers in brackets represent the 1σ errors. MOA magnitudes have been converted to the OGLE-III system using the transformation in Appendix B.
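Under the rectilinear-motion assumption used in the proper-motion argument above, the lens-source separation grows linearly with time; this short sketch reproduces both the ∼18 mas expected at the Gaia J2016 epoch and the ∼55 mas expected a decade after the event.

```python
def separation_mas(mu_rel_mas_yr, years_from_peak):
    """Lens-source separation assuming constant relative proper motion."""
    return mu_rel_mas_yr * abs(years_from_peak)

print(separation_mas(6.0, -3))   # J2016, ~3 yr before the 2019 peak: ~18 mas
print(separation_mas(5.5, 10))   # ~10 yr after the event: ~55 mas
```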
PHYSICAL PROPERTIES OF THE LENS SYSTEM

Because the normalised source radius ρ cannot be estimated from the fit, inferring the Einstein radius is not possible without extra measurements, such as the microlensing astrometric signal (Dominik & Sahu 2000), the lens flux, or lens-source separation measurements taken after several years (Alcock et al. 2001; Beaulieu 2019). In order to estimate the physical properties of the lens, prior information from Galactic models can be used. By drawing random source-lens pairs from distributions of stellar physical parameters derived from the galactic models along the line of sight, and calculating the respective microlensing model parameters, the lens mass and distance probability densities can be estimated. This has been done many times in the past with parameterized models specifically designed to study microlensing events (Han & Gould 1995, 2003; Dominik 2006; Bennett et al. 2014; Koshimoto et al. 2021). But there are also modern galactic models that have been extensively tested and are publicly accessible. From a theoretical point of view, these elaborate models are of great interest because they include additional relevant quantities, such as color, extinction and stellar type. These quantities can be used to constrain physical parameters, but also to predict properties for follow-up observations in the more distant future. In this work, we performed a parallel analysis using the parametric model of Dominik (2006), the Besançon model and the GalMod model, described hereafter.

The Besançon Model

The first galactic model we use to generate a stellar population is the Besançon model (Robin et al. 2003), version M1612. This version consists of an ellipsoidal Bulge tilted by ∼10° from the Sun-Galactic center direction and populated with stellar masses drawn from a broken power-law initial mass function (IMF), dN/dM ∝ M^α, with α = −1 for 0.15 M_⊙ ≤ M ≤ 0.7 M_⊙ and α = −2.35 for M > 0.7 M_⊙ (Robin et al. 2012; Penny et al. 2019). The Disk is modelled by a thin-disk component with a two-slope power-law IMF, with α = 1.6 for M ≤ 1 M_⊙ and α = 3.0 for M > 1 M_⊙ (Robin et al. 2012), while the density distribution is derived from Einasto (1979). The outer part of the disk model has recently been updated and is described in Amôres et al. (2017). The thick-disk and halo populations are fully described in Robin et al. (2014), while the kinematics of the populations are described in Bienaymé et al. (2015). We select the Marshall et al. (2006) 3D map to estimate the extinction for the simulation. Finally, we note that the Besançon model has been used in several studies for microlensing predictions. Based on the original work of Kerins et al. (2009), Awiphan et al. (2016) and Specht et al. (2020) developed the MaBµlS-2 software, which computes theoretical maps of the distributions of optical depth, event rate and timescale of microlensing events that are in good agreement with observations. In particular, the MaBµlS-2 predictions of event rate and optical depth are in excellent agreement with the 8 years of observations from the OGLE survey (Mróz et al. 2019). The Besançon model has also been used by Penny et al. (2013) to simulate the potential yield of a microlensing exoplanet survey with the Euclid space telescope. More recently, Penny et al. (2019) and Johnson et al. (2020) used an updated version of the Besançon Galactic model to estimate the expected number of detections of bound and unbound planets from the Roman (formerly WFIRST) microlensing survey (Spergel et al. 2015).

GalMod

The second simulation was made using the "Galaxy Model" (GalMod, version 18.21), a theoretical stellar-population synthesis model (Pasetto et al. 2018) that simulates a mock catalog for a given field of view and photometric system. As for the Besançon model, the parameter range in magnitude and color permits the simulation of faint lens stars down to the dwarf and brown dwarf regime. Briefly, GalMod consists of the sum of several stellar populations, including a thin and a thick disk, a stellar halo, and a bulge immersed in a halo of dark matter. Stars are generated using the multiple-stellar-population consistency theorem described in Pasetto et al. (2019), with a kinematic model from Pasetto et al. (2016). For our simulation, we used the Rosin-Rammler star formation rate (SFR) (Chiosi 1980) for the Bulge and the tilted bar. The thin disk is a combination of five different stellar populations with various ages and kinematics and a constant SFR, while the thick disk is drawn from a single population. We used the same IMF (Kroupa 2001) for all components of the model.

Methodology

We first requested samples from the two models within a 2° cone along the line of sight to the event, and set the maximum distance to 10 kpc. We then draw samples of lens and source star combinations and apply a sequence of rules. First, the source has to be more distant than the lens. Then, we consider an event only if the angular separation between the source and the lens is below 10". Following the approach described in Shin et al. (2019), we compute the associated event parameters (i.e., t_E, π_E, θ* and I_s in this case) and compare them with the measured observables derived from modeling. Each such combination contributes to the final derived distribution with a weight

$$w_i = \exp(-\delta_i^2/2),$$

with δ_i² being the Mahalanobis distance

$$\delta_i^2 = \Delta_i^{T} C^{-1} \Delta_i,$$

where ∆_i are the differences between the best-fit model parameters and the simulated parameters and C is the covariance matrix. Note that we also reject models with ρ ≥ 0.01, following the discussion presented in Section 3.1. Because the galactic models return a finite number of stars (168,134 for the Besançon model and 64,679 for GalMod) and the event parameters are slightly unusual (with t_E ∼ 80 days and θ* ∼ 10 µas), a large fraction of the lens-source combinations have a null weight. For instance, the Besançon model contains only ∼0.4% of stars with |θ* − 11.5 µas| < 3σ, and only about 1% of events are expected to have t_E ∼ 80 days (Mróz et al. 2019). Therefore, the vast majority of trials (≥99.9%) have null weights w_i, and it would require several trillion trials to obtain meaningful parameter distributions. In light of this, we adjusted our strategy and adopted an MCMC approach. Using the Mahalanobis distance as the log-likelihood, we use the galactic models to define priors on the modeling parameters: the source and lens distances D_S and D_L, the proper motions of the source and lens, the mass of the lens M_L, the angular radius of the source θ* and the magnitude of the source I_s. We use a Kernel Density Estimation (KDE) algorithm to derive continuous distributions from the galactic-model samples. This allows a prior estimate across the entire parameter space, at the cost of a somewhat smoother distribution and the use of extrapolation.
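The weighting and KDE-prior MCMC strategy described in this subsection can be sketched as follows. The observable values, their covariance, the placeholder galactic-model draws and the stubbed observable-prediction step are all assumptions made for illustration, not the actual catalogs or fit.

```python
import numpy as np
import emcee
from scipy.stats import gaussian_kde

# Fitted observables (tE [d], piE, theta* [uas], I_s [mag]) and an assumed
# diagonal covariance; a real analysis would use the fit and its error matrix.
OBS = np.array([80.0, 0.2, 10.4, 16.9])
COV = np.diag([4.0, 0.03, 1.6, 0.05]) ** 2

def log_weight(predicted):
    """log w_i = -delta_i^2 / 2, the Mahalanobis term defined above."""
    d = predicted - OBS
    return -0.5 * d @ np.linalg.solve(COV, d)

# KDE prior over (M_L [M_sun], D_L [kpc]) built from galactic-model samples;
# uniform placeholder draws stand in for the Besancon/GalMod catalogs here.
rng = np.random.default_rng(1)
samples = np.column_stack([10 ** rng.uniform(-1, 0, 5000),
                           rng.uniform(1.0, 8.0, 5000)])
prior_kde = gaussian_kde(samples.T)

def predicted_observables(p):
    return OBS  # stub: compute tE, piE, theta*, I_s implied by (M_L, D_L)

def log_posterior(p):
    density = prior_kde(p)[0]
    if density <= 0:
        return -np.inf
    return np.log(density) + log_weight(predicted_observables(p))

sampler = emcee.EnsembleSampler(nwalkers=32, ndim=2, log_prob_fn=log_posterior)
sampler.run_mcmc(samples[:32], 1000, progress=False)
lens_masses = sampler.get_chain(discard=200, flat=True)[:, 0]
```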
Results

We present the posterior distributions for the 2L1S+POM model in Figure 7 and the derived results for all models in Table 4 (for the Dominik (2006) model, the source distance and its errors were held fixed). Results from the Galactic model of Dominik (2006) are also presented for comparison. As a supplementary check, we also derived posterior distributions from the recent Galactic model of Koshimoto et al. (2021), especially designed for microlensing studies, and found consistent results. Despite the relatively broad distributions, we find that the galactic models are in good agreement for all microlensing models. The main differences are seen in the distributions of the source and lens proper motions. The GalMod model has a much narrower distribution of stellar proper motions, as can be seen in the first row of Figure 7, which directly propagates to the source and lens proper motions. However, the relative proper motions from the two galactic models are in 1σ agreement, at ∼5.5 mas/yr. This is because the relative proper motion is strongly constrained by t_E, which is well determined from the models in this case. For all microlensing models, the derived mass and distance of the host are compatible with the measured blend light, with the exception of the 2L1S-P model, which is much fainter, with V_L ∼ 20.5. The companion is an object at the planet/brown dwarf boundary.

While the binary-source and static binary models can be safely discarded due to their very high ∆χ² values relative to the best-fit model, the selection of the best overall model with/without orbital motion (∆χ² ∼ 200) and the sign of u_0 (∆χ² ∼ 50) is less trivial. In principle, the ∆χ² between the various models is statistically significant. However, despite the error-bar rescaling, the data-set residuals can be affected by low-level systematics, leading to errors that are not normally distributed (Bachelet et al. 2017). Because they modify the source trajectory in a similar way, the orbital-motion and parallax parameters are often correlated (Batista et al. 2011), which is also the case for this event. The North component π_E,N of the parallax vector is in agreement between all non-static models, suggesting that the parallax signal is strong and real, as expected for an event duration of t_E ∼ 80 days. For models including orbital motion, we can use the results from the galactic models to verify whether the system is bound (Dong et al. 2009; Udalski et al. 2018). The condition for a bound system is (K_E + P_E) < 0, where K_E and P_E are the kinetic and potential energies, and this can be rewritten in terms of the ratio of the projected velocity to the projected escape velocity (Dong et al. 2009). We find ratios of ∼6 for the 2L1S−POM and 2L1S+POM models. Taken at face value, these ratios indicate that the companion is not bound and that these models are unlikely. However, the relative errors are large, i.e. ≥100%, and therefore the models with orbital motion cannot be completely ruled out. Given the relatively modest improvement in χ² of the orbital-motion models, we decided not to explore more sophisticated models, such as the full Keplerian parametrization (Skowron et al. 2011).
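The bound/unbound test quoted above can be sketched numerically. We assume a Dong et al. (2009)-style criterion in which the squared ratio of projected velocity to projected escape velocity is v_⊥² r_⊥ / (2 G M); a value above 1 rules out a bound orbit, since the true velocity can only be larger, and the true separation only wider, than their projections. The lens inputs below are placeholders, not the fitted values.

```python
import math

G = 4 * math.pi ** 2  # au^3 / (M_sun * yr^2)

def projected_energy_ratio(v_perp_au_yr, r_perp_au, M_total_msun):
    """(v_perp^2 * r_perp) / (2 G M): > 1 implies the pair cannot be bound."""
    return v_perp_au_yr ** 2 * r_perp_au / (2.0 * G * M_total_msun)

# Sanity check with the Earth-Sun system (v = 2*pi au/yr, r = 1 au): ratio 0.5
print(projected_energy_ratio(2 * math.pi, 1.0, 1.0))
# Placeholder lens values giving a ratio well above 1 (an unbound verdict):
print(projected_energy_ratio(20.0, 3.0, 0.85))
```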
CONCLUSIONS

We presented the analysis of the microlensing event MOA-2019-BLG-008. The modeling of this event supports a binary-lens interpretation with a mass ratio q ≈ 0.04 between the two components of the lens. Because the source trajectory did not approach the caustics of the system, finite-source effects were not detected, so the lens mass and distance could only be weakly constrained. We used the Besançon and GalMod synthetic stellar-population models of the Milky Way to estimate the most likely physical parameters of the lens. By using samples generated by these models, in combination with the available constraints on the event timescale t_E, the microlensing parallax π_E, the source magnitude I_s and angular radius θ*, we were able to place constraints on the lens mass and distance. We found that all galactic models, including that of Dominik (2006), converge to similar solutions for the lens mass and distance, despite different underlying hypotheses (especially for the stellar proper motions). We explored several microlensing binary-lens models, and they are all consistent with a main-sequence star lens located ≤4 kpc from Earth.

The microlensing models also indicate the presence of a bright blend, separated by ∼60 mas from the source, with (V, I)_b ∼ (18.5, 17.0) mag. Assuming that the blend suffers half of the total extinction towards the source, this object is compatible with a late F dwarf at ∼2.5 kpc (Bessell & Brett 1988; Pecaut & Mamajek 2013), consistent with the lens properties derived from the galactic-model analysis. The astrometric measurement made by Gaia at this position returns D = 2.56 ± 0.79 kpc. Assuming this object to be the lens, we derived θ_E ∼ 1.3 mas and M_L ∼ 0.8 M_⊙, also consistent with the previous estimates. Depending on the exact nature of the host, the lens companion is either a massive Jupiter or a low-mass brown dwarf. Given the relative proper motion, µ_rel = 5.5 mas/yr, the lens and source should be sufficiently separated in about 10 years to be observed via high-resolution imaging with 10-m class telescopes. This would provide the additional information needed to confirm the exact nature of the lens, including its companion.

Even though the physical nature of the host star cannot yet be firmly established, it is almost certain that the companion is located at the brown dwarf/planet mass boundary. The increasing number of reported discoveries of such objects, especially by microlensing surveys (see for example Bachelet et al.
(2019) and references therein), provides important observational data that can be used to improve the theoretical framework underpinning planet formation. Indeed, there is growing evidence that the critical mass for deuterium ignition (i.e., ∼13 M_Jup) does not represent a clear-cut boundary (Chabrier et al. 2014). While there is compelling evidence that the two classes of objects are produced by different formation processes (Reggiani et al. 2016; Bowler et al. 2020), more observational constraints will be necessary to better appreciate the differences between them. As for MOA-2019-BLG-008, it can be expected that a fraction of the events detected by the Roman microlensing survey will lack at least one mass-distance relation, i.e. θ_E or π_E. In this context, the Besançon and GalMod models can be particularly helpful in estimating the most likely lens parameters. Indeed, while Penny et al. (2019) and Terry et al. (2020) report some discrepancies between observations and their catalogs, these models are constantly upgraded to refine their predictions. In particular, the high-accuracy astrometric measurements from Gaia will offer unique constraints on the proper motions and distances of stars out to the Galactic Bulge population at ∼8 kpc.

ACKNOWLEDGMENTS

RAS and EB gratefully acknowledge support from NASA grant 80NSSC19K0291. YT and JW acknowledge the support of DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" (WA 1047/11-1). KH acknowledges support from STFC grant ST/R000824/1. J.C.Y. acknowledges support from N.S.F Grant No. AST-2108414. Work by C.H. was supported by grants of the National Research Foundation of Korea (2019R1A2C2085965 and 2020R1A4A2002885). This research has made use of NASA's Astrophysics Data System and the NASA Exoplanet Archive. The work was partly based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 177.D-3023, as part of the VST Photometric Hα Survey of the Southern Galactic Plane and Bulge (VPHAS+, www.vphas.eu). This work also made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. CITEUC is funded by National Funds through FCT - Foundation for Science and Technology (project: UID/Multi/00611/2013) and FEDER - European Regional Development Fund through COMPETE 2020 - Operational Programme Competitiveness and Internationalization (project: POCI-01-0145-FEDER-006922). DMB acknowledges the support of the NYU Abu Dhabi Research Enhancement Fund under grant RE124. This research uses data obtained through the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences, and the Special Fund for Astronomy from the Ministry of Finance. This work was partly supported by the National Science Foundation of China (Grants No. 11333003, 11390372 and 11761131004 to SM).
This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI), and the data were obtained at the three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. The MOA project is supported by JSPS KAKENHI Grant Numbers JSPS24253004, JSPS26247023, JSPS23340064, JSPS15H00781, JP16H06287, and JP17H02871.

APPENDIX A. NEW SDSS COLOR-RADIUS RELATION

As discussed in the main text, the source magnitude in the g band from the ROME survey is not well known, due to the low sampling of the light curve, but the source brightness in the r and i bands is well measured. Because Boyajian et al. (2014) do not provide a relation for these bands, we collected the data and estimated these relations ourselves. As described in Boyajian et al. (2014) and available on SIMBAD, we used the magnitude measurements from Boyajian et al. (2013) and the angular diameter measurements from various interferometers: the CHARA Array (Di Folco et al. 2004; Bigot et al. 2006; Baines et al. 2008, 2012; Boyajian et al. 2012; Ligi et al. 2012; Bigot et al. 2011; Crepp et al. 2012; Bazot et al. 2011; Huber et al. 2012), the Palomar Testbed Interferometer (van Belle & von Braun 2009), the Very Large Telescope Interferometer (Kervella et al. 2003a,b, 2004; Di Folco et al. 2004; Thévenin et al. 2005; Chiavassa et al. 2012), the Sydney University Stellar Interferometer, the Narrabri Intensity Interferometer (Hanbury Brown et al. 1974), Mark III (Mozurkewich et al. 2003) and the Navy Prototype Optical Interferometer (Nordgren et al. 1999, 2001). We then fitted a color-radius relation of the form

$$\log_{10}\theta_* = \sum_n a_n X_i^n - 0.2\, i_0,$$

where X_i is the considered color and i_0 is the de-reddened magnitude in the i band. For stars with several brightness measurements in the g, r and i bands, we used the mean as our final value, and the error was estimated from the sample variance with the quadratic addition of a 0.005 mag minimum error. We also used the error on the measured radius where available, and added in quadrature the error on the observed i_0 magnitude. We explored solutions using polynomials of different degrees and stopped as soon as the relative error on an a_n coefficient reached 1. The data and best-fit relations can be seen in Figure 8. As expected, the (r−i)_0 relation is less accurate (rms ∼ 0.05) than the (g−i)_0 relation (rms ∼ 0.04), especially for the coolest stars with (r−i)_0 ≥ 1 mag. But the accuracy is similar for a MOA-2019-BLG-008-like source with (r−i)_0 ∼ 0.75 mag.
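A minimal sketch of how such a color-radius relation can be fitted and applied, assuming the zero-magnitude angular-radius form log θ* = Σ a_n X^n − 0.2 i_0 written above; the calibration arrays below are placeholders, not the actual interferometric sample.

```python
import numpy as np

def fit_color_radius(color, i0, theta_uas, deg=2):
    """Fit log10(theta*) + 0.2 * i0 as a polynomial in color; returns the
    coefficients a_n (highest power first, following numpy conventions)."""
    y = np.log10(theta_uas) + 0.2 * np.asarray(i0)
    return np.polyfit(color, y, deg)

def predict_theta_uas(coeffs, color, i0):
    """Invert the relation for a star of de-reddened color and i0."""
    return 10 ** (np.polyval(coeffs, color) - 0.2 * i0)

# Placeholder calibrators (de-reddened color, i0, theta in uas); real inputs
# would be the Boyajian et al. (2013) photometry and measured diameters.
color = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
i0 = np.array([5.1, 5.0, 4.8, 4.9, 4.7, 4.6])
theta = np.array([250.0, 320.0, 430.0, 500.0, 640.0, 800.0])
coeffs = fit_color_radius(color, i0, theta)
print(predict_theta_uas(coeffs, 0.75, 15.0))  # e.g. a de-reddened source
```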
Protection against Mucosal SHIV Challenge by Peptide and Helper-Dependent Adenovirus Vaccines

Groups of rhesus macaques that had previously been immunized with HIV-1 envelope (env) peptides and first-generation adenovirus serotype 5 (FG-Ad5) vaccines expressing the same peptides were immunized intramuscularly three times with helper-dependent adenovirus (HD-Ad) vaccines expressing only the HIV-1 envelope from JRFL. No gag, pol, or other SHIV genes were used for vaccination. One group of the FG-Ad5-immune animals was immunized three times with HD-Ad5 expressing env. One group was immunized by serotype-switching with HD-Ad6, HD-Ad1, and HD-Ad2 expressing env. Previous work demonstrated that serum antibody levels against env were significantly higher in the serotype-switched group than in the HD-Ad5 group. In this study, neutralizing antibody and T cell responses were compared between the groups before and after rectal challenge with CCR5-tropic SHIV-SF162P3. When serum samples were assayed for neutralizing antibodies, only weak activity was observed. T cell responses against env epitopes were higher in the serotype-switched group. When these animals were challenged rectally with SHIV-SF162P3, both the Ad5 and serotype-switch groups significantly reduced peak viral loads 2- to 10-fold 2 weeks after infection. Peak viral loads were significantly lower for the serotype-switched group than for the HD-Ad5-immunized group. Viral loads declined over the 18 weeks after infection, with some animals' viremia declining nearly 4 logs from the peak. These data demonstrate significant mucosal vaccine effects after immunization with only env antigens. These data also demonstrate that HD-Ad vectors are a robust platform for vaccination.

It has been estimated that as much as 90% of HIV-1 infections occur by sexual transmission. In these cases, infection is thought to occur mostly at vaginal, rectal, and urethral mucosal surfaces (reviewed in [17]). Given that the mucosal surface is the predominant entry route for HIV-1, there has been increasing interest in the development of vaccines that can generate robust antibody and cellular responses at mucosal surfaces (reviewed in [18]). Despite the recognized need for mucosal protection, most non-human primate challenge models involve intravenous injection of SIV or SHIV into animals. While this is appropriate to test the quality of systemic vaccination, this vaccine-challenge design may not address whether mucosal protection is produced.

Adenoviral (Ad) vectors are one of the most robust gene-based vaccine vectors available [19-24]. Until recently, most adenoviral vaccine experiments have utilized the well-studied human adenovirus serotype 5 (Ad5). While this virus is one of the most robust at generating anti-HIV immune responses, the majority of the human population has been exposed to it and has pre-existing neutralizing antibodies that can attenuate vaccine delivery [25]. Moreover, once an Ad vaccine is introduced into a non-immune host, it will itself provoke an anti-vector response that will quench subsequent use of the same vaccine. One approach to evade neutralizing antibodies is to "serotype switch" the vector by changing the serotype of the Ad vaccine at each administration [26,27]. When applied to HIV vaccines, serotype-switching evades vector-induced immunity, allowing robust prime-boost vaccination with different adenoviruses [28-32].
In most cases, Ad serotype-switching has been performed using first-generation adenoviral (FG-Ad) vectors. We recently demonstrated proof of principle for the use of helper-dependent adenoviral (HD-Ad) vectors for serotype-switching in mice and non-human primates [33]. In HD-Ad vectors, all viral sequences are deleted from the vector with the exception of the inverted terminal repeats (ITRs) and the packaging signal needed to replicate and package the vector. This allows sequences as large as 35 kilobase pairs to be packaged [34,35]. Because all adenoviral genes have been removed from the vector, no Ad proteins are expressed after vector delivery. Therefore, HD-Ad vectors generate lower vector-specific immune responses [36-38]. The HD-Ad system easily allows serotype-switching, since Ads in the same species can cross-package each other's genomes. We recently utilized species C Ad helper viruses from serotypes 1, 2, 5, and 6 to cross-package HD-Ad5 vectors expressing reporter genes or HIV-1 env [33]. By this approach, we demonstrated that the HD-Ad vectors generated lower anti-vector immune responses and allowed multiple rounds of prime-boost against HIV-1 env in mice and in FG-Ad5-immune rhesus macaques [33]. In this work, we have mucosally challenged these HD-Ad-immunized macaques by rectal administration of the CCR5-tropic virus SHIV-SF162P3 [39]. We provide data on T cell and neutralizing antibody immune responses to complement our previous report on ELISA antibody responses against env. We also provide data on the effects of repeated HD-Ad5 vaccination versus serotype-switch HD-Ad6, 1, and 2 vaccination on viral loads in the animals.

Immunizations Prior to HD-Ad Vaccinations

Eight macaques from previous studies (Table 1) were used in these experiments to conserve animals and because of their prior immunizations with FG-Ad5 vectors. These animals had originally been immunized with various formulations of a synthetic peptide vaccine consisting of six conserved epitopes in the envelope (env) protein that had previously been shown to be effective at priming HIV-specific cellular immune responses in multiple animal models and humans [11,40-44]. These six peptides, shown aligned to env in Figure 1, generate CD4 and CD8 responses without generating antibody responses. These peptides in various formulations mediate protection in macaques against SHIV-KU2 and SHIV-89.6P [43,44]. Prior to the HD-Ad study, macaques Rh51, Rh55, Rh62, and Rh63 had been vaccinated with the six synthetic peptides adjuvanted with FLT-3 ligand and CpG and by loading on dendritic cells (Table 1). Macaques Rh52, Rh61, Rh66, and Rh67 received a similar course, with the exception of receiving an inactivated cholera toxin adjuvant (CT2*) rather than FLT-3 ligand and CpG (Table 1). These animals were selected for study with HD-Ad since they had all previously been vaccinated twice by the nasal route with 10^11 virus particles (v.p.) of FG-Ad5 expressing a fusion protein of the six peptides (vector described in [45]). Therefore, these animals represented an Ad5 pre-immune population on which to test the utility of HD-Ad serotype-switching. While this was advantageous, the prior immunizations with the six env peptides could affect T cell responses against these six epitopes that might be generated by the HD-Ad vaccines, but should not affect T cell responses outside these regions (Figure 1).
Likewise, since the peptide vaccines do not generate antibodies against env, they would not be expected to confound the antibody effects of the HD-Ad vaccines.

Figure 1. Protein sequence alignment of envelope antigens used in this study. The JRFL gp140 immunogen expressed by the HD-Ad vectors was aligned to the SF162P3 env protein of the challenge virus. Boxes indicate the locations of the six env peptides that were used to vaccinate the macaques prior to HD-Ad vaccination.

HD-Ad Vaccinations

The macaques in this study were vaccinated only with env immunogens. No gag, pol, or other SHIV sequences were used. The JRFL gp140 env antigen in the Ad vaccines was generated by deletion of the furin cleavage site between gp120 and gp41 and deletion of the transmembrane domain. This immunogen therefore does not immunize against epitopes that are present in the cleavage and transmembrane domains of the SHIV-SF162P3 challenge virus. Alignment of the JRFL immunogen with the SF162P3 antigen shows 545 identical amino acids and 47 divergent amino acids within the common peptide sequences (Figure 1). Therefore, the JRFL immunogen has 89% identity with the challenge virus. Macaques Rh51 and Rh55 from the FLT group and animals Rh52 and Rh61 from the CT2* group were utilized for HD-Ad5 vaccination (Table 1). Monkeys Rh62 and Rh63 from the FLT group and macaques Rh66 and Rh67 from the CT2* group were used for HD-Ad6, 1, and 2 vaccination. Each group of four macaques was immunized at days 0, 24, and 67 with 10^11 vp of the indicated HD-Ads expressing the JRFL gp140 form of env (Figure 1) by i.m. injection. Group 1 received HD-Ad5 three times. Group 2 received HD-Ad6, then HD-Ad1, then HD-Ad2 at the same time points.

Neutralizing Antibodies Generated Against HIV-1 Envelope

We previously reported on the antibody responses against env by ELISA [33]. This work revealed that FG-Ad5-immune animals immunized with only HD-Ad5-Env generated only minimal responses. In contrast, immunization with HD-Ad6-Env, HD-Ad1-Env, and HD-Ad2-Env generated detectable anti-env antibodies at each immunization, with final antibody levels 10-fold higher than in the HD-Ad5 group (p < 0.01) [33]. Given the high ELISA antibody responses, the samples were sent to the Immune Monitoring Core supervised by Dr. David Montefiori at Duke University to assess whether these antibodies could neutralize SHIV or HIV viruses in vitro (Table 2). By this assay, only slight neutralization titers were observed when the samples were tested against SHIV-SF162P4 and 89.6P.18, but not against the other test viruses, which included the field isolate SHIV-SF162P3.

Neutralizing Antibodies Against Adenovirus

Ad5 neutralizing antibody levels were monitored in the animals after each immunization (Figure 2). Before the first HD-Ad immunization, Ad5 neutralizing titers were 28 for the HD-Ad5 group and 52 for the serotype-switch group, demonstrating that the prior intranasal FG-Ad5 immunizations had produced anti-Ad5 immunity in the animals. After the first HD-Ad immunization, HD-Ad5 and HD-Ad6 boosted Ad5 neutralization titers to 500 in both groups. Two more immunizations with HD-Ad5 increased final titers to 800. One immunization with HD-Ad1 and then one with HD-Ad2 produced declining anti-Ad5 antibody levels that were three-fold lower than those generated by three HD-Ad5 immunizations. These data indicate that other viruses in species C (i.e. HD-Ad6) can boost common neutralizing antibody levels, but that serotype-switching ultimately reduces the level of neutralizing antibodies after three immunizations.
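Neutralization titers like those above are typically read off a serial-dilution curve as the reciprocal dilution giving 50% inhibition. The sketch below shows one common log-linear interpolation, with entirely hypothetical dilution and inhibition values; the core laboratory's exact readout may differ.

```python
import numpy as np

def nt50(dilutions, percent_inhibition):
    """Reciprocal serum dilution giving 50% neutralization, interpolated
    in log10(dilution); assumes inhibition falls as serum is diluted."""
    logd = np.log10(np.asarray(dilutions, float))
    inhib = np.asarray(percent_inhibition, float)
    # np.interp needs ascending x, so reverse the dilution series
    return 10 ** np.interp(50.0, inhib[::-1], logd[::-1])

dils = [20, 60, 180, 540, 1620]    # hypothetical 3-fold dilution series
inhib = [95, 80, 55, 30, 10]       # percent neutralization at each dilution
print(round(nt50(dils, inhib)))    # a titer in the low hundreds
```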
Figure 2. Neutralizing antibody responses against Ad. Plasma samples taken at the indicated times were incubated with Ad5 expressing luciferase for 1 hour at 37°C prior to addition to A549 cells. Luciferase activity was measured 24 hours later, and gene delivery was compared to that of untreated Ad5 vector. Data are expressed as the geometric mean titers that reduced Ad luciferase activity by 50%.

T Cell Responses Generated by the HD-Ad Vaccines

PBMCs were harvested before and after each vaccination to monitor T cell responses against the env antigen by ELISPOT. PBMCs were stimulated either with the six epitopes of the peptide vaccine that was delivered prior to Ad vaccination or with overlapping 15-mer peptide pools from HIV-1 SF162P3 env covering the gp140 region in the HD-Ad vectors. Alignment of the JRFL gp140 immunogen with the SF162P3 peptide pools shows 89% identity with the peptides used for ELISPOT. Alignment of JRFL with the peptide vaccine shows amino acid mismatches in four of the six peptides (Figure 1). ELISPOT testing before HD-Ad vaccination revealed responses below background for two macaques in the HD-Ad6/1/2 group and three in the HD-Ad5 group (Figure 3). The three other macaques had weak ELISPOT signals of 200 or fewer SFCs per 10^6 cells (Figure 3). With each HD-Ad immunization, CD8 IFN-γ SFCs generally increased in both groups after stimulation with the SF162P3 env overlapping peptide pools. Responses against the SF162P3 peptides were higher in all of the serotype-switched animals and were less variable than in the HD-Ad5 group. T cell responses peaked after one or two immunizations in the HD-Ad5 group, with peaks from 200 to 800 SFCs per 10^6 cells. In contrast, T cell responses peaked in most serotype-switched animals after the third immunization, with maxima ranging from 700 to 2,000 SFCs (Table 3). When the six peptides of the peptide vaccine were used to stimulate the PBMCs, SFC responses in both groups were substantially lower and less frequent (Figure 3), suggesting that most of the T cell responses were directed at epitopes outside those covered by the peptide vaccine (Figure 1). Stimulation of the PBMCs with Ad5 or Ad6 produced largely undetectable T cell responses, suggesting that responses were predominantly directed against the env immunogen rather than against the Ad vectors. Values in Table 3 represent the combined ELISPOT responses to all 3 pools of overlapping peptides with pre-HD-Ad immune responses subtracted; bold values indicate peak cellular anti-SF162P3 immune responses.
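The SFC-per-10^6-cells values reported above follow from a simple normalization of raw spot counts; a minimal sketch with hypothetical well counts, assuming background (medium-only) spots are subtracted as in the Table 3 note.

```python
def sfc_per_million(spots, cells_per_well, background_spots=0):
    """Background-subtracted spot-forming cells per 10^6 PBMCs."""
    net = max(spots - background_spots, 0)
    return 1e6 * net / cells_per_well

# e.g. 45 spots over a 5-spot medium-only background at 2e5 cells per well:
print(sfc_per_million(45, 2e5, background_spots=5))  # 200 SFC per 10^6 cells
```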
When Rh51 was censored, peak viremia at 2 weeks for both vaccine groups was significantly lower than in controls (p = 0.04, Figure 4B). Notably, peak viremia for the HD-Ad6/1/2 group was significantly lower than for the HD-Ad5/5/5 group (p < 0.05). By 18 weeks, viral set points were below 3,000 copies for all of the immunized macaques. This was notable, since all of these animals were vaccinated only with env antigens; no gag or other SHIV antigens were used. While the HD-Ad5/5/5 and HD-Ad6/1/2 groups were not significantly different from each other at this time, it was interesting that the viral RNA levels for Rh55 from the HD-Ad5 group and Rh63 and Rh67 from the HD-Ad6/1/2 group were down to 30-60 copy eq./ml, four orders of magnitude below their peak viremia.

Discussion
We previously reported the use of HD-Ad vectors for HIV vaccination [33]. In that earlier work, we were able to utilize eight macaques that had previously been immunized nasally with FG-Ad5 to test our ability to vaccinate Ad5-immune macaques. We demonstrated that serotype-switching did indeed provide robust circumvention of pre-existing immunity in these non-human primates and allowed the production of anti-env antibody responses that were 9 times higher than those generated by HD-Ad5 vectors [33]. In this work, we have analyzed the production of neutralizing antibodies against the env transgene protein and against Ad5 itself. This work shows that the strong anti-env ELISA titers that we observed after serotype-switching unfortunately did not translate into robust neutralizing antibodies against HIV or SHIV. These data suggest that protection was mediated by T cell responses or by other antibody mechanisms (e.g., antibody-dependent cellular cytotoxicity (ADCC) [46]). This is consistent with previous observations that SHIV-SF162P3 is notoriously hard to neutralize with antibodies [47,48]. While these Ad5 pre-immune animals provided a good model to test antibody production alone, they had also previously been vaccinated with the six env peptides in various formats (Table 1). Since these peptide vaccines do not generate antibody responses, this did not affect comparison of env antibody production by the HD-Ad vaccines, but it could affect the production of T cell responses by acting as priming vaccines for the HD-Ad vaccines. To test this, we compared PBMC ELISPOT responses against the cognate six epitopes used in the previous vaccinations and against overlapping 15-mer peptides from SF162P3 env spanning the vaccine's gp140 region. This comparison revealed that the HD-Ad vaccines generated little cross-reactivity against the six peptides but stronger T cell responses against the peptide pools. These data suggest that the HD-Ad vaccines generated much of the detectable T cell response observed in the macaques. While prior immunization with the peptides complicated data analysis, given the strong responses we observed and to minimize future animal use, we opted to challenge these animals with SHIV. We performed mucosal challenge by the rectal route with the CCR5-tropic virus SHIV-SF162P3 to mimic sexual transmission of the virus. While SIV is arguably a more suitable mucosal challenge virus than SHIV, our challenge virus had to express an HIV-1 env to assess the HIV-1 env-directed immunity that the HD-Ad vaccines had established.
This challenge demonstrated that control animals had severe peak viremia after mucosal challenge and that this viremia persisted for six months, until AIDS-like symptoms necessitated euthanasia of two of the animals. In contrast to controls, the HD-Ad-vaccinated animals had 2- to 10-fold lower peak viremia, and viral loads generally trended downward over the next 4 months. For three of the animals, viral loads approached the limits of detection by 18 weeks. These data suggest that the peptide vaccines, the HD-Ad vaccines, or both led to lower viral loads in the animals after mucosal challenge. Given the observed ELISPOT responses, we speculate that much of this protection was mediated by the Ad vaccines. Comparison of the HD-Ad5/5/5-immunized animals and the serotype-switched HD-Ad6/1/2 group demonstrated that animals vaccinated with the different serotypes had statistically lower peak viremia than those immunized with only HD-Ad5. This confirms the utility of serotype-switching that has previously been observed using FG-Ad vectors [28][29][30][31][32]. This also suggests that some of the protection against SHIV challenge was actually mediated by the Ad vectors rather than by the earlier peptide vaccines, since the serotype-switched vaccine generated more robust immune responses that may have resulted in the lower peak and set point viral loads. Peak cellular responses in the HD-Ad6/1/2 serotype-switched group were observed after the third immunization for three of the four immunized animals. This indicates that serotype-switching was driving anamnestic responses, while immune responses in the HD-Ad5/5/5 group may have become blunted due to increasing anti-Ad5 neutralizing antibodies (Figure 2 and Table 3). This comparison is based on censoring Rh51 from the analysis, since it had undetectable viral loads throughout the study. Censoring this animal is based on the assumption that its undetectable viral loads were due to poor "take" of the challenge virus. If Rh51 is included, the two groups are statistically indistinguishable. While it is formally possible that the vaccine fully protected Rh51, we are unaware of an example of sterilizing immunity being generated by any vaccine in this model. In addition, T cell and antibody responses in Rh51 were comparable to those in the other macaques. Therefore, the most likely explanation is that Rh51 simply was not robustly infected by the challenge virus.

Adenoviruses
HD-Ad1, 2, 5, and 6 viruses expressing the gp140 form of HIV-1 JRFL env were produced as previously described [33]. The HD-Ad5-env vector was transfected into a 60-mm dish of 116 cells expressing Cre recombinase, as in [49]. The transfected cells were infected a day later with the E1-deleted Ad5 helper virus AdNG163, whose packaging signal is flanked by loxP sites [49] for deletion in the Cre-expressing cells. Lysates were subsequently amplified by serial infections with AdNG163 in 116 cells. CsCl-banded HD-Ad was then produced from 3 liters of 116 cells, yielding HD-Ad preps with E1-deleted helper contamination of less than 0.02% [49]. HD-Ad1, 2, and 6 vectors were generated with the helper viruses Ad1LC8cCEVS-1, Ad2LC8cCARP [26], and Ad6LC8cCEVS-6, respectively, which were generously provided by Carole Evelegh and Frank L. Graham (McMaster University).
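The group comparisons above, with and without censoring Rh51, can be made concrete with a short sketch. The paper states only that GraphPad Prism 4 was used; the non-parametric test and all viral-load values below are assumptions for illustration, not the study's data or its exact analysis.

```python
# Hedged sketch: comparing peak viremia between the two vaccine groups with
# and without censoring Rh51. All numbers are hypothetical placeholders.
from scipy.stats import mannwhitneyu

# log10 peak plasma viral RNA (copies/ml); illustrative values only
hd_ad5   = {"Rh51": 1.5, "Rh55": 6.9, "Rh52": 6.5, "Rh61": 6.7}  # Rh51: below detection
switched = {"Rh62": 5.8, "Rh63": 5.5, "Rh66": 6.0, "Rh67": 5.6}

def compare(censor=()):
    a = [v for k, v in hd_ad5.items() if k not in censor]
    b = list(switched.values())
    _, p = mannwhitneyu(a, b, alternative="greater")  # is the HD-Ad5 group higher?
    return p

print("Rh51 included:", compare())            # groups look indistinguishable
print("Rh51 censored:", compare(("Rh51",)))   # switched group significantly lower
```

With these placeholder values, including the below-detection animal washes out the group difference (p around 0.17), while censoring it yields p below 0.05, mirroring the qualitative argument made in the text.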
Animals
All animal experiments were carried out according to the provisions of the Animal Welfare Act, PHS Animal Welfare Policy, the principles of the NIH Guide for the Care and Use of Laboratory Animals, and the policies and procedures of the University of Texas MD Anderson Cancer Center. Eleven adult male rhesus macaques (Macaca mulatta) of Indian origin were maintained in the specific-pathogen-free breeding colony at the Michale E. Keeling Center for Comparative Medicine and Research of The University of Texas MD Anderson Cancer Center, Bastrop, TX. The animals were anesthetized during procedures to minimize discomfort. The animals were not screened for Mamu genotype prior to the study, but were randomized into the two HD-Ad vaccine groups so as to distribute previously treated animals equally between the groups.

HD-Ad Vaccination
These peptide- and FG-Ad5-immunized macaques were immunized at days 0, 24, and 67 with 10^11 vp of the indicated HD-Ads by i.m. injection (Table 1).

Collection of Samples
Samples were collected at each indicated time point before any immunization or procedure. Peripheral venous blood samples were collected in EDTA or sodium heparin. Before separation of peripheral blood mononuclear cells (PBMCs) from the blood samples, plasma was separated and stored immediately at -80 °C. PBMCs were prepared from the blood on Ficoll-Hypaque density gradients.

Assay for Neutralization of HIV and SHIV
Neutralization was measured as a reduction in luciferase reporter gene expression after a single round of infection in TZM-bl cells as described [50,51]. TZM-bl cells were obtained from the NIH AIDS Research and Reference Reagent Program, as contributed by John Kappes and Xiaoyun Wu. Briefly, 200 TCID50 of virus was incubated with serial 3-fold dilutions of test sample in duplicate in a total volume of 150 μl for 1 hr at 37 °C in 96-well flat-bottom culture plates. Freshly trypsinized cells (10,000 cells in 100 μl of growth medium containing 75 μg/ml DEAE-dextran) were added to each well. One set of control wells received cells + virus (virus control) and another set received cells only (background control). After a 48-hour incubation, 100 μl of cells was transferred to a 96-well black solid plate (Costar) for measurement of luminescence using the Britelite Luminescence Reporter Gene Assay System (PerkinElmer Life Sciences). Neutralization titers are the dilution at which relative luminescence units (RLU) were reduced by 50% compared to virus control wells after subtraction of background RLUs. Assay stocks of molecularly cloned Env-pseudotyped viruses were prepared by transfection in 293T cells and were titrated in TZM-bl cells as described [50]. The clade B reference Env clones were described previously [50].

Assay for Neutralization of Ad5
Ad5 neutralization was performed as described previously [45]. Briefly, serial dilutions of plasma were incubated in triplicate for 1 hour at 37 °C with Ad5 vector expressing luciferase. The resulting solution was added to A549 cells for 24 hours, and luciferase activity was measured. Data are expressed as geometric mean titers that reduced Ad luciferase activity by 50%.

ELISPOT assay for detecting antigen-specific IFN-γ-producing cells
Freshly prepared PBMCs were used for the IFN-γ ELISPOT assay as described previously [52]. PBMCs were stimulated with synthetic peptide pools, with Ad expressing env, or with Con A (5 μg/ml) as a positive control reagent.
For the six-peptide vaccine cocktail, the six epitopes (Figure 1) were mixed as a pool. For overlapping envelope peptides, the SF162P3 env 15-mer peptide set (NIH AIDS Reagent Program) was used as 3 pools of 50 to 70 peptides spanning the gp140 region. Alignment of the JRFL immunogen encoded in the Ad vectors with the SF162P3 peptide pool shows 89% identity with the peptide pool used for ELISPOT. PBMCs (1 × 10^5) were seeded in duplicate wells of 96-well plates (polyvinylidene difluoride-backed plates, MAIPS45, Millipore, Bedford, MA) coated with anti-IFN-γ antibody. The cells were incubated in the presence of the various antigens for 36 h at 37 °C. The cells were then removed, and the wells were washed and then incubated with 100 μl of biotinylated anti-IFN-γ for 3 h at 37 °C, followed by avidin-HRP for another 30 minutes. Spots representing individual cells secreting IFN-γ were developed using 0.3 mg/ml of 3-amino-9-ethyl-carbazole in 0.1 M sodium acetate buffer containing 0.015% hydrogen peroxide. The plates were washed to stop development, and the spots were counted by an independent agency (Zellnet Consulting, NJ). The responses in terms of IFN-γ spot-forming cells (SFCs) per 10^5 total input CD8+ T cells were determined for individual monkeys after subtracting background values of cells cultured in medium alone. The cutoff for a positive response in the assay was defined as a minimum of 10 spots and at least twice the number observed in cells cultured in medium alone. Data are presented as SFCs per 10^6 PBMCs for comparison to previous reports in the literature.

Virus Challenge
Macaques were challenged by intrarectal inoculation of 1,000 TCID50 of SHIV-SF162P3 from the NIH AIDS Reagent Program.

Viral Load Determination
SHIV viral loads in the blood were determined by measuring viral RNA copy numbers by real-time RT-PCR. These assays were performed at the NIH Core Facility by Dr. Jeff Lifson's group. The threshold sensitivity of the assay is 30 viral RNA copy-equivalents/ml of plasma, and the inter-assay variation is <25% (coefficient of variation).

Statistical Analyses
Data were evaluated using GraphPad Prism 4 software. P values ≤ 0.05 were considered statistically significant.

Conclusions
This study demonstrates that serotype-switched HD-Ad vaccines generate higher immune responses and lower viral loads after mucosal challenge with a CCR5-tropic SHIV. This provides proof of principle for applying these vaccines systemically or mucosally to repel mucosal entry by SIV or HIV-1. These data are also notable given that these non-human primates were immunized only with the envelope immunogen; no gag, pol, nef, or other SIV or HIV proteins were used for vaccination. This suggests that delivery of these missing lentiviral antigens by HD-Ad vaccines may provide even more substantial protection against mucosal challenge.
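For readers who want to see how a 50% neutralization titer (ID50) can be derived from the serial-dilution RLU data described in the TZM-bl assay above, here is a minimal sketch. The log-linear interpolation scheme and all numbers are illustrative assumptions, not the core facility's exact algorithm.

```python
import numpy as np

def id50(dilutions, rlu, virus_ctrl, background):
    """Interpolate the reciprocal plasma dilution giving 50% RLU reduction.

    dilutions: reciprocal dilutions (e.g., 20, 60, 180, ... for 3-fold steps)
    rlu: mean RLU at each dilution; virus_ctrl, background: control well means.
    """
    # fraction of virus-control signal neutralized at each dilution
    neut = 1 - (np.asarray(rlu) - background) / (virus_ctrl - background)
    for (d1, d2), (n1, n2) in zip(zip(dilutions, dilutions[1:]),
                                  zip(neut, neut[1:])):
        if n1 >= 0.5 > n2:  # neutralization crosses 50% between d1 and d2
            frac = (n1 - 0.5) / (n1 - n2)
            # interpolate on the log-dilution scale
            return 10 ** (np.log10(d1) + frac * (np.log10(d2) - np.log10(d1)))
    return None  # titer falls outside the dilution range tested

# Hypothetical 3-fold series starting at 1:20; prints a titer of roughly 140
print(id50([20, 60, 180, 540, 1620], [5e3, 2e4, 6e4, 9e4, 9.5e4], 1e5, 2e3))
```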
Multi-Attribute Multi-Perception Decision-Making Based on Generalized T-Spherical Fuzzy Weighted Aggregation Operators on Neutrosophic Sets

The framework of the T-spherical fuzzy set is a recent development in fuzzy set theory that can describe imprecise events using four types of membership grades with no restrictions. The purpose of this manuscript is to point out the limitations of the existing intuitionistic fuzzy Einstein averaging and geometric operators and to develop some improved Einstein aggregation operators. To do so, some new operational laws were first developed for T-spherical fuzzy sets and their properties were investigated. Based on these new operations, two types of Einstein aggregation operators are proposed, namely the Einstein interactive averaging aggregation operators and the Einstein interactive geometric aggregation operators. The properties of the newly developed aggregation operators were then investigated and verified. The T-spherical fuzzy aggregation operators were then applied to a multi-attribute decision-making (MADM) problem related to the degree of pollution of five major cities in China, using actual datasets sourced from the UCI Machine Learning Repository. A detailed study was carried out to determine the most and least polluted city under different perceptions and in different situations. Several compliance tests were then outlined to test and verify the accuracy of the results obtained via our proposed decision-making algorithm. The results obtained via our proposed decision-making algorithm were fully compliant with all of the tests that were outlined, confirming their accuracy.

Introduction
Zadeh [1] first introduced a formal tool to deal with the uncertainties and imprecision that occur in real-life situations and called it a fuzzy set (FS). An FS assigns a value in the interval [0, 1], called a membership grade, to every object, whereby this membership grade indicates the degree of belongingness of the object to the fuzzy set being studied. Since its inception, fuzzy set theory has proven to be highly useful in many areas such as decision making, pattern recognition, automation, and the development of fuzzy logic inference systems. Atanassov [2,3] introduced a generalization of FSs called an intuitionistic fuzzy set (IFS). The IFS model is characterized by two grades of membership, namely a membership function and a non-membership function. As the names indicate, these functions represent the degree of belongingness and the degree of non-belongingness of an object to the fuzzy set being studied. One of the limitations of the IFS model is that the sum of the membership and non-membership values must lie within the closed unit interval [0, 1]. To overcome this limitation, Yager [4,5] introduced the concept of a Pythagorean fuzzy set (PyFS), in which this restriction is relaxed by requiring instead that the sum of the squares of the membership and non-membership values lie within the interval [0, 1]. This presents decision makers with wider options when modelling a situation using a PyFS, yet restrictions remain, as decision makers are still only free to assign values that satisfy a certain condition. To relax the constraint further, Yager [6] introduced the notion of the q-rung orthopair fuzzy set (q-ROPFS), in which only the sum of the qth powers of the membership and non-membership grades must lie within [0, 1].
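The successive relaxations described above (IFS to PyFS to q-ROPFS) are easy to state as admissibility checks. The sketch below simply paraphrases the definitions; the example pair of grades is an illustrative choice, not taken from any of the cited papers.

```python
def valid_ifn(mu, nu):
    """Intuitionistic fuzzy number: mu + nu <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1

def valid_pyfn(mu, nu):
    """Pythagorean fuzzy number: mu^2 + nu^2 <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

def valid_qrofn(mu, nu, q):
    """q-rung orthopair fuzzy number: mu^q + nu^q <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**q + nu**q <= 1

# (0.8, 0.6) is not a valid IFN (sum 1.4) but is a valid PyFN (sum of
# squares exactly 1.0) and a valid q-ROFN for every q >= 2.
print(valid_ifn(0.8, 0.6), valid_pyfn(0.8, 0.6), valid_qrofn(0.8, 0.6, 3))
```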
The concept of picture fuzzy sets (PFSs) was introduced by Cuong [7,8]. In the PFS model, there are three functions, called the membership function, the abstinence function, and the non-membership function. The PFS model has a restriction similar to that of the IFS model, in which the sum of the membership, abstinence, and non-membership values must lie within the interval [0, 1]. To overcome this issue, Mahmood et al. [9] introduced the concept of spherical fuzzy sets (SFSs), in which this condition is relaxed so that the sum of the squares of these three membership values must lie within the interval [0, 1]. In the same paper, the authors also introduced the concept of T-spherical fuzzy sets (T-SFSs), in which there are no limitations or conditions on the values allowed for the membership grades. All of the tools for handling uncertainties discussed above have proven to have many applications in multi-attribute decision-making (MADM) problems related to pattern recognition, similarity measures, and information measures. Many authors have proposed methods based on aggregation operators for solving MADM problems. These include Xu [10], who proposed a decision-making method based on weighted averaging operators to solve MADM problems based on intuitionistic fuzzy information (IFI). Garg [11,12] introduced interactive aggregation operators for IFI, whereas He et al. [13] proposed the use of interactive geometric operators for IFSs to solve MADM problems based on IFI. Zhao and Wei [14] proposed a decision-making method based on Einstein hybrid aggregation operators for IFI, whereas Liu [15] introduced Frank aggregation operators for MADM problems in the interval-valued IFI framework. Garg [16,17] introduced Einstein norms to solve MADM problems for PyFSs, Peng et al. [18] proposed an exponential operation and aggregation operator for q-rung orthopair fuzzy information, while Wei [19] introduced geometric aggregation operators for PFSs. Garg [20] proposed picture fuzzy aggregation operators, whereas Garg et al. [21] proposed interactive geometric operators for T-SFSs and applied them to solving MADM problems in various areas. Li and Deng [22] introduced a generalized ordered proposition fusion based on belief entropy, whereas Fei et al. [23] introduced a new vector-valued similarity measure for intuitionistic fuzzy sets based on OWA operators. We refer the readers to the literature for a comprehensive, overall view of the many different methods available for the IFS, PyFS, and PFS models. There are shortcomings and inaccuracies in the Einstein operations previously introduced in the literature. The existing Einstein operations for IFSs introduced in [13] fail under certain circumstances. For example, if α = (a, 0) and β = (0, b) are intuitionistic fuzzy numbers (IFNs), then the intuitionistic fuzzy Einstein weighted averaging operator (IFEWAO) aggregates these IFNs to an IFN of the form (·, 0), and the intuitionistic fuzzy Einstein weighted geometric operator (IFEWGO) aggregates them to an IFN of the form (0, ·). From this simple example, it can be clearly observed that the IFEWAO does not aggregate the non-membership values at all if one IFN happens to have a non-membership value of zero, and similarly the IFEWGO does not aggregate the membership values at all if one IFN happens to have a membership value of zero, which is clearly inaccurate.
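The degeneracy just described can be reproduced numerically. The sketch below uses the standard form of the intuitionistic fuzzy Einstein weighted averaging operator from the Einstein-operation literature (an assumption on our part, since the paper does not restate the formula): with one non-membership grade equal to zero, the aggregated non-membership collapses to zero no matter what the other arguments are.

```python
from math import prod

def ifewa(pairs, w):
    """Standard intuitionistic fuzzy Einstein weighted averaging of
    (membership, non-membership) pairs with weights w summing to 1."""
    p1 = prod((1 + m) ** wi for (m, _), wi in zip(pairs, w))
    p2 = prod((1 - m) ** wi for (m, _), wi in zip(pairs, w))
    q1 = prod(n ** wi for (_, n), wi in zip(pairs, w))
    q2 = prod((2 - n) ** wi for (_, n), wi in zip(pairs, w))
    return (p1 - p2) / (p1 + p2), 2 * q1 / (q2 + q1)

# One argument with zero non-membership forces the aggregated non-membership
# to zero regardless of the other arguments, which is the flaw noted above.
print(ifewa([(0.5, 0.0), (0.0, 0.9)], [0.5, 0.5]))  # prints (0.268..., 0.0)
```

Here the second argument's large non-membership grade of 0.9 leaves no trace in the aggregate, illustrating why the authors argue for interactive operational laws instead.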
This and other similar problems served as the motivation for us to propose some new Einstein aggregation operators for the T-SFS model that will be able to overcome this and similar shortcomings in existing structures. In this paper, we developed some new operational laws for T-spherical fuzzy sets with their properties. Based on these new operations, two types of Einstein aggregation operators are proposed, namely, the Einstein interactive averaging aggregation operators and the Einstein interactive geometric aggregation operators. The properties of the newly developed aggregation operators were then investigated and verified. The T-spherical fuzzy aggregation operators were then applied to a multi-attribute decision making (MADM) problem related to the degree of pollution of five major cities in China. Actual datasets sourced from the UCI Machine Learning Repository were used. A detailed study was done to determine the most and least polluted city for different perceptions for different situations. Several compliance tests were then outlined to test and verify the accuracy of the results obtained via our proposed decision-making algorithm. The rest of the paper is organized as follows. A brief but comprehensive background study of the important concepts related to this paper is recapitulated in Section 2. In Section 3, we present a detailed study of the important properties of aggregation operators, namely, the boundedness, monotonicity, idempotency and commutativity. In Section 4, we introduce two operators, namely, the generalized t-spherical fuzzy w-weighted geometric and arithmetic interaction functions, and study the properties of these operators. In Section 5, two decision making algorithms are introduced for the newly introduced operators. These algorithms are subsequently applied to solve a multiattribute multi-perception decision making problem related to the ranking of the pollution level of five major Chinese cities using real-life datasets of the concentration of PM2.5 pollutant in five major cities in China. Concluding remarks are presented in Section 7, followed by the acknowledgements and list of references. Preliminaries Some basic notions over a universal set are defined and these notions will help us in our proposed work. , , are respectively called the membership function, the abstinence function, and the non-membership function of . (iii) ( ) = 1 − ( ( ) + ( ) + ( )) is called the degree of refusal of in . We however do not agree with such definitions defined by the previous authors. We found that the algorithm works for all 〈 , , 〉 as long as , , are real numbers in [0,1] (see [46]). In fact it is clear that a t-spherical fuzzy number will also be a f-spherical fuzzy number for all > , simply because the satisfaction of 0 ≤ + + ≤ 1 for all ≥ will have included all with ≥ as well. Moreover, by such definition, if , , all < 1, then 〈 , , 〉 will always be a t-spherical fuzzy number for all that is large enough. Not to mention that, in the context of 0 ≤ + + ≤ 1, the choices of in the existing literatures of t-spherical fuzzy number is limited to natural numbers. This hinders the flexibility of the structure making it incapable of being fine-tuned to suit a situation. , , are called the truth-membership function, the indeterminacy-membership function, and the falsitymembership function of , respectively. Remark 3. It is straightforward that ⊆ . In the literature of intuitionistic fuzzy numbers, there has been some well-established operations defined for them. 
One well known group of operations are called the Einstein operations and these are defined as follows. Definition 5. ([14]) Let = 〈 , 〉 and = 〈 , 〉 be two intuitionistic fuzzy numbers. The Einstein operations are defined as given below: There has also been a set of operations for SVNn as proposed by Wang et al. [47] which are defined as follows: The operations for all elements can be defined as follows: Monotonicity, Boundedness, Idempotency, and Commutativity of Operations In order to use an operation in the procedure to aggregate a group of data, it is desirable that the operation satisfies the following properties. Furthermore, the higher the extent to which the operation satisfies these properties, the higher the extent to which the proposed operation is able to resemble human intuition effectively. In this section, we define the properties of monotonicity, boundedness, idempotency, and commutativity for the SF n. (1) Boundedness: ℱ is said to be bounded if all of the following conditions hold: for all being an SVNn (2) Monotonicity: ℱ is said to be monotone if the following condition holds: It is therefore necessary to generalize such a theorem for functions that map a -tuple of SVNn (i.e., an element of ) to a single SVNn, where may not be 1. Furthermore, for the definition of boundedness, emphasis should be given to 〈1,0,0〉 instead of 〈1,0,1〉. This is because 〈1,0,0〉 is an indication of perfect membership in any generalizations from the classical literature of sets. This leads us to the following definitions. Otherwise, ℳ is said to be loosely bounded. (3) Idempotency: ℳ is said to be idempotent if the following condition holds for all ( , , … , ) ∈ , and for all ∈ : Moreover, ℳ is said to be strictly idempotent, if the following condition holds too for all ( , , … , ) ∈ , for all ∈ [0,1] and for all ∈ {m, i, n}: Otherwise, ℳ is said to be loosely idempotent. Moreover, ℳ is said to be strictly commutative if the following condition holds for all ( , , … , ) ∈ and for all ∈ {m, i, n}. Otherwise, ℳ is said to be loosely commutative. Otherwise, is said to be loosely bounded. Otherwise, is said to be loosely monotone. As there are now more than one SVNn that act as inputs, we further define the following. (3) Idempotency: is said to be idempotent if the following condition holds for all = ( , , … , ) ∈ , for all ∈ , and for all ∈ . Otherwise, is said to be loosely idempotent. Otherwise, is said to be loosely commutative. Generalized T-Spherical Fuzzy Subjectively Weighted Interaction Operators In this section, we introduce the concepts of the generalized t-spherical fuzzy subjectively weighted interaction operators and study some of its properties. (1) With being a positive real number, = ( , , … , ) ∈ , and , satisfying 0 ≤ ≤ ≤ 1 for all = 1,2, ⋯ , , it follows that: As a result, This further implies that We now obtain and therefore holds for all positive real numbers . (i) The Generalized t-Spherical Fuzzy Weighted Geometric Interaction Function, is defined as In such a case, In such a case, Theorem 4. and are strictly commutative, regardless of the value of . Proof. This theorem is a direct consequence of Definition 11 and Definition 10. □ As the aggregation will involve comparison of SVNn . A way of assessment is needed to determine the superiority of the choices based on the contents of the SVNn. The following properties hold for spherical fuzzy numbers and SVNn. Definition 12. [21] Let = 〈 , , 〉 be a spherical fuzzy number. Then is said to be the accuracy value of . 
The following properties holds for all SVNn. is said to be the accuracy function of . In light of the nature of our work, we shall adopt the following for the remaining sections of this paper. is said to be the Generalized t-Spherical score value (abbr. -score value) of . Then is said to be superior to , denoted as ≻ , if any one of the following statements holds. On the other hand, is said to be similar to , denoted as ∼ , if both S( ) = S( ) and A( ) = A( ) are true. Furthermore, we denote ≽ , if either ≻ or ∼ holds. The set of all relative maxima of is denoted by rmax( ). A is said to be a relative minimum of if: The set of all relative minima of is denoted by rmin( ). Remark 6. Let , ∈ ( ). Then ∼ . The relationships ≽, ≻ and ∼ as defined in Definition 15 can therefore be extended to relate among ( ) with any . However, that does not mean that = . Remark 7. If | ( )| = 1, then its sole element is also said to be the absolute maximum of . Algorithms for Multi-Attribute Multi-Perception Decision-Making Based on and Consider a set of different alternatives = { , , … , }, where each one of them are judged on a set of different attributes = { , , … , }. For each combination of , ∈ × , the outcome of the judgement is characterized by an SF n , = 〈 , , , , , 〉. The subjective weight that each of the attribute carries, which is set by the user, are in accordance with a weight vector = ( , , … , ) ∈ , where corresponds to the weights of for all . The strictness of accessing an attribute is addressed by a real number which is chosen by the user subject to his perception on the data he is investigating (see Section 6.2 for examples on our case study). This decision-making method is called the multi-attribute multi-perception decision making (MAMPDM). Prologue: The Derivation of from a Raw Dataset To justify the practical usefulness of our algorithm, we are employing the use of raw actual datasets that are obtained from various real-life situations, in which the entities in such datasets can potentially be in many different formats as shown in Table 1. Moreover, a given dataset may even contain more than one of such type of entities. Thus, it is extremely unlikely to have the raw entities from a given data set resembling the characteristics of SVNn (or any structure in the literature of fuzzy theory). It is for this reason that whenever we need to deal with a given dataset, there must always be a dedicated method for converting the raw data to SVNn, as decided by the investigators. Such methods of conversion will certainly depend on the type of entities being studied in a dataset. For example, a given method (which can involve the use of formulae and algorithms) of converting one single entity of "daily mean temperature of a city" into an SVNn, will be totally inadequate and would not be contextually accurate for converting "the number of customers visiting a restaurant at a given point in time" into an SVNn. But as a real-life dataset may contain a large amount of data, there may be more than one reading presented for a given entity. For example, even in the case of "daily mean temperature of a city", there could be multiple readings taken at different stations within that particular city, all of which are presented in the dataset. Therefore, the method of conversion in this case may involve the conversion of multiple entities into one SVNn. 
Nirmal and Bhatt [48], suggested four different methods of such conversion in the context of selecting automated guided vehicles, all of which involved the conversion of a single quantitative, continuous entity into an SVNn. However, such methods only provide a formula for obtaining the value of in a SVNn = 〈 , , 〉, where both and are simply taken to be 1 − . As a result, the three membership values 〈 , , 〉 obtained in such a way, only possess one degree of freedom. It is therefore evident that such methods of conversion radically contradicts the purpose of establishing a SVNn with three independent entities representing the truth, indeterminacy, and falsity membership values. Not to mention that, in most existing literature about fuzzy-based decision making, the authors simply use a very small amount of data made by the authors themselves. Such practice, though avoiding the needs of such conversion, severely hinders the establishment of the application of fuzzy theory in real life scenarios. Therefore, it is evident that a faithful generation of SVNn will necessitate a dataset with a significant caliber, so that the conversion of multiple entities into one SVNn can take place. Only then we can possibly generate SVNn that stays true to the concept of SVNSs where the values of , , are independent of one another. On the other hand, there may even be cases where the data is mentioned to be completely absent during some part of the dataset. In such a case, the approaches of dealing with the dataset will again depend on the nature of the problem being investigated, as well as the personality of the investigator (e.g., stock buyers).As a result, such approaches can very possibly range from "complete ignorance" (e.g., if the stock buyer is a conservative investor who is fearful of the unknown) all the way to "utmost importance" (e.g., if the stock buyer is very curious of the unknown). We refer the readers to Section 6.6 for such a method of obtaining SVNn from our dataset of investigation. Algorithm for Based Multi-Attribute Multi-Perception Decision Making Step 1. For each of the attributes under each of the alternatives, derive an SF n from the raw data using a suitable method, as explained in Section 5.1. This forms a matrix where , is the SF n value for the alternative on the attribute , for all and . Denote = -th row of = , , , , … , , , for all . Remark 8: The method of obtaining SVNn from the raw dataset is presented in Section 6.6. Step 5. Determine the superiority of each using Definition 17. Algorithm for Based Multi-Attribute Multi-Perception Decision-Making Step 1-Step 3. Same as Step 1-Step 3 in Section 5.3. Remark 9: The method used to derive the SF n may differ from the one used in Section 5.3, even in the case where both this algorithm and the algorithm in Section 5.3 are used to deal with the same raw dataset. Step 5. Same as Step 5 of the algorithm in Section 5.3. An Overview of the Scenario-Air Pollution in China The air pollution in China has long been a worldwide health concern ever since China's industrial boom. In China's capital Beijing in particular, the concentration of PM2.5 had even reached nearly 1000 μg m around the year 2013, a historic high in China at that time. The severity of the smog is evident from the two satellite images captured by NASA which are given below in Figures 1 and 2. The burning of coal during winter season by itself is yet to be a main constituent of air pollution. 
Burning coal during periods when the air is still causes massive pollution, as still air is incapable of dispersing the pollutants. Another major constituent of pollution in China is the immense traffic flow, which intensifies during holiday seasons. Traffic congestion has already become an issue in many major cities in China. One notable case is the traffic jam on China National Highway 110 that began on 13 August 2010 and lasted continuously for about two weeks. Furthermore, the concentration of pollutants may experience a sudden increase during the Lunar New Year season as millions of people ignite fireworks across all regions of China, causing a sudden appearance of thick smog in cities that has been called "the spring smog". It is plain to see that each constituent of pollution occurs according to a certain pattern within a year. For example, the burning of coal takes place during the winter season, causing the PM2.5 concentration to be generally higher in winter. This pattern can even be observed in the PM2.5 concentrations contained in our dataset of choice (see Section 6.3).

Actions Taken to Combat Pollution in China
In response to the pollution, China revised its method of assessing air quality in 2012. The old method of assessment, in accordance with the standard GB 3095-1996, measured the concentration of only three pollutants in the air: SO2, NO2, and PM10. Those readings were the inputs used to calculate the API (air pollution index), and the procedure was repeated daily. The revised method of assessing air quality, in accordance with the revised standard GB 3095-2012, not only imposes stricter thresholds on the concentrations of the previous three pollutants SO2, NO2, and PM10, but also measures the concentrations of PM2.5, O3, and CO in the air. Moreover, the concentrations of all six pollutants are used to calculate the AQI (air quality index), and the procedure is repeated hourly, providing a much more frequent update than the previously used method. In this information age, the hourly updated AQI values for many major cities in China are readily accessible to the public on the relevant websites. Some websites even show the concentration of each of the six pollutants individually. Figure 3 shows the addresses and the interface. The Chinese government has also controlled the use of fireworks during the Lunar New Year season to address these pollution concerns. Many cities have now outlawed fireworks, or only allow their use at specific locations, times, and dates. Nonetheless, when compared with other countries, there are still differences in how the output value (whether AQI, API, or anything similar) is deduced from the concentrations of the pollutants. Notably, for PM2.5, different nations classify the concentration of PM2.5 and other pollutants in the air differently; that is, they have different methods of calculating the API or AQI. As a result, it is more objective to describe the degree of pollution by referring to the concentrations of the pollutants, rather than relying solely on the AQI value provided by a country. For China in particular, among the six pollutants, the main constituent used in the calculation of the API and AQI indexes has been identified as the concentration of PM2.5.
Thus, in this paper, we emphasize the concentration of PM2.5 in five selected major cities in China.

The Multiple Perceptions of Comparing the Severity of Pollution
When the severity of air pollution in two cities is compared, there are multiple perceptions by which to judge severity. One can consider the pollution in an "overview" manner by averaging the readings of PM2.5 concentrations across all cities and all days, or one can judge in a "pinpointing" manner by considering which city registered the most extreme daily PM2.5 concentrations. With regard to pinpointing, this further gives rise to two approaches: (i) pinpointing which city registered the highest daily PM2.5 concentrations, and (ii) pinpointing which city registered the lowest daily PM2.5 concentrations. These approaches correspond to the ways two different sectors of a country deal with pollution, namely environmental management and tourism marketing, both of which are indispensable to the development of a nation.

From the View of Environmental Management
Suppose that some regional governments of China are investigating which of their cities are the most polluted and decide to take action to combat the pollution. Some of the actions can be drastic and risky and can only be taken on a few days of a year (e.g., forced shutdowns of factories and power stations, or evacuations). Other actions are safer and can be taken on many days of a year, but are not as effective (e.g., giving away face masks to residents). As a result, the environmental management sector of China will pinpoint the city with very high daily PM2.5 concentrations, no matter how few such days are. As an illustration, consider an example of three cities as given below. If we were to judge solely by the maximum PM2.5 concentration reached (i.e., in a "pinpointing" manner), then City A will be deemed the most polluted. On the other hand, a PM2.5 concentration of 75 μg/m³ is deemed "unhealthy" by Chinese AQI standards. So, if we were to judge solely by the number of days with PM2.5 concentrations surpassing 75 μg/m³ (i.e., in an "overviewing" manner), then City B will be deemed the most polluted. Moreover, if "burning coal during winter" is the main concern (i.e., neither purely "pinpointing" nor "overviewing"), then City C will be deemed the most polluted.

From the View of Tourism Marketing
Suppose a tourism company wishes to promote China by taking some beautiful pictures of a city. The company will pinpoint just a few days with clear skies (i.e., low PM2.5 concentration) on which to have the photographs taken. To accomplish this, the company will dispatch a photographic team to be stationed in a city, prepared to photograph the scenery whenever a clear sky appears. As a result, the photographic team will pinpoint the city with very low daily PM2.5 concentrations, no matter how few such days are. This is in contrast with the environmental management sector. Nonetheless, the choice of the best city is again subject to the different situations faced by the company, i.e., whether the company needs the photographs very urgently, or is willing to wait long enough and invest enough money for the best scene to occur. Again as an illustration, consider an example of three cities as given below. City P: with PM2.5 concentrations reaching less than 50 μg/m³ on 10 random days of a year, but above 300 μg/m³ on all the other 355 days (or 356 days in a leap year).
Among these three cities, if the company is willing to wait long enough for the best scenes to occur ("quality" is of concern), then City P will be the best choice, even if that city is overall very polluted. On the other hand, if the company needs the photographs very urgently ("speed" is of concern), then City Q will be the best choice. Remark 10. Obviously, City A in the previous section will be an even better choice than City P, Q, and R as it has 355 days in a year with PM2.5 concentrations of below 50 . On Dealing with the Complete Absence of Data As far as two different sectors are concerned, if the data on PM2.5 reading is completely absent for a city, then the common practice is to ignore that city altogether. Thus, a city whose data are totally absent will be assigned a low membership value by the environmental management sector (not worth spending money to combat "pollution" and therefore presume clean air). On the other hand, that city will be assigned a high membership value by the tourism marketing sector (not worth sending crew to wait for a "clear sky" and therefore presume dirty air). In this paper we shall adopt such an approach of dealing with the absence of data. Application of Our Proposed Method Using a Real Life Dataset In this section, we apply our proposed decision-making algorithm to a real-life data set of the pollution data for five Chinese cities. This data set was obtained from the UCI Machine Learning Repository. A Brief Description of the Dataset The dataset contains the hourly reading of PM2.5 (in μg m ) of five cities in China, namely Beijing, Chengdu, Guangzhou, Shanghai, and Shenyang. The data ranges from 00:00 on 1 January 2010 to 23:00 on 31 December 2015, i.e., six years in total. In each of the five cities, the PM2.5 concentrations are measured hourly from several stations within the city. In particular, there were four stations for Beijing, two stations for Guangzhou, and three stations for each of the other three cities. Each of those stations, however, may or may not produce a PM2.5 concentration given a onehour interval of any year. Remark 11: Although the dataset provides readings for three stations in Guangzhou, it was found that two of the stations, namely "City Station" and "US post" share identical readings throughout the entire 6-year interval. Based on the knowledge of measurement, it is very unlikely that "City Station" and "US post" stations independently obtain their own readings. Thus "City Station" and "US post" in Guangzhou are counted as one single station in our investigation. Notations Used in the Dataset The following notations shall be used for all of the remaining parts of this paper: 3. Denote all the PM2.5 readings within that year by a matrix whose elements are ordered sets of the following form: 5. The columns of ( ) from the 1st to the -th represents the readings from the hour ( ), , ( ), , … , ( ), , respectively. 6. For each , , : if the reading exists in the dataset for the th station in the city of ( ), during the hour of ( ), , then ( ), , , ∈ ℝ is taken to be that reading, otherwise, ( ), , , is assigned to be −1. 3. Denote ( ), , to be the population variance of PM2.5 concentration for city ( ), during the hour ( ), of the year , which we can never know. 4. Denote var ( ), , to be the unbiased estimate of ( ), , using elements of ( ), , . The Objectives For each of the six years, the five cities are to be sorted from the most polluted to the least polluted. 
That value of t is decided by the user based on whichever perception he is investigating, as mentioned in Section 6.2. Motive behind the Choices of Formulas In this scenario, the number of stations within one city is already quite few from a statistical perspective (i.e., less than 30 in accordance with most literature). In view of this matter, we took the liberty of assuming that the PM2.5 concentration within any city during any hour of a year is normally distributed. Thus it follows that = 0.7107230213973241044476521 The number 500 is involved in our formulas because 500 g m is the upper limit of PM2.5 concentration that corresponds to the upper bound of AQI level 6 (i.e., "severely polluted") in China. In actual measurement, the PM2.5 concentration can still potentially exceed 500 g m indeed, it is for this reason we allocate 0.0 to 0.9 that corresponds to the PM2.5 concentration reading from 0 g m to 500 g m . On the other hand, 0.9 to 1.0 are dedicated for the extreme cases where the PM2.5 concentration exceeds 500 g m with no upper bound imposed, as seen by the reciprocal relationship of the formulas in Section 6.5.1. This applies for all the three values of ( ), , , ( ), , , and ( ), , . Results for Some Values of t As may theoretically take infinite number of values, here the results for both the GSF G and GSF A approaches are given for the instances of = , , 1, 4, 20 representing five different perceptions of making decision. As there are six years in our dataset, the sorting for all the six years are given accordingly in Table 2. The Criteria of Compliance For all 10 < < 10 , and for both GSF A and GSF G, city (1), should be classified as the least polluted, followed immediately by city (1), . Thus, it can be seen in Figure 4 that both of our algorithms GSF A and GSF A fully comply with Test 1. [46] In the second test we are testing on how GSF A and GSF G handles a case where the subjective weights prioritize over the objective weights. The Test Inputs There are two cities (2) = (2), , (2), to be accessed during the interval of two days, (2) = (2), , (2), for the severity of pollution on a given perception characterized by the value of t. The Criteria of Compliance For all 10 < < 10 , and for both GSF A and GSF G, city (2), should be classified as more polluted than city (2), . Thus it can be seen in Figure 5 that both of our algorithms GSF A and GSF A fully comply with Test 2. [46] In the third test we are testing on how GSF A and GSF G handles a case where the objective weights prioritize over the subjective weights. The Test Inputs There are 20 cities (3) = (3), , (3), , ⋯ , (3), to be accessed during the interval of two days, Thus it can be seen in Figure 6 that both of our algorithms GSF Aand GSF A fully comply with Test 3. Test 4: t-Dependence Test In the fourth test we are testing on the effectiveness of the choices of at influencing the sorting of the cities, for both GSF Aand GSF G. The Criteria of Compliance There should exist , , , , , , with 10 < < < , < 10 and < , < 10 . nature. This perception consequently results in the selection of City R as the most polluted city, for medium values of . The Results of Our Algorithm Figure 7. The results of our algorithms GSF A and GSF A for compliance of t-dependence test of type-A. Thus, it can be seen in Figure 7 that our algorithm GSF A fully complies with the -dependence test of type-A. The Criteria of Compliance There should exist 10 < < < < < < 10 . ≽ ≽ whenever ∈ , , 10 . This is because: 1. 
All the 10 days of City W are "slightly polluted": 〈0.20,0.10,0.40〉. Thus, the low values of should produce an "urgent" perception, where time is at stake and therefore the photographic team must quickly take photographs of a city. This perception consequently results in the deduction of City W as the least polluted city for low values of . 2. City T and City U contain one day that is "good": 〈0.01,0.01,0.99〉. Thus, the high values of should produce a "quality" perception, where the photographic team must wait for the clearest possible sky to produce the best possible photographs to market China tourism. This perception consequently results in the deduction of City T and City U as the two least polluted cities, for high values of . Remark 14: The least polluted city, out of City T and City U depends on the objective weight which was already dealt by Test 3 from Section 7.4. 3. City V contains three days that is "okay": 〈0.50,0.05,0.80〉. Thus, the medium values of should produce a perception that is between "urgent" and "quality" in nature. This perception consequently results in the deduction of City V as the least polluted city for medium values of . The Results of Our Algorithm Thus, it can be seen in Figure 8 that our algorithm GSF G fully complies with the -dependence test of type-B. It can be clearly observed that our algorithms comply with all of the tests that were outlined above, hence proving the accuracy of our algorithms and the corresponding formulas. Conclusions In this paper, we introduced two operators, namely, the generalized -spherical fuzzyweighted geometric and arithmetic interaction functions. The structural properties of these operators were thoroughly studied and it was proven that the two newly introduced operators satisfy these properties. The highlight of this work is the development of two decision making algorithms based on these two operators, and the application of these algorithms in a multi-attribute multi-perception decision-making problem related to the ranking of the pollution level of five major Chinese cities. Further, we also presented a novel method to convert the values in the raw dataset into single-valued neutrosophic numbers, something which has not been done in existing literature. In addition to this, we have also outlined several tests to investigate the accuracy of the results yielded by our algorithm, and it was proven that our algorithm has demonstrated compliance with all of the tests that were outlined, thereby proving the accuracy of the results. Hence this work is definitely an important addition to the body of knowledge in this area of study.
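As a concrete illustration of the conversion scheme sketched in Section 6.5 (0-500 μg/m³ mapped onto [0, 0.9], and the unbounded range above 500 μg/m³ mapped into (0.9, 1.0) by a reciprocal law), here is a minimal Python reading of that rule. The exact formulas in the source are partly garbled, so the function below is one plausible reconstruction, not the authors' verbatim definition.

```python
def truth_membership(pm25):
    """Map a PM2.5 reading (ug/m3) to a truth grade in [0, 1).

    Assumed reading of Section 6.5: 0-500 ug/m3 maps linearly onto
    [0, 0.9]; readings above 500 map into (0.9, 1.0) via a reciprocal
    law, so the grade approaches but never reaches 1 as the
    concentration grows without bound.
    """
    if pm25 < 0:
        raise ValueError("concentration must be non-negative")
    if pm25 <= 500:
        return 0.9 * pm25 / 500
    return 1.0 - 0.1 * (500 / pm25)

# 75 ug/m3 is the Chinese "unhealthy" threshold cited earlier
for c in (75, 300, 500, 1000):
    print(c, round(truth_membership(c), 3))
```

Note that the two branches agree at 500 μg/m³ (both give 0.9), so the assumed mapping is continuous; the indeterminacy and falsity grades would be derived analogously from the station-level statistics described in Section 6.6.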
Adaptive finite-time event-triggered command filtered control for nonlinear systems with unknown control directions

In this article, we study an adaptive finite-time event-triggered command filtered control problem based on output feedback for a class of nonstrict-feedback nonlinear systems with unknown control directions. Firstly, the unknown nonlinear continuous functions in the systems are approximated by fuzzy logic systems, and coordinate transformations and the Nussbaum function technique are introduced to settle the problem caused by the unknown control directions. Then, in order to reduce the communication burden between the controller and the actuator, a finite-time adaptive event-triggered control algorithm is designed by means of the backstepping technique. It is proved that the tracking and observer errors converge to a small neighborhood of zero in finite time and that all the signals in the closed-loop systems are bounded. In addition, a command filtering technique based on an error compensation system with fractional powers is constructed to avoid the complexity explosion issue in the backstepping design process. Finally, simulation examples are included to verify the effectiveness and superiority of the control approach.

Introduction
In modern industrial control and production, nonlinear systems are ubiquitous, appearing in mechanical systems [1], electrical systems [2], robot systems [3], aerospace systems [4], and so on. Therefore, research on nonlinear systems has received increasing attention in recent decades. At present, the backstepping technique is one of the most effective tools for constructing controllers for high-order nonlinear systems. Thus, the combination of the backstepping technique and adaptive control is frequently utilized to settle the control problem of nonlinear systems with uncertain parameters [5][6][7]. In addition, fuzzy logic systems (FLSs) [8] and neural networks (NNs) [9], as universal approximators, can approximate or identify the nonlinear characteristics of systems. On the strength of this property, FLSs and NNs have been widely applied in the adaptive backstepping control design of nonlinear systems, and many research achievements have been obtained. To mention a few, an observer-based fuzzy adaptive control strategy was introduced in [10] for switched nonlinear systems with a nonstrict-feedback structure. In [11], an adaptive control framework was constructed for a category of high-order stochastic systems with time-varying delay by combining NNs with the backstepping technique. In [12], a robust fuzzy adaptive control scheme was established for multi-input multi-output (MIMO) nonlinear stochastic Poisson jump-diffusion systems with the aid of H∞ control theory and the backstepping technique. It is important to note that the repeated differentiation of the virtual control signals in the above backstepping control designs can lead to the problem of complexity explosion. To settle this matter effectively, dynamic surface control (DSC) technology was developed in [13], in which first-order filters are introduced into the backstepping design. Subsequently, a DSC-based robust adaptive neural network control framework was proposed in [14] for pure-feedback time-delay systems with quantized input.
An adaptive fuzzy tracking control scheme was established in [15] for an uncertain nonlinear system with input saturation by combining the backstepping method with DSC technology, and the designed controller was applied to missile-target interception systems. However, the errors caused by the filters are not taken into account in DSC technology, which reduces the control performance of the systems. Fortunately, command filtering technology [16] has also settled the complexity explosion issue in the backstepping procedure, in which an error compensation system is constructed to make up for the shortcomings of DSC technology. For instance, both output-feedback and state-feedback adaptive fuzzy control approaches were presented in [17] for uncertain MIMO nonlinear arbitrarily switched systems in nonstrict-feedback form with the help of the backstepping method and command filtering technology. In [18], an adaptive neural tracking control scheme was introduced for a MIMO system with input saturation and unknown control directions by utilizing command filtering technology. However, the authors in [16][17][18] only considered asymptotic convergence rates. In the control of practical engineering systems, convergence is a key index of control performance: asymptotic stability can only ensure convergence as time tends to infinity. Compared with infinite-time control strategies, finite-time control methods have the merits of higher tracking precision, better anti-disturbance performance, and faster convergence. Therefore, finite-time control has drawn more and more attention, with interesting achievements [19][20][21][22][23] in recent years. Specifically, the authors in [19,20] constructed two finite-time control frameworks for nonlinear systems by employing the backstepping technique and FLSs. Subsequently, state observers were designed in [21][22][23], and output-feedback finite-time control schemes were established based on the backstepping technique. Furthermore, the effects of complexity explosion in the backstepping procedures were eliminated by means of DSC or command filtering technologies. On the flip side, event-triggered control (ETC) has important theoretical and practical significance because it can reduce the heavy communication burden and save communication resources in networked control systems. In recent years, ETC has become one of the hot research subjects in the field of control, and a series of important research findings have been obtained. For instance, two ETC schemes based on the input-to-state stability (ISS) hypothesis were proposed in [24,25]. However, it is not easy to verify the ISS assumption in uncertain nonlinear systems. To avoid the ISS assumption, the controllers and event-triggered mechanisms were designed cooperatively in [26][27][28], and several adaptive ETC methods were constructed. Unfortunately, there are at present few research results on adaptive finite-time output-feedback event-triggered command filtered control for nonlinear systems with unknown control directions. This motivates the research of this article. The main contributions of this article are listed as follows: (1) In this paper, FLSs are utilized to approximate the unknown nonlinear functions in the systems, which eliminates the linear growth conditions required for the nonlinear terms in [29,30]. Thus, the conservatism of the control algorithm is reduced.
The main contributions of this article are as follows: (1) The FLSs are utilized to approximate the unknown nonlinear functions in the systems, which eliminates the linear growth conditions required of the nonlinear terms in [29,30] and thus reduces the conservatism of the control algorithm. Furthermore, different from the error compensation systems designed in [17,27], a novel fractional-power-based error compensation system is constructed in this paper to decrease the errors caused by the command filter in finite time. The advantage of this design is that the control performance of the whole closed-loop system can be improved by tuning the parameters of the error compensation system more appropriately while reducing the computational complexity of the system. (2) The collaborative design of the finite-time adaptive controller and the event-triggered mechanism avoids the ISS assumption needed in the existing literature [24,25]. The finite-time event-triggered controller designed in this article not only saves communication resources and reduces the communication burden, but also increases control efficiency in actual control systems compared with the non-finite-time controller designed in [31]. (3) The control schemes designed in [18,26,27] are only suitable for system models with available states, while the adaptive output feedback finite-time control approach constructed in this article can be applied to controlled systems with unmeasurable states. In addition, a class of nonstrict-feedback nonlinear systems with unknown control directions is considered in this paper, which is more general than the systems considered in [19,23]. Therefore, the control strategy developed in this article is more applicable to practical system models.

System descriptions Take the following nonlinear system with a nonstrict-feedback structure into consideration:
$$\dot{x}_i = r_i x_{i+1} + f_i(x) + d_i(t), \quad i = 1, \ldots, n-1, \qquad \dot{x}_n = r_n u + f_n(x) + d_n(t), \qquad y = x_1, \tag{2.1}$$
where $x = [x_1, \ldots, x_n]^T \in \mathbb{R}^n$ denotes the system state; $u \in \mathbb{R}$ and $y \in \mathbb{R}$ represent the control input and the system output, respectively; $f_i(x)$ are unknown smooth nonlinear functions; $r_i$ are unknown constants; and $d_i$ denote the unknown external disturbances. In this article, it is supposed that the state variables $x_i(t)$, $i = 2, \ldots, n$, are not available; only the system output $y$ is measurable. The control goal of this article is to establish an adaptive finite-time event-triggered controller that guarantees the boundedness of all the signals and ensures that the output $y$ follows the appointed signal $y_d$ in finite time. To accomplish this goal, we put forward the following assumptions and lemmas.

Assumption 2.1 [22] The desired signal $y_d$ and its first derivative $\dot{y}_d$ are bounded.

Assumption 2.2 [32] There are unknown constants $D_i > 0$, $i = 1, 2, \ldots, n$, such that $|d_i(t)| \le D_i$.

Assumption 2.3 [33] The sign of $r_i$ is unknown, and there are two constants $\underline{r} > 0$ and $\bar{r} > 0$ such that $\underline{r} < |r_i| < \bar{r}$.

Assumption 2.4 [34] There exist constants $\mu_j$ such that the stated bound holds.

Remark 2.1 In [5,28,35], the tracking signal $y_d$ and its $i$th-order derivatives $y_d^{(i)}$, $i = 1, 2, \ldots, n$, are required to be bounded, while only the boundedness of $y_d$ and $\dot{y}_d$ is needed in Assumption 2.1 of this paper. Therefore, Assumption 2.1 reduces the conservatism of the control algorithm in practical applications. In addition, Assumptions 2.2-2.4 are common constraint conditions, which make it possible to devise an adaptive tracking controller.

Lemma 2.2 (Young's inequality) For any $x \in \mathbb{R}$, $y \in \mathbb{R}$ and $\varsigma > 0$, the following relationship holds:
$$xy \le \frac{\varsigma^p}{p}|x|^p + \frac{1}{q\varsigma^q}|y|^q,$$
where $p > 1$, $q > 1$ and $(p-1)(q-1) = 1$.

Lemma 2.3 [37] For $p > 0$, $q > 0$ and any real-valued function $\upsilon(w, \zeta) > 0$, the following relationship holds:
$$|w|^p|\zeta|^q \le \frac{p}{p+q}\,\upsilon(w,\zeta)\,|w|^{p+q} + \frac{q}{p+q}\,\upsilon(w,\zeta)^{-p/q}\,|\zeta|^{p+q}.$$

Now, the first-order Levant differentiator [38] is introduced:
$$\dot{\varpi}_1 = \psi_1, \quad \psi_1 = -L_1|\varpi_1 - \alpha_r|^{1/2}\operatorname{sign}(\varpi_1 - \alpha_r) + \varpi_2, \quad \dot{\varpi}_2 = -L_2\operatorname{sign}(\varpi_2 - \psi_1), \tag{2.4}$$
where $\varpi_1$ and $\varpi_2$ denote the command filter states, $\alpha_r$ represents the input signal, and $L_1$ and $L_2$ are parameters to be designed.
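The following is a minimal Euler-discretized sketch of the differentiator (2.4) as reconstructed above; the gains, step size and test signal are illustrative choices, and the finite-time exactness quoted in Lemma 2.4 below holds for the continuous-time dynamics, so the discrete version only tracks approximately:

```python
import numpy as np

def levant_differentiator(alpha_r, dt, L1, L2):
    """Euler-discretized first-order Levant differentiator, cf. (2.4):
    w1 tracks the input alpha_r and w2 tracks its derivative."""
    n = len(alpha_r)
    w1, w2 = np.zeros(n), np.zeros(n)
    w1[0] = alpha_r[0]
    for k in range(n - 1):
        e = w1[k] - alpha_r[k]
        psi = -L1 * np.sqrt(abs(e)) * np.sign(e) + w2[k]
        w1[k + 1] = w1[k] + dt * psi
        w2[k + 1] = w2[k] + dt * (-L2 * np.sign(w2[k] - psi))
    return w1, w2

dt = 1e-4
t = np.arange(0.0, 5.0, dt)
sig = np.sin(2.0 * t)                    # assumed test input
w1, w2 = levant_differentiator(sig, dt, L1=8.0, L2=30.0)
half = len(t) // 2
print("post-transient derivative error:",
      np.max(np.abs(w2[half:] - 2.0 * np.cos(2.0 * t[half:]))))
```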
Lemma 2.4 [38] In the absence of input noise, with appropriately selected parameters $L_1$ and $L_2$, the equalities $\varpi_1 = \alpha_r$ and $\varpi_2 = \dot{\alpha}_r$ hold after a finite-time transient process.

Lemma 2.5 [38] If the input noise satisfies $|\alpha_r - \alpha_{r0}| \le \varepsilon$, then for some scalars $\phi_1 > 0$ and $\nu_1 > 0$, the relationships $|\varpi_1 - \alpha_{r0}| \le \phi_1\varepsilon$ and $|\varpi_2 - \dot{\alpha}_{r0}| \le \nu_1\varepsilon^{1/2}$ hold in finite time.

For the controlled system (2.1), we introduce the coordinate transformations
$$\eta_i = \frac{x_i}{\Lambda_i}, \quad i = 1, 2, \ldots, n, \tag{2.5}$$
in which $\Lambda_i = \prod_{j=i}^{n} r_j$. From (2.1) and (2.5), it can be deduced that the new states $\eta_i$, $i = 1, 2, \ldots, n$, satisfy
$$\dot{\eta}_i = \eta_{i+1} + \frac{f_i(x)}{\Lambda_i} + \bar{d}_i(t), \quad i = 1, \ldots, n-1, \qquad \dot{\eta}_n = u + \frac{f_n(x)}{\Lambda_n} + \bar{d}_n(t), \tag{2.6}$$
where $\bar{d}_i(t) = d_i(t)/\Lambda_i$. According to Assumptions 2.2 and 2.3, it is easy to conclude that there are scalars $\bar{d}_i > 0$, $i = 1, 2, \ldots, n$, such that $|\bar{d}_i(t)| \le \bar{d}_i$.

Fuzzy logic systems Unknown nonlinear continuous functions in the systems are usually handled via fuzzy logic systems, which can be described as follows. IF-THEN rules: $R^l$: If $x_1$ is $B_1^l$ and $x_2$ is $B_2^l$ and $\ldots$ and $x_n$ is $B_n^l$, then $y$ is $W^l$, $l = 1, 2, \ldots, K$, where $x = [x_1, x_2, \ldots, x_n]^T$ is the FLS input, $y$ denotes the FLS output, $\mu_{B_j^l}(x_j)$ and $\mu_{W^l}(y)$ represent the fuzzy membership functions of the fuzzy sets $B_j^l$ and $W^l$, respectively, and $K$ denotes the number of fuzzy rules. Based on the fuzzy rules of the system, the FLS is formulated by
$$y(x) = \frac{\sum_{l=1}^{K}\bar{W}_l\prod_{j=1}^{n}\mu_{B_j^l}(x_j)}{\sum_{l=1}^{K}\prod_{j=1}^{n}\mu_{B_j^l}(x_j)}.$$
Defining the fuzzy basis functions $\xi_l(x) = \prod_{j=1}^{n}\mu_{B_j^l}(x_j)\big/\sum_{l=1}^{K}\prod_{j=1}^{n}\mu_{B_j^l}(x_j)$ and the weight vector $\theta = [\bar{W}_1, \ldots, \bar{W}_K]^T$, the FLS is redescribed as follows:
$$y(x) = \theta^T\xi(x). \tag{2.9}$$

Lemma 2.6 [35] Let $\psi(x)$ be a continuous function on a compact set $\Omega$; then for any $\varepsilon > 0$, there is an FLS (2.9) such that
$$\sup_{x \in \Omega}\big|\psi(x) - \theta^T\xi(x)\big| \le \varepsilon. \tag{2.10}$$

Nussbaum function At present, the Nussbaum function is often utilized to solve the problem of unknown control directions. An even continuous function $N(\varsigma)$ is a Nussbaum function if the following properties are satisfied:
$$\limsup_{s\to\infty}\frac{1}{s}\int_0^s N(\varsigma)\,d\varsigma = +\infty, \qquad \liminf_{s\to\infty}\frac{1}{s}\int_0^s N(\varsigma)\,d\varsigma = -\infty.$$
For instance, the continuous functions $e^{\varsigma^2}\cos(\frac{\pi}{2}\varsigma)$ and $\varsigma^2\cos\varsigma$ fall into the category of Nussbaum functions. The properties of the Nussbaum function are described by the following lemma.

Lemma 2.7 [39] Let $V(\cdot)$ and $\varsigma(\cdot)$ be smooth functions defined on $[0, t_f)$ with $V(t) \ge 0$, and let $N(\varsigma)$ be a smooth Nussbaum function. If the following relationship is satisfied:
$$V(t) \le b + p(t) + \int_0^t\big(gN(\varsigma) + 1\big)\dot{\varsigma}\,d\tau,$$
in which $b$ is a positive constant, $g$ is a suitable scalar, and $p(t)$ represents a bounded continuous function, then $V(t)$, $\varsigma(t)$ and $\int_0^t(gN(\varsigma)+1)\dot{\varsigma}\,d\tau$ are bounded on $[0, t_f)$.

Controller design and stability analysis In this section, an observer-based adaptive finite-time event-triggered command filtered control scheme is developed with the help of the backstepping method and command filtering technology.

State observer design The novel system (2.6) is obtained from the original system (2.1) by the coordinate transformations (2.5). Since the states $x_i$ in the original system (2.1) are assumed to be unmeasurable, the state variables $\eta_i$, $i = 1, 2, \ldots, n$, in the novel system (2.6) are also unmeasurable. Thus, it is necessary to establish an observer to estimate the unavailable states in the nonlinear system. Before designing the state observer, the system (2.6) is redescribed in the compact form (3.1). Select the gain vector $k$ such that the resulting matrix $A$ is strictly Hurwitz; thus, for any given matrix $Q = Q^T > 0$, there always exists a symmetric matrix $P > 0$ such that
$$A^TP + PA = -Q.$$
According to Lemma 2.6, the nonlinear terms $\Phi_i(\eta)$ in the system (3.1) are handled by the following FLSs:
$$\Phi_i(\eta) = \theta_i^{*T}\xi_i(\eta) + \varepsilon_i,$$
where $\theta_i^*$ denote the optimal parameter vectors and $\varepsilon_i$ represent the minimum approximation errors, which satisfy $|\varepsilon_i| \le \bar{\varepsilon}_i$. To estimate the unavailable states in the system, we design a fuzzy state observer in which $\hat{\theta}_i$, $i = 1, 2, \ldots, n$, are the estimations of $\theta_i^*$.
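To ground Lemma 2.6, this sketch fits the weight vector of an FLS of the form (2.9) with normalized Gaussian membership functions to a sample "unknown" function; the rule centers, widths and target function are assumptions, and a least-squares fit merely stands in for the ideal parameter vector θ*:

```python
import numpy as np

def fuzzy_basis(x, centers, width):
    """Normalized Gaussian fuzzy basis functions xi(x) for a scalar input."""
    mu = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
    return mu / np.sum(mu)

centers = np.linspace(-3.0, 3.0, 11)       # assumed rule centers
xs = np.linspace(-3.0, 3.0, 200)
Xi = np.array([fuzzy_basis(x, centers, width=0.6) for x in xs])
psi = np.sin(xs) * np.exp(-0.1 * xs ** 2)  # a stand-in "unknown" function
theta, *_ = np.linalg.lstsq(Xi, psi, rcond=None)  # surrogate for theta*
print("sup approximation error:", np.max(np.abs(Xi @ theta - psi)))
```

The printed supremum error is the epsilon of (2.10) achieved by this particular rule base; refining the centers or widths shrinks it, which is what the lemma asserts is always possible on a compact set.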
Event-triggered controller design In this part, the virtual control signals and the finite-time event-triggered controller based on the state observer and command filtering technology are given. To facilitate the establishment of the subsequent controller, the transformations
$$\chi_1 = y - y_d, \qquad \chi_i = \hat{\eta}_i - \eta_{i-1,c}, \quad i = 2, \ldots, n,$$
are utilized, where $\chi_1$ is the tracking error and $\eta_{i,c}$ denote the outputs of the command filter. Then, the command filter is constructed as the Levant differentiator (2.4), with the virtual control signals as its inputs and $\eta_{i,c}$ as its outputs. The error compensating signals $\Upsilon_i$, $i = 1, 2, \ldots, n$, are designed as in (3.16), in which $c_i > 0$, $s_i > 0$ and $0 < \beta < 1$ are design constants, $\Upsilon_i(0) = 0$, and $\bar{\Delta}_1$ is a known constant that satisfies $|\Delta_1| \le \bar{\Delta}_1$. Then, the compensated tracking errors are defined as follows:
$$\nu_i = \chi_i - \Upsilon_i, \quad i = 1, 2, \ldots, n. \tag{3.17}$$
By adopting the backstepping technique, the virtual control signals are established, and the adaptive law of the parameter $\varsigma$ is constructed accordingly. The adaptive laws of $\hat{\theta}_j$ are designed with design parameters $\kappa_j > 0$ and $\sigma_j > 0$. The finite-time event-triggered controller is then established together with the event-triggered mechanism (3.23), in which $\pi$ and $a$ are design parameters and $t_q$, $q \in \mathbb{Z}^+$, denotes the controller update time.

Remark 3.1 Whenever (3.23) is triggered, the time is flagged as $t_{q+1}$ and the control signal $u(t_{q+1})$ is applied to the closed-loop system. During $t \in [t_q, t_{q+1})$, the control value remains constant, i.e., $u(t) = \xi(t_q)$.

Stability analysis of the control system In this part, based on the virtual control signals, the actual controller and the parameter adaptive laws designed above, we analyze the stability of the whole closed-loop system. The following theorem summarizes the main result of this paper: 1) all the signals in the closed-loop system are bounded; 2) the observer and tracking errors converge to a small neighborhood of zero in finite time; 3) there is a time $t^* > 0$ such that $t_{q+1} - t_q \ge t^*$, $\forall q \in \mathbb{Z}^+$, that is, Zeno behavior is avoided.

Proof The desired conclusions can be achieved recursively through the backstepping steps, which yield a bound of the form (3.60), where $R$ is a positive design constant. Next, we demonstrate that Zeno behavior does not occur; that is, for any $q \in \mathbb{Z}^+$, there exists a time $t^* > 0$ such that $t_{q+1} - t_q \ge t^*$. From (3.18) and (3.21), it can be obtained that $\xi$ is differentiable and that the relationship $|\dot{\xi}| \le \iota$ holds, where $\iota > 0$ is a constant. By noting that $o(t_q) = 0$ and $\lim_{t\to t_{q+1}}|o(t)| = \pi|u(t)| + a$, it can be concluded that $t^* \ge \frac{\pi|u(t)| + a}{\iota}$. Thus, Zeno behavior is successfully avoided. This completes the proof.

The Zeno phenomenon means that the trigger event may be triggered infinitely many times within a finite period, which leads to instability of the system. A common method to avert the Zeno phenomenon is to ensure that the time interval between any two adjacent trigger events is positive, that is, $t_{q+1} - t_q > 0$ for any $q \in \mathbb{Z}^+$. Based on the above proof, it follows that $t_{q+1} - t_q \ge \frac{\pi|u(t)| + a}{\iota} > 0$; hence, the Zeno phenomenon does not occur.

Remark 3.3 According to $|\chi_1| \le 2\min\big\{\big(2\Delta_3/((1-\theta_0)\Delta_1)\big)^{1/2}, \sqrt{2}\big(\Delta_3/((1-\theta_0)\Delta_2)\big)^{1/(\beta+1)}\big\}$ and the definitions of $\Delta_1$, $\Delta_2$ and $\Delta_3$, it can be concluded that the tracking error of the system mainly depends on the control parameters $c_i$, $s_i$, $\sigma_i$ and $\beta$. The system tracking error can be made smaller by increasing $c_i$, $s_i$, $\sigma_i$ and reducing $\beta$. However, the values of $c_i$ and $s_i$ should not be too large; otherwise, the control magnitude will be too large, which is not suitable for practical application. Therefore, we need to adjust the control design parameters properly to acquire better control action in practical engineering.
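The last step of the Zeno argument gives a computable lower bound on the inter-event time; here is a tiny sketch with placeholder numbers standing in for the design parameters π and a and the derivative bound ι:

```python
def min_interevent_time(u_held, pi_coef, a, iota):
    """Guaranteed dwell time from the proof above: the trigger error o(t)
    starts at zero after each event, grows at rate at most iota
    (since |xi_dot| <= iota), and must reach pi_coef * |u| + a before
    the next event can fire."""
    return (pi_coef * abs(u_held) + a) / iota

# Placeholder numbers: with |xi_dot| <= 5, pi = 0.2 and a = 0.05, a held
# control value of 1.0 guarantees at least 0.05 s between triggering events.
print(min_interevent_time(u_held=1.0, pi_coef=0.2, a=0.05, iota=5.0))
```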
The block diagram of the above-mentioned adaptive finite-time event-triggered control scheme is displayed in Fig. 1.

Simulation example In this section, we provide two simulation examples to verify the feasibility and superiority of the constructed control framework. Numerical Example 1: Consider the nonlinear system (4.1). By utilizing the coordinate transformations $\eta_1 = x_1/(r_1r_2)$ and $\eta_2 = x_2/r_2$, we can derive the nonlinear system (4.2), which is equivalent to system (4.1). Next, we conduct simulation research on the system (4.2). To deal with the nonlinear terms, suitable fuzzy membership functions are chosen; the fuzzy state observer, the adaptive finite-time event-triggered controller and the adaptation laws of the parameters are constructed as in Section 3, with the design parameters listed in Table 1. Figures 2-6 depict the simulation results. By utilizing the control design scheme developed in this paper and the one in [41], the responses of the output $y$ and the appointed signal $y_d$ are exhibited in Fig. 2, and the comparison curves of the tracking errors are plotted in Fig. 3. From Figs. 2 and 3, it can be seen that the system output $y$ can follow the specified signal $y_d$, and the finite-time control approach proposed in this article has higher tracking precision than the non-finite-time control method developed in [41]. Figures 4 and 5 describe the responses of the fuzzy adaptive parameters $\hat{\theta}_i$, $i = 1, 2$, the parameter $\varsigma$ and the Nussbaum gain $N(\varsigma)$, respectively. The time intervals $t_{q+1} - t_q$ between triggering events are indicated in Fig. 6, from which we can observe that Zeno behavior is successfully avoided. Based on the above simulation results, the adaptive finite-time control scheme developed in this article is effective.

Practical Example 2: Consider the pendulum model in [42], described by the dynamic equations (4.3), where $\phi$ denotes the acute angle between the vertical axis and the rod; $\dot{\phi}$ stands for the angular velocity of the rod; $m$ is the mass of the bob; $l$ is the length of the rod; $k$ denotes an unknown frictional coefficient; and $g$ represents the acceleration of gravity. Define the transformations $\eta_1 = ml\phi$ and $\eta_2 = ml\dot{\phi}$; then the system (4.3) is redescribed as the system (4.4). To deal with the nonlinear terms, suitable fuzzy membership functions are again chosen. The appointed reference signal is $y_d = 0.5(\sin t + \sin(0.5t))$. The adaptive finite-time event-triggered controller, the fuzzy state observer, the parameter adaptive laws and the Nussbaum function are the same as those in Numerical Example 1. The initial conditions are chosen accordingly, and the design parameters are listed in Table 2.
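Since the benchmark equations (4.3) are not reproduced above, the following open-loop sketch adopts one common friction-damped pendulum form purely as an assumed stand-in, together with the stated transformations η1 = mlφ and η2 = mlφ̇, to show how the simulation states would be generated:

```python
import numpy as np

# Assumed benchmark form (the source equations (4.3) are not reproduced here):
#   ml * phi_ddot = u - k*l*phi_dot - m*g*sin(phi)
m, l, k, g = 1.0, 1.0, 0.5, 9.8             # placeholder physical parameters

def pendulum_acc(phi, dphi, u):
    return (u - k * l * dphi - m * g * np.sin(phi)) / (m * l)

dt, T = 1e-3, 10.0
phi, dphi = 0.2, 0.0                         # assumed initial angle and rate
for _ in range(int(T / dt)):
    ddphi = pendulum_acc(phi, dphi, u=0.0)   # open loop, no controller
    phi += dt * dphi
    dphi += dt * ddphi
# States seen by the design after the transformation eta1 = ml*phi, eta2 = ml*phi_dot
print("eta1, eta2 at t = 10 s:", m * l * phi, m * l * dphi)
```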
Figures 7-14 depict the simulation results. By utilizing the control design scheme developed in this article and the one in [41], Fig. 7 exhibits the responses of the system output $y$ and the desired signal $y_d$, and Fig. 8 plots the comparison curves of the tracking errors. From Figs. 7 and 8, it can be seen that the output $y$ of the system (4.4) can track the specified signal $y_d$, and the finite-time control approach developed in this article has higher tracking precision than the non-finite-time control approach developed in [41]. Figures 9 and 10 indicate the responses of the state variables $\eta_1$, $\eta_2$ of the system (4.4) and their estimations $\hat{\eta}_1$, $\hat{\eta}_2$, respectively. According to Figs. 9 and 10, it is concluded that the observer errors $\tilde{\eta}_1$ and $\tilde{\eta}_2$ converge to a small neighborhood of zero in finite time. Figures 11 and 12 describe the responses of the adaptive parameters $\hat{\theta}_1$, $\hat{\theta}_2$, $\varsigma$ and $N(\varsigma)$. Figure 13 provides the response of the finite-time event-triggered controller $u$. Figure 14 indicates the time intervals $t_{q+1} - t_q$ between triggering events. From Figs. 13 and 14, it is observed that the event-triggered control method can lighten the communication burden and save communication resources. With the help of the above simulation results, the validity and superiority of the developed control scheme can be demonstrated; in particular, the finite-time control method has the merits of faster transient performance and higher tracking precision compared with the non-finite-time control method in [41].

Conclusion In this article, an adaptive finite-time output feedback event-triggered command filtered control approach has been established for a category of nonstrict-feedback nonlinear systems with unknown control directions. In the control design procedure, the event-triggered mechanism was adopted to reduce the communication burden between the controller and the actuator. The finite-time command filter was applied to avoid the 'explosion of complexity' caused by the backstepping method, and an error compensation system with fractional power was constructed to compensate for the filtering errors between the command filter outputs and the virtual control signals. By means of the command filtering technology and the event-triggered mechanism, an adaptive finite-time event-triggered controller was designed. The controller ensures that the tracking and observer errors converge to a small neighborhood of zero in finite time and that all the signals in the closed-loop system are bounded. Finally, two simulation examples have been provided to demonstrate the effectiveness and superiority of the design approach. In this paper, the approximation property of the FLSs was utilized to deal with the unknown nonlinear terms in the systems, which can only guarantee semi-global stability of the closed-loop system rather than global stabilization. In future work, we will consider stochastic disturbances in the systems and attempt to design a fixed-time event-triggered control strategy to solve the global stabilization problem of stochastic nonlinear systems.
5,430.6
2022-06-21T00:00:00.000
[ "Engineering", "Computer Science" ]
Assessing the Hands-on Usability of the Healthy Jeart App Specifically Tailored to Young Users Background: The widespread adoption of mobile devices by adolescents underscores the potential to harness these tools to instill healthy habits into their daily lives. An exemplary manifestation of this initiative is the Healthy Jeart app, crafted with the explicit goal of fostering well-being. Methodology: This study, framed within an applied investigation, adopts an exploratory and descriptive approach, specifically delving into the realm of user experience analysis. The focus of this research is a preliminary examination aimed at understanding users' perceived usability of the application. To glean insights, a comprehensive questionnaire was administered to 101 teenagers, seeking their evaluations on various usability attributes. The study took place during 2022. Results: The findings reveal a considerable consensus among users regarding the evaluated usability aspects. However, the areas for improvement predominantly revolve around managing the information density, particularly for a subset of end users grappling with overwhelming content. Additionally, recommendations are put forth to streamline the confirmation process for user suggestions and comments. Conclusion: This analysis illuminates both the strengths of the app and areas ripe for refinement, paving the way for a more user-centric and efficacious Healthy Jeart application.

Introduction 1. Healthy Behaviors among Young People Adolescence is a key stage in the promotion of a healthy life [1]. The family's influence is essential in promoting healthy behaviors, yet schools, serving as both a social environment for peer interaction and a formal learning setting, undeniably serve as pivotal settings for acquiring healthy habits. These habits will significantly shape the future health of young individuals [2]. Several studies have revealed the relationship between unhealthy lifestyle habits such as dietary imbalances and the onset of non-communicable diseases such as diabetes, cancer, or cardiovascular pathologies including high blood pressure and hypercholesterolemia, among others [3,4]. Recent data seem to indicate that in Spain, the young population frequently fails to follow a balanced diet and the recommended levels of physical activity [5][6][7][8]. Hence, it is essential to gradually devise interventions that prioritize health enhancement using technology, specifically mobile applications, which serve as a highly effective platform for implementing these initiatives. In this regard, the World Health Organization [9] recently published a guide on their design, development, and implementation. In this scenario, it appears crucial to delve into the relationship between technology and health, recognized as a promising combination for fostering healthy habits during these formative years [10]. Presently, there is a vast array of health-related informatics applications available [11,12]. Yet, there remains room for improvement in the overall quality of the available apps designed to enhance dietary choices and physical activity and reduce sedentary behavior among children and adolescents [13,14].
In this context, the "Healthy Jeart" project emerged. It is an application specifically created to be used by children and young people, both inside and outside of school. The aim is to foster the knowledge, attitudes, and healthy habits that should proliferate in a young population of 8-16-year-olds. To achieve this, it employs communication styles suitable for specific ages and incorporates enjoyable components [15]. Additionally, it delivers straightforward and easily comprehensible messages offering advice and tips across various health domains and fostering healthy habits (Figures 1 and 2). Healthy Jeart is tailored to address the needs and interests of young individuals. Its creation was initiated by forming nominal groups, pinpointing the most pertinent subjects that resonated with them [16]. The Healthy Jeart app has been acknowledged by the regional state Agency of Health Quality of Andalusia with the "Healthy App" distinction [16] and is also endorsed by the Association of Community Nursing (AEC) in the Spanish national scope. Likewise, its associated website has received the "Advanced Level Healthcare Website" accreditation seal, granted by the Andalusian Regional Agency for Healthcare Quality [17]. Additional details can be accessed on the application's website: https://www.healthyjeart.com (accessed on 7 January 2024).

Digital Applications and the Importance of Their Usability Mobile applications, as a type of software, need to meet users' explicit and implicit needs to gain acceptance. In software engineering, which covers the entire lifecycle of computer application development, the central emphasis is on creating high-quality products regardless of the device. This highlights the crucial role of design and production quality.
In software development, quality entails meeting essential characteristics, with usability being a pivotal feature. Usability examines how effectively users can achieve goals, considering factors like effectiveness, efficiency, and satisfaction. Evaluation includes aspects such as cognitive load, learnability, and software portability across platforms [18][19][20]. Evaluating usability encompasses testing how easily and effectively apps can be used in both real-world settings and controlled lab environments, and these conditions significantly impact the outcomes [21]. Some authors opt to conduct usability assessments consistently across the entire software product's lifecycle, evaluating it using reports at each stage of development rather than solely at the conclusion of the development phase [22]. Specialists analyzing final products [23,24] conduct tests in controlled environments using questionnaires with closed or open-ended questions, with a scoring scale used to assess user perceptions. Notably, there is a growing focus on researching the usability of applications designed for educational settings within this domain [25]. It is along these lines that Zhou and colleagues [26] worked to measure the usability of m-health applications. While there exist various methods for conducting usability studies [27], it is crucial to highlight the work of Kumar, Goundar, and Chand [28]. Their contribution offers a contemporary, design-focused framework for assessing and studying usability in mobile learning applications, encompassing elements spanning from content structuring to navigation. This source served as a cornerstone of our theoretical framework. Briefly put, the creation of educational apps necessitates a usability analysis to ensure their alignment with both pedagogical and technological standards, merging criteria outlined by software engineering and pedagogy [29].
Usability is thus an essential element that ensures that users can make proper use of applications to achieve their objectives without problems. Hence, this project focuses on refining the usability of the Healthy Jeart application. Its primary aim is to pinpoint any usability concerns, grasp the user requirements, and ultimately enhance the overall user-friendliness of the product. Evaluation of the usability of Healthy Jeart was carried out empirically, with real users (school-age adolescents).

Study Design, Setting, and Participants This is an exploratory and descriptive study that is part of an applied investigation, specifically within the domain of user experience analysis. The population consisted of 190 primary and secondary school students from a public-private school in the province of Andalusia (Spain), who installed the Healthy Jeart app on their electronic devices. Convenience sampling was selected based on the practical considerations arising from constraints on time and resources. Given the nature of our study, which necessitated the involvement of individuals with specific experiences (such as using the app over a defined period), convenience sampling provided a straightforward method to access participants meeting these criteria, as they were readily available within our immediate surroundings. Based on the information provided by the Regional Ministry of Education and Sport for the academic year 2019-2020 [30], Andalusia had a total of 1,500,265 students enrolled in non-university general education. Among them, 21.9% (328,636) attended subsidized centers, which is the focus of this study. Additionally, during this period, the region had 658,084 young individuals aged between 11 and 17, with 51.3% of them being boys. Prior to the commencement of the study, the necessary official approval was diligently acquired. The ethical considerations and protocols were formally reviewed and approved by the Research Ethics Committee of the Province of Huelva, under the protocol code PI047/16. This ethical clearance ensured that the study adhered to the highest standards of ethical conduct in research. Following the receipt of ethical approval, an extensive demonstration of the Healthy Jeart application was conducted. This involved presenting the application to the teachers at the school where the project was conducted. Additionally, parents of the young participants were provided with a comprehensive overview of the application's features. Parents were given ample opportunity to seek clarifications and ask questions regarding the application. Once fully informed, their voluntary and written consent was sought for the participation of their children in the study. The informed consent process aimed to ensure that parents were fully aware of the study's objectives, procedures, potential benefits, and any associated risks. Upon obtaining parental consent, the Healthy Jeart app was then introduced to the young participants. They were encouraged to actively engage with the application by installing it on their personal devices, such as mobile phones or iPads. This participatory approach aimed to foster a sense of ownership over and familiarity with the application, promoting genuine and voluntary involvement. Throughout the study, ongoing ethical considerations were paramount. The privacy and confidentiality of the participants were rigorously maintained, and any concerns or queries raised by participants or their parents were promptly addressed.
Thus, the app was first introduced and utilized within the school premises. Subsequently, the young participants were allotted a two-month period to independently use the app either within or outside the school setting. Following this period, the students were re-engaged, and the usability evaluation tool was administered within the classroom environment. Participants were prompted to complete a questionnaire designed to gather insights into their views on facets concerning the app's content, usefulness, browsing experience, and feedback mechanisms. The questionnaire encompassed an evaluation of the app's overall ease of use and clarity and how it contributed to their understanding of healthy behaviors. This stage of the research occurred during the first semester of 2022. To meet the inclusion criteria, the young individuals were required to possess an electronic device capable of installing the app, secure informed consent from their parents, willingly participate in the research, and provide responses to more than 90% of the questions. Five students declined participation in the research, while 84 students completed less than 90% of the questionnaire, resulting in a final count of 101 participants.

The Usability Evaluation Tool Before conducting the usability analysis of the Healthy Jeart mobile application, preparatory steps were taken to identify the characteristics linked to usability. This involved tailoring the metrics typically linked to these attributes, as outlined in ISO 9241-11 [31]. Additionally, a questionnaire was developed specifically for this study's purposes. When formulating the questionnaire, content validity was ensured using a rigorous process. This involved selecting items derived from literature research, the researchers' expertise, and consultation with field experts, resulting in a pool of 30 items. This set of items underwent scrutiny by a panel of three university professors specializing in education, computer science, and nursing. Their task was to evaluate the items' quality, eliminating any ambiguities or any items deemed inappropriate and determining their alignment within pre-established facets (Content, Navigation, Utility, Feedback, and Overall appraisal). From this assessment, facets for which there was no consensus among the jury members regarding the distribution of items were indicated. As a result, 8 items were eliminated based on a consensus of more than 50% among the panel members. This left 22 items (C1-C22) distributed among the following facets: Content (6), Navigation (6), Utility (4), Feedback (4), and Overall Appraisal (2), rated on a 6-level Likert scale from strongly disagree to strongly agree, except for question C18, which was dichotomous (Yes/No). The answer to question C19 depends on the answer to the previous question and is only considered for participants who answered "Yes" to question C18. A score of 4 marks the cut-off point, distinguishing satisfaction from dissatisfaction. Note that for questions C6 and C9, due to their wording, the rating scale was inverted. We chose a six-point Likert scale due to its increased sensitivity compared to five points, the absence of a neutral midpoint to reduce response bias, and its balance of detail and simplicity, making it easier for respondents to use compared to a seven-point scale.
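As a small sketch of how the scoring rule just described could be implemented: the item identifiers are real, but the reversal formula (7 minus the raw score) and the strict reading of the cut-off as "scores above 4 count as satisfied" are our assumptions:

```python
# Reverse-coded items per the questionnaire design described above
REVERSED = {"C6", "C9"}

def score_item(item, raw):
    """Map a raw 1-6 Likert response to a satisfaction score,
    inverting the reverse-worded items C6 and C9 (assumed 7 - raw)."""
    return 7 - raw if item in REVERSED else raw

def satisfied(item, raw, cutoff=4):
    """Assumed strict reading: responses above the cut-off count as satisfied."""
    return score_item(item, raw) > cutoff

print(score_item("C6", 2), satisfied("C6", 2))  # reversed: 5 -> satisfied
print(score_item("C1", 3), satisfied("C1", 3))  # direct:   3 -> not satisfied
```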
Throughout this stage, our primary emphasis was on evaluating the user friendliness of the Healthy Jeart app. Simultaneously, we meticulously scrutinized various aspects related to the tool's accuracy, specifically its capability to precisely measure the targeted concept, in this instance the health behavior endorsed by the app. As previously mentioned, our tasks encompassed defining the concepts, conducting an exhaustive literature review, and collaborating with experts to ensure the concepts' representation was adequate. By applying the tool to a subset of young participants, as evidenced by the presented data, we anticipate identifying potential issues and guiding subsequent adjustments to enhance both clarity and precision. Finally, employing Cronbach's alpha yielded a value of 0.703 for the overall scale (due to their distinct characteristics, the C18 and C19 items were omitted from the reliability analysis). Furthermore, our analysis confirmed that the removal of individual items did not contribute to a further improvement in the alpha value, signifying the stability of the scale.

Table 1 presents the description of the usability questionnaire, with the items grouped by facet.

Content:
C1 When the Healthy Jeart app opens, I find the options highlighted in the main menu sufficient and I don't notice anything missing.
C2 After using the app, I can now easily distinguish between healthy and unhealthy habits.
C3 The distribution of the contents of the app (texts, images, tests...) seems good to me.
C4 The texts used to access the contents are sufficiently descriptive of what is offered through them.
C5 When I access the information in Healthy Jeart, it is presented clearly so that it is easy to understand and remember.
C6 The information presented in the app is too extensive and hard to assimilate.

Navigation:
C7 You can easily and clearly see the options you are browsing through in Healthy Jeart.
C8 There are elements that let you go back in a clear and simple manner.
C9 You remember seeing some type of advertisement in the app.
C10 The operating speed of the app is good.
C11 In the app, the tasks of navigating or moving around the app, clicking on buttons, selecting options, etc., are done in the same way throughout the app.
C12 I know who funded the Healthy Jeart app.

Utility:
C13 After first contact, the objectives of Healthy Jeart were clear to me.
C14 You can identify what content/information and services the app provides so that you could list some of them.
C15 The content/information provided by the app is useful to me.
C16 Overall, I was positively surprised by the app.

Feedback:
C17 It is easy to find out how to make suggestions for improvement or comments to the Healthy Jeart programmers.
C18 Did you send any suggestions or comments about the application?
C19 If so, you received a message confirming that it had been received successfully; how satisfied were you with the answer?
C20 The tutorial easily resolves any possible doubts you may have about how to use the Healthy Jeart app.

Overall appraisal:
C21 My overall rating of the app's ease of use and clarity is:
C22 Interacting with Healthy Jeart enabled me to discover aspects of healthy behaviors that were previously unknown to me.

Alongside the 22 questions mentioned earlier, the data collection tool encompassed an initial section aimed at profiling the participants. This section gathered information regarding their age, gender, education, experience level, and typical usage patterns with similar applications.
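For reference, the Cronbach's alpha value and the "alpha if item deleted" check reported above can be computed as follows; the random matrix only stands in for the real 101 x 20 response data, which is not reproduced here:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def alpha_if_deleted(X):
    """Alpha recomputed with each item removed, to check scale stability."""
    return [cronbach_alpha(np.delete(X, j, axis=1)) for j in range(X.shape[1])]

# Illustrative random data standing in for the 20 Likert items (C18/C19 excluded)
rng = np.random.default_rng(0)
X = rng.integers(1, 7, size=(101, 20))
print(round(cronbach_alpha(X), 3))
print([round(a, 3) for a in alpha_if_deleted(X)][:3])
```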
Sociodemographic and Electronic Device Usage Characterization A total of 101 students, representing both primary and secondary school levels, participated in evaluating the Healthy Jeart app. The gender distribution was nearly equal, with 49.5% girls and 50.5% boys. The group's average age was 13.27 years, ranging from the youngest participant at 11 to the oldest at 17. The majority (80.2%) of participants were enrolled in secondary school. Regarding electronic device usage, our findings revealed that a slight majority (52.5%) spend between 2 and 4 h daily using screens. Their screen time serves both entertainment and study purposes, although 33.7% of participants exclusively use these devices for entertainment. In Spain, particularly at the school where this study occurred, students are usually not permitted to use mobile phones during regular classes, with exceptions being infrequent and primarily for educational reasons. Consequently, when we refer to screen time in this context, it pertains to periods outside of regular school hours. Table 2 displays the sample's sociodemographic traits along with their patterns of electronic device utilization.

Usability Analysis We scrutinized the data collected from the questionnaire administered to the 101 students, categorizing the responses across the various facets (Table 3). Our focus was on highlighting the percentage of responses surpassing 4 (the satisfaction threshold) for each item in the tool, along with providing descriptive statistics such as the mean, standard deviation (SD), mode, minimum, and maximum values. Notably, questions C18 and C19 underwent a distinct analysis due to their unique answer formats. It is important to highlight that among the 20 examined items, 17 (85%) showcase over 90% of responses above 4, indicating widespread satisfaction within the majority of the sample. Only items C6 and C12 demonstrate comparatively lower satisfaction percentages, with the latter being the sole item where less than 70% of students scored above 4. Therefore, a considerable proportion of young individuals find the information within the app to be overly extensive and difficult to comprehend (C6). Additionally, they encounter challenges in identifying the entity that funded the Healthy Jeart app (C12). Notably, question C21 exhibits 100% of responses surpassing the cut-off value. Questions C2, C5, C13, and C20 were also very close (99%) to unanimity in terms of response. Analyzing the different facets globally, we see that the average response is higher than 5 in all of them, which corresponds to above the moderately satisfied level, with the Feedback facet showing the best values (x̄ = 5.56), followed by the Utility (x̄ = 5.46), Overall Appraisal (x̄ = 5.44), Content (x̄ = 5.34), and finally Navigation (x̄ = 5.23) facets. Regarding question C18 within the Feedback facet, we found that 27.7% of respondents asked questions or made some kind of comment directed at the technical team, and of these, only 46.43% received feedback (C19).

Discussion Educators and healthcare practitioners working with young individuals need to embrace fresh roles and perspectives to effectively address modern challenges. This includes adapting to the impacts of globalization and the increasing technological advancements shaping our world.
The potential of m-health apps to attract young users is now acknowledged, given their tech-savvy nature and extensive smartphone usage. When creating and launching m-health apps, it is crucial to take into account the requirements and preferences of young users. Presently, experts stress the significance of education and awareness campaigns directed toward the younger demographic, aiming to boost the adoption of m-health apps and foster healthy behaviors [32]. Educational institutions serve as crucial hubs for preventing health issues and fostering healthy lifestyle habits. Both schools and families play active roles in averting the health problems tied to poor dietary choices and a lack of physical activity, habits often ingrained from early years. Within the action plans of educational establishments, the influence of social and digital media on learning processes is an undeniable factor that cannot be overlooked [17]. In this scenario, the principal aim of the Healthy Jeart app is to spread awareness about healthy dietary behaviors and encourage participation in physical activity, all geared toward enhancing the well-being of primary and secondary school students [17]. Once a mobile health application is crafted, evaluating its usability before its public release becomes crucial. The proliferation of smartphone apps has significantly impacted health-related concerns. To incite user motivation for app adoption, prioritizing usability from the outset and continuously assessing it during the development phase is imperative. This approach aims to mitigate potential usability issues upon the applications' release [33]. In this setting, to address the necessity of assessing the usability of the Healthy Jeart app, we designed a questionnaire comprising 22 questions distributed across five different facets. This questionnaire was introduced to parents and teachers initially and subsequently administered to a cohort of 190 primary and secondary school students from an Andalusian school. The sample consisted of 101 youths, with an almost equal representation of both genders, averaging around 13 years old, and predominantly attending secondary school. Upon scrutinizing the electronic device usage patterns, it was observed that most young individuals utilized them for approximately 2 to 4 h daily. Among these, one-third dedicated this time solely to entertainment activities, while nearly 60 percent employed such technology for a combination of school-related tasks and leisure pursuits. These findings are consistent with the data found internationally, as can be seen in the OECD reports in particular [34].
In OECD nations, there has been a consistent rise in the number of children with both Internet access at home and access to various digital devices. Computers were initially the preferred tool for young individuals to use to access the Internet; nevertheless, the trend has shifted gradually, with devices like tablets and smartphones gaining more popularity for online activities than computers. Observing the habits of 15-year-olds in OECD countries, it is evident that they spend roughly two and a half hours online outside of school on an average weekday. However, this duration increases to over three hours on a typical weekend day [34]. Additionally, it is evident that their connectivity is not limited to the home environment alone, as children also make use of mobile technologies while on the move and during school hours [35]. The usability assessment questionnaire for the Healthy Jeart app was created by tailoring its content to the goals and intended audience while drawing upon the principles outlined in the ISO 9241-11 standard [31]. The foundation for deriving usability measures, as detailed in ISO 9241-11, involves specifying the intended goals, describing the context of use, and defining the target values of effectiveness, efficiency, and satisfaction. All these elements were previously established for the Healthy Jeart app. In accordance with ISO 9241-11, by taking into consideration these components, organizations can create usability measures customized to specific goals, context of use, and user requirements. This, in turn, streamlines the evaluation and enhancement of product usability in practical work settings. Given that usability pertains to how well a system, product, or service can be utilized by designated users to achieve particular goals effectively, efficiently, and satisfactorily within a defined context, it can be affirmed that the app under evaluation in this study meets the principles outlined in the standard [31]. On a satisfaction scale ranging from 1 to 6, only questions C6, concerning the depth and volume of information within the app, and C12, regarding the ease of locating information about the project's funding organization, scored below 5. However, they both remained above the satisfaction cut-off value of 4. The overall average satisfaction score across all questions is 5.4, indicating a highly satisfactory level. These elements of usability, essential for user adherence to health apps, have been recognized by other authors as well [36][37][38]. Examining the various aspects of and corresponding queries within the tool, it is apparent that the measure of effectiveness, evaluating the precision and comprehensiveness with which users accomplish specific objectives, was satisfactorily met. This is notably evident, for example, in the satisfaction level associated with question C22 (96.1%), indicating that engaging with the Healthy Jeart app enables the exploration of previously unknown aspects of healthy behaviors. This aligns with the World Health Organization's [39] assertion that m-health solutions have the potential to enhance the health and well-being of teenagers by offering accessible and convenient healthcare information. Lastly, when assessing efficiency, measured by the relationship between the resources expended and outcomes gained, remarkably favorable values were attained. This is notably apparent, especially in question C5 (99%), which focuses on the clarity of information delivery across the app and its ease of retention.
It should be noted that questions C18 and C19 need to be reworded so that they can be assessed using the same measure as the other parameters. We also think that the inverse wording of items C6 (76.2%) and C9 (86%) may have made them difficult to interpret and answer, so they will also be rewritten. To enhance the Healthy Jeart app, we will revamp the information structure for a more user-friendly experience. We will add detailed explanations and revise the FAQs and guides, along with links to external resources. The search feature will be improved for easier navigation. A dedicated section will be created for project funding information, offering comprehensive details about the funding organization, including its mission, vision, and contact information, with direct links to its official website for additional information. We will implement an improved feedback system within the application to make it easier for users to express their opinions on the provided information and propose enhancements. Our commitment extends to regular updates, ensuring the currency and relevance of the information. Users will be promptly notified of updates and improvements through push notifications or in-app messages. Additionally, we intend to conduct comprehensive usability testing involving a diverse user group from various contexts to identify and rectify any potential issues related to navigation or information accessibility. By addressing these components, we are confident in our ability to significantly enhance the user-friendliness and informativeness of the mobile application, thereby elevating the overall user experience and satisfaction.

Conclusions In response to recent events like the pandemic and widespread home confinement, there is a growing interest in developing innovative methods to help teachers and other educators engage students using virtual activities. This includes promoting health initiatives among the younger population by enhancing skills, aligning with the Digital Spain 2025 plan's [40] commitment to widespread digital transformation. Society's focus on youth-related issues in schools, crucial influencers of students' lives, emphasizes health concerns such as unhealthy habits leading to disease. The Healthy Jeart app, with its motivating features, serves as a tool to foster positive health habits in adolescents, benefiting both educators and healthcare practitioners. To delve more profoundly into assessing the application's quality, this paper highlights the results derived from a comprehensive usability analysis. This examination covers specific attributes tied to the tool's content, user experience, utility, and the feedback it offers. Using the gathered data, we can precisely identify the application's strengths and weaknesses.
In terms of strengths, we include the ability to convey which healthy habits are advisable and which are not, the clarity of the content, the accessibility during navigation, the quality of the installation tutorial, and the clear identification of the objectives to be achieved using Healthy Jeart. Among its possible weaknesses, which therefore show a path for improvement, we highlight that the volume of information provided by the application can be very dense and that the mechanisms for confirming receipt of messages and suggestions should be more reliable and faster. The first of these aspects will be addressed in a future update. The second has already been taken into consideration since this result came to light, as it mainly affects the support team, which is now tasked with responding more quickly; practically all the suggestions and comments have been answered since the application was rolled out. While our study provides valuable insights into health-related applications, it is essential to recognize and address certain limitations inherent in our methodology. Convenience sampling, which we employed, entails limited control over variables that could potentially influence the study outcomes. This lack of control poses challenges in isolating specific factors and establishing definitive causal relationships. Another critical consideration pertains to the extrapolation of our findings to other health-related applications. We acknowledge the significance of this concern. While the questionnaire was specifically crafted for a particular application, certain aspects of its structure and questions may be adaptable to more widespread use. However, we advise exercising caution in direct extrapolation, as the questionnaire's effectiveness could vary based on the specific context, features, and objectives of other health-related applications. Further research and validation would be imperative to assess its applicability beyond the scope of our current study.

Contributions and Future Directions This research has significant implications for various stakeholders, including health professionals, teachers, parents, and other educators working with teenagers. Health practitioners can leverage insights from the Healthy Jeart app's usability analysis to recommend effective digital tools for promoting positive health habits among adolescents. Teachers can use the findings to enhance virtual engagement and integrate digital health applications into educational strategies. Parents and educators gain clarity on app strengths and weaknesses, aiding informed decisions on incorporating digital health tools into daily routines. The study lays a foundation for future research on digital health applications, offering a framework for evaluating their usability and effectiveness. The commitment to addressing app weaknesses reflects a culture of continuous improvement, setting a precedent for developers to prioritize user feedback. This cross-disciplinary research fosters collaboration between technology, health, and education, addressing multifaceted challenges in promoting adolescent health. In conclusion, the study provides valuable insights and a practical framework and sets the stage for future research at the intersection of technology, health, and education.

Figure 1. A game to work on healthy habits playfully.

Figure 2. Healthy tips and didactic resources. "Before, during and after physical exercise, excellent hydration helps you a lot".
Table 2. Sociodemographic and electronic device usage description. * Statistical analysis differs based on the question's nature.
7,172.2
2024-02-01T00:00:00.000
[ "Medicine", "Psychology", "Computer Science" ]
Revising the DIKW Pyramid and the Real Relationship Between Data, Information, Knowledge, and Wisdom This paper offers a critique and reformulation of the data-information-knowledge-wisdom (DIKW) pyramid. Today, collection of personal, business, industrial, and other types of data has never been more pervasive and invasive. Data storage now is measured in yottabytes (one septillion bytes of data) and beyond. This collected data is interrogated, monetized, hacked, and otherwise handled. The debate regarding inductivism versus rationalism (a debate without end), the ancient and ongoing inquiry regarding the relationship between data, information, and wisdom, has new importance with the massive data collection taking place. 10 This paper analyzes each aspect of the DIKW pyramid and ultimately proposes a reconceived diagram of data, information, knowledge, and wisdom as a Venn diagram rather than a vertical pyramid to more accurately conceptualize these relationships. The intended purpose of this work is to catalyze discussion among academics and within business and industry, including the technology industry and "Big Data," regarding the necessity and value of collecting large amounts of data and whether society is benefiting from this act. An audience of legal and data science scholars as well as technology industry executives and actors should consider this paper as beginning a conversation about the purpose, necessity, scale, and consequences of our current massive data collection in this so-called "Information Age." Currently, data seems to be collected on a massive but largely mindless basis, without consideration for downstream consequences and unintended results. Indeed, the collection of data itself has widely been questioned for over a decade. 11 Further, this same audience should consider whether and how massive data collection could better be utilized to increase useful information, and how more useful information could increase society's knowledge and wisdom. Finally, the audience should consider whether the simplistic yet widely accepted DIKW pyramid is in fact overly simplified and fails to reflect the actual sources of knowledge and wisdom, thus reconsidering whether and how today's massive data collection contributes (or fails to contribute) to society's increased knowledge and wisdom.

Data Collection Data is commonly defined as facts and statistics collected together for reference or analysis. 12 More specifically, for the computer age, data is the collected sequence of signs, symbols, and facts that have meaning within a specified representational or organizational system. 13 Prior to the electronics age, data came from surveys, health records, bank and financial records, business records (e.g., sales and customer information), census and tax records, observational data (e.g., cars going through an intersection), and experimental data (e.g., scientific experiments). However, computers brought with them a massive new source of data collection and storage. This data is currently stored in zettabytes (one sextillion bytes), where one byte is a group of eight binary digits such as zeroes and ones. Electronically stored data is collected from countless sources, ranging from phone calls, emails, and Twitter feeds, to internet clicks and routine pings on a cellular phone for a person's location. Based on common technology (including one's cell phone and Apple Pay or bank card usage), the entirety of a person's daily physical actions can easily be tracked.
For example, this readily available data can reveal a great deal about a user. According to Wired, the breadth and depth of personal data collection goes well beyond what most consumers see when an ad follows them while browsing online, even beyond presumptive tracking of a cell phone user's location, to include where on a website a computer user hovers their cursor and the unique way a user taps a smartphone keyboard. 14 For example, the popular genomics company 23andMe collects DNA samples and provides customers with personalized DNA information linking their genetics and ancestry. However, most customers do not know that their DNA information may be sold to pharmaceutical or other companies eager for verified and uniquely identifiable personal DNA data for marketing and commercial purposes. 15 23andMe's website reports over 10 million DNA test kits sold to date. 16 As Professor Englezos's paper in this volume indicates, the pervasive collection of data on individuals in an attempt to create a digital "copy" of an individual can never be completely accurate. Nor is it just individual companies collecting a user's data for internal purposes; governments are also collecting data on people using legal and sometimes illegal means. For example, in 2016, the United States (US) National Security Agency alone obtained 534 million records of phone calls and text messages from two telecommunications companies, AT&T and Verizon. 17 Unknown numbers of records were obtained from other telecommunications companies and other sources. The collected data is also combined, recombined, packaged, sold, and redistributed. Data brokers around the world collect all types of data, including public records reflecting driver's licenses and addresses; data on marriages, births, and deaths; DNA information; internet browser history, social media usage and posts; credit card purchases; and online surfing and shopping behavior: virtually any data point that can be obtained and stored, all to be resold to commercial companies or others for a fee. 18 American companies alone spent approximately USD$19 billion in 2018 on acquiring and analyzing consumer data. 19 Indeed, one of the biggest economic growth opportunities in developed and developing countries is building data farms to store the exponentially growing collected data, including giant data farms in India, Norway, the US, and four of the world's largest data farms in China. 20 Collected data is stored, interrogated, monetized, hacked, and otherwise handled and mishandled around the world at an increasingly rapid pace thanks to improvements in technology and computer processing speeds. The infamous WikiLeaks scandals, outing names of spies and confidential diplomatic communiqués and war activities, highlight some of the dangers of misuse of collected data. Founder Julian Assange was recently arrested after leaving his safe haven in Ecuador's embassy in London. 21 However, data collection can appear in more benign-seeming forms. A recent ad for J.Crew, a once-popular preppy clothing company, touts the collection of personal data for an "enhanced customer experience," with an elaborate data-collecting loyalty rewards program. 22

10 Ahsan, "Data, Information, Knowledge and Wisdom." 11 Spence, "Information, Knowledge and Wisdom." 12 Oxford Dictionary, "Data." 13 Zins, "Conceptual Approaches." 14 Matsakis, "WIRED Guide."
Additional dangers include revealing private phone numbers, addresses, personal photos, bank account and credit card numbers, or other personal information for theft, extortion, or other criminal acts by co-workers, IT hackers, third-party contractors, government employees, and even law enforcement. Data is not only subject to misuse and abuse, but is also often inaccurate, or unintentionally or intentionally false. Hospitals, insurance companies, banks, and other companies are constantly requesting current and accurate information from customers because the information they have is outdated. Online purchases require updated, correct mailing addresses for delivery. Grocery stores, restaurant chains, and other brick-and-mortar companies request phone numbers, addresses, and other personal information, ostensibly for a customer loyalty program but in reality as part of the company's larger customer data collection for marketing, sales, and possibly other purposes. Indeed, the data may be outdated or even intentionally inaccurate. The well-known consumer organization Consumer Reports recommends lying to protect personal data. 23 The author uses a relative's account for grocery store loyalty program discounts, leading the grocery store to collect purchasing data on, and offer coupons to, the wrong person. Further, people routinely lie during data collection. They lie on surveys such as health or business surveys, regularly lie to pollsters and telemarketers, and even lie in restaurant reviews. 24 In fact, for two weeks in 2017, a nonexistent restaurant was the most sought-after dinner reservation in London, all based on fake posts and reviews. 25 Consumers' increasing distrust of companies or institutions may be both a cause and a result of inaccurate data. 26 One may hypothesize that an increased distrust in companies and institutions might lead people to more frequently provide intentionally false data, but there is no known study regarding consumer trust and an increased or decreased willingness to provide accurate data. Inaccurate data goes far beyond simple consumer purchasing. A recent US News article highlighted the data inaccuracies in school shootings, desegregation mandates, and civil rights information just for US primary and secondary schools, including massive overreporting of school shootings. 27 The inaccuracies were caused by a combination of human errors in inputs and possibly intentionally falsified data to obtain more federal government funds. 28 Thus, zettabytes of data are being collected: data from nearly every person, company, employer, and government on the planet. This data may be outdated, unintentionally or intentionally inaccurate, or accurate but misused or abused. As the early computer-age adage says, garbage in means garbage out. If the data inputs are stale or faulty or false, any information outputs from that bad data may also be faulty or false. There is no known study on whether the massive amounts of data, zettabytes of data, being collected in the computer age are more or less accurate than data collected by pre-computer methods. 15 Matsakis, "WIRED Guide." 16 See https://www.23andme.com. 17 Savage, "N.S.A." 18 Matsakis, "WIRED Guide." 19 Matsakis, "WIRED Guide." 20 Menear, "Top 10." 21 BBC News, "Wikileaks." 22 Pearson, "J. Crew Rewards." 23 St. John, "Facebook Data Breach." 24 BBC News, "Small Data"; Brenner, "Lies." 25 Morris, "Fake Restaurant." 26 Edelman, "2020 Edelman Trust Barometer."
Thus, data does not necessarily directly lead to information, and it is impossible to conclude that the zettabytes of data being collected and stored to create "information" today are actually creating more or better information. Information Information is commonly defined as the "[f]acts provided or learned about something or someone." 29 Previously, facts were provided orally and in primary and secondary written and recorded sources, such as the dates and descriptions of historical events in letters or documents (the Magna Carta and the Federalist Papers); births, marriages, and deaths recorded in government records or with religious institutions; and notes or publications of observations or scientific experiment results. 30 These sources were not always accurate. In fact, the victor in a war was arguably entitled to write the history of the war. 31 Additionally, the "history" of events passed down was not always accurate by today's standards, but strict accuracy was not always the primary goal. Often, the aim was increasing the power or legend of a leader or increasing social cohesion. 32 Indeed, oral storytelling was practiced in ancient times without strict adherence to accuracy, in favor of goals such as moral teaching or cultural and historical understanding. 33 In the industrial age, with newspapers, photographs, and communications devices, the standards and expectations for accuracy and "truth" in information increased. However, newspapers were famously a source of false information as well, often for the purpose of increasing sales. "Yellow journalism" or the "yellow press" was a term given to American journalism in the 1890s, when sensationalism and eye-catching headlines were used to boost newspaper sales, all based on stories that were exaggerated if not outright false. 34 As Samuel's article points out, today's internet-era "fake news" is perhaps just an updated version of the "yellow press" from over a century ago. In the computer age, information can be defined as the meaning given by humans to collected data or selected subsets of data, typically accompanied by a presumption of truth or fact. Thus, interrogated data can become information, but whether the information is useful or valuable depends entirely on the manner of interrogation and the accuracy of the underlying data. Accurate data certainly can lead to good and useful information, from a challenging human versus computer chess match to Kaiser Permanente's identifying potential causes of diseases with life-saving results. Britain's Medical Research Council has conducted a decades-long series of health and social studies using medical records, tests, and survey results that has undoubtedly saved countless lives. 35 As noted, information is only as good as the source or the data being used to gain information. If the collected data is faulty or false, intentionally or unintentionally, any interrogation of that data will produce incorrect results. A database of US school shootings that contains incorrect data inputs due to human error, such as lack of reporting or double-reporting, or incorrect dates and locations, may lead to incorrect information unless the database errors are identified and corrected. Additionally, the means and methods of interrogating data are important; in other words, asking the right questions in the right way is what gains valid and useful information from the data. The fields of statistics and now data science are dedicated to these tasks. 36
Data science is considered to incorporate statistical methods and to include data acquisition, data storage and access, data exploration and analysis, modeling and model validation, and results reporting and usage. 37 As with any field of science, obtaining valid and useful information using statistical methods and data science is a trial and error process. A famous 1936 Literary Digest "scientific" poll for the US presidential election between Franklin D. Roosevelt and Alf Landon incorrectly predicted Landon to win with 57% of the vote, but Roosevelt won with 62% of the popular vote. 38 This statistical blunder was likely due in part to whom the magazine chose to mail the poll (selection bias), and to the fact that there was only a 25% response rate, with possibly a limited demographic choosing to respond (response bias); a brief simulation sketch of these biases appears below. This is but one example of a situation in which the data (the returned poll responses) was likely accurate (i.e., the responders probably answered honestly), but the collected data did not produce valid and useful information. Misinterpretation of information is also common. For example, in human health matters one often reads of "causation" between an input and an outcome when no such causal connection can be shown. Take a human psychology study connecting television watching with body dissatisfaction among teenage girls, which was reported as a potential causal connection with eating disorders. 39 As it turns out, diet drinks are not killing you, broccoli still is good for you, eating French fries does not double one's risk of death (which is at 100% anyway), and eating ice cream does not cause drowning. 40 As well, information can easily be misinformation: inaccurate information intended to deceive. Urban legends, such as stories of alligators living in sewers or of "dangerous cosmic rays passing by Earth" requiring people to keep "electronic devices away from you," are so commonplace that whole websites such as Snopes are dedicated to investigating these stories and reporting on their falsity or, rarely, their truth. 41 "Fake news" is a term popularized by US President Trump to ridicule negative stories about him, regardless of their truth or falsity. 42 However, "fake news" is more commonly defined as intentionally false or grossly exaggerated stories, which, as Samuel suggests, is essentially today's "yellow journalism." 43 Professor Chris Dent's research in this volume on the internet age and sources of knowledge or "truth" is readily applicable here. Truthful information also can be incorrectly mistrusted to the point of being outright ignored. Case in point: a large group of people deny that the US lunar landing actually occurred, claiming it was faked, despite filming of the event, humans going to the moon and returning with rocks and stories, and the space capsule itself being on display. 44 As the former chief historian of NASA said of the lunar landing deniers, "[t]he reality is, the internet has made it possible for people to say whatever the hell they like to a broader number of people than ever before." 45 This is not to mention the outrage created by Holocaust deniers. Even a "flat Earth" society has emerged, claiming that the Earth is flat rather than round, to the point of holding international conventions and planning a trip to the "edge of the Earth" in Antarctica. 46 Some of these groups are of course just seeking attention through outlandish statements.
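To make the Literary Digest mechanism concrete, here is a minimal simulation sketch. All of the reach and response probabilities below are invented assumptions, not historical estimates; they are chosen only to show how selection and nonresponse bias can invert an honestly answered poll.

```python
# Toy simulation: honest respondents, biased sample (assumed probabilities).
import random

random.seed(1936)

TRUE_SUPPORT = 0.62   # assumed true share for candidate A (Roosevelt)
N_MAILED = 100_000

ballots = []
for _ in range(N_MAILED):
    prefers_a = random.random() < TRUE_SUPPORT
    # Selection bias: suppose the mailing list reaches A-supporters less often.
    reached = random.random() < (0.5 if prefers_a else 0.9)
    # Nonresponse bias: suppose B-supporters mail their ballots back more eagerly.
    responds = random.random() < (0.20 if prefers_a else 0.35)
    if reached and responds:
        ballots.append(prefers_a)

poll_share = sum(ballots) / len(ballots)
print(f"true support for A: {TRUE_SUPPORT:.0%}")
print(f"poll estimate for A: {poll_share:.0%} from {len(ballots)} responses")
# The poll badly understates A's support even though every respondent answered
# honestly: accurate data, invalid information.
```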
However, this mistrust of truthful information may be a natural reaction to an age of disinformation: people perhaps know that they should be skeptical of much of the information surrounding them and overreact by distrusting even demonstrably truthful information. Information, then, can exist but be inaccurate, misinterpreted, misused, outright false, intentionally false to deceive people, or completely ignored despite its obvious truth. Taking this as a given, information does not necessarily lead to knowledge, and more information does not necessarily increase knowledge. In fact, more information in the internet era may mean more disinformation and actually reduce knowledge. Therefore, information could become knowledge, but not necessarily so, and not necessarily useful knowledge. Instead, there are qualifiers: information must be both accurate and combined with experience to form the basis for knowledge. Knowledge Knowledge is sometimes generally described as learned facts and theories. Plato defined knowledge as "justified true belief," although this definition leaves much room for interpretation. Knowledge also is broken into subcategories: there is (1) empirical knowledge, based on experience and use of the senses; and (2) rational knowledge, based on reason, collected information combined with intuition and deduction, and/or innate knowledge. Whole university degree programs are dedicated to the study of knowledge, or epistemology, such that a truly complete discussion is beyond the scope of this paper, but a working definition of knowledge is necessary to proceed. 38 Lusinchi, "Landon." 39 Goldin, "Causation vs Correlation." 40 Mayerowitz-Katz, "Health More Complicated Than Correlations." 41 Kasprak, "Dangerous Cosmic Rays." 42 Ellis, "Fake News," 398. 43 Graham, "Some Real News." 44 Godwin, "One Giant Lie?" 45 Godwin, "One Giant Lie?" 46 Dobson, "Flat Earth Supporters." Knowledge is commonly defined as the "[f]acts, information, and skills acquired … through experience or education." 47 In other words, combining rationalism and empiricism, knowledge can be defined as human understanding of some concept or thing based on synthesizing accumulated information and experience. Several key points should be made regarding this definition. First, knowledge is a collective concept, meaning it is based on general human understanding, and not just one person's view or belief. Second, knowledge is a synthesis of both gathered information and life experience. It is the combination of these two pathways, gathering information and experiencing life, that creates knowledge. Third, gathered information that is not confirmed by life experience may not constitute knowledge. This is especially true if the gathered information is contradicted by life experience. Fourth, knowledge is topic- or subject-specific. In other words, one can acquire knowledge on a topic through gathering information and life experience, but knowledge on one topic does not by itself lead to knowledge on other topics; instead, the process of gaining knowledge must be repeated for each discrete topic or subject. If a person is shown how to use a hammer by another knowledgeable person and then uses a hammer and materials to build a wall, this person may be knowledgeable about using a hammer to build a wall, but it does not mean the same person is knowledgeable about making a hammer, engineering a wall, or homebuilding. Knowledge is not an inherent human characteristic, and not necessarily even widespread or widely dispersed among humans.
Proverbs 20:15, attributed to King Solomon (Jedidiah), says "gold there is, and rubies in abundance, but lips that speak knowledge are a rare jewel." Plato's The Apology attributes to Socrates an investigation of politicians, poets, and craftsmen who claim to be knowledgeable but who actually know far less than what they claim. 48 Bruno Latour's meandering thought process from Laboratory Life to Down to Earth, considering scientists' role in the climate change science debate, is perhaps a modern extension of Socrates' investigation of knowledgeable people in ancient times. 49 As famously inscribed on New York City's Metropolitan Museum of Art, "knowledge is power." Knowledge is said to separate humans from other animal species. More importantly, the acquisition and use of knowledge allows humans to succeed in work and life. As described in Maslow's famous conceptualization of human motivation, the "hierarchy of needs" (see Figure 2), the basic physiological survival needs of food, shelter, sleep, water, air, and clothing can be met with or without knowledge, although it is likely easier to have these basic needs met with some knowledge. 50 That said, needs beyond basic physiological needs require some knowledge to attain and retain. Figure 2. Maslow's hierarchy of needs (source: Wikimedia Commons). Certainly, safety needs require some knowledge. Economic safety needs can be met through work opportunities or the ability to access other income resources, either of which requires some knowledge. Personal and commercial success at work also is increased with increased knowledge of the work being performed; that is, a construction worker is more efficient and effective and can earn more money if they are more knowledgeable about the work. Physical safety needs, including acquiring freedom from violence and natural disasters, are aided by knowledge to avoid, flee, and recover from such safety threats. Health safety needs, including securing physical and mental health for one's self, eating healthily and getting exercise, avoiding unhealthy habits, and avoiding or securing treatment for mental health problems, all require knowledge. Beyond basic physiological and safety needs, obtaining love and belonging, esteem and confidence and respect, and reaching the pinnacle of self-actualization all require knowledge. Thus, knowledge even without wisdom is both valuable and necessary for most human activity. The more knowledge a person has, the more likely they are to have more human needs met and to experience further economic and social success in their life. However, knowledge does not necessarily improve humanity or society as a whole. Knowledge and wisdom are closely related, but wisdom involves both a larger volume and longevity of collected knowledge and an application of thoughtful intelligence, combining knowledge and intelligent thought to extrapolate to and guide future events and conduct. Wisdom alone has the capacity to improve not only one's own life but also society and humanity. Wisdom The sources and existence of wisdom are a centuries-old question generating countless philosophical books and essays, a summary of which is beyond the scope of this paper. Aristotle, in Nicomachean Ethics Book VI, distinguished between theoretical and practical (life) wisdom, and considered theoretical wisdom to require knowledge of certain scientific principles and propositions. 51 47 Oxford Dictionary, "Knowledge." 48 Plato, "Apology." 49 Latour, Laboratory Life; Latour, Down to Earth. 50 Maslow, Motivation and Personality.
However, knowledge, even extensive scientific knowledge, does not necessarily beget wisdom. First, a working definition of wisdom is necessary. Wisdom is commonly defined as "[t]he quality of having experience, knowledge, and good judgment." 52 To this definition one might add the adjective "extensive," because limited amounts of these are insufficient to create wisdom. More expansively today, wisdom can be defined as the application of collected knowledge to generate an understanding of humanity and human society and its environs to guide one's actions and improve one's life. One might also include an aspect of wisdom that involves more broadly improving upon human society and environs (which likely also improves one's own life). Regardless of the broader societal benefits, wisdom requires but is more than mere collected knowledge. Importantly, wisdom does not mean totality or perfection in knowledge or thought. Indeed, Plato's discussion of Socrates' wisdom in The Apology indicates that perhaps a wise person knows when he or she does not know something and accepts being incorrect. As with knowledge, several observations about the definition of wisdom can be made. First, wisdom is defined as a human quality or trait, person-specific, and not universally or even widely held. Wisdom as a human quality can be acquired, but also lost. Second, wisdom is created through information, experience, and knowledge plus the human input of analysis and extrapolation. Intelligent use of the information, experience, and knowledge is vital to wisdom. Third, wisdom as a quality must be observed and recognized by others; one person cannot simply declare himself or herself wise. Fourth, wisdom is a personally held human trait but is used for improving at least one's own life, if not also improving human society and environs. Finally, wisdom involves guiding one's future actions and is, thus, predictive and forward-looking. E. O. Wilson aptly summarized today's challenge: "we are drowning in information while starving for wisdom." 53 According to I Kings in the Bible, King Solomon (Jedidiah) was given wisdom at an early age by God. Today, wisdom is commonly accepted as an attribute of Socrates, Maimonides, Galileo, da Vinci, Einstein, Gandhi, and others, regardless of divine intervention. Wisdom is often attributed to famous mathematicians and scientists: Aryabhatta, Galileo Galilei, Sir Isaac Newton, Thomas Edison, Albert Einstein, and others who excelled in the realms of math, science, and astronomy were all geniuses, and may also have been wise. Thus, we now consider wisdom not necessarily as divinely granted but rather (or also) as acquired through decades of life experience combined with intelligence and use of the available data, information, and knowledge. Regardless of the source, it is widely accepted that wisdom is a rare thing. Confusingly, the term "wisdom" is often used in a manner that is inaccurate or inappropriate. For example, in one article discussing the issue of women and work-life balance, the binary choice of "wisdom or whining" is presented, when neither wisdom nor whining is an accurate depiction of the situation. 54 Similarly, in the field of social work, discussion of using knowledge effectively is referred to as wisdom, when again this is an inaccurate use of the term. 55 Another article, in the field of architecture, refers to the "wisdom" of using wind-catcher technology for domestic structure cooling, when the correct term would be knowledge. 56
Wise is not to be confused with knowledgeable, smart, important, "successful," famous or infamous, philosophical, a "historical figure," or even the modern "social media influencer." Kim Kardashian is famous, but possibly not wise. Wisdom is much more complex, uncommon, and not entirely achievable based on mere personal effort. Even if an agreed-upon definition of wisdom is reached, an agreed-upon comprehensive list of wise persons is impossible. Two shortcomings of wisdom are that it is a rare human quality and that it is not always fully recognized or utilized by society. Leonardo da Vinci and Galileo Galilei are now widely considered to have been wise, but both were also persecuted by the Catholic Church. Wisdom is also a distinct concept from religious or spiritual enlightenment. Buddha, Moses, Confucius, Jesus, and Mohammed are all generally recognized in their respective religious faiths as having been enlightened; in other words, divinely touched at birth or later in life with a rare clarity of understanding and acceptance of the human condition. As noted, the purpose and use of wisdom is applying one's experience, intellect, information, and knowledge in combination for improvement. Wisdom is the only part of the DIKW framework that involves an improvement of one's life, or possibly a broader improvement of human life and environs. However, wisdom is not merely built on an accumulation of data and information; thus, its place atop a vertical-linear pyramid is certainly inaccurate. A Vertical DIKW Pyramid is Inaccurate Given the foregoing, a vertical-linear DIKW pyramid does not make sense. A direct line cannot be drawn from data to wisdom. Indeed, a direct line cannot even be drawn from data to information. Data as a category is certainly large and growing, but it does not necessarily form the "base" for information, let alone knowledge or wisdom. As discussed, data can be inaccurate or false. Thus, the collection of large and ever-growing amounts of data also necessarily includes the collection of large and ever-growing amounts of inaccurate or false data. To date, there is no known study of whether the ratio of inaccurate or false data has remained constant with the overall growth in data collection. It may be that the ratio has decreased or remained the same, but the ratio of inaccurate and false data may also have increased, perhaps as a human response or backlash to the collection of data itself. Therefore, the mere collection of data is not in itself useful. What is actually valuable and needed, if data is to be mass collected, is the better collection of accurate and truthful data. The foregoing of course assumes that a goal, if not the intellectual goal, is gaining knowledge and wisdom. If mere data collection is the goal, as it is for companies earning profits based on the creation and maintenance of large data farms, then data collection for those companies should and must proceed. Information, if accurate, is a more correctly identified input toward knowledge. Yet inaccurate, misunderstood, misused, or false information is a real problem, and detracts from rather than increases knowledge. Knowledge is acquired information, skills, and education in a particular field or area, but is limited to that field or area. Knowledge requires information but also requires skills or education. 51 Aristotle, Nicomachean Ethics, Book VI. 52 Oxford Dictionary, "Wisdom." 53 Wilson, Consilience, 294. 54 Greenblatt, "Work/Life Balance." 55 Klein, "Practice Wisdom." 56 Suleiman, "Direct Comfort Ventilation."
Wisdom is intelligence applied with accumulated knowledge for the benefit of humanity and human environs; it requires knowledge along with the application of intelligence. With the above-discussed loose relationships between data, information, knowledge, and wisdom, the real relationship between them can best be pictorially represented as in Figure 3. Figure 3. Reconfigured DIKW diagram. Based on the foregoing, this proposed nonlinear diagram involving data, information, knowledge, and wisdom better represents the relationship between data and wisdom. In coming years, the data box likely will grow exponentially, but it remains to be seen whether any of the other boxes will increase in size. Perhaps more importantly, this more accurate diagram of the relationship of data, information, knowledge, and wisdom could better guide educators and others toward increasing knowledge and wisdom, with the personal and societal benefits that entails, rather than simply increasing data collection and information.
Dielectric Loss and Electrical Conductivity Behaviors of Epoxy Composites Containing Semiconducting ZnO Varistor Particles Polymer nanodielectrics render a great material platform for exhibiting the intrinsic nature of incorporated particles, particularly semiconducting types, and their interfaces with the polymer matrix. Incorporating oxide fillers at higher loading percentages (>40 vol%) encounters particular challenges in terms of dispersion, homogeneous distribution, and porosity from the process. This work investigated the dielectric loss and electrical conduction behaviors of composites containing semiconducting ZnO varistor particles of various concentrations using the epoxy impregnation method. The ZnO varistor particles increased the dielectric permittivity, loss, and electrical conductivity of the epoxy composites across three different loading regimes (0-50 vol%, 50-70 vol%, 70-100 vol%), particularly under an electric bias field or at higher temperatures. For lower loading fractions below 50 vol%, the dielectric responses are dominated by the insulating epoxy matrix. When loading fractions are between 50 and 70 vol%, the dielectric and electric responses are mostly associated with the semiconducting interfaces of ZnO varistor particles and ZnO-epoxy. Above 70 vol%, the apparent increase in the dielectric loss and conductivity is primarily associated with the conducting ZnO cores forming interconnected channels of electric conduction. The foam-agent-assisted ZnO varistor particle framework appears to be a better way of fabricating composites with filler loadings above 80 vol%. A physical model using an equivalent capacitor, diode, and resistor in the epoxy composites was proposed to explain the different property behaviors. Introduction A nanodielectric polymer is a composite with a polymer serving as the matrix and inorganic particles as the fillers. It has been intensively explored for applications in energy storage capacitors, electrical insulation, and voltage surge suppression, such as in metal oxide varistors. Designing polymer composites containing a higher volume percentage of fillers can expand the metal oxide varistor product line and improve the nonlinear electrical behavior of composite materials for electrical system protection. Since 2000, it has been extensively investigated in the dielectric arena because of its potential for better dielectric materials with higher dielectric permittivity, lower dielectric loss, higher dielectric strength, and miniaturization. It is designed to synergize the advantages of both inorganic fillers, such as high dielectric permittivity or thermal conductivity, and organic matrices, such as high dielectric strength or flexibility, for electrical insulation or film capacitor purposes. By changing the types, aspect ratios, morphologies, or volume percentage of fillers, one may probe the filler's contribution, the matrix-filler interface, and the filler distribution. Amongst all, the effect of filler loading percentage has been widely researched at high volume fractions (>10 vol%) of high-permittivity fillers (K > 1000) to raise the dielectric permittivity of a polymer matrix [1][2][3]. On the opposite side, the effects of very low filler volume loadings (<10 vol%) in increasing the composite's energy density are also explored [4][5][6]. However, this is not easy when the loading fraction reaches the percolation level or exceeds 30 vol% of fillers because of mixing difficulty, porosity elimination, and filler interconnection.
Publications in which the filler volume loading exceeds 50 vol% are rare. Alternative methods, such as the impregnation of a polymer solution into a particle network, prove effective in fulfilling the composite fabrication and properties [6]. In addition, the dielectric permittivity and electrical resistivity closely depend on the electrical nature of the fillers. Semiconducting and conducting fillers generally cause a significant increase in dielectric permittivity, loss, and electrical conductivity (reduced resistivity), as shown in Figure 1; a classical mixing-rule sketch of this permittivity rise follows below. Unlike insulating and conducting fillers, semiconducting fillers cause not only certain leakage currents but also form semiconducting interfaces subject to the electrical tunneling-like behavior of diodes [7][8][9]. This requires well-designed composite materials, in-depth and comprehensive models, new characterization tools, and measuring methods [10,11]. A varistor is a material structure that is sensitive to the applied electric field and exhibits a nonlinear increase in electrical conduction with an increasing electric field. The nonlinear relationship between the voltage and electrical current is a crucial phenomenon. It has been widely utilized in metal oxide varistors to protect against high voltage or current transients in modern electrical power systems and electronics [12,13]. Compared with a metal oxide varistor (MOV), polymer-based nanocomposites provide another design dimension and advantages in terms of high operating voltage, smaller size, higher flexibility, and equivalent nonlinearity [11][12][13]. Designing polymer composites containing a higher volume percentage of fillers can expand the metal oxide varistor product line and readily change the nonlinear behavior for desired over-voltage protection needs. This composite type requires a higher filler fraction, close to the percolation level, to trigger the nonlinear responses.
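As a point of reference for how permittivity rises with filler loading, here is a minimal sketch of a classical mixing rule; this is not the model used in this work, and it strictly applies only to isolated spherical inclusions (the dilute, well-separated filler picture). The permittivity values are illustrative assumptions, loosely echoing the epoxy-only and ceramic-only endpoints reported later for this system.

```python
# Maxwell-Garnett effective-medium approximation (assumed inputs).
def maxwell_garnett(eps_matrix: float, eps_filler: float, phi: float) -> float:
    """Effective permittivity for volume fraction phi of spherical fillers."""
    num = eps_filler + 2 * eps_matrix + 2 * phi * (eps_filler - eps_matrix)
    den = eps_filler + 2 * eps_matrix - phi * (eps_filler - eps_matrix)
    return eps_matrix * num / den

for phi in (0.2, 0.5, 0.7):
    print(f"phi = {phi:.1f} -> eps_eff ~ {maxwell_garnett(9.0, 100.0, phi):.1f}")
```

The rule predicts a smooth monotonic rise; the interface- and percolation-driven jumps discussed later are exactly what such dilute approximations miss.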
Furthermore, polymer-based variable resistor behavior is one of the interesting categories to examine, but its investigation at high filler concentrations has been limited because of the difficulty of composite processing. Despite a great deal of work addressing insulating fillers for dielectric capacitor applications and highly conducting fillers for thermistor applications, there is limited knowledge about the detailed dielectric and electrical responses of composites containing semiconducting fillers such as the ZnO particles in the middle part of Figure 1 [13][14][15][16][17][18][19][20]. Although some composites have been filled with plain ZnO particles, no publications so far have used ZnO varistor particles (a ZnO core with an additive oxide shell) as the fillers. Among many polymer and filler choices, insulating epoxy and semiconducting ZnO are excellent model materials because of their wide electrical power system applications [15][16][17]. Both the conducting phase (ZnO single crystal or particle) and the semiconducting phase of ZnO-ZnO interfaces and ZnO varistor-epoxy interfaces would exist. Yet how the two phases behave under electric fields at various temperatures, and their underlying mechanisms, are rarely known. In this work, we designed sub-micron ZnO varistor particles by calcining nano-sized ZnO particles with other additive oxides desirable for varistor behavior. In addition, how semiconducting fillers such as ZnO interact with the host polymer is rarely understood. We developed a method of composite fabrication that made it possible to understand the nonlinear electrical behavior. The dielectric properties of epoxy-ZnO varistor composites with different higher vol% of ZnO varistor particles exhibited nonlinear characteristics under biased electric fields or at higher temperatures. One may determine the percolation threshold by measuring the dielectric and electrical responses of the nanocomposites fabricated using the disclosed method. Synthesis and Sintering of ZnO Varistor Particles All precursor powders are nanometer-scale oxides (10 to 100 nm) purchased from Aladdin, China. The 30 nm ZnO powders are used as the core particles, mixed with 5.24 wt% of Bi₂O₃, 4.90 wt% of Sb₂O₃, and other additives, which form the satellite particles around ZnO to realize the nonlinear varistor behaviors [3,9]. Detailed compositions are given in Table 1.
The amounts of MnO₂, Co₃O₄, Cr₂O₃, NiO, and SiO₂ are adjustable to improve the grain growth and nonlinearity of the sintered ceramics. The powder was then milled, dried, and de-agglomerated through a 120 mesh sieve. Calcination was then conducted at 750 °C for 2 h in air, followed by a second round of ball milling, drying, and de-agglomeration. Dense ceramic samples were produced by sintering the varistor-particle-based compacts at 1150 °C for 3 h; these were pasted with silver as reference varistor samples. Processing of Epoxy-Based Nanocomposites To produce highly loaded nanoparticles in a polymer matrix, we used two methods to process nanocomposites: conventional mixing and the epoxy impregnation of ceramic foams (Figure 2). Before nanocomposite fabrication, the epoxy EPON 828 was degassed at 70 °C for 12 h in a vacuum oven to remove trapped moisture. After that, the methyl-tetrahydrophthalic anhydride curing agent and 2-ethyl-4-methylimidazole (2E4MZ) catalytic agent were poured into the EPON 828 and stirred vigorously at 70 °C for 0.5 h. The weight ratio used in the present study for EPON 828, the curing agent, and the catalyst was 40:32:0.4. Then, the calcined ZnO varistor powder was ball milled with the EPON 828 mixture to achieve a uniform distribution. Composites with different volume loadings (20, 30, 45, 50 vol%) are produced by adjusting the weight ratio between the calcined powder and the EPON 828 mixture; a sketch of this volume-to-weight conversion follows below. They are then cured in 180 °C ovens for 2 h. Finally, the partially finished bulk samples were polished into 1 mm thick discs. The composite discs were then sputter-coated with gold, followed by silver paste coating for subsequent tests. On the other hand, porous ceramic matrices are fabricated as the epoxy/nanoparticle composite skeleton by mixing the calcined varistor powders with the foaming agent PMMA (polymethyl methacrylate). Two different sizes of PMMA particles, PL-20 and PL-100, enable pore diameters of 20 μm and 100 μm, respectively. These calcined powders, PL-20/PL-100 particles, and a 5 wt% binder agent (PVA) were thoroughly mixed by ball milling using deionized (DI) water, followed by drying, grinding, and slight compacting. The compacted discs were then heated to 750 °C to completely remove the PVA binder, moisture, and PMMA foaming agent particles. By adjusting the weight ratio between the calcined varistor powder and the PMMA particles, foam ceramics with 45, 50, 60, 70, and 80 vol% ZnO varistor particles can be fabricated. The prepared ceramic foams were then immersed in the EPON 828 mixture and impregnated with epoxy in a vacuum oven at 100 °C for 12 h. Then, the oven temperature was raised to 180 °C for 2 h to ensure the complete reaction. The two-step process enables degasification, epoxy impregnation, and curing simultaneously for dense composites. Lastly, the partially finished samples are polished into 1 mm thick discs and electroded with gold and silver paste for subsequent dielectric tests. Characterizations of Ceramics and Composites An X-ray diffractometer was used to examine the calcined varistor powders. In addition, the microstructures of the powders and the foam structure were imaged by a scanning electron microscope (SEM, Gemini SEM 450, Zeiss, Montabaur, Germany).
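Returning to the loading fractions above: a minimal sketch of the volume-to-weight conversion implied by "adjusting the weight ratio." The densities below are rough assumptions (neither value is given in the paper).

```python
# Target filler volume fraction -> filler:epoxy weight ratio (assumed densities).
RHO_FILLER = 5.6   # g/cm^3, approximate ZnO-based varistor powder (assumption)
RHO_EPOXY = 1.2    # g/cm^3, approximate cured epoxy mixture (assumption)

def weight_ratio(vol_frac_filler: float) -> float:
    """Filler-to-epoxy weight ratio for a target filler volume fraction."""
    m_filler = vol_frac_filler * RHO_FILLER
    m_epoxy = (1.0 - vol_frac_filler) * RHO_EPOXY
    return m_filler / m_epoxy

for vol_pct in (20, 30, 45, 50):
    r = weight_ratio(vol_pct / 100)
    print(f"{vol_pct} vol% filler -> weight ratio filler:epoxy = {r:.2f}:1")
```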
The dielectric constant (permittivity), the dielectric loss (a unitless intrinsic property), and the electrical conductivity of the samples were measured and automatically calculated by a broadband dielectric spectrometer (Concept 41, Novocontrol Technologies GmbH & Co. KG, Montabaur, Germany) over a frequency range of 1 Hz to 10⁴ Hz. Moreover, a high-voltage generator unit (HVB4000, Novocontrol Technologies GmbH & Co., Montabaur, Germany) and a temperature control system (Novocool, Novocontrol Technologies GmbH & Co., Montabaur, Germany) were used to measure the dielectric responses under different DC bias voltages and at different temperatures, respectively. This work's electrical conductivity and dielectric properties are sensitive to electric fields and temperatures because of the semiconducting fillers. They are measured under six bias fields (1, 100, 500, 1000, 1500, and 1900 V/mm) and at seven temperatures (25, 50, 75, 100, 125, 150, and 175 °C).
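Since the spectrometer reports conductivity computed from the measured complex permittivity, the standard relation sigma_AC = omega * eps0 * eps'' = 2*pi*f * eps0 * eps' * tan(delta) is worth making explicit. The example numbers below are assumptions for illustration, not the paper's data.

```python
# AC conductivity from permittivity and loss tangent (standard relation).
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ac_conductivity(freq_hz: float, eps_real: float, tan_delta: float) -> float:
    """AC conductivity (S/m) at freq_hz from eps' and tan(delta)."""
    eps_imag = eps_real * tan_delta
    return 2 * math.pi * freq_hz * EPS0 * eps_imag

# Illustrative composite at 1 kHz: eps' = 20, tan(delta) = 0.02 (assumed values)
print(f"{ac_conductivity(1e3, 20.0, 0.02):.2e} S/m")
```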
Synthesized ZnO Varistor Powders The first step of composite fabrication is the synthesis of well-designed ZnO varistor particles (not regular ZnO particles). The ZnO core, the additive oxides, and the reacted phases are determined from the XRD spectra (Figure 3a). It can be confirmed that the calcined ZnO-based powders comprise the Spinel phase (mainly Zn₇Sb₂O₁₂), the Pyrochlore phase (mainly Zn₂Bi₃Sb₃O₁₄), and the ZnO core phase. In addition, a small amount of the Bi₂O₃ remains unreacted with the other additives and acts as the sintering agent due to its lower melting temperature. All of this is observable in the XRD spectra even after sintering at 1150 °C (data not shown here). The morphology of the additive oxides added to the ZnO particles after calcination and ball milling was inspected using SEM. The majority of the additive nanoparticles (30-100 nm) are distributed around the surface of the grown ZnO particles (300-500 nm), as shown in Figure 3b. Point 1 in Figure 3b marks the ZnO grain (single particle, single crystal), confirmed with EDS. In contrast, Point 2 represents the complex combination of the Spinel and Pyrochlore phases, which are desirable interfaces or barriers for electrical charge to overcome while in an electric field. They will also evolve into the primary grain boundary phase in a sintered ceramic, which is required to exhibit the characteristics of a metal oxide ceramic varistor. The image demonstrates the essential feature of varistor-like particle morphology in the mixed powder after calcination at 750 °C. The larger ZnO particles act as the 'core', and Spinel and Pyrochlore particles act as the decoration on the ZnO core. The core's shell-like particles and compositions utilized in this work give rise to typical ZnO varistor properties. Nanoparticles are desirable for homogeneous composite mixing and nonlinear property enhancement. However, it is not easy to control the ZnO particle size after the calcination that converts the raw nanomaterials into ZnO varistor particles. These particles, in the size range of 200 to 500 nm, are well accepted in this investigation. Figure 3c shows the current-voltage relation of the sintered ZnO varistor. The newly developed ZnO varistor indicates a significantly higher varistor field (~630 kV/mm) than the commercial ZnO varistor (~430 kV/mm), rendering the advantage of a thinner device. These results confirm that the ZnO varistor particles developed using this calcination profile have the useful varistor feature and can be used for polymer composite fabrication.
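The varistor quality summarized by Figure 3c is conventionally quantified by the nonlinearity coefficient alpha, typically evaluated between current densities of 1 and 10 mA/cm². A minimal sketch follows; the field values used are assumptions for illustration, not read from the figure.

```python
# Varistor nonlinearity coefficient alpha = log(J2/J1) / log(E2/E1).
import math

def nonlinear_coefficient(e1: float, e2: float,
                          j1: float = 1.0, j2: float = 10.0) -> float:
    """alpha from two (field, current-density) points on the I-V curve."""
    return math.log10(j2 / j1) / math.log10(e2 / e1)

# Illustration (assumed fields): the field rises only slightly for a tenfold
# current increase, so alpha is large, the hallmark of a good varistor.
print(f"alpha = {nonlinear_coefficient(600.0, 640.0):.1f}")
```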
Microstructures of Nanodielectric Composites The microstructures of the ZnO-varistor-filled composites made with a foaming agent were examined using SEM. Figure 4a shows the random and uniform distribution of pores in the ceramic compacts before epoxy impregnation. The pore size varies from 20 µm to 100 µm. This observation means the processing of the foaming agent is under reasonable control. The variation can be attributed to the high-speed ball milling, which results in the breakup of the large PMMA foaming agent particles. The microstructure of the epoxy-impregnated foam ceramic was further examined using SEM after polishing the cured composite samples. Figure 4b shows a different morphology, in which a homogeneous distribution of ZnO varistor particles appears in the epoxy matrix (45 vol% loading). Since the grinding force makes the topographic change undetectable by the SE signals, a back-scattered electron signal was used to show the atomic weight contrast between the epoxy-filled areas and the ZnO areas in the foamed ceramic matrix. EDS inspection confirms the compositional difference between the epoxy and the ceramic matrix. Figure 4c,d show a distinct Zn peak for the composite and no Zn peak but high-intensity peaks for O, C, and H for the epoxy-filled pores. These results confirm that the epoxy was successfully impregnated into the pores of the foam ceramics. The random distribution of ZnO particles indicates the uniformity of the ZnO-based foam ceramic. In addition, BET gas absorption measurements were conducted, which confirmed insignificant porosity in the filled foam ceramic composites. Temperature and Frequency Dependence for Semiconducting Behavior Exhibition The temperature dependence of a dielectric material depends on the polarization mechanism, which can help reveal the sources of dielectric loss and conduction loss. For semiconducting fillers, a higher conductivity (lower resistance) is expected at higher temperatures. Dielectric loss increases at more elevated temperatures due to larger polarization from thermally agitated dipoles in the dielectric media.
The temperature dependence of the dielectric responses and conductivity of the epoxy composites containing various volume percentages of ZnO varistor particles was measured over a frequency range of 1-100 kHz at a weak electric field. Figure 5 shows that the dielectric permittivity at 1 kHz (a common frequency for dielectric evaluation) under a bias of 1000 V/mm increases gradually from 9 (epoxy only) to 100 (ceramic only) with increasing loading fraction of ZnO particles [20][21][22]. However, most compositions are nearly independent of temperature unless the ZnO varistor content reaches 60 vol%. Correspondingly, the dielectric loss increases with temperature above 125 °C and becomes more significant for the compositions with 60 and 70 vol% ZnO varistor particles. Their dielectric loss is two orders of magnitude higher than that of epoxy (~0.01) [23]. This vast increase can be associated with the contribution of the larger polarizations of the ZnO-ZnO interfaces and ZnO-epoxy interfaces. For compositions from 80 vol% to 100 vol% ZnO varistor (the ceramic sample), the dielectric loss is even higher than 1 (100%), a clear indication of the conductive nature of the samples. Electrical conductivity measurements reveal more direct evidence of the transition from semiconducting predominance to conducting behavior. The increase in conductivity with temperature in Figure 5c is a semiconductive characteristic; with increasing ZnO particle concentration, the conductivity rises nearly three orders of magnitude, from 10⁻¹⁰ S/m to 10⁻⁷ S/m (an Arrhenius-type estimate of such a thermally activated rise is sketched below). In contrast, the pure ceramic sample of the same varistor composition exhibits a decreased conductivity with increasing temperature, a conductive characteristic. These phenomena suggest that the critical composition needed to form conducting paths through the conducting ZnO particles inside the epoxy is approximately 70-80 vol% ZnO. Below that, the interfaces predominate the dielectric loss.
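As flagged above, one way to characterize a thermally activated (semiconducting) conductivity rise is an Arrhenius estimate, sigma(T) = sigma0 * exp(-Ea / (kB * T)). The two (temperature, conductivity) points below are invented for illustration, not taken from Figure 5c.

```python
# Apparent activation energy from two conductivity-temperature points.
import math

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(t1_k: float, s1: float, t2_k: float, s2: float) -> float:
    """Ea in eV from conductivities s1, s2 at absolute temperatures t1, t2."""
    return KB_EV * math.log(s2 / s1) / (1.0 / t1_k - 1.0 / t2_k)

# Assumed points spanning 25 C -> 175 C and a three-orders-of-magnitude rise.
ea = activation_energy(298.0, 1e-10, 448.0, 1e-7)
print(f"apparent activation energy ~ {ea:.2f} eV")
```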
The complex dielectric responses as a function of frequency were also measured to better outline the composite effect. Figure 6a shows the pronounced frequency dependence of the dielectric permittivity for the composites containing more than 50 vol% ZnO varistor particles. The decrease in dielectric permittivity becomes smaller at higher frequencies. For lower loading fractions, such as 20 vol%, the frequency dependence is unmeasurable. Similarly, the dielectric loss in the composites with lower loading fractions remains at the low level of the epoxy host. Increasing the filler concentration results in a remarkable increase and frequency dependence (Figure 6b), indicating the increased filler contribution and its semiconducting effect. Electric Bias Effect for Conducting Behavior Exhibition The complex dielectric responses of composites containing electric-field-dependent ZnO varistors are rarely understood. Because the conductivity in this material system consists of DC conduction (leakage current) in the ZnO particles and dielectric loss related to the interfaces and the fillers themselves, dielectric and conductivity measurements under a bias electric field help differentiate their contributions. The electric field dependences of both the composites and the pure ceramic are shown in Figure 7 at 1 kHz and 125 °C. The composites follow the expected general rule of increasing dielectric permittivity with the addition of high-permittivity ZnO fillers. The dielectric and electric responses fall into three compositional regimes. The first regime contains ZnO fillers below 50 vol%. These composites maintain a low conductivity, permittivity, and loss factor, responding weakly to increased bias fields. The reason is that the epoxy's insulating property dominates, and the added ZnO varistor particles stay separated in the host epoxy, giving rise to a negligible contribution and thus little dielectric loss. All these composites remain good insulators withstanding high electric fields. The second regime contains compositions between 50 vol% and 70 vol% ZnO particles, exhibiting a significant distinction from the first regime. The conductivity is ten times higher, the permittivity is five times higher, and bias dependence also occurs. This is because a sufficient amount of fillers forms specific interconnections partially contributing to the composite's properties. The increase in these dielectric and electric properties is more associated with the interfaces of ZnO-ZnO particles and ZnO-epoxy. The third regime contains ZnO particles above 70 vol% and shows an extremely high dielectric loss (above 1) under higher bias fields, which indicates a metallic-type conducting nature (Figure 7b,c).
The dielectric properties of these compositions are also ZnO-filler dominated instead of epoxy dominated, since the ZnO varistor particles are interconnected or in close proximity. At such a high loading concentration, an applied bias field can easily break down the interfacial barriers, leading to electrical conduction. Figure 7d shows the conductivity as a function of ZnO varistor particle loading, which marks the transition to conduction under a bias of 1000 V/mm. The SEM image of the 80 vol% ZnO varistor composite shown in the inset displays the interconnection of the particles at such a high filler concentration. Comparatively, the direct mixing of ZnO particles with epoxy fails to deliver any increase in dielectric loss and conductivity relative to the pure epoxy up to 50 vol% loading, which suggests the isolation of the primary ZnO particles in those composites.
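A dielectric loss above 1 as a conduction signature can be made quantitative with the textbook relation between DC conductivity and the loss it contributes, tan δ ≈ σ/(ω·ε₀·ε′). The sketch below evaluates this contribution at the 1 kHz measurement frequency; the permittivity and conductivity values are representative numbers from the ranges quoted above, not fitted data.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def loss_tangent_from_dc(sigma_s_per_m: float, eps_r: float, freq_hz: float) -> float:
    """Loss tangent contributed by DC conduction alone:
    tan(delta) = sigma / (omega * eps0 * eps_r)."""
    omega = 2.0 * math.pi * freq_hz
    return sigma_s_per_m / (omega * EPS0 * eps_r)

# Representative values: eps_r ~ 100 near full loading; conductivities
# between 1e-10 and 1e-7 S/m were reported for the composites.
for sigma in (1e-10, 1e-8, 1e-7, 1e-5):
    print(f"sigma = {sigma:.0e} S/m -> tan(delta) ~ "
          f"{loss_tangent_from_dc(sigma, 100.0, 1e3):.3g}")
```

At the reported composite conductivities the conduction term stays well below 1; pushing the loss above 1 at 1 kHz requires roughly 1e-5 S/m, which is why such losses point to a conducting network.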
Analysis of Particle Interconnection at Higher Concentrations

To provide a more pertinent analysis of the significant changes in dielectric loss and conductivity described above, the authors propose the schematic model in Figure 8. A ZnO varistor particle embedded in the epoxy is equivalent to a ZnO core resistor with additive-phase diodes on both sides of the core, in contact with the epoxy dielectric (two equivalent capacitors), provided the electric field is applied vertically. A very high bias field or temperature is required to overcome the two diode barriers (or to tunnel through them) and induce local leakage or dielectric loss. Because of the presence of the epoxy capacitors, no pronounced leakage is expected unless the ZnO varistor particles are highly loaded. It is common to obtain some ZnO varistor particles with no conformal decoration of additive oxides, as shown by the image on the right of Figure 8a; for these, the bias field easily overcomes the single diode barrier and causes higher leakage and dielectric loss. The ZnO particle interconnection pre-set by the foaming agent and binders is evident, because a compact fabricated under a high uniaxial press holds ZnO varistor particles together with higher probability than the direct mixing of epoxy and particles. The interconnected ZnO varistor framework responds more sensitively to the applied electric field, in accordance with the electric field screening principle of a dielectric material. It can be conjectured that below 50 vol% the ZnO varistor particles are randomly dispersed in the matrix, isolated from one another; no conducting paths exist, and very low dielectric loss and conductivity result. The ZnO-epoxy interaction occurs through the epoxy's molecular chains bonding to the ZnO varistor particle surface, which is generally believed to form an interface in the three-layer particle model [11]. When the filler fraction increases to 60 vol%, some of the particles come closer together, resulting in more local clusters and interfaces; their increased contribution produces higher dielectric loss and conductivity.
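A minimal numeric sketch of this capacitor-diode-resistor picture: model the two additive-phase barriers as opposed (back-to-back) ideal diodes, so that the series current is limited by the reverse-biased one, and compare the leakage with a particle whose decoration is missing on one side. All component values here are arbitrary placeholders chosen only to show the qualitative asymmetry, not parameters extracted from the paper.

```python
import math

VT = 1.381e-23 * 300.0 / 1.602e-19   # thermal voltage at 300 K, ~25.9 mV
I_SAT = 1e-12    # assumed barrier (diode) saturation current, A
R_CORE = 1e3     # assumed ZnO core resistance, ohm

def leakage_two_barriers(v: float) -> float:
    """Opposed barriers (conformally decorated particle, Fig. 8 model):
    one junction is always reverse-biased, so the current saturates near I_SAT."""
    return I_SAT * (1.0 - math.exp(-v / VT))

def leakage_one_barrier(v: float) -> float:
    """One barrier missing (no conformal decoration): a single forward
    diode in series with the core resistor. Solve
    I = I_SAT*(exp((V - I*R_CORE)/VT) - 1) for I by bisection."""
    lo, hi = 0.0, v / R_CORE
    for _ in range(200):
        i = 0.5 * (lo + hi)
        f = I_SAT * math.expm1((v - i * R_CORE) / VT) - i
        lo, hi = (i, hi) if f > 0 else (lo, i)
    return 0.5 * (lo + hi)

for v in (0.5, 2.0, 5.0):
    print(f"V = {v:3.1f} V: two barriers ~ {leakage_two_barriers(v):.1e} A, "
          f"one barrier ~ {leakage_one_barrier(v):.1e} A")
```

Even with these crude numbers the single-barrier particle leaks many orders of magnitude more, consistent with the higher loss attributed above to undecorated particles.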
When the ZnO content is increased further, to 70 vol% or higher, the proximity of the ZnO particles permits tunneling through the conduction paths among them, giving rise to a vast conductivity. This electrically lossy behavior is accompanied by the higher permittivity of the composites, as expected.

Conclusions

Polymer composites containing a high volume percentage of fillers can be designed for energy storage capacitors using high-dielectric-permittivity films, for electrical cable insulation exhibiting semiconducting behavior, and for voltage surge suppressors such as nonlinear metal oxide varistors. This work developed ZnO varistor particles by controlling the additive oxides and the calcination profile, which were then processed into epoxy composites for improved nonlinear electrical behavior. After compaction, the highly loaded composites were successfully made feasible via epoxy impregnation into the foaming-agent-assisted ZnO varistor framework. The composites with oxide fillers of >80 vol% were shown to achieve a dielectric permittivity as high as 100, and the electrical conductivity due to the conducting ZnO particles could be differentiated from the dielectric loss caused by the epoxy-ZnO varistor interfaces. Over temperatures ranging from 25 °C to 175 °C and bias electric fields ranging from 1 V/mm to 1900 V/mm, three regimes of filler loading concentration appear based on the dielectric loss and conductivity behaviors of the composites. Below 50 vol% filler, the composites behave as insulators; between 50 and 70 vol%, the higher dielectric loss is associated with the semiconducting nature of the varistor additives around the ZnO core particles; above 70 vol%, the composites are dominated by the conducting nature of the ZnO core particles.
Their interconnection or proximity forms the conduction paths contributing to the enhanced electric conductivity. A simple model of equivalent capacitors, diodes, and a resistor was used to explain the three different types of property changes. The vol% dependence of the dielectric properties and electrical conductivity of the highly loaded composites is more apparent under a bias field or at higher temperatures. Author Contributions: D.Q.T. conceptualized the study, designed the experiments, and wrote the manuscript. L.L., C.C., H.N. and X.W. collected the data, compiled it and carried out the statistical analysis, wrote up and interpreted the results, and archived the data. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: The data used to support the findings of this study are already incorporated in the results section.
8,866.2
2022-09-01T00:00:00.000
[ "Materials Science", "Engineering" ]
METHOD FOR CALCULATION OF DRILLING-AND-BLASTING OPERATIONS PARAMETERS FOR EMULSION EXPLOSIVES

Purpose. Development of a new method for calculating drilling-and-blasting operations parameters during underground mining with emulsion explosives, taking into account their energy characteristics as well as the physical and mechanical properties of rocks. Methods. An integrated methodological approach was used, including analytical transformation and improvement of the formulas for calculating drilling-and-blasting operations parameters, as well as computer modeling based on the finite element method, to establish the compression and crack-formation zones in the massif around shots, taking into account such energy characteristics of the emulsion explosive as detonation velocity, explosion heat, and explosive density. Findings. The relative force coefficient was determined for the "Ukrainit"-type emulsion explosive taking into account the extent of detonation velocity realization, which made it possible to calculate the necessary amount of explosive. On the basis of experimental data, consistent patterns of detonation velocity change with charge density and diameter, obeying a power law, were determined for the "Ukrainit"-type emulsion explosive. Improvements were made to the analytical expressions determining the sizes of the compression and fracturing zones around blast holes, taking into account the energy characteristics of the "Ukrainit"-type emulsion explosive as well as the physical and mechanical properties of the blasted rocks. This made it possible to develop a new algorithm for calculating the parameters needed to draw up the passport of drilling-and-blasting operations during underground mining. Originality. The method for calculating drilling-and-blasting operations parameters is based on the regularities of change in the energy characteristics of emulsion explosives, the extent of detonation velocity realization, and the physical and mechanical properties of rocks. Practical implications. A new method has been developed for calculating drilling-and-blasting operations parameters during mining with emulsion explosives, which minimizes the energy consumption for mass breakage.

INTRODUCTION

Drivage is one of the main production processes during the opening and preparation of a deposit for mining operations. Nowadays, up to 95% of mine workings are driven with the help of drilling-and-blasting. Therefore, special attention is paid to improving and developing new techniques for calculating the drilling-and-blasting parameters that will improve drivage indicators and increase the safety of blasting operations. The application of various types of explosives in mining has led to the development of a large number of methods for calculating drilling-and-blasting parameters. However, there are no methods accounting for the energy properties of emulsion explosives, which are safer, cheaper and more ecological than others (Kholodenko, Ustimenko, Pidkamenna, & Pavlychenko, 2014). Therefore, the substantiation of a new calculation procedure and the drawing up of passports of drilling-and-blasting operations for emulsion explosives are of important scientific and practical value.
The creation and development of emulsion explosives dates back to 1961, while the early seventies of the XX century saw the creation of emulsion explosives with a critical diameter and detonation velocity corresponding to various types of dynamite (Brown, 1998). The main advantages of emulsion explosives are their low cost, fully mechanized loading and high safety in preparation and loading (Kholodenko, Ustimenko, Pidkamenna, & Pavlychenko, 2015), the absence of dangerous individual explosive components, and the minimal content of harmful gases in the explosion products (Khomenko, Kononenko, & Myronova, 2013). Intensive operation of deposits produces a negative impact on the environment and increases pollution levels of the atmospheric air (Myronova, 2015), water and land objects. It also results in the accumulation of significant amounts of industrial waste in the mining regions of Ukraine (Kolesnik, Borysovs'ka, Pavlychenko, & Shirin, 2017). The Agreement on Association of Ukraine with the European Union stipulates the implementation of European standards and norms in the sphere of environmental protection, atmospheric air in particular. To decrease the negative impact on the environment during blasting operations, all quarries use emulsion explosives. It is known that the detonation of one kilogram of emulsion explosive is accompanied by the emission of 14 times less harmful gas than the application of trotyl-containing explosives. Ore mines of Ukraine use only 5% emulsion explosives, which is explained by the complexity of developing underground mining technologies and of designing charging machines for explosives of this type (Mironova & Borysovs'ka, 2014). The extraction of ores is connected with drilling-and-blasting operations, which in many respects define the efficiency of deposit mining. The high cost of the industrial explosives ammonite #6 ZhV, grammonite 79/21 and grammonite A, and the danger of their transportation, make it feasible to use emulsion explosives made directly on the sites of blasting operations, which ensures higher safety and a smaller amount of explosion products. In the conditions of the private joint stock company "Zaporizhzhia Iron Ore Plant" (PJSC "ZIOP"), emulsion explosives of "Ukrainit-PP" type have been used since 2008 for mining, and since the end of 2013 for ore breaking (Khomenko, Kononenko, & Myronova, 2017). The present research is based on physical and chemical analysis as well as a biological assessment (Gorova, Pavlychenko, Borysovs'ka, & Krups'ka, 2013) of the atmospheric air, which made it possible to establish the extent of the decrease in the concentration of harmful substances emitted into the atmosphere during drilling-and-blasting operations conducted with "Ukrainit-PP" emulsion explosives. Furthermore, it became possible to quantify the reduction of the technogenic impact on the atmospheric air and a decrease of the ecological danger index of up to 35% (Khomenko, Kononenko, Myronova, & Sudakov, 2018). The application of emulsion explosives in underground mining operations is difficult not only due to the lack of charging machines, but also because of the outdated method for calculating drilling-and-blasting operation parameters, which was developed for trotyl-containing explosives (Kutuzov & Andrievskiy, 2003). Thus, there is a necessity to develop and implement a new method for calculating drilling-and-blasting operation parameters that considers the energy indicators of emulsion explosives (Chernai, Sobolev, Chernai, Ilyushin, & Dlugashek, 2003).
METHOD FOR CALCULATION OF EMULSION EXPLOSIVE FORCE COEFFICIENT

The application of various types of explosives with different energy characteristics during mining necessitates the determination of a force coefficient with respect to a standard reference explosive, Ammonite #6 ZhV. However, in the known techniques the force coefficient is determined from the indicators of force or explosion heat, which brings about considerable divergences in the calculation results; for emulsion explosives these are generally understated. In this regard, it is suggested that the force coefficient for emulsion explosives be defined considering the extent of detonation velocity realization, which makes it possible to incorporate the energy characteristics of the explosives. The ideal detonation velocity, i.e. the greatest possible at the set explosive density, is determined by the formula offered by the Chinese researchers (Wang, 1994) (Eq. (1)), where Δ is the charge density of the explosive, g/cm³, and ω is the characteristic proposed by M. Berthelot (1883) for assessing explosive efficiency, namely the product of the explosion heat and the volume of explosion products (Eq. (2)): ω = Q_exp · V_EP, where Q_exp is the explosion heat, kcal/kg, obtained from the value Q_e in kJ/kg divided by the coefficient 4.19 (the mechanical equivalent of thermal energy), and V_EP is the volume of explosion products, l/kg. The extent of detonation velocity realization, i.e. the completeness of the chemical reaction, is determined by Eq. (3) as the ratio of the experimental detonation velocity D_e, m/s, to the ideal one. The explosion heat considering the extent of detonation velocity realization is given by Eq. (4), where Q is the explosion heat of 1 kg of explosive, kJ/kg. The coefficient of explosive relative force (Eq. (5)) relates Q_ET, the explosion heat of 1 kg of the etalon explosive (Ammonite #6 ZhV) taking into account the extent of detonation velocity realization, kJ/kg, to Q_E, the same quantity for the explosive used, kJ/kg.

RESULTS AND DISCUSSIONS

We determined the energy indicators for the etalon explosive Ammonite #6 ZhV and the "Ukrainit-P" and "Ukrainit-PP" emulsion explosives by formulas (1)-(5). The obtained results are presented in Table 1. The specific charge of explosives is determined by the universal formula of professor N. Pokrovsky (Pokrovsky, 1977) (Eq. (6)), where q₁ is the normal specific charge of explosives (Eq. (7)), with f the rock hardness coefficient; f₁ is a coefficient that considers the rock structure, equal to 0.8 for dense fine-grained rock, 1.1 for friable rock, 1.3 for slate and finely fractured rock, and 2.0 for viscous and porous rock; and k_cons is the rock constraining factor, equal to 1.2-1.5 for two surfaces of exposure, while for one surface of exposure it is determined by Eq. (8), where S_dr is the cross-sectional area of the working in drivage, m². The volume of blasted rock in the massif is V = S_dr · l_b (Eq. (9)), where l_b is the blast-hole depth, m. The estimated amount of explosives for a face is Q = q · V (Eq. (10)). The arrangement of blast-holes, both in cuts and in the face, depends on the size of the line of least resistance of the outside hole. Nowadays, there are many formulas allowing one to define the line of least resistance of the outside hole, which can conditionally be divided into two groups. Formulas of the first group, obtained from practical experience of blasting operations, are specified by correction coefficients. Formulas of the second group define the calculated zones of rock breaking around a blast-hole charge.
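Because the display equations (1)-(5) were lost in extraction, the sketch below only strings together the chain of quantities described above: Berthelot's characteristic ω = Q_exp·V_EP, a realization factor taken as the plain ratio D_e/D_id, the realization-corrected heat (assumed to scale linearly), and the relative force coefficient as the etalon-to-used heat ratio. The exact published functional forms may differ, and all numbers are placeholders, not the Table 1 values.

```python
def berthelot_characteristic(q_exp_kcal: float, v_ep_l: float) -> float:
    """Berthelot's characteristic (Eq. (2)): product of the explosion heat
    (kcal/kg) and the volume of explosion products (l/kg)."""
    return q_exp_kcal * v_ep_l

def realization_extent(d_exp: float, d_ideal: float) -> float:
    """Extent of detonation velocity realization (Eq. (3)); taken here as
    the plain ratio D_e/D_id -- the exact published form was not preserved."""
    return d_exp / d_ideal

def corrected_heat(q_kj: float, realization: float) -> float:
    """Explosion heat weighted by the realization extent (Eq. (4));
    a linear weighting is assumed."""
    return q_kj * realization

def force_coefficient(q_etalon_kj: float, q_used_kj: float) -> float:
    """Relative force coefficient vs. the etalon Ammonite #6 ZhV (Eq. (5)),
    taken as the ratio of realization-corrected heats."""
    return q_etalon_kj / q_used_kj

def explosive_for_face(q_specific: float, s_dr_m2: float, l_b_m: float) -> float:
    """Eqs. (9)-(10): volume V = S_dr * l_b, estimated explosive Q = q * V."""
    return q_specific * s_dr_m2 * l_b_m

# Placeholder inputs for illustration only (not the Table 1 values):
q_et = corrected_heat(4300.0, realization_extent(4000.0, 4500.0))
q_em = corrected_heat(3100.0, realization_extent(5200.0, 5600.0))
print(f"force coefficient e ~ {force_coefficient(q_et, q_em):.2f}")
print(f"explosive per face ~ {explosive_for_face(1.4, 12.8, 3.0):.1f} kg")
```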
Therefore, the authors consider the formulas most commonly used for computing passports of drilling-and-blasting operations, owing to their high accuracy and universality. V. Shekhurdin (Shekhurdin, 1985) proposed determining the line of least resistance of the outside blast-hole by a formula (Eq. (11)) that considers the amount of explosives in 1 m of the blast-hole and the specific charge of explosives, where m is the coefficient of charge convergence, 0.7-1.0 (the smaller value is acceptable for cutholes), and p is the amount of explosive placed in 1 m of the blast-hole (Eq. (12)), equal to p = (π·d²/4)·Δ, where d is the diameter of the blast-hole, m, and Δ is the charge density of the explosive, kg/m³. S. Onika and V. Stasevich, departing from their practical experience (Onika & Stasevich, 2005), suggest determining the line of least resistance of the outside blast-hole by a formula (Eq. (13)), where K_c is the constraining factor, equal to 0.6 at S_dr < 4 m², 0.7-0.8 at S_dr = 4-60 m², and 0.9 at S_dr > 60 m²; d_c is the diameter of the blasting charge, m; Δ is the charge density, t/m³; γ is the rock density, t/m³; and K_т is a coefficient of local geological conditions depending on the category of rock fracturing (Table 2). The long-term research conducted by A. Andrievskiy led to a discovery (Andrievskiy & Kutuzov, 1992) and to a technique for determining the line of least resistance of the outside hole from the zone of rock mass fracturing around the explosive charge (Andrievskij, Avdeev, Zileev, & Zileev, 2004). The suggested technique is based on reliable determination of the radii of the compression and fracturing zones in the following sequence: the radius of the compression zone (Eq. (14)), where d is the diameter of the charged blast-hole, m; ρ is the density of the explosive in a charge, kg/m³; D is the detonation velocity of the applied explosive, m/s; and σ_comp is the ultimate compression strength, Pa; and the radius of the fracturing zone (Eq. (15) or (16)), where r is the radius of the charged blast-hole, m; d_sh is the diameter of the compression zone, m; and τ_c is the ultimate shear strength, determined as τ_c ≈ (0.02…0.10)·σ_comp, Pa. The radius of the fracturing zone is accepted as the line of least resistance between outside holes, i.e. W_o = R_fr. The disadvantage of the above technique is that the detonation velocity is taken at average values, which affects the accuracy of calculating the line of least resistance of the outside hole. The work (Mertuszka, Cenian, Kramarczyk, & Pytel, 2018) presents results of research into detonation velocity change depending on the charge diameter, but without the emulsion explosive density and the loss of explosive properties over time. Therefore, to optimize the technique for emulsion explosives of "Ukrainit-PP" type, we analyzed the interrelations of detonation velocity, blast-hole (borehole) diameter and loading density. Using industrial measurement data, we plotted the graph of detonation velocity dependence on blast-hole diameter and loading density for the "Ukrainit-PP" type explosive (Fig. 1) (Gorinov, Kuprin, & Kovalenko, 2009).
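The detonation velocity is stated to follow a power law in blast-hole diameter and charge density (formula (17) in the source, not reproduced here). A minimal way to recover such a law from industrial measurements is a log-log least-squares fit, sketched below; the data points are invented placeholders, not the Ukrainit-PP measurements of Fig. 1.

```python
import numpy as np

# Hypothetical (diameter m, density kg/m^3, detonation velocity m/s) triples;
# placeholders standing in for industrial measurement data.
d = np.array([0.043, 0.051, 0.065, 0.089, 0.102])
rho = np.array([1100.0, 1150.0, 1200.0, 1250.0, 1300.0])
v_det = np.array([3900.0, 4150.0, 4450.0, 4800.0, 5000.0])

# Fit D = A * d**alpha * rho**beta by linear least squares in log space:
# log D = log A + alpha*log d + beta*log rho.
X = np.column_stack([np.ones_like(d), np.log(d), np.log(rho)])
coef, *_ = np.linalg.lstsq(X, np.log(v_det), rcond=None)
log_a, alpha, beta = coef
print(f"D ~ {np.exp(log_a):.3g} * d^{alpha:.2f} * rho^{beta:.2f}")
```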
By substituting formula (17) into expressions (14) and (15), we obtain formulas (18) and (19) for determining the radii of the compression and fracturing zones when applying the "Ukrainit-PP" type explosive. To check the convergence of results, we calculated the line of least resistance for the outside blast-hole using the three techniques above, for the following mining-and-geological conditions: rock — ferrous quartzite; cross-sectional area of drivage S_dr = 12.8 m²; rock compression strength σ_comp = 140 MPa; blast-hole diameter d = 0.043 m; rock density γ = 3250 kg/m³; explosive — "Ukrainit-PP"; density of the explosive charge ρ = 1250 kg/m³. The results are given in Table 3. From the results of the calculation, it is evident that the divergence between the lines of least resistance of the outside borehole does not exceed 10%. Therefore, it is possible to use any of the above techniques for its calculation. However, the first two approaches account for the energy characteristics of the applied explosive only through the force coefficient relative to the etalon explosive. It is proposed to apply A. Andrievskiy's technique (Andrievskiy & Avdeev, 2006) for calculating the line of least resistance of the outside blast-hole, since it considers both the characteristics of the explosive and the physical and mechanical properties of the blasted rocks. For checking the optimized formula (18), we performed analytical modeling of rock mass deformation (Khomenko, Kononenko, & Petlovanyi, 2015) around the blast-hole charge in the SolidWorks Simulation software product. It is possible to simulate the rock mass compression zone around the blast-hole and then to determine the fracturing zone radius by formula (19). For this purpose, the program was used to build a model of the explored mass at 1:1 scale with the following dimensions: height 1 m, width 1 m, length 3.5 m, and a blast-hole diameter of 43 mm. A pressure of 1390 MPa, corresponding to the explosion products of the "Ukrainit-PP" explosive, is set in the blast-hole. As an example, we consider the movements and deformations of rocks around the blast-hole charge during its detonation (Fig. 2: areas of movement (a) and displacement with deformations (b) of rocks around the exploded blast-hole charge) (Khomenko, Kononenko, & Bilegsaikhan, 2018). The analysis of the simulation results established that the rock compression zone radius was 0.18 m. The fracturing zone radius, calculated by formula (19), was 0.76 m. According to the comparison between the numerical calculation and the analytical modeling, the divergence of values did not exceed 10%. Therefore, we suggest applying A. Andrievskiy's method for calculating the line of least resistance of the outside blast-hole. After defining the line of least resistance of the outside blast-hole, we proceed to calculating the distances between blast-holes and determining their arrangement in the mine working face. Departing from the practical experience of drawing up passports of drilling-and-blasting operations, a technique of blast-hole arrangement in the mine working face has been offered (Khomenko, Rudakov, & Kononenko, 2011). The technique is based on the idea that the area of the mine working face is divided into the areas of cuthole, outside, and outline blast-holes. The area of the face where cutholes are located depends on the size of the face displacement behind the explosion (Table 4).
The distance from the mine working contour to the outline bore-holes is used to determine the area of outline boreholes; in practice it is equal to 0.15-0.25 m, or is determined by formula (18). The distance from the line of outline bore-holes to the exfoliated surface formed during detonation of the outside boreholes is defined as 0.5·R_fr. The face area for outside blast-holes is given by Eq. (20). The number of outside blast-holes, pcs, follows from Eq. (21), where Δ is the explosive density in the blast-hole or cartridge, kg/m³; d is the diameter of the blast-hole or explosive cartridge, m; and k_ful is the coefficient of blast-hole loading, equal to 0.30-0.85. The calculated number of outside blast-holes has to be analysed. A large number leads to increased labor input and duration of drilling operations, which reduces the speed of mine working drivage. On the contrary, a small number of blast-holes leads to poor rock crushing by the explosion, which complicates its loading and transportation. The experience gained from blasting operations during mining shows that the optimal number of outside blast-holes is such that every 1 m² of the face area carries 1-2 blast-holes. A large number of blast-holes indicates that the chosen explosive type was of insufficient power and the charge diameter was underestimated; in this case, it is necessary to use a more powerful explosive, to increase the charge diameter, and to recalculate the number of blast-holes. The area of the face that corresponds to one outside blast-hole is given by Eq. (22). The line of least resistance of the outside blast-holes is given by Eq. (23). The distance between outside blast-holes in a row (Eq. (24)) is a = m·W, where m is the coefficient of charge convergence, equal to 0.8-1.3; the smaller value of the coefficient is accepted for hard rocks. The number of rows of outside blast-holes is defined as the distance from the cut to the line of outline blast-holes divided by the line of least resistance of the outside blast-holes determined by formula (23). The optimum contours of the outside blast-hole rows repeat the shape of the mine working cross section. The number of outside blast-holes in the i-th row (Eq. (25)) follows from P_i, the perimeter of the i-th row of outside blast-holes, m. The actual distance between outside blast-holes in the i-th row, m, is then given by Eq. (26). The number of outline blast-holes (Eq. (27)) follows from P_o, the perimeter of the outline blast-hole contour, m, and a_o, the distance between outline blast-holes, equal to 0.75-0.95 of R_fr; usually the distance is 0.75·R_fr for bottom blast-holes, 0.95·R_fr for blast-holes on the mine working sides, and 0.85·R_fr in the mine working roof. The actual distance between outline blast-holes, m, is given by Eq. (28). The total number of blast-holes in the mine working face (Eq. (29)) sums N_cut, the number of cutholes, pcs, and N_out, the total number of outside blast-holes, pcs, together with the outline blast-holes. When the drilling-and-blasting operations passport is specified, it is allowed to increase the number of blast-holes in the face by no more than 10%, and in mine workings with a section up to 5 m², by no more than 4 blast-holes. The blast-holes are arranged in forward, horizontal and profile projections of the face area. The arrangement of blast-holes in the face of a mine working begins with cuthole placement. If self-moving drilling equipment is used, cutholes are placed in the center of the mine working. The cut is moved closer to the bottom of the mine working to reduce the scattering of rock pieces from the cuthole part. The lower boundary of the cut placement is equal to R_fr measured from the mine working floor.
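The bookkeeping above reduces to simple arithmetic once W and R_fr are known. The sketch below strings the perimeter-based relations together; the expressions (spacing a = m·W, counts as perimeter over spacing) are reconstructed from the definitions in the text, since the display formulas themselves were not preserved, and the perimeters are invented placeholders.

```python
import math

R_FR = 0.76    # fracturing zone radius from the worked example above, m
W_OUT = R_FR   # W_o = R_fr, as stated in the text

def row_spacing(w_out: float, m_conv: float) -> float:
    """Spacing between outside blast-holes in a row, a = m * W
    (reconstructed from the definition of the convergence coefficient m)."""
    return m_conv * w_out

def holes_on_contour(perimeter_m: float, spacing_m: float) -> int:
    """Hole count on a contour of given perimeter, rounded up."""
    return math.ceil(perimeter_m / spacing_m)

def actual_spacing(perimeter_m: float, n_holes: int) -> float:
    """Actual spacing after rounding the hole count (cf. Eqs. (26), (28))."""
    return perimeter_m / n_holes

a_out = row_spacing(W_OUT, m_conv=0.9)   # m within the stated 0.8-1.3 range
a_roof = 0.85 * R_FR                     # outline spacing in the roof, per the text
for name, perimeter, a in (("outside row", 9.5, a_out),
                           ("roof outline", 13.0, a_roof)):
    n = holes_on_contour(perimeter, a)
    print(f"{name}: {n} holes, actual spacing {actual_spacing(perimeter, n):.2f} m")
```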
Then, a contour for outline blast-hole placement is laid at a distance of 0.15-0.25 m, or R_sh, from the working outline. The outline blast-holes of the floor are arranged in the following sequence: first, one blast-hole is placed in each corner on the bottom line, then the other floor blast-holes are placed on this line in the direction of the face center, maintaining the actual distance between them. The outline blast-holes on the sides are laid off from the blast-holes located in the bottom corners of the line of floor blast-holes. The outline blast-holes of the roof are arranged in the following sequence: at first, one blast-hole is placed in the corners of the roof line, then the other roof blast-holes are placed on this line in the direction of the face center, maintaining the actual distance between them. The outside blast-holes are arranged on the lines of the outside contours, laid near the cut parallel to the cut contour, and near the working contour parallel to the latter; at the same time, there has to be a smooth transition from one contour to another. The average value of a charge per blast-hole is given by Eq. (30); the value of the cuthole charge by Eq. (31); the value of the outside blast-hole charge by Eq. (32); the value of the outline blast-hole charge on the sides and in the roof by Eq. (33); and the value of the outline blast-hole charge of the floor by Eq. (34). The actual explosive consumption for the face is given by Eq. (35), where N_o.sr is the number of outline blast-holes in the roof and on the sides, pcs, and N_o.b is the number of outline blast-holes of the floor, pcs (a numeric sketch of this bookkeeping closes this section). The layout arrangement is followed by a graphic presentation of the charge design for each type of blast-hole and of the switching network of the charges. The method for calculating drilling-and-blasting parameters can be used only during underground mining with emulsion explosives, while the determination of stoping operations parameters requires modeling of the compression and fracturing zones, and studying the changes in emulsion explosive density, mass and detonation velocity along the charge length in the boreholes.

CONCLUSIONS

1. The analysis of experience related to emulsion explosives application during the last 10 years establishes a tendency toward increased usage in underground mining operations. The safety of operations, the smaller volumes of explosion products and the high power ratings of emulsion explosives reduce the impact on the state of the mine and atmospheric air. However, the lack of approved and widely applied methods for calculating drilling-and-blasting operation parameters that consider the energy characteristics of emulsion explosives constrains their implementation. 2. The coefficient of explosive force calculated on the basis of force or explosion heat understates the indicators for emulsion explosives by up to 30%. To eliminate these disadvantages, the suggested method of calculating drilling-and-blasting operation passports defines the force coefficient considering the extent of detonation velocity realization. The calculation of drilling-and-blasting operations parameters is based on the determination of the compression and fracturing zones around blast-hole charges. 3. The rational arrangement of blast-holes in the face of a mine working is based on accounting for the cut, breaking-off and outline parts of the face.
The suggested technique for developing passports of drilling-and-blasting operations is based on industrial measurements of detonation velocity during changes in the density and diameter of charges of "Ukrainit-PP"-type emulsion explosives. The technique is recommended for calculating and drawing up passports of drilling-and-blasting operations during mining.
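Returning to the charge bookkeeping of Eqs. (30)-(35): since those display formulas were not preserved, the sketch below shows only the obvious skeleton, an average charge Q/N distributed over hole types and summed back into the face consumption. The per-type multipliers are illustrative assumptions (cutholes are customarily charged above the average), not the paper's coefficients.

```python
def average_charge(q_total_kg: float, n_holes: int) -> float:
    """Average charge per blast-hole (Eq. (30)): Q / N."""
    return q_total_kg / n_holes

def face_consumption(charges_and_counts) -> float:
    """Actual explosive consumption on the face (Eq. (35)): sum over hole
    types of (charge per hole) * (number of holes of that type)."""
    return sum(q * n for q, n in charges_and_counts)

q_avg = average_charge(q_total_kg=54.0, n_holes=40)   # placeholder inputs
plan = [
    (1.2 * q_avg, 6),    # cutholes (assumed multiplier)
    (1.0 * q_avg, 20),   # outside blast-holes (assumed multiplier)
    (0.9 * q_avg, 10),   # outline: roof and sides (assumed multiplier)
    (1.1 * q_avg, 4),    # outline: floor (assumed multiplier)
]
print(f"actual consumption ~ {face_consumption(plan):.1f} kg")
```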
5,139.4
2019-09-30T00:00:00.000
[ "Chemistry", "Engineering" ]
First light of BEaTriX, the new testing facility for the modular X-ray optics of the ATHENA mission

The Beam Expander Testing X-ray facility (BEaTriX) is a unique X-ray apparatus now operated at the Istituto Nazionale di Astrofisica (INAF), Osservatorio Astronomico di Brera (OAB), in Merate, Italy. It has been specifically designed to measure the point spread function (PSF) and the effective area (EA) of the X-ray mirror modules (MMs) of the Advanced Telescope for High-ENergy Astrophysics (ATHENA), based on silicon pore optics (SPO) technology, for verification before integration into the mirror assembly. To this end, BEaTriX generates a broad, uniform, monochromatic, and collimated X-ray beam at 4.51 keV. [...] In BEaTriX, a micro-focus X-ray source with a titanium anode is placed in the focus of a paraboloidal mirror, which generates a parallel beam. A crystal monochromator selects the 4.51 keV line, which is expanded to the final size by a crystal asymmetrically cut with respect to the crystalline planes. [...] After characterization, the BEaTriX beam has the nominal dimensions of 170 mm × 60 mm, with a vertical divergence of 1.65 arcsec and a horizontal divergence varying between 2.7 and 3.45 arcsec, depending on the monochromator setting: either high collimation or high intensity. The flux per unit area varies from 10 to 50 photons/s/cm² from one configuration to the other. The BEaTriX beam performance was tested using an SPO MM, whose entrance pupil was fully illuminated by the expanded beam, and its focus was directly imaged onto the camera. The first light test returned a PSF and an EA in full agreement with expectations. As of today, the 4.51 keV beamline of BEaTriX is operational and can characterize modular X-ray optics, measuring their PSF and EA with a typical exposure of 30 minutes. [...] We expect BEaTriX to be a crucial facility for the functional test of modular X-ray optics, such as the SPO MMs for ATHENA.

Introduction

Testing optics for an X-ray telescope requires an at-wavelength illumination that mimics an astronomical X-ray source, that is, a parallel, broad, and uniform X-ray beam. Standard laboratory X-ray sources are naturally diverging and therefore have to be placed several hundreds of meters from the optic being tested to obtain a sufficiently low-divergence, full illumination of the optics (such as at PANTER, the X-ray test facility of the Max-Planck Institute near Munich, Germany; Burwitz et al. 2019). Since soft X-ray absorption in air is very high, a very long high-vacuum system has to be built to enable X-ray propagation from the source to the experimental chamber. Moreover, large optics usually require large volumes in high vacuum, which entail a long pumping time prior to any X-ray characterization. Synchrotron light (such as at the BESSY synchrotron facility in Berlin, Germany) provides the required level of collimation (Krumrey et al. 2016); however, the beam is usually much narrower than the aperture of the module, and so it has to be scanned through the module aperture to reconstruct the full focal spot. (Fig. 1 caption: Beam generation and handling occur in the "short arm" on the left side; the 12 m tube, the "long arm", for X-ray propagation to the focal plane is on the right.)
As an alternative solution, the Beam Expander Testing X-ray facility (BEaTriX) was designed as a compact (9 m × 18 m) X-ray apparatus with fast high-vacuum pumping, a small experimental chamber, and a unique optical setup that produces a 170 mm × 60 mm wide, parallel and monochromatic X-ray beam (Spiga et al. 2012, 2014, 2016; Pelliciari et al. 2015; Salmaso et al. 2018, 2019, 2021). BEaTriX is suitable for fully illuminating the aperture of X-ray modular optics, and thus for testing their focusing performance. The main goal of BEaTriX is to perform the acceptance tests of mirror modules (MMs) for the Advanced Telescope for High-ENergy Astrophysics (ATHENA) at their production rate of 2 MM/day, directly measuring their point spread function (PSF) and effective area (EA) in X-rays. ATHENA is the second large-class mission selected by the European Space Agency (ESA) within the Cosmic Vision Program, with a launch foreseen in the early 2030s (Nandra et al. 2013). The optics will consist of a large-aperture X-ray mirror with a diameter of about 2.5 m, an EA of 1.4 m² at 1 keV, and a half-energy width (HEW: twice the median value of the PSF) of 5 arcsec at 1 keV (Bavdaz et al. 2021). The aperture of such a large X-ray telescope will be populated by 600 MMs, produced with the silicon pore optics (SPO) technology developed by ESA and cosine (Collon et al. 2021). BEaTriX (Fig. 1), now operational at the Istituto Nazionale di Astrofisica (INAF), Brera Astronomical Observatory (OAB) in Merate (Italy), will soon start the systematic characterization of MMs for ATHENA. BEaTriX is also part of the Integrated Activities for the High Energy Astrophysics Domain (AHEAD) focused on X-ray optics (Burwitz et al. 2018). This paper reports the characterization of the X-ray parallel beam of BEaTriX and the first focused image obtained with an SPO MM, which was specifically provided by cosine to test the expanded beam properties.

Facility description and X-ray beam handling

Tests of SPO MMs in BEaTriX are foreseen at the monochromatic energies of 4.51 keV and 1.49 keV. The 4.51 keV beamline of BEaTriX is currently operated and characterized. Its working principle is depicted in Fig. 2: (A) X-rays are emitted by a microfocus (35 µm full-width half-maximum) source with a titanium anode set at 30 kV and 200 mA, (B) propagate through a vacuum tube, and (C) become parallel via reflection onto a paraboloidal mirror, accurately figured and polished at INAF-OAB to a 3 arcsec HEW level, then coated with a 30 nm platinum layer at the Technical University of Denmark (DTU) to enhance its reflectivity. The mirror PSF was tested in X-rays at PANTER, before and after coating (Spiga et al. 2021; Vecchi et al. 2021). The fluorescence line of titanium at 4.51 keV is subsequently filtered via a 4-fold diffraction monochromator based on silicon crystals (D), cut parallel to the (220) planes, and finally diffracted at about 90 deg off-surface by another silicon crystal, asymmetrically cut with respect to the (220) planes (E). The asymmetric diffraction ensures an approximately 50-fold horizontal expansion, making the final beam the same size as the asymmetric crystal (170 mm × 60 mm). This concept was previously tested for the calibrations of the Soviet-Danish Röntgen Telescope (Christensen et al. 1994). In the 1.49 keV beamline, the asymmetric silicon crystal will be replaced by an ammonium dihydrogen phosphate crystal (ADP; Ferrari et al. 2019).
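As a quick plausibility check of the "about 90 deg off-surface" deflection quoted above, one can compute the Si(220) Bragg angle at the Ti Kα energy from standard constants (silicon lattice parameter 5.431 Å). This is textbook crystallography, not a calculation taken from the paper.

```python
import math

HC_KEV_A = 12.398    # h*c in keV*Angstrom
A_SI = 5.431         # silicon lattice parameter, Angstrom

def si_bragg_angle_deg(energy_kev: float, hkl=(2, 2, 0)) -> float:
    """First-order Bragg angle for a silicon reflection (cubic lattice):
    theta = asin(lambda / (2*d_hkl)), d_hkl = a / sqrt(h^2 + k^2 + l^2)."""
    d_spacing = A_SI / math.sqrt(sum(i * i for i in hkl))
    wavelength = HC_KEV_A / energy_kev
    return math.degrees(math.asin(wavelength / (2.0 * d_spacing)))

theta = si_bragg_angle_deg(4.51)
print(f"Si(220) Bragg angle at 4.51 keV: {theta:.1f} deg; "
      f"off-surface deviation 2*theta = {2 * theta:.1f} deg")
```

The result, a Bragg angle near 45.7 deg and hence a 2θ deviation near 91 deg, is consistent with the "about 90 deg" geometry described in the text.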
Due to the intrinsic dispersive power of asymmetrically cut crystals (Sanchez Del Rio & Cerrina 1992), a very narrow energy band (0.05 ÷ 0.1 eV) within the Ti Kα₁ line has to be selected. To this end, the monochromator is based on two channel-cut crystals (CCCs) that can be used in two different configurations. If both CCCs are aligned at the diffraction peaks, the flux is maximized with some degradation in the horizontal collimation (the "high-intensity" setup). In contrast, if the second CCC is rotated by 10 arcsec, so as to detune the diffraction peaks and make the system more selective in energy, then the total divergence is minimized at the expense of the flux intensity (the "high-collimation" setup). Higher diffraction orders, such as (440), are prevented from reaching the monochromator by the cutoff of the collimating mirror at 6 keV (Spiga et al. 2016). After expansion, a small beam monitor continuously records the intensity of the beam over time; the monitor is placed in a beam corner to minimize the obstruction. The optical components that enable beam filtering and expansion are displayed in Fig. 3. The parallel and expanded beam then enters the experimental chamber (F), where it fully illuminates the aperture of the MM being tested (Fig. 4); the MM is mounted onto a hexapod that allows us to align it in six degrees of freedom. The incidence plane, which corresponds to the radial plane of the MM, is set vertically to exploit the better collimation of the beam in the vertical direction, which is independent of the beam monochromation. Therefore, the beam is deflected downward and gets focused on a CCD camera placed at a 12 m distance (H). The CCD, equipped with a 27.6 mm square sensor with 13.5 µm pixels, is connected to the experimental chamber by a vacuum tube (G). A system of turbopumps keeps the full X-ray path in a vacuum at 10⁻⁶ mbar; the whole vacuum system is partitioned by gate valves to evacuate and vent different sections separately. Notably, the experimental vacuum chamber can be isolated from the nearby ones, so as to minimize the time needed for MM replacement. The CCD can be moved along the focus by 500 mm, laterally by 150 mm and vertically by 1400 mm in order to follow the beam deviation of modules with different incidence angles. The detector tower can be moved to shorten the CCD distance from the sample to 8 m or 10 m. Finally, the temperature of the MM being tested can be varied in the range −15 °C to +60 °C by means of a thermal box surrounding the optic inside the experimental chamber. The components that handle the X-ray beam (C-D-E) are placed on precision motors for high vacuum (Fig. 3). For alignment purposes, the parabolic mirror can be rotated around the vertical axis and around the mirror normal; the CCCs can be rotated around the vertical axis or around the incident beam. The parabolic mirror was first aligned to its nominal position using a 3D coordinate metrology system. A subsequent alignment in the BEaTriX vacuum chamber was performed using a laser tracker. The alignment was refined with a Hartmann plate (Sect. 3) at the parabolic mirror exit. Each CCC was aligned at the peak of the beam intensity at its exit. The asymmetric crystal was also aligned by finding the incidence angle that maximizes the intensity of the expanded beam.

X-ray beam characterization

We used a Hartmann test to measure the beam divergence (Idir et al. 2010). The Hartmann plate, an array of 400-micron-wide square holes, was made of stainless steel with a thickness of 250 µm.
The center-to-center hole spacing is 2 mm vertically and 4 mm horizontally. The different spacing reflects the expectedly higher divergence in the horizontal direction: a larger separation between holes was left to avoid a possible superposition of beamlets. The plate was mounted on the MM holder in the experimental chamber and illuminated by the expanded beam. At a 12 m distance, the CCD records the displacements of the beamlets from their nominal positions due to residual wavefront distortions. The signal was integrated by the CCD for 30 minutes. Since the beam is much wider than the CCD area, we took exposures at 21 positions, which were stitched together into a mosaic in a post-processing phase. From the composite image, two components of the divergence HEW, along the horizontal and the vertical direction, can be calculated. First, measuring the displacements of the beamlet centroids from their nominal positions (Fig. 5), we obtain the local slopes of the wavefront. Taking twice the median values of the horizontal and of the vertical slopes, we respectively obtain HEW_hor-centr and HEW_vert-centr. These terms come from wavefront distortions due to residual misalignments of the optical components and low-frequency errors of the parabolic mirror. Second, fitting the intensity profiles of the beamlets along the two axes with an appropriate convolution model (Fig. 6), we derived HEW_hor-prof and HEW_vert-prof. These two contributions arise from the imperfect spatial coherence of the beam due to (a) the nonzero, albeit small, size of the X-ray source, (b) the residual micro-roughness of the paraboloid, and (c) the bandwidth of the energies out of the monochromator. While HEW_vert-prof is affected only by the X-ray source dimension, all the factors above contribute to broadening HEW_hor-prof. Therefore, the horizontal profile is typically broader than the vertical one. To characterize the collimation of the expanded beam, we can add in quadrature the two contributions, which are assumed to be independent:

HEW_hor = sqrt(HEW_hor-centr² + HEW_hor-prof²), (1)
HEW_vert = sqrt(HEW_vert-centr² + HEW_vert-prof²). (2)

In the high-intensity setup (Fig. 6), with an expanded beam density of 50 ph/s/cm², the Hartmann test returned total divergences consistent with the values quoted above (1.65 arcsec vertically and up to 3.45 arcsec horizontally). (Fig. 5 caption: Divergence computation on the composition of 3×7 images. Colors represent the amplitude of the divergence in arcseconds and the red arrows the related direction. The "Div50" parameter denotes the median value of the absolute angular deviation and equals HEW/2. The bright spot was due to a cosmic ray that altered the count during the exposure.) The horizontal collimation can be improved in the high-collimation setup by rotating the second CCC by about 10 arcsec from the maximum-intensity position. In this case, however, a mosaic image collection is very time-consuming due to the reduced flux. Therefore, we simply sampled the expanded beam at just one CCD position near the beam center. As a result, the HEW_hor-prof term improves to just 2.3 arcsec, to be compared with the 2.2 arcsec value expected from simulations. On the other hand, the intensity was reduced to 10 ph/s/cm², which also agrees with the theoretical expectation and enables, in a 30-minute integration time, a fully meaningful measurement of the MM PSF and EA.
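The quadrature combination of Eqs. (1)-(2) is straightforward to apply. The sketch below also converts a beamlet centroid displacement at the stated 12 m propagation distance into a local wavefront slope; the 2.3 arcsec profile term is from the text, while the 1.0 arcsec centroid term is purely illustrative.

```python
import math

def slope_arcsec(displacement_um: float, distance_m: float = 12.0) -> float:
    """Local wavefront slope implied by a beamlet centroid displacement
    on the CCD after the given propagation distance."""
    return math.degrees(math.atan(displacement_um * 1e-6 / distance_m)) * 3600.0

def total_hew(hew_centr: float, hew_prof: float) -> float:
    """Total divergence HEW from Eqs. (1)-(2): quadrature sum of the
    centroid (wavefront slope) and profile (coherence) terms."""
    return math.hypot(hew_centr, hew_prof)

# One 13.5 um CCD pixel of centroid displacement corresponds to:
print(f"1 pixel -> {slope_arcsec(13.5):.2f} arcsec of local slope")
# Horizontal HEW, high-collimation setup: 2.3 arcsec profile term (from the
# text) combined with an illustrative 1.0 arcsec centroid term (assumed).
print(f"HEW_hor ~ {total_hew(1.0, 2.3):.2f} arcsec")
```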
During BEaTriX operation, either the high-intensity or the high-collimation configuration will be adopted, depending on the measurement accuracy required for each SPO MM being tested.

X-ray measurements of a mirror module

An uncoated inner MM (named MM-0042), provided by cosine and tested at the X-ray Parallel Beamline Facility 2.0 at BESSY (Handick et al. 2020), was used as a validation test for the BEaTriX expanded beam. It consists of two identical X-ray optical units (XOUs), made from bare Si/SiO₂ plates, with an on-axis incidence angle of about θ_inc = 0.3 deg, sufficiently small to reflect at 4.5 keV without any coating. Only the outer XOU was suitable for the measurement. Although this MM is not representative of the current production process of SPO optics, it is, at the present time, the only MM available for tests at 4.5 keV. Due to the grazing incidence, figure imperfections are expected to mostly cause PSF elongation in the incidence plane, which is set vertical in the BEaTriX setup. The vertical direction, being the one with better collimation, is also the one with higher sensitivity to the PSF of the MM. The MM was mounted in the vacuum chamber (Fig. 4) and pre-aligned mechanically; then the CCD was placed at the position expected from a 4θ_inc downward deviation of the X-rays. The alignment in pitch and yaw was optimized by maximizing the reflected X-ray flux; the maximum corresponds to on-axis illumination because the obstructions by the ribs and the membranes are minimized in an SPO MM. To find the best focus distance, a scan along the beam direction was carried out, finding the minimum of the image width in the horizontal direction because, due to the significant vertical elongation of the PSF, the HEW minimum is not a good metric for the best focus. The focused image was integrated for a total time of 2000 s. The focused image of the MM (Fig. 7) returned a HEW value of (25.24 ± 0.89) arcsec, in agreement with the value measured at BESSY. The computed EA was (6.84 ± 0.35) cm², evaluated with a collection of at least 10⁵ photons. The value is to be compared with a theoretical value of 6.72 cm². This validates the proper functioning, alignment, and calibration of BEaTriX.

Conclusions

The first beamline of the BEaTriX facility, operated with a flux of 50 photons/s/cm² at 4.5 keV, is now ready to test ATHENA MMs, an activity that will be given the highest priority in the next few years. In the meantime, the second line at 1.49 keV will be implemented. BEaTriX was built to test X-ray optics with a parallel beam of 170 mm × 60 mm in size, with a focal length in the range 7.8 ÷ 12.2 m. The first light of the direct beam showed collimation and intensity in line with expectations. Measurements show that the beam directionality is stable on timescales of hours, amply covering the typical time foreseen for measurements (30 minutes). The recalibration and alignment of the components on longer timescales (e.g., days) will be assessed in the forthcoming months, increasing the statistics of the recorded data. The performance of the beamline was tested with an early prototype of an inner-radius SPO MM, which confirmed that both the angular resolution and the EA can be reliably measured.
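For context, the two figures of merit can be computed from a focal-spot photon list in a few lines. The sketch below takes the HEW as twice the median angular radius (per the definition recalled in the introduction) and the EA as the focused count rate divided by the incident flux density; it is a schematic reduction under those definitions, not the pipeline used at BEaTriX, and the counts are illustrative numbers on the scale quoted above.

```python
import numpy as np

def hew_arcsec(x_mm: np.ndarray, y_mm: np.ndarray, focal_m: float = 12.0) -> float:
    """HEW as twice the median radial offset of the focused photons,
    converted to an angle at the given focal distance."""
    r_mm = np.hypot(x_mm - np.median(x_mm), y_mm - np.median(y_mm))
    return 2.0 * np.degrees(np.arctan(np.median(r_mm) * 1e-3 / focal_m)) * 3600.0

def effective_area_cm2(counts: float, exposure_s: float,
                       flux_ph_s_cm2: float) -> float:
    """EA as the focused count rate divided by the incident flux density."""
    return counts / exposure_s / flux_ph_s_cm2

# Illustrative inputs: ~50 ph/s/cm^2 beam, 2000 s integration, a few 1e5
# focused photons (the text reports >= 1e5 photons collected).
print(f"EA ~ {effective_area_cm2(6.8e5, 2000.0, 50.0):.2f} cm^2")
```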
The optical quality of this MM did not fully exploit the BEaTriX capabilities, which allow the characterization of optics with a quality ten times better, but the test demonstrates the great potential of the facility to test the ATHENA MMs (focal length, angular resolution, and EA) at the requested rate of 2 MMs/day.
4,058.6
2022-06-30T00:00:00.000
[ "Physics", "Engineering" ]
Hysteresis of foF2 at European middle latitudes

The hysteresis of foF2 is studied for several European stations over the whole 24-hour diurnal interval for the equinoctial months of the years just before and just after the solar cycle minimum for solar cycles 20 and 21. Based on previous results, the hysteresis is expected to develop best just for the equinoctial months and near the solar cycle minimum. The hysteresis is generally found to be negative, i.e. higher foF2 for the rising branch compared to the falling branch of the solar cycle. However, this is not the case in some individual months of some years. The noontime hysteresis represents the hysteresis at other times of the day qualitatively (as to sign) but not quantitatively. The hysteresis appears to be relatively persistent from one solar cycle to another in spring but not in autumn. A typical value for the springtime hysteresis is about 0.5 MHz. The inclusion of hysteresis into long-term ionospheric and radio wave propagation predictions remains questionable.

Introduction

Long-term predictions of foF2 and other ionospheric parameters have traditionally been based on the relationship between the predicted ionospheric parameters and the sunspot number R, or better its 12-month running average R12. The reason for this is not a better correlation of R with foF2 and the other parameters; such a better correlation is not the case. The reason is that the long-term prediction of R, provided by solar physicists, is the only long-term solar parameter prediction. Some ionospheric prediction codes use ionospheric indices, which characterize the solar activity effects on the F-region better than R does, e.g. MF2 (Mikhailov and Mikhailov, 1995), IF2 (Minnis, 1955), the T-index (Turner, 1968), or IG (Liu et al., 1983). However, their long-term prediction is again based on the long-term prediction of R. The dependence of foF2 on R is "poisoned" by the phenomenon of hysteresis. For a given station and a constant value of R, foF2 differs between the rising and the falling parts of the 11-year solar cycle. The variation of foF2 monthly medians with R12 over the solar cycle displays a curve likened to the hysteresis loop of a solid material's magnetization cycle. The phenomenon of ionospheric hysteresis has been known for a long time. Rao and Rao (1969) reported its dependence on latitude, with the maximum at midlatitudes and the minima near the equator and at high latitudes. Apostolov and Alberca (1995) analyzed monthly medians of noon foF2 for Slough (52°N, 1°W) over five solar cycles for each month separately. They found negative hysteresis in general, with foF2 higher for the rising than the falling branches of the solar cycle. This hysteresis was well pronounced for equinoctial months and very weak, if distinguishable, in other months. The strength of hysteresis and its detailed seasonal course differed significantly from cycle to cycle. Apostolov and Alberca (1995) attributed the behaviour of hysteresis to the behaviour of geomagnetic activity throughout the solar cycle. Kouris (1995) found that hysteresis is not stable and not so important for predictions. Therefore the foF2 monthly median prediction code, developed by the European Union project COST251, which joined experts from 17 countries, does not include hysteresis (COST251 Final Report, 1999). In previous investigations, noon monthly medians of foF2 have been used (e.g. Rao and Rao, 1969; Apostolov and Alberca, 1995).
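Where R12 appears above, it denotes the smoothed sunspot number: a 12-month running average centred on month n, conventionally computed as a 13-point window with half weights on the end months. A minimal sketch of that smoothing, assuming this conventional definition (the paper itself does not spell it out):

```python
import numpy as np

def r12(monthly_r: np.ndarray, n: int) -> float:
    """12-month smoothed sunspot number centred on index n:
    R12 = (0.5*R[n-6] + sum(R[n-5..n+5]) + 0.5*R[n+6]) / 12."""
    if n < 6 or n > len(monthly_r) - 7:
        raise ValueError("need 6 months of data on each side of n")
    window = monthly_r[n - 5:n + 6].sum()
    ends = 0.5 * (monthly_r[n - 6] + monthly_r[n + 6])
    return (window + ends) / 12.0

# Example with synthetic monthly sunspot numbers:
rng = np.random.default_rng(0)
r = 60 + 30 * np.sin(np.arange(48) * 2 * np.pi / 132) + rng.normal(0, 5, 48)
print(f"R12 at month 24: {r12(r, 24):.1f}")
```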
The hysteresis is the largest in the equinoctial months and appears to be associated with geomagnetic activity, which is higher on the falling branch of the solar cycle owing to the motion of solar activity (sunspots) from higher heliolatitudes towards the helioequator. Therefore we can expect the most pronounced hysteresis when we compare the equinoctial months from the late falling and early rising branches of the solar cycle. Our objective is to investigate the hysteresis of foF2 for several European stations over the whole 24-hour diurnal interval for the equinoctial months of the years just before and just after the solar cycle minimum for solar cycles 20 and 21. The main aim consists in extending the investigation of hysteresis from noon throughout the whole 24-hour interval of the day.

Observed hysteresis

Data from nine European stations altogether have been used: Belgrade, Juliusruh, Kiev, Lannion, Lindau, Miedzeszyn, Průhonice, Rome, and Slough. Their geographic coordinates are given in Table 1. A variable number of stations has been used to analyze different pairs of years, owing to the variable availability and quality of data. The requirement was that there be no data gap in any of the months. The data from the years 1975 and 1985 (just before the solar cycle minimum) were compared with the data from the years 1965 and 1977 (just after the solar cycle minimum); these belong to solar cycles 20 and 21. The analysis has been performed separately for March, April, September and October, for monthly median values at each hour of the day. Solar and geomagnetic activity indices for these months of the given years are presented in Table 2. The solar activity was generally very low, R = 3.9-20.1, except for autumn 1977, when it was moderate. The geomagnetic activity was clearly enhanced in March 1975 and April 1985. Three pairs of years have been analyzed in the form of differences in foF2, one month on the falling branch minus one month on the rising branch: 1975-1965, 1975-1977, 1985-1977. The first pair is from solar cycle 20, the third one from cycle 21, and the second pair describes the situation around the solar cycle minimum in 1976. The hysteresis in Slough at noon was weak in cycle 20 and strong in cycle 21 (Apostolov and Alberca, 1995). The results are shown in Figs. 1 and 2, separately for each pair of years and spring versus autumn. The hysteresis is plotted as the difference between foF2 for the falling and the rising branches of the solar cycle, which means that the hysteresis is positive if foF2 is higher on the falling branch of the solar cycle. Figure 1 shows the hysteresis for March and April 1975 versus 1965, 1975 versus 1977, and 1985 versus 1977. For 1975-1965, the diurnal course of the hysteresis is very similar for Juliusruh and Průhonice and not principally different for Belgrade. The hysteresis is negative and quite strong in April, and negative and less well-marked in March, with no pronounced diurnal variation in April and the weakest hysteresis near noon in March. The weaker hysteresis in March is probably caused by the high geomagnetic activity in March 1975 (Table 2), which lowered the foF2 values and thus also the hysteresis. For 1975-1977, the hysteresis is negative in both months. In March, it is more negative during daytime than at night, while in April there is no pronounced diurnal variation of the hysteresis. The differences in solar and geomagnetic activity levels are not large enough to substantially affect the hysteresis.
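The quantity plotted in Figs. 1 and 2 is simply the per-hour difference of monthly median foF2, falling branch minus rising branch. A minimal sketch of that reduction follows; the hourly values are synthetic placeholders, not station data.

```python
import numpy as np

def hourly_monthly_median(fof2_day_by_hour: np.ndarray) -> np.ndarray:
    """Monthly median foF2 for each hour of the day, from an array of
    shape (n_days, 24)."""
    return np.median(fof2_day_by_hour, axis=0)

def hysteresis(fof2_falling: np.ndarray, fof2_rising: np.ndarray) -> np.ndarray:
    """Hysteresis per hour: falling-branch median minus rising-branch
    median; negative values mean higher foF2 on the rising branch."""
    return hourly_monthly_median(fof2_falling) - hourly_monthly_median(fof2_rising)

rng = np.random.default_rng(1)
diurnal = 5.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 24))    # MHz, synthetic
rising = diurnal + rng.normal(0, 0.2, (30, 24))                 # e.g. April 1977
falling = diurnal - 0.5 + rng.normal(0, 0.2, (30, 24))          # e.g. April 1975
print(np.round(hysteresis(falling, rising), 2))                 # ~ -0.5 MHz per hour
```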
For 1985-1977, the hysteresis is negative in March, very weak at night and strong during daytime. April also provides negative hysteresis but with a different diurnal course: the hysteresis is mild, with a very strong and narrow peak in the early night and a secondary peak in the early daytime. The 1985-1977 difference in solar and geomagnetic activities cannot substantially affect the hysteresis either in March or in April. Figure 2 displays the hysteresis for September and October of 1975 versus 1965, 1975 versus 1977, and 1985 versus 1977. For 1975-1965, the hysteresis in September is positive and very strong in daytime but almost disappears at night. Solar and geomagnetic activities in September of 1975 and 1965 are almost identical (Table 2). October does not display any systematic hysteresis, and the scatter caused by the differences among the results of individual stations is rather large. The increase of geomagnetic activity and the decrease of solar activity from October 1965 to October 1975 (Table 2) decrease foF2 in 1975, i.e. they make the asymmetry more negative than it would be for equal levels of both activities. For 1975-1977, no systematic and persistent hysteresis was observed in September despite a substantially higher level of solar activity in 1977 (Table 2); after correcting for the effect of solar activity we should therefore expect a rather positive hysteresis. Miedzeszyn and Slough provide rather opposite diurnal courses of the hysteresis, while the three remaining stations agree reasonably well with each other and do not provide any significant hysteresis. October gives negative hysteresis, which almost disappears at night. Discussion The spring months, March and April, display consistently negative hysteresis (Fig. 1). The diurnal course of the hysteresis is rather weak, except for March 1985-1977, which has a pronounced daytime maximum. A typical, representative value of the hysteresis is about 0.5 MHz. The autumnal months provide quite a different and internally inconsistent pattern (Fig. 2). Taking into account corrections for differences in solar and geomagnetic activities, the hysteresis is negative for 1985-1977, weak but rather positive for 1975-1977, and positive for 1975-1965. A well-developed diurnal course with the maximum of the hysteresis during daytime (irrespective of its sign) prevails in most of the months studied. The noontime hysteresis appears to be sufficiently representative for most of the spring months studied, even if not for all; see March 1985-1977. However, this is not the case in autumn, when the hysteresis appears to be evidently weaker at night. Consequently, the results of investigations of hysteresis based on noontime data may be applied to the whole day only qualitatively. Comparable values of hysteresis at night and during daytime in spring mean that the relative hysteresis is much stronger at night, due to the considerably lower foF2 at night. In other words, in spring the physical phenomenon of hysteresis seems to be stronger at night. In general, all the European stations used provided a similar pattern of hysteresis behaviour, with the exception of Miedzeszyn versus Slough in September 1975-1977. Thus the results shown may be considered representative for Europe, except for Northern Europe, because its stations have not been included in the data set analyzed and that area is under strong influence from auroral zone processes. It is difficult to say why the hysteresis pattern differs between autumn and spring, and even whether such a substantial difference is real.
It should be mentioned that the only solar cycle, out of the five analyzed for Slough, that did not display negative hysteresis in autumn (September + October) was cycle 20 (Apostolov and Alberca, 1995), i.e. 1975-1965. This is the cycle for which we found positive hysteresis. Thus an analysis of data for more solar cycles is necessary to resolve the question of the dominant sign of the hysteresis in autumn. Based on the results of Apostolov and Alberca (1995) and their comparison with our results for solar cycles 20 and 21, the autumnal hysteresis may be expected to be negative. Conclusions The extension of the investigations of hysteresis to the whole 24-hour daily interval for several European stations and the equinoctial periods, together with the long-term investigations of the hysteresis for Slough at noon by Apostolov and Alberca (1995) and the results of Kouris (1995), allows us to draw the following conclusions for the European midlatitudes: 1. The hysteresis is developed best in the near-equinoctial months, and better in spring than in autumn. In the solstice months it is quite undeveloped (e.g. Apostolov and Alberca, 1995). 2. The hysteresis is generally negative, i.e. foF2 is higher on the rising branch than on the falling branch of the solar cycle. However, this is not the case in some individual months of some years. 3. The noontime hysteresis represents the hysteresis at other times of the day qualitatively (as to sign) but not quantitatively. 4. The hysteresis appears to be relatively persistent from one solar cycle to another in spring but not in autumn. 5. The inclusion of the hysteresis in long-term ionospheric and radio wave propagation predictions remains questionable. The hysteresis seems to affect predictions only under lower to low solar activity conditions in near-equinoctial months, but at all times of day. Its magnitude in spring is estimated to be about 0.5 MHz, i.e. a typical error caused by neglecting it would be about 0.3 MHz in foF2. However, the hysteresis is remarkably irregular, particularly in autumn, which makes its use in predictions very questionable.
2,823.8
2000-08-31T00:00:00.000
[ "Physics" ]
Influence of roughness on initial in vitro response of cells to Al2O3/Ce-TZP nanocomposite ABSTRACT An Al2O3/Ce-tetragonal zirconia polycrystal (TZP) nanocomposite was synthesized by a colloidal processing route and sintered in an air atmosphere. The alumina toughened zirconia (ATZ) nanocomposite was sandblasted in order to evaluate the influence of surface roughness on osteogenic differentiation, assessed in vitro by growing a human osteoblast-like cell line, SaOs-2, and osteogenically differentiated human adipose-derived mesenchymal stem cells (hADMSC). Smooth roughness values around Ra = 0.5 µm, obtained when the abrasive particle size was below 90 µm, increased the expression of the BGLAP and IBSP genes, whereas Ra = 1.5 µm, obtained with particles of sizes between 90 and 250 µm, upregulated the SPARC gene. The non-cytotoxicity and haemocompatibility of the ATZ nanocomposite were demonstrated. The alumina-ceria-stabilized zirconia nanocomposite presented in this work exhibits high potential for application in the fabrication of dental implants due to its biological behavior and very promising mechanical properties. Introduction For more than 40 years, commercially pure titanium and titanium alloys have been widely used as dental implant materials due to their excellent biocompatibility, early osseointegration and high corrosion resistance [1]. Nevertheless, titanium may induce allergic reactions or sensitivities [2] and it possesses a dark color that can be exposed during peri-implant mucosa recession and ruin the entire esthetic result [3]. As a viable alternative to resolve these problems, new ceramic materials were developed. Bioceramic materials offer excellent opportunities to combine the absence of metal ions, good bone ingrowth characteristics and improved esthetics due to the possibility of dyeing the product with pigments. In this context, alumina (Al2O3) was the first bioceramic used as an implant material [4], owing to its low friction, wettability, wear resistance and biocompatibility. However, it showed insufficient physical properties. In the 1980s, zirconia (ZrO2) emerged as a ceramic material valid for implants because of its improved fracture toughness and mechanical strength with respect to alumina. Tetragonal zirconia polycrystals, especially 3 mol% yttria-stabilized zirconia (3Y-TZP), serve as a metal substitute in substrates and possess good physical characteristics; the bending strength of 3Y-TZP is double and its fracture toughness almost triple that of alumina [5]. Nevertheless, the big disadvantage of pure 3Y-TZP is its low temperature degradation (LTD) [6]. Indeed, by combining the positive properties of Al2O3 (wear resistance, hydrothermal stability and hardness) with those of ZrO2 (strength and fracture toughness), it is possible to obtain alumina toughened zirconia (ATZ) and zirconia toughened alumina (ZTA) nanocomposites with a higher potential for application as dental implants [7]. Among them, composite materials combining Ce-TZP and alumina have shown very promising mechanical properties for use in the fabrication of implants [8]. Besides the mechanical properties of the bulk material, the characteristics of an implant's surface, such as composition, topography and roughness, play an important role in cell-material integration and biocompatibility [9]. The interaction between cells and a biomaterial's surface is fundamentally relevant and essential in terms of the response of cells at the interface, affecting the growth and quality of newly formed bone tissue [10,11].
For this reason, cell culture models are routinely used to study the response of osteoblastic cells in contact with different substrates for implantation in bone tissue. Moreover, human adipose-derived mesenchymal stem cells (hADMSC) are considered to contain a group of pluripotent mesenchymal stem cells and manifest multilineage differentiation capacity, including osteogenesis, chondrogenesis and adipogenesis [12]. These cells can differentiate into the odontogenic lineage, expressing bone marker proteins, and might be used as suitable seeding cells for tooth regeneration [13]. Few data are available concerning the response of mesenchymal stem cells to ATZ. The purpose of the present study was to perform in vitro osteogenic differentiation assays, growing a human osteoblast-like cell line, SaOs-2, and hADMSCs on different ATZ supports, in order to determine the influence of composition and surface roughness on the behavior of the cells in relation to these new implants. In this context, different tests were performed to study cytotoxicity, viability, hemolysis and differences in osteogenic and apoptotic gene expression between the samples. Materials and methods The Al2O3/Ce-TZP nanocomposite was made using the following materials: Ce-TZP (10 mol% CeO2) from Daichi (Japan), with an average particle size (d50) of 35 nm and a specific surface area of 15 m2/g, and α-Al2O3 powder (TM DAR, Taimei Chemical Co., Japan), with a specific surface area of 14.6 m2/g and an average particle size (d50) of 150 nm. In addition, the following chemical precursors were used: i) aluminum chloride (Sigma-Aldrich, Spain), ii) zirconium(IV) propoxide (70% solution in 1-propanol) (Sigma-Aldrich, Spain), iii) 2-propanol (99.9%, Panreac, Spain) and iv) absolute ethanol (99.97%, Panreac, Spain). A colloidal processing route described in Rivera et al. [14] and Díaz et al. [15] was followed in order to obtain the nanocomposite. In this route, Ce-TZP was coated with an amorphous alumina layer, using aluminum chloride as precursor, and subsequently thermally treated in order to activate the formation of the γ-alumina transition phase. After this, the alumina powders were also coated with zirconia nanoparticles using zirconium propoxide as chemical precursor. Finally, both chemically modified raw materials (zirconia and alumina) were mixed in a ratio of 80/20 by volume, respectively, in a polypropylene container with zirconia balls for 72 h in order to ensure good homogeneity of the mixture. After this, the material was dried at 120ºC, ground and sieved through a <63 µm mesh. Disk specimen preparation The powders were cold isostatically pressed at 300 MPa into cylindrical rods 50 mm in length and 9 mm in diameter. After surface machining and firing at 1475ºC for 1 h, disk-shaped specimens of 7 mm diameter and 1.3 mm thickness were prepared by cutting and polishing (applying microcrystalline diamonds of 9, 3 and 1 µm). In total, 36 disks were used for the sandblasting tests and six more disks were used as target specimens. Sandblasting process Disks were sandblasted with white corundum and SiC powders (see Table 1). Laser diffraction (Beckman Coulter LS 13 320, USA) was used for the granulometric characterization of the selected fractions. The air pressure was applied perpendicular to the surface of the disk at 0.4 bar and at a distance of 10 mm using sandblasting equipment (Sandblaster I, Astursinter, Spain).
Surface roughness and morphology The morphology of the samples and of the raw materials used in the sandblasting process was characterized by field emission scanning electron microscopy (FESEM) (FEI Quanta FEG 650, USA). The surface roughness of the specimens was analyzed using a surface roughness tester (MicroTest MT4002, Spain). Six measurements were performed on each specimen according to ISO 4287:1997 [16], and the arithmetical mean deviation of the assessed profile (Ra) was calculated. The ratio of monoclinic to tetragonal phase (X_m) and the amount of transformation (monoclinic volume content, v_m) induced by sandblasting were determined by X-ray diffraction (XRD) (Bruker D8 Advance, Germany) using equations (1) and (2) described in Toraya et al. [17]:
X_m = [I_m(-111) + I_m(111)] / [I_m(-111) + I_m(111) + I_t(101)] (1)
v_m = P X_m / [1 + (P - 1) X_m] (2)
where I_m(-111) is the peak height of the monoclinic phase at around 2θ = 28.2°, I_m(111) is the peak height of the monoclinic phase at around 2θ = 31.3°, I_t(101) is the peak height of the tetragonal phase at around 2θ = 30.2°, and P is a constant with the value 1.311. The aging behavior was studied with a Tuttnauer autoclave (2540EL, Tuttnauer, NY, USA) following ISO 13356:2015 [18]: samples were placed in the autoclave and exposed to steam at 134ºC under a pressure of 0.2 MPa for a period of 5 h, after which the autoclave was cooled and the test specimens were removed and dried. In vitro studies methodology Different tests were carried out in order to assess the biocompatibility, non-cytotoxicity and haemocompatibility of the Al2O3/Ce-TZP nanocomposite: (1) Human adipose-derived mesenchymal stem cell (hADMSC) isolation and culture: hADMSCs were isolated from abdominal subcutaneous adipose tissue. The adipose tissue was mechanically disrupted and then digested with a collagenase I (Sigma-Aldrich, USA) DMEM solution (Lonza, Belgium), and the cell suspension was filtered and centrifuged. The obtained cell fraction was cultured in expansion medium to 80% confluence, at 5% CO2 and 37ºC. Finally, cells were harvested using Trypsin-EDTA 1X (Biowest, France). (2) Cytotoxicity tests using the neutral red uptake (NRU) assay and the MTS assay: the potential cytotoxic effect of the materials on mammalian cells was determined following ISO 10993-5 [19] for biomaterials and medical device testing. Samples were sterilized before use. SaOs-2 cells (human osteosarcoma cells, kindly provided by the SCT of the University of Oviedo, Spain) or human MSCs from adipose tissue were seeded onto 48-well plates at a density of approximately 4 × 10^4 cells/ml/cm2. (3) Hemolysis index (ASTM F756-08 [20]): the hemoglobin (Hb) released into plasma when blood was exposed to the materials was measured. After a period of incubation, the samples were removed and the tubes were centrifuged. Each supernatant (100 µL) was mixed with Drabkin's reagent (100 µL); the cyanmethemoglobin produced was detected by spectrophotometry at 540 nm. Total blood hemoglobin (TBH) was also measured. The hemolytic index was calculated according to equation (5):
Hemolytic index (%) = [Hb released (mg/mL) / TBH (mg/mL)] × 100 (5)
(4) Osteogenic differentiation of SaOs-2 (human osteoblast-like cell line) and hADMSCs: cells were seeded at a density of 50 × 10^3 cells on each of the four kinds of samples until confluence and cultured in differentiation medium supplemented with dexamethasone, ascorbic acid and β-glycerol phosphate (all reagents from Sigma-Aldrich, USA). The differentiation medium was changed every 72 h, and in 3 weeks the whole differentiation process was completed.
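As a short sketch of the quantitative formulas above, the Python functions below implement the reconstructed forms of equations (1), (2) and (5); the peak heights and hemoglobin values are invented for illustration, not data from the paper.

def monoclinic_ratio(i_m_m111, i_m_111, i_t_101):
    """X_m from the (-111)/(111) monoclinic and (101) tetragonal peak heights."""
    return (i_m_m111 + i_m_111) / (i_m_m111 + i_m_111 + i_t_101)

def monoclinic_volume_fraction(x_m, p=1.311):
    """v_m from X_m, equation (2), with the constant P = 1.311."""
    return p * x_m / (1.0 + (p - 1.0) * x_m)

def hemolytic_index(hb_released_mg_ml, total_blood_hb_mg_ml):
    """Percent hemolysis per equation (5); < 2 counts as non-hemolytic."""
    return 100.0 * hb_released_mg_ml / total_blood_hb_mg_ml

x_m = monoclinic_ratio(i_m_m111=120.0, i_m_111=95.0, i_t_101=860.0)
print(f"X_m = {x_m:.3f}, v_m = {monoclinic_volume_fraction(x_m):.3f}")
print(f"hemolytic index = {hemolytic_index(0.02, 15.0):.2f} %")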
In order to confirm adequate osteogenic differentiation, alkaline phosphatase and alizarin red staining were performed. For descriptive purposes, a Student's t test on raw Ct means computed for the samples using Proc GLM of SAS/STAT was also carried out, assuming that the sample type effect included two independent groups of normally distributed observations. FESEM The representative microstructure of the sintered ATZ nanocomposite is shown in Figure 1, where two different phases can be observed. The lightest one corresponds to the Ce-TZP matrix, with a particle size of 400-500 nm, and the darkest phase corresponds to alumina, with an average size of 250 nm. As can be observed, the alumina grains are homogeneously distributed in the Ce-TZP matrix and no pores are observed. Grains with straight edges appear in both phases, indicating that the sintering process has been completed. Surface roughness Sandblasting is a commonly used surface treatment that involves impacting hard particles at high velocity onto a surface in order to erode it and leave a roughened surface with expectedly higher wettability. The FESEM micrographs of the ATZ surfaces modified by sandblasting with white corundum and silicon carbide for 60 s and 15 s are shown in Figure 2(a-c). Sandblasting with white corundum and with SiC particles <90 µm (Figure 2(a and c), respectively) revealed a regular and slightly waved structure. However, the surface of the ATZ sandblasted with SiC particles between 90 and 250 µm (Figure 2(b)) showed a more irregular structure with visibly larger voids and grooves, increasing the surface roughness according to the results shown in Table 2 for the ATZ samples. The mean roughness index, Ra (arithmetical mean deviation of the profile), in the case of samples sandblasted with SiC between 90 and 250 µm was significantly higher than for the other sandblasted samples. According to the Albrektsson and Wennerberg classification [19], the samples sandblasted with white corundum present a "smooth" surface roughness (Ra < 0.5 µm), while the samples sandblasted with SiC show "minimally rough" and "moderately rough" surface roughness. Similar results have been found by Sato et al. [20], where sandblasting with SiC particles resulted in surface roughness values larger than those obtained with alumina particles. XRD analysis X-ray diffraction (XRD) patterns of the characterized samples are shown in Figure 3. The materials do not present spontaneous phase transformation on their surface after sintering or any aging process. However, the sandblasting processes lead to the stress-induced transformation of a part of the tetragonal zirconia to monoclinic zirconia, since an increase of the intensity of the monoclinic peak (-111)m with respect to the tetragonal peak (111)t can be observed. The volumetric fraction (v_m) of the monoclinic phase was calculated according to equations (1) and (2) on (1) as-sintered surfaces, (2) as-sintered surfaces after the sandblasting process, and (3) as-sintered surfaces after sandblasting and aging; the results are shown in Table 3. The particle size and the kind of material used in the sandblasting process have an effect on the phase transformation of the ATZ composite [21]. In this sense, the effect of sandblasting on the monoclinic content was larger for SiC particles than for alumina particles, since it depends on the difference in hardness of the materials: e.g. the Vickers hardness of Al2O3 and SiC is 1800 and 2200, respectively.
Furthermore, increasing the particle size increases the erosion of the material and the thickness of the transformed surface layer. This transformation process is reversible and, in every case, a posterior thermal treatment at 1200ºC for 15 min succeeds in transforming the totality of the monoclinic zirconia back to its initial tetragonal state [22]. ISO 10993-5 states that a material is considered non-cytotoxic when cell viability is above 70%. The potential cytotoxicity toward SaOs-2 cells and hADMSCs was assessed by the MTS assay and the NRU method using tissue culture polystyrene (TCPS) as the blank. According to the results shown in Figure 4(a and b), all of the studied samples allowed higher than 90% cell viability; therefore, none of the surface modification treatments can be considered cytotoxic. In vitro biological assays Hemolysis is the alteration, dissolution or destruction of red blood cells that results in hemoglobin liberation into the surrounding medium. According to Stanley's classification criteria, a material is considered non-hemolytic for hemolytic indexes <2, while it is considered slightly hemolytic and hemolytic for hemolytic index values of 2-5 and >5, respectively. Different factors such as surface roughness, surface energy, surface tension and surface wettability can influence blood compatibility, and it has been shown that surface modification has great potential for improving the hemocompatibility of biomedical materials and devices [24]. In this case, the studied ATZ nanocomposite showed a hemolytic index close to 0 (0.1-0.2) for all the surface modifications tested, i.e. <1%, indicating a non-hemolytic material. ALP levels increase when active bone formation (osseous differentiation) occurs, as ALP is a by-product of this process. According to the results shown in Figure 5(a and b), correct osteoblastic differentiation of the hADMSCs took place, since all of the differentiated cells used for the gene expression studies were stained and, consequently, osteoblast differentiation was confirmed. Product identity was confirmed by electrophoresis on ethidium bromide-stained 2% agarose gels in 1X TBE buffer, which resulted in a single product of the desired length. In addition, an iCycler iQ melting curve analysis was performed, which rendered single, product-specific melting temperatures. No primer-dimers were generated during the 40 real-time PCR cycles conducted. All polymerase chain reaction efficiencies were above 90% and linearity was high, with correlation coefficients (R2) above 0.989. To quantify gene expression, the relative standard method (relative fold changes) was used, and expression levels were determined for the ceramics and the control group (NA sample) by normalizing the results with respect to β-ACTIN. For the hADMSCs, a discreet increase was noticed in the relative expression of four of the studied genes for samples A, B and C when compared with the control sample. In Figure 6(a), these increases are plotted. In the problem group, the genes BGLAP, CASPASE 3, IBSP and SPARC were up-regulated 2.27-fold for sample A, 1.57-fold for sample C, 3.10-fold for sample A, and 2.27-fold for sample B, respectively, with respect to the basal levels recorded for the endogenous control (β-ACTIN). However, these increases were only significant (p < 0.05) in the case of IBSP for sample A.
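To illustrate how relative fold changes can be derived from Ct values normalized to β-ACTIN, here is a minimal sketch using the common 2^-ΔΔCt shortcut; the paper used the relative standard (curve) method, so this is a simplified stand-in, and all Ct values below are hypothetical.

def fold_change(ct_target_s, ct_actin_s, ct_target_c, ct_actin_c):
    """2^-ddCt: sample (s) vs control (c), normalized to beta-ACTIN."""
    d_ct_sample = ct_target_s - ct_actin_s    # normalize sample to reference gene
    d_ct_control = ct_target_c - ct_actin_c   # normalize control to reference gene
    return 2.0 ** -(d_ct_sample - d_ct_control)

# e.g. a target gene on a treated sample vs the NA control: ~3-fold up-regulation
print(round(fold_change(24.2, 17.0, 25.5, 16.7), 2))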
In the case of the SaOs-2 cells, an even more discreet increase was observed in the relative expression of two of the studied genes for samples A, B and C when compared with the NA sample (control). These results are also shown in Figure 6(b). The IBSP gene was 1.55-fold up-regulated for sample B. The COL1A1 gene was up-regulated 2.95-fold for sample A, 2.85-fold for sample B and 1.74-fold for sample C with respect to the basal levels recorded for the endogenous control. In this case, the differences between the control and problem groups were not significant. Discussion The alumina toughened zirconia (ATZ) biomaterial studied in this work is a nanocomposite that combines the properties of Al2O3 and Ce-ZrO2. Moreover, Ce-TZP does not suffer low temperature degradation. The good combination of mechanical properties of this nanocomposite has already been proven [14], and it is now necessary to confirm its good interaction with cells. The ideal implant material should not be merely tolerated by the host but should interact with biological systems in a way that induces the appropriate host response for a specific application [25], a response controlled by the proteins that coat the surface of the biomaterial. The nature and activity of the proteins adsorbed on the surface depend on its physical and chemical properties. In fact, surface properties of the biomaterial such as topography and hydrophilicity are determinant in terms of biocompatibility and other biological phenomena. Topography can also modify the shape and activity of mesenchymal stem cells, leading to a higher differentiation rate of these cells into the osteogenic lineage with upregulation of osteoblastic genes [26]. Zhang et al. [27] showed how surface roughness affects osteoclastic differentiation as well as the stimulatory effects of osteoclasts on the osteogenic differentiation of osteoprogenitor cells. Concerning osteoblast differentiation, microscale surface roughness has been shown to enhance the osseointegration of titanium implants through increased osteoblast differentiation, while osteoblast proliferation remains greater on smooth titanium [28]. According to the literature, surfaces with Ra ≤ 1 μm are considered smooth and those with Ra > 1 μm are described as rough. In general, the surface roughness range that favors osseous differentiation has been reported to be 1.0-1.5 μm [29]. Sandblasting is known to create surface roughness and irregularities on material surfaces and, in the particular case of zirconia, induces a transformation from the tetragonal to the monoclinic phase due to the stress generated during the process [30]. Human cells attach preferentially to hydrophilic surfaces rather than to hydrophobic ones. For this reason, modification of the surface makes it possible to increase the surface area and wettability of the ceramic surface, increasing protein adsorption and promoting the attachment, proliferation and differentiation of human cells. According to the literature [31], increased surface areas are more favorable for cell attachment due to mechanical interlocking at the initial cell attachment stage. In the present study, SaOs-2 cells and hADMSCs were used to investigate the influence of roughness on the regulation of cell viability, hemolysis and osseous differentiation.
According to the results obtained in the cytotoxicity and hemolysis experiments, the data obtained in our study revealed that there are no significant differences in the responses of osteogenic cells and hADMSCs toward the different surface modification treatments in terms of biocompatibility. In fact, none of the surface modification treatments induced significant cell death in osteoblasts, hADMSCs or erythrocytes. Similar results regarding cell viability and hemolysis have been published over the years [32]. Given the bioinertness of ceramic materials such as those studied in this work, treatments that modify the surface and provide a suitable environment are necessary in order to trigger a favorable biological response in terms of osseous differentiation. Concerning gene expression, different osteogenic genes were studied in order to determine differences between samples. For the hADMSCs, only a slight increase was observed, in the case of four genes: BGLAP, CASPASE3, IBSP and SPARC. BGLAP was up-regulated 2.27-fold for sample A, CASPASE3 was up-regulated 1.57-fold for sample C, IBSP was up-regulated 3.10-fold for sample A and, finally, SPARC was up-regulated 2.27-fold for sample B with respect to the basal levels. However, the increase was only significant (p < 0.05) in the case of IBSP in sample A. These findings, in terms of gene expression as well as histological results, show that the roughness and the nature of the material are adequate to allow cells to achieve osteogenic differentiation, despite the fact that the surface roughness of the studied samples is clearly below the Ra values reported to favor osseous differentiation (1.0-1.5 μm). It is important to note that the osteoblastic differentiation of MSCs into functional differentiated osteoblasts requires a series of steps involving the expression of different proteins at each stage. Alkaline phosphatase is regarded as a marker of early osteoblastic differentiation; in fact, it is the main signal that commits cells to differentiate toward the osteoblastic lineage. In a similar way, the COL1 gene is over-expressed in the pre-osteoblastic phase, coinciding with the beginning of the bone tissue-differentiating cascade, whereas the secretion of osteocalcin and IBSP, as well as matrix mineralization, is associated with the final differentiation phase. The observation of an increasing expression of IBSP, with significant differences between groups, for white corundum (sample A) showed that the differences in terms of osteogenic differentiation seem not to appear until the last phases, meaning that the gene expression pattern is similar for all samples during most of the process. Significantly, the main increase, observed in IBSP, is in agreement with the results obtained by Wang and colleagues [33], who pointed out the importance of the integrin-linked kinase/β-catenin pathway in mediating signals from topographic cues to direct the osteogenic differentiation of cells. For the SaOs-2 cells, the three samples (A, B and C) showed upregulation relative to sample NA (control) for two of the studied genes, IBSP and COL1. These results correlate with the different levels of roughness shown by the different samples, since NA has the smoothest surface. Moreover, these results are also in accordance with the concept that roughness affects the attachment and spreading of cells, suggesting that the number of osteoblastic cells is probably higher on the rough supports. However, this upregulation was not significant in any case.
The expected pattern for SaOs-2 cells is clearly different from that of hADMSCs because SaOs-2 is itself a human osteoblast-like cell line and is expected to express osteogenic genes such as COL1, at the beginning of the bone tissue-differentiating cascade, or IBSP, which is over-expressed at middle-to-late osteogenic differentiation. These observations are in agreement with Czekanska et al. [34], who described high levels of expression of osteocalcin, bone sialoprotein, decorin and procollagen-I. For both types of cells (hADMSCs and SaOs-2), the apoptotic gene studied, Caspase-3, did not show important differences between samples, meaning that neither the material nor the roughness had a clear influence on cell death. This circumstance is in agreement with the absence of deleterious effects on cell viability. Conclusions The non-cytotoxicity and haemocompatibility of a nanocomposite ceramic material formed by alumina and ceria-stabilized zirconia have been demonstrated. The surface roughness of this nanocomposite can be adjusted depending on the particle size of the materials used for sandblasting. Smooth roughness values of around 0.5 µm are obtained when the abrasive material (white corundum or silicon carbide) is below 90 µm, while the use of silicon carbide particles of sizes between 90 and 250 µm leads to surface roughness values of around 1.5 µm. Moreover, the roughness and the nature of the material used have proved adequate for cell osteogenic differentiation. An increase in the expression of the BGLAP and IBSP genes was observed on samples sandblasted with white corundum below 90 μm, whereas the SPARC gene was upregulated on samples sandblasted with SiC between 90 and 250 μm. The studied nanocomposite is therefore a very promising material for dental applications thanks to its good mechanical properties in comparison with conventional ceramics, the possibility of adjusting its surface roughness and the favorable in vitro results obtained.
5,612.6
2020-07-22T00:00:00.000
[ "Materials Science" ]
Facile Synthesis of Electroconductive AZO@TiO2 Whiskers and Their Application in Textiles 1Key Laboratory of Science and Technology of Eco-Textiles, Jiangnan University, Ministry of Education, Wuxi 214122, China 2College of Textile & Clothing, Jiangnan University, Wuxi 214122, China 3State Key Laboratory of Molecular Engineering of Polymers, Department of Macromolecular Science and Laboratory of Advanced Materials, Fudan University, Shanghai 200438, China 4Institute of Orthopaedics, The First Affiliated Hospital of Soochow University, Suzhou 215006, China 5Wuxi Entry-Exit Inspection and Quarantine Bureau, Wuxi 214101, China Introduction With the rapid development of nanotechnology, a great number of nanomaterials and nanostructures have been applied in diverse electronic and related fields. In recent years, many researchers have focused on the preparation of TiO2 with different morphologies for special functional applications [1], such as nanotubes [2], nanorods [3-5], and nanowires [6]. Rod-like TiO2 whiskers are more likely to disperse uniformly than nanoparticles, guaranteeing good inner homogeneity [7]. Besides, a pathway for electrons is readily formed as the rod-shaped materials overlap with each other, thus improving conductivity. However, the high electrical resistance of TiO2 limits its application. One effective approach to reduce the electrical resistance is to dope electroconductive elements onto TiO2 [8-10]. You et al. [11] prepared monodispersed Cu-doped TiO2 nanorods with tunable lengths (2-30 nm), diameters (2-5 nm), and doping concentrations (1.7-3.2%) through a low-temperature hydrolytic route. A surface-functionalized method was employed by Das and De [12] to synthesize TiO2@CdS core-shell nanorods using citric acid as an agent. Nevertheless, the reported works focus only on the photoelectrochemical behavior; the conductivity of TiO2 whiskers coated with a conductive material has rarely been studied. Al-doped ZnO (AZO) is promising as a cost-effective replacement for other transparent conductors, such as In2O3:Sn (ITO), SnO2:Sb (ATO), and SnO2:F (FTO), due to its high thermal stability, low price, and nontoxicity [13,14]. N-type doping of ZnO can be achieved via the addition of Al as the dopant [15]. AZO powders have been successfully prepared by a simple chemical coprecipitation method by Zhang et al. [16], and their photocatalytic and electrical properties were improved compared to those of ZnO. Wu et al. synthesized aluminum and gallium codoped ZnO powders (AGZ), which exhibited fine grain sizes ranging from 14 to 28 nm and a low resistivity of 2.518 × 10^3 Ω·cm [17]. The conductivity is caused by the replacement of Zn2+ with Al3+, which releases excess electrons into the conduction band.
In this study, Al-doped ZnO coatings on TiO2 whiskers were synthesized by a facile hydrothermal method. The microstructure, morphology, and electrical performance of the AZO@TiO2 whiskers were investigated. The properties of polypropylene nonwoven fabrics modified with the AZO@TiO2 whiskers were further studied. TiO2 nanoparticles and K2CO3 with a molar ratio of (Ti)/(K2CO3) = 4 were first mixed in distilled water. After dispersing in an ultrasonic cleaner (100 W) for 30 min, the mixture was heated and blended with a magnetic stirrer. When it became sticky, the dope was transferred into a drying oven. After grinding, the dried powders were treated in a muffle furnace at 1000 ºC for 10 h in order to form K2Ti4O9 whisker bunches and then boiled for 8 h in deionized water to disperse them into single whiskers. After treatment with HCl (6.0 mol/L) for 6 h in a thermostatic water bath, the precursor was annealed at 1000 ºC for 5 h at a heating rate of 5 ºC/min to obtain TiO2 whiskers. Preparation of AZO@TiO2 Whiskers. Firstly, Zn(NO3)2·6H2O was dissolved in distilled water, and Al(NO3)3·9H2O was further added into the Zn(NO3)2 solution to obtain the desired solution A. A suspension was made with the TiO2 whiskers and 100 mL distilled water. After ultrasonic treatment for 30 min, it was poured into a three-necked flask. The acidic solution A was added to the suspension drop by drop at 65 ºC with stirring (300 rpm). The pH value was maintained with a buffer solution during the process. After a 120 min reaction, the white precipitate was filtered, washed with distilled water and dried in a drying oven at 70 ºC. After sintering, the electroconductive AZO@TiO2 whiskers were obtained. Preparation of Antistatic Fabric. After cleaning with detergents and washing with distilled water, all the polypropylene nonwoven fabrics (40 cm × 45 cm) were dried in an oven for 24 h. The synthesized size, consisting of a sticking agent (Guangdong Gongchenrihua Co., Ltd., China) and a thickening agent (Shenzhen Yongzhiqiang Chemical Technology Co., Ltd., China), was mixed with 0.1, 0.2, 0.3, 0.4, and 0.5 wt% electroconductive AZO@TiO2 whiskers. The polypropylene nonwoven fabrics were coated manually with the mixture. After coating, the fabrics were dried in a vacuum oven for 1 h. Characterization. The TiO2 whiskers were fabricated from TiO2 and K2CO3 nanoparticles in a stable reactor, which could be heated up to 1000 ºC at a controllable heating rate of 10 ºC/min by an automatic tube furnace (GSL 1600X, Hefei Kejing Materials Technology, China). The morphology of the electroconductive AZO@TiO2 whiskers was examined by scanning electron microscopy (SEM) (SU-1510, Hitachi, Japan) operated at 10 kV. The samples were gold-sputtered on the surface to ensure electrical conductivity and were observed under vacuum to eliminate the influence of air. XRD analysis was performed using X-ray powder diffraction (XRD) (D8 Advance, Bruker, Germany) with monochromated Cu-Kα radiation (λ = 1.54183 Å) across a 2θ range of 10-80º at a rate of 0.1 s/step. The composition of the electroconductive AZO@TiO2 whiskers was determined by X-ray photoelectron spectroscopy (XPS) (ESCALAB 250Xi, Thermo Fisher Scientific, USA), employing a soft X-ray source of Al-Kα (hν = 1486.6 eV) operating at 150 W.
The surface resistivity of the electroconductive AZO@TiO2 whiskers was measured with a four-probe meter (SZT-2A, Tongchuang Electronic, China). The antistatic property of the nonwoven fabric was measured with a fabric induced electrostatic measuring instrument (YG(B)342D, Darong Textile Instrument, China). Orthogonal Test for Electroconductive AZO@TiO2 Whiskers. Orthogonal design is one of the most effective and time-saving methods for studies involving multiple variables, used to find out which factors (or variables) influence the properties of the target product to the greatest extent [18]. Besides, it provides a powerful way to find an optimal combination from a smaller number of runs, thus reducing the number of experiments associated with the analysis [19,20]. Based on our previous experimental results, the main factors affecting the formation process of electroconductive AZO@TiO2 whiskers are calcination temperature, pH, doping ratio (Al/Zn), and coating ratio (Zn/Ti). In this study, the L9(3^4) orthogonal array of the Taguchi method [21] was adopted to investigate these factors and optimize the synthetic conditions. The factors and levels are listed in Table 1. Table 2 and Figure 1 present the experimental results of the L9(3^4) orthogonal array. The surface resistivity of the resultant AZO@TiO2 whiskers was measured. The mean surface resistivity at level i for factor X (X = A, B, C, or D) is K_i^X = (1/3) Σ ρ_j, where the sum runs over the three runs in which factor X is at level i, and the range R_X = max_i(K_i^X) - min_i(K_i^X) reflects how strongly the levels of factor X influence the experimental result. According to the values of R in Table 2, the influence of the four factors on the electroconductivity of the AZO@TiO2 whiskers decreases in the order doping ratio > pH > calcination temperature > coating ratio. The doping ratio and pH value are the dominant factors. Our previous studies showed that the conductivity was enhanced with increasing coating ratio (Sn/Ti) when TiO2 nanoparticles were encapsulated by ATO [22]. In this study, we continued to optimize the coating ratio to find the lowest surface resistivity, as shown in Table 3. The lowest surface resistivity was reached when the coating ratio was 45 at%. Nevertheless, when the coating ratio increased further, the electroconductivity deteriorated, which indicates that the coating ratio had become saturated. For the doping ratio, the surface resistivity of the AZO@TiO2 whiskers decreases at first and then increases steadily, with the optimum at 2.5 at%. Al3+ readily substitutes at Zn2+ positions to form negatively charged lattice defects, as the radius of Al3+ is smaller than that of Zn2+; therefore, a free electron emerges. The conductivity increases with increasing Al3+ content when the doping ratio is below 2.5 at%. Excess Al forms ZnAl2O4 rather than replacing Zn2+ at lattice sites, which increases the resistivity. According to the reports of Chen et al. [23] and Lu et al. [24], ZnAl2O4 leads to increasing resistivity of AZO.
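A minimal sketch of the range analysis described above for an L9(3^4) array follows; the design matrix is the standard L9 layout, but the nine resistivity values are invented for illustration and are not those of Table 2.

# Range analysis of an L9(3^4) orthogonal array: for each factor, average
# the measured surface resistivity over the runs at each level, then take
# the range R = max(mean) - min(mean); a larger R means a larger influence.
L9 = [  # levels (1..3) of factors A, B, C, D for the nine runs
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
resistivity = [26.0, 19.5, 22.0, 18.0, 21.5, 25.0, 20.0, 24.0, 23.5]  # kOhm*cm, dummy

for f, name in enumerate(["A coating ratio", "B doping ratio", "C pH", "D temperature"]):
    means = []
    for level in (1, 2, 3):
        runs = [resistivity[i] for i, row in enumerate(L9) if row[f] == level]
        means.append(sum(runs) / len(runs))
    print(f"{name}: level means {[round(m, 1) for m in means]}, "
          f"R = {max(means) - min(means):.1f}")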
The pH value also has a big impact on the conductivity of the AZO@TiO2 whiskers, as it provides the appropriate reaction environment. According to reaction equation (2), Zn(OH)4^2- is formed under alkaline conditions and uniformly coated on the surface of the TiO2 whiskers; upon further dehydration, H2O is removed, as shown in reaction equation (3):
Zn2+ + 4OH- → Zn(OH)4^2- (2)
Zn(OH)4^2- → ZnO + H2O + 2OH- (3)
The influences of calcination temperature and coating ratio are relatively small: the resistivity fluctuates between 18 kΩ·cm and 26 kΩ·cm. The optimal conditions were as follows: doping ratio = 2.5 at%, pH = 8, calcination temperature = 400 ºC, and coating ratio = 45 at%. Because this optimal combination was not among the nine groups in Table 2, we carried out an additional experiment under the optimal conditions, and a minimum surface resistivity of 3.6 kΩ·cm was reached. Properties of Electroconductive AZO@TiO2 Whiskers. The electroconductive AZO@TiO2 whiskers are yellow in color and small in size, as revealed in Figure 2(a). Figure 2(b) shows the SEM image of the AZO@TiO2 whiskers. The AZO@TiO2 whiskers are rod-like, with a diameter of 280-340 nm and a length of 1.95-3.75 µm. Furthermore, the AZO coating layer is located on the surface of the TiO2 whiskers, where electrons can spread freely among whiskers when they are connected with each other and readily conduct electricity within the material. XPS analysis was employed to further confirm the chemical composition of the synthetic whiskers. Figure 2(d) illustrates that the synthetic whiskers consist of Zn, C, O, and Ti elements. Among them, the binding energy of the C 1s peak was at 284.8 eV and was used as the reference standard. By scanning over the corresponding XPS spectral regions, the binding energies of the Zn 2p region at around 1000 eV were analyzed. The peak located at 1044.43 eV was assigned to Zn 2p1/2 and the one located at 1021.53 eV to Zn 2p3/2, while the Zn 3d, Zn 3p, and Zn 3s peaks were located at 10.08 eV, 88.08 eV, and 139.08 eV, respectively. These peak positions agree with the reference values, suggesting a normal state of AZO in the AZO@TiO2 whiskers [27,28]. The binding energy of the O 1s peak was at 530.08 eV, confirming the presence of the TiO2 whiskers. The Ti 2p3/2 and Ti 2p1/2 peaks also showed an obvious doublet at binding energies of 458.53 eV and 475.33 eV, respectively, indicating a normal state of TiO2 in the AZO@TiO2 whiskers [5]. Antistatic Fabric Coated with AZO@TiO2 Whiskers.
The coating method is of special interest in the fabrication of nanostructured materials such as nanocomposite coatings carrying a variety of functionalities [29]. A layer of electroconductive AZO@TiO2 whiskers was coated on the surface of polypropylene nonwoven fabrics. Compared with the uncoated polypropylene nonwoven fabrics in Figure 3(a), the electroconductive AZO@TiO2 whiskers coated onto the surface are visible in Figure 3(b). The influence of the content of AZO@TiO2 whiskers on the antistatic property of the fabrics is shown in Figure 3(c)I. With increasing AZO@TiO2 whisker content, the half-life of the induced static voltage of the fabrics decreased sharply at first. When the content is more than 4 wt%, the decreasing trend slows, and the half-life drops to 1.47 s, suggesting that the modified fabrics have good antistatic properties. When the content is 8 wt%, the half-life is 0.401 s (<0.5 s), which indicates that the coated fabrics reach the index of excellent antistatic performance (the red line). Meanwhile, the antistatic property after washing is very important for surface-modified fabrics; if it is poor, the antistatic property does not last long [30]. The changes in the half-life with different amounts of AZO@TiO2 whiskers after washing 20 times, shown in Figure 3(c)II, are similar to those in Figure 3(c)I. In addition, when the content of AZO@TiO2 whiskers reached 10 wt%, the half-life of the surface-modified fabrics after washing 20 times was 0.481 s (<0.5 s), still meeting the requirement for excellent antistatic performance. The application of fiber materials is limited in some particular environments due to their high resistance; the electroconductive AZO@TiO2 whiskers, with their unique rod-shaped structure and excellent conductivity, therefore have wide future applications in functional textiles. Conclusion In summary, rod-like electroconductive AZO@TiO2 whiskers were successfully fabricated by coating Al-doped ZnO onto TiO2 whiskers. The optimal synthetic conditions for the electroconductive AZO@TiO2 whiskers were a doping ratio of 2.5 at%, a pH value of 8, a calcination temperature of 400 ºC, and a coating ratio of 45 at%. The electroconductive AZO@TiO2 whiskers are light yellow with a length-to-diameter ratio of 5.6. The polypropylene nonwoven fabrics modified with AZO@TiO2 whiskers exhibited excellent antistatic performance and laundering durability, indicating wide application of AZO@TiO2 whiskers in antistatic textiles. Figure 1: The trends between factors and levels and surface resistivity. Figure 3: (a) Pure polypropylene nonwoven fabrics; (b) polypropylene nonwoven fabrics coated with AZO@TiO2 whiskers; (c) the relationship between the content of AZO@TiO2 whiskers and the half-life decay period of the static voltage before washing (I) and after washing 20 times (II). 2.1. Materials. TiO2 nanoparticles with an average diameter of 250 ± 35 nm were purchased from Jianghu Chemical Industry Co., Ltd., China. Al(NO3)3·9H2O, Zn(NO3)2·6H2O, and NaOH were purchased from Sinopharm Chemical Reagent Co., Ltd., China. All the chemicals and reagents used in this study were of analytical grade. Preparation of TiO2 Whiskers. The TiO2 whiskers were prepared by a two-step synthesis from TiO2 nanoparticles and K2CO3. Table 1: Factors and levels of orthogonal design. Symbols A, B, C, and D represent the factors coating ratio, doping ratio, pH, and calcination temperature. Symbols 1, 2, and 3 represent the concentration levels of each factor.
The arrangement of columns A, B, C, and D was decided by the orthogonal design for 4 factors × 9 runs; every row of the run number represents one experimental replicate, and every run was carried out twice. Table 3: Coating ratio and surface resistivity.
3,418.8
2016-09-01T00:00:00.000
[ "Materials Science" ]
The serine protease homolog spheroide is involved in sensing of pathogenic Gram-positive bacteria In Drosophila, recognition of pathogens such as Gram-positive bacteria and fungi triggers the activation of proteolytic cascades and the subsequent activation of the Toll pathway. This response can be achieved either by detection of pathogen-associated molecular patterns or by sensing microbial proteolytic activities ("danger signals"). Previous data suggested that certain serine protease homologs (serine protease folds that lack an active catalytic triad) could be involved in the pathway. We generated a null mutant of the serine protease homolog spheroide (sphe). These mutant flies are susceptible to Enterococcus faecalis infection and unable to fully activate the Toll pathway. Sphe is required to activate the Toll pathway after challenge with pathogenic Gram-positive bacteria. Sphe functions in the danger signal pathway, downstream of or at the level of Persephone. Introduction The fruit fly, Drosophila melanogaster, spends its life among decaying matter and rotten fruit, where it coexists with different microorganisms. One of the main characteristics of the immune response of Drosophila melanogaster is the challenge-induced synthesis and secretion of antimicrobial peptides (AMPs). This response involves the activation of two signal transduction cascades, the Toll and IMD pathways [1]. Gram-positive bacteria and fungi activate the Toll pathway, whereas Gram-negative bacteria and Gram-positive bacteria of the genus Bacillus activate the IMD pathway [2]. In both cases, signaling leads to the activation of NF-κB transcription factors and expression of target genes including AMPs. In the late 1980s, Charles Janeway proposed that innate immune mechanisms are essential for the early detection of and defense against infection. These mechanisms discriminate between self and microbial non-self. Janeway proposed the existence of germ-line encoded pattern recognition receptors (PRRs) that recognize conserved signature molecules expressed by pathogens, referred to as pathogen-associated molecular patterns (PAMPs) [3]. A few years later, Polly Matzinger proposed the danger signal hypothesis, which holds that the activation of immune mechanisms is not due to discrimination between self and non-self, but rather to the sensing of danger signals: either recognition of pathogens, or alarm signals produced by microbial activities or by the host's own damaged cells or tissues [4]. The Toll pathway can be activated in two ways: by recognition of PAMPs by circulating pattern recognition receptors (PRRs) in the hemolymph, or by virulence factors, mostly proteases, secreted by the pathogens. This activation triggers proteolytic cascades in the hemolymph. The terminal protease in the cascade cleaves Spaetzle to its activated ligand form, which is able to bind the Toll receptor and activate the intracellular pathway. Depending on the triggering signal, two proteolytic cascades can be distinguished. First, the recognition cascade activated by PAMPs, which includes three serine proteases: ModSP [5], Grass [6,7] and SPE [8]. Second, the danger signal cascade, which can be activated by pathogen-encoded, secreted proteases; such abnormal protease activity indicates that potentially dangerous changes are happening. Danger signaling involves the serine protease Persephone (Psh) [6].
There are over two hundred genes coding for serine proteases (SPs) and serine protease homologs (SPHs) in the Drosophila genome [9]. SPHs maintain the serine protease fold but lack amidase activity, since at least one of the catalytic triad residues is missing [9]. The physiological functions of SPHs are poorly understood, although they have been implicated in different arthropod immune responses, in the horseshoe crab [10], Manduca sexta [11] and Anopheles gambiae [12,13]. We identified the protease Grass as being required for Toll pathway activation downstream of PRRs [6]. Grass was initially identified during an RNAi-based screen of serine proteases and serine protease homologs [7], but its function was incorrectly assigned, probably due to the incomplete knockdown of the gene mediated by RNAi. We decided to verify the function of the other candidates identified in that work and focused on the serine protease homolog Spheroide (Sphe). It has been reported that Sphe is involved in the activation of the Toll pathway: knockdown of sphe by RNAi induced the same phenotype upon immune challenge as knockdown of SPE, implying that Sphe might function as an adaptor or regulator of SPE [7]. Here, we use a null mutant of sphe to demonstrate that Sphe is involved in the activation of the Toll pathway. By using protease-deficient bacteria, we conclude that Sphe senses the virulence factors (proteases) produced by pathogenic Gram-positive bacteria. Furthermore, using flies that are double mutants for both sphe and grass, we show that Sphe is involved in the danger signal cascade. Results Sphe is required for the activation of the immune response after a challenge with Enterococcus faecalis As previously shown for Grass, RNAi-mediated knockdown can give different results compared to null mutants [6]. The reasons for this are not always clear and can be attributed to incomplete knockdown and off-target effects that are not detected by bioinformatics tools. To circumvent the limitations of RNAi-mediated knockdown, we looked for another way of inactivating sphe. In the fly line Mi{ET1}spheroide MB11555, a Minos transposable element is inserted 412 bp downstream of the start codon, in an intronic sequence. This insertion reduced the expression of the sphe transcript compared to wild type (S1A Fig). Flies in which sphe expression was reduced were more susceptible than wild type flies to infection with the Gram-positive bacterium Enterococcus faecalis (S1C Fig). sphe mutants showed reduced activation of the Toll pathway after immune challenge compared to wild type flies, as measured by the expression of the antimicrobial peptide gene drosomycin (drs), which reached 30% of the wild type level (S1B and S1D Fig). We excised the Minos insertion element to confirm that the susceptibility phenotype was due to its insertion in sphe. We obtained a line with a precise excision of the element (sphe Δ11) that expresses wild type sphe mRNA levels. When sphe Δ11 flies were challenged with Enterococcus faecalis, they showed normal expression of drs (S1A and S1B Fig). This excision line is used as a wild type control in subsequent experiments (ctrl) unless otherwise stated. This demonstrates that the Minos insertion was indeed responsible for the susceptibility phenotype. We also obtained two imprecise excisions, sphe Δ49 and sphe Δ104, in which no sphe expression was detected.
The sphe Δ49 deletion includes the entire transcript as well as 974 bp upstream, including 46 bp of the 3'UTR of CG9673, and 307 bp downstream, including the 5'UTR of CG9676 (S1I Fig). The sphe Δ104 deletion starts at the Minos insertion site and includes 835 bp of upstream sequence. At the protein level, the first 74 amino acid residues are missing, which include the signal peptide and 50 amino acid residues of the catalytic domain, including the His residue of the catalytic triad. Both deletions are therefore null alleles of sphe. When sphe null mutant flies were challenged with the pathogenic Gram-positive bacterium Enterococcus faecalis, we observed a significant decrease of drs levels 24 hours after infection compared to wild type flies (drs reaches 45% of the wild type level) (Fig 1A). Furthermore, sphe flies are more susceptible to this immune challenge than wild type flies (Fig 1B). Since both null alleles, sphe Δ49 and sphe Δ104, show the same phenotype, we will describe only the results obtained with sphe Δ104. When sphe null mutant flies were challenged with the non-pathogenic Gram-positive bacterium Micrococcus luteus, or by natural infection with the entomopathogenic fungus Beauveria bassiana, drs expression was comparable to that in wild type flies (Fig 1C and 1E and S1E-S1G Fig). Accordingly, sphe null mutant flies showed the same susceptibility to Beauveria bassiana infection as wild type flies (Fig 1D). To confirm that the phenotype we observed is due to sphe inactivation, we overexpressed Sphe with the UAS-Gal4 system using the ubiquitous Actin5C>Gal4 driver. Sphe-overexpressing flies are viable and show no obvious phenotype, and Sphe overexpression does not induce the Toll pathway as measured by levels of drs mRNA. We therefore expressed Sphe in the sphe mutant background and observed a rescue of the phenotype, as assayed by the induction of drs expression in response to Enterococcus faecalis infection as well as an enhanced survival of the infection (Fig 2A and 2B). Taken together, these data show that Sphe is involved in Toll pathway activation and is required for a full and efficient response to the pathogenic Gram-positive bacterium Enterococcus faecalis. Sphe is involved in the "danger" signal Toll activation cascade Enterococcus faecalis is a pathogenic Gram-positive bacterium that activates the Toll pathway via both proteolytic branches: through recognition of Lys-type peptidoglycan [14], as well as through production of virulence factors that activate the "danger" signal cascade [6]. To assess in which of these branches Sphe functions, we generated double mutants sphe Δ104;grass hrd, in which the recognition cascade is blocked, and challenged these flies with Enterococcus faecalis. The levels of drs expression 24 hours after immune challenge are significantly decreased (drs reaches 20% of the wild type level) compared to both sphe and grass hrd single mutants, to a level comparable to that of spz mutant flies (Fig 3A). This additive effect indicates that Sphe acts in a pathway parallel to Grass, in the "danger" signal Toll activation cascade. Enterococcus faecalis produces several virulence factors, including cytolysin, aggregation substance, the zinc metalloprotease gelatinase GelE, and the serine protease SprE [14,15]. We focused on the secreted extracellular proteases GelE and SprE as potential virulence factors that might be sensed by the danger signal cascade.
To confirm the involvement of sphe in this danger signal sensing, we used protease-deficient strains of Enterococcus faecalis that were mutant for gelE (TX5264), sprE (TX5243), or both gelE and sprE (TX5128) [16,17]. We observed slight but reproducible reductions in drs levels when wild-type flies were challenged with protease-deficient bacteria compared with wild-type bacteria. This decrease is similar to the one observed in psh mutant flies challenged with wild-type bacteria, suggesting that these proteases are required for activating the danger signal pathway. This is confirmed by the absence of an additive effect when psh mutant flies are challenged with protease-deficient bacteria compared with the same infection in wild-type flies (Fig 3B). After immune challenge with protease-deficient bacteria, sphe mutants behaved as wild-type flies: they showed no increased susceptibility (Fig 3C), and drs levels 24 hours after infection matched those of wild-type controls, indicating normal Toll pathway activation (Fig 3D). The same result was obtained with either of the single mutants for gelE or sprE (S2A and S2B Fig), indicating that both of these virulence factors contribute to Toll pathway activation. We confirmed this observation using the non-pathogenic bacterium Enterococcus faecium, which is closely related to E. faecalis but lacks these virulence factors [18]. After immune challenge with E. faecium, sphe mutant flies showed no increased susceptibility (Fig 3E), and drs levels 24 hours after infection matched those of wild-type controls, indicating normal Toll pathway activation (Fig 3F). Taken together, these data demonstrate that Sphe is involved in sensing proteases produced by Enterococcus faecalis for Toll pathway activation.

Sphe is involved in sensing Gram-positive pathogenic bacteria

We also tested another pathogenic Gram-positive bacterium, Staphylococcus aureus. sphe null mutant flies showed the same susceptibility to this infection as psh mutant flies compared with wild-type flies (Fig 4A), but drs levels 24 hours after immune challenge matched those of wild-type controls (Fig 4B). This wild-type-level activation of the Toll pathway could, however, be due to the PRR pathway. To confirm the involvement of Sphe in the "danger" signal cascade, we used sphe^Δ104;grass^hrd double mutant flies, in which the recognition cascade is also blocked. drs levels were significantly decreased in sphe^Δ104;grass^hrd double mutants compared with wild-type flies and with both sphe and grass^hrd single mutant flies (reaching 30% of the wild-type level) (Fig 4B). These data demonstrate that Sphe and Grass act in parallel in the sensing of virulence factors produced by Staphylococcus aureus.

Sphe acts in the Persephone pathway

Overexpression of serine proteases that lead to proteolytic cleavage of Spz constitutively activates the Toll pathway and induces drs expression in the absence of immune challenge. By overexpressing the Toll pathway serine proteases in a sphe mutant background, we assessed the position of Sphe in the cascades. As expected from the phenotype of sphe flies, Toll pathway activation after overexpression of both PGRP-SA and GNBP1, or of SPE, is not blocked in the sphe mutant background (Fig 5A and 5B). However, Toll pathway activation after Psh overexpression is strongly reduced in the sphe mutant background (Fig 5C).
These observations demonstrate that Sphe acts downstream of (or at the same level as) Psh in the danger signal cascade.

Discussion

A previous RNAi screen suggested that Sphe was required for Toll pathway activation [7]. By analyzing the null mutant phenotype of this SPH, we show that Sphe is not required for all kinds of infection, as previously reported, but only after immune challenge with the pathogenic Gram-positive bacteria Enterococcus faecalis and Staphylococcus aureus. Furthermore, we demonstrate that Sphe is a component of the danger signal cascade acting downstream of (or at the same level as) Psh and is involved in sensing the virulence factors (proteases) secreted by these pathogenic bacteria. The Sphe serine protease homolog has a signal peptide and a trypsin-like protease fold. Within the catalytic triad, the active serine residue is replaced by a glycine residue, abolishing proteolytic activity. Since it has no amidase activity, Sphe cannot directly activate a downstream zymogen. Recent studies on serine proteases involved in Toll pathway activation during embryonic development reported that Gastrulation Defective (GD) forms a complex with Snake (Snk) and Easter (Ea) and that this association is required for the activation of Ea by Snk. When GD is itself activated, its NH2-terminal region interacts with sulfated proteins located in the ventral region of the perivitelline membrane. This localization brings Ea and Snk together and promotes the ventrally restricted processing of Ea. Surprisingly, this mediation of Snk activity does not depend on the proteolytic activity of GD and still occurs in GD mutants that lack one of the active catalytic residues [19]. This result establishes that a proteolytically inactive SP can function as a mediator to promote zymogen activation by another SP component of a proteolytic cascade. Serine protease homologs have been implicated in various physiological processes. In 1991, Hogg et al. reported that a mammalian serine protease homolog, protein Z (PZ), a vitamin K-dependent glycoprotein, binds to thrombin, causing a conformational change and its association with phospholipid membrane vesicles. This membrane localization is important during coagulation and clotting, as it partitions thrombin to the site of an injury [20]. Later studies of protein crystal structure demonstrated that PZ functions as a cofactor regulating the proteolytic activity of Factor Xa (FXa) on phospholipid vesicles [21]. This is achieved through interaction between the N-terminal domain of PZ, FXa, and the PZ-dependent protease inhibitor (ZPI), which forms a serpin/protease complex with FXa. This interaction is required for the assembly of a protein complex on the phospholipid vesicle surface, leading to the formation of an effective inhibitory complex containing PZ/FXa/ZPI. One reported serine protease homolog in Drosophila, Masquerade, is necessary during embryonic development to promote and/or stabilize cell-matrix interactions [22]. Homologs of Masquerade in Tenebrio molitor and Holotrichia diomphalia larvae are required for the proteolytic activation of prophenoloxidase, suggesting a function as a cofactor of the active protease [23]. The Manduca sexta serine protease homolog SPH3 is required in the immune response of the moth to infection with the Gram-negative bacterium Photorhabdus luminescens.
SPH3 was initially identified as a target of the repeats-in-toxin (RTX) metalloprotease, protease A (PrtA), which is secreted by the bacterium. Upon infection, SPH3 is upregulated in both the fat body and hemocytes, and RNAi-mediated knockdown of SPH3 increased the susceptibility of moths to infection by Photorhabdus luminescens [11]. Sphe is involved in the sensing of proteases produced by Gram-positive pathogenic bacteria. We can hypothesize that Sphe is recruited into a complex that mediates the activation of a target SP, which could be Persephone, upon infection with a virulent strain. However, Sphe does not sense infection by the entomopathogenic fungus Beauveria bassiana, nor commercial Aspergillus oryzae proteases injected directly into the body cavity, even though both events induce Toll pathway activation through Psh [6] (S1H Fig). Two further SPHs, sphynx1 and sphynx2, were identified together with sphe in the RNAi screen of Kambris et al. as putative components of the Toll pathway proteolytic cascade. Since Sphe functions upon infection with specific pathogens, Sphynx1 and Sphynx2 might likewise be involved in activating the immune response against other pathogens. Further investigation is necessary to elucidate the mechanisms by which Sphe functions and how it contributes to the sensing of these virulence factors. In addition, characterization of other Drosophila SPHs would give insight into their mechanisms of action in other proteolytic cascades.

Materials and Methods

Natural infection with Beauveria bassiana was performed as described [2]. Injection of sublethal doses of commercially available proteases from Aspergillus oryzae (over 500 U/g; P6110; Sigma-Aldrich) was performed as previously described [6]. At least three independent survival experiments were performed. In each experiment and for each genotype, a mix of 20-30 six- to eight-day-old flies (both males and females) was infected with E. faecalis, protease-deficient E. faecalis, or S. aureus, or by natural infection with B. bassiana at 29°C [27]. Survival data were plotted using GraphPad Prism software, and statistical analysis was performed with the log-rank (Mantel-Cox) test.
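The survival comparisons above were done in GraphPad Prism with the log-rank (Mantel-Cox) test. The same analysis can be reproduced with open tooling; below is a minimal sketch using the Python lifelines package, in which the fly counts, death times, and censoring flags are invented placeholders rather than the study's data.

# Minimal sketch of a Kaplan-Meier fit plus log-rank test, equivalent to
# the GraphPad Prism workflow described above. Durations (hours post-
# infection) and event flags below are invented placeholders.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

ctrl_durations = [96, 110, 120, 120, 120, 120]  # wild-type control flies
sphe_durations = [40, 52, 60, 72, 96, 120]      # sphe null mutant flies
ctrl_events = [1, 1, 0, 0, 0, 0]                # 1 = died, 0 = censored
sphe_events = [1, 1, 1, 1, 1, 0]

kmf_ctrl = KaplanMeierFitter().fit(ctrl_durations,
                                   event_observed=ctrl_events, label="ctrl")
kmf_sphe = KaplanMeierFitter().fit(sphe_durations,
                                   event_observed=sphe_events, label="sphe")
print(kmf_ctrl.median_survival_time_, kmf_sphe.median_survival_time_)

result = logrank_test(ctrl_durations, sphe_durations,
                      event_observed_A=ctrl_events,
                      event_observed_B=sphe_events)
print(f"log-rank p = {result.p_value:.3f}")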
Gut Microbiota Is Associated with Onset and Severity of Type 1 Diabetes in Nonobese Diabetic Mice Treated with Anti–PD-1

Abstract: Our bodies are home to individual-specific microbial ecosystems that have recently been found to be modified by cancer immunotherapies. The interaction between the gut microbiome and islet autoimmunity leading to type 1 diabetes (T1D) is well described and highlights the microbiome's contribution to the onset and development of T1D in animals and humans. As cancer immunotherapies induce gut microbiome perturbations and immune-mediated adverse events in susceptible patients, we hypothesized that NOD mice can be used as a predictive tool to investigate the effects of anti–PD-1 treatment on the onset and severity of T1D and on how the microbiota influences immunopathology. In this longitudinal study, we showed that anti–PD-1 accelerated T1D onset, increased the frequency of glutamic acid decarboxylase–reactive T cells in the spleen, and precipitated the destruction of β cells, triggering high glucose levels and pancreatic islet reduction. Anti–PD-1 treatment also resulted in temporal microbiota changes and the lower diversity characteristic of T1D. Finally, we identified known insulin resistance–regulating bacteria that were negatively correlated with glucose levels, indicating that anti–PD-1 treatment impacts the early gut microbiota composition. Moreover, an increase in mucin-degrading Akkermansia muciniphila points to alterations of barrier function and immune system activation. These results highlight the ability of the microbiota to readily respond to therapy-triggered pathophysiological changes, as rescuers (Bacteroides acidifaciens and Parabacteroides goldsteinii) or potential exacerbators (A. muciniphila). Microbiome-modulating interventions may thus be promising mitigation strategies for immunotherapies with a high risk of immune-mediated adverse events.
INTRODUCTION

As the use of cancer immunotherapy (CIT) has become common, immune-mediated adverse events (imAEs) such as chronic inflammation, autoimmunity, and hypersensitivity reactions have emerged as its Achilles heel. Checkpoint receptors, such as PD-1 and CTLA-4, are key players in maintaining self-tolerance by controlling the duration and amplitude of physiological immune responses, thus limiting collateral damage and preventing autoimmunity. Blocking these tolerance-inducing pathways is consequently linked to the occurrence of imAEs, which may lead to CIT-induced autoimmune diseases (1-4). As new CIT therapies come to the forefront, it is critical to understand the contribution of different factors to the onset and severity of imAEs in an attempt to optimize the therapeutic index for each patient and avoid premature cessation of therapy. The NOD mouse has proven to be a valuable model for studying autoimmune diabetes, which occurs spontaneously, involves multiple genetic mutations (polygenic), and engages both innate and acquired immunity. The pathophysiological mechanisms leading to type 1 diabetes (T1D) in NOD mice are well characterized and offer robust translation to human disease (5). However, insulitis does not uniformly progress to disease, and the mechanisms regulating the initiation and severity of insulitis up to overt diabetes are less well understood. Environmental factors (e.g., diet, lifestyle, microbiota) and genetics are believed to play a crucial role here. As approximately 70-80% of immune cells are located in the gut, it is crucial to consider that trillions of microbes have the potential to prime immune cells via multiple pattern recognition receptors (PRRs), such as TLRs or nucleotide-binding and oligomerization domain-like receptors (NLRs) (6).

On the one hand, it has been suggested that the microbiota modulates T1D onset and course in NOD mice but is not the cause of disease, as germ-free NOD mice show incidences of insulitis similar to those of mice kept under specific pathogen-free (SPF) conditions (7). Furthermore, MyD88-/- NOD mice under SPF conditions were protected from T1D (8); however, this was not seen under germ-free conditions. Additionally, TLR2-/- NOD mice were shown to be resistant (9) and TLR9-/- NOD mice protected against T1D (10), whereas TLR4-/- NOD mice showed accelerated autoimmune diabetes under SPF conditions (11). On the other hand, PD-1 and CTLA-4 treatments result in early onset of diabetes in NOD mice, which is in line with the mode of action of these molecules and is also observed clinically in humans (12, 13). The effects of checkpoint inhibitors (PD-1/PD-L1 and CTLA-4) on the microbiota have also been described in multiple reports and cancer mouse models (14-16).

In this study, we applied the protocol from Ansari et al.
(17), complemented with gut microbiota evaluation, to further investigate its influence on the incidence, onset, and severity of diabetes developed during early life by NOD mice treated with a murine anti–PD-1 Ab. We found that anti–PD-1 treatment and T1D altered microbiota composition and diversity, and we identified insulin-regulating bacteria that may have reduced glucose levels over time. To our knowledge, this is the first study to investigate how anti–PD-1 treatment shapes the microbiome longitudinally in an autoimmune mouse model. We evaluated this predictive model for its applicability to future CITs in determining potential safety risks based on early safety biomarkers within the microbiota and host immune responses.

MATERIALS AND METHODS

Animals

NOD (NOD/ShiLtJ), BALB/c (BALB/cAnNCrl), and NOD-SCID (NOD.CB17-Prkdc scid/NCrCrl) female mice were purchased from Charles River Laboratories (Sulzfeld, Germany). Animals were aged approximately 5 wk at the start of dosing. They were kept in an air-conditioned room under periodic bacteriologic control at a temperature of 22 ± 2°C, with 40-80% humidity, a 12-h light/12-h dark cycle, and background music coordinated with light hours. Mice were housed in groups of two or three in environmentally enriched Makrolon type III boxes with autoclaved sawdust bedding. Additional filter tops and HEPA-filter changing stations were used for NOD-SCID mice. A pelleted standard rodent diet and tap water were supplied ad libitum. Animals were randomly assigned to dose groups based on body weight using the data collection software Provantis v9.4 (Instem Life Sciences, Stone, U.K.). This study was not performed under good laboratory practice. The animals were kept in a facility accredited by the American Association for Accreditation of Laboratory Animal Care International and treated in accordance with the guidelines of the Swiss Animal Welfare Act. All procedures were in accordance with the respective Swiss regulations and approved by the Cantonal Ethical Committee for Animal Research.

Study design

A 2-wk toxicity study was conducted in 55 mice divided into five groups. Three groups of 15 NOD mice each were either 1) untreated (naive), to evaluate the onset and worsening of T1D in the model itself as a reference; 2) treated with an unspecific isotype IgG as vehicle (InVivoMAb rat IgG2a isotype control, anti-trinitrophenol; Bio X Cell, Lebanon, NH); or 3) treated with the InVivoMAb anti-mouse PD-1 (CD279) Ab from Bio X Cell (Lebanon, NH). Five BALB/c mice were added as a fourth group to assess microbiome effects in animals of the same age, housed in the same room with the same food, compared with the genetically modified animals, and five NOD-SCID mice were allocated to the fifth group, as these immunosuppressed mice do not develop T1D. The test items were administered i.v.
in the tail vein at 500 µg/administration on day 1 and 250 µg/administration every other day, for five administrations in total. The dosing period was followed by a treatment-free observation phase of 3-4 wk to assess the onset and progression of diabetes. Both dose and dose frequency were selected on the basis of a combination of in vitro (T cell activation assay and cytokine release) and in vivo data indicating the duration of response and exposure required for efficacy in mice. Clinical signs, body weight development, and food consumption were monitored throughout the study. Blood was taken before, during, and after the treatment phase. Lymph nodes, spleen, and pancreas were taken on days 12, 17, and 38. Fecal samples were collected before treatment, during treatment, and three times after treatment from all groups of the study.

Monitoring diabetes via glucose measurement

To monitor the onset of insulin-dependent diabetes mellitus, daily blood sampling for glucose measurement was performed by puncture of the tail tip without sedation (groups 1, 2, and 3 only). A sample volume of 0.6 µl is required for analysis with Accu-Chek Advantage glucometers (Roche Diagnostics, Evaluation Report: Accu-Chek Aviva Test Strips 2013). Onset of type 1 diabetes was defined as a random blood glucose reading of 250 mg/dl (14 mmol/l) or greater on three consecutive occasions. Blood glucose levels were measured once prior to the start of the study, daily for the first 11 d during treatment, and two to three times per week during the treatment-free posttreatment observation period.

A terminal nonfasted blood sample for laboratory investigations was drawn sublingually under light isoflurane anesthesia from all animals shortly before necropsy, without overnight fasting. Animals were sacrificed immediately after blood collection, without waking up. Approximately 50 µl of blood sampled into EDTA tubes was used for measuring hematologic parameters. Serum samples from at least 500 µl of blood were used for clinical chemistry assays and quantification of electrolytes.

Bioanalytics

Blood samples were obtained from group 3 animals at the end of the administration phase (day 11; three animals approximately 10 min before the last dose and three animals approximately 10 min after the last dose) to confirm exposure. Animals were not fasted before bleeding. Approximately 50 µl of blood was collected without anesthesia from the tail vein into K3-EDTA tubes. After centrifugation at approximately 1500 × g for approximately 10 min at 4°C, the resulting plasma was used in a sandwich ELISA quantifying the rat anti–PD-1 IgG. Owing to limitations in sample numbers, no pharmacokinetic evaluation was performed, and only exposure confirmation for the available sampling time points is reported in this study.

Immunopathology: tetramer staining and ELISPOT

ELISPOT for glutamic acid decarboxylase (GAD)–specific T cells and tetramer FACS of insulin-specific splenocytes were performed to quantify autoreactive T cells upon anti–PD-1 treatment. Spleens and draining lymph nodes from 10 animals per group in groups 1, 2, and 3 were collected in individual tubes on days 17 and 24, and from the five animals in group 5 at study termination.
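The onset rule above (a reading of 250 mg/dl or greater on three consecutive occasions) is simple enough to state as code. A minimal sketch follows; the reading series is an invented illustration, not study data.

# Minimal sketch of the study's T1D onset rule: blood glucose of
# >= 250 mg/dl (~14 mmol/l) on three consecutive measurement occasions.

THRESHOLD_MG_DL = 250
CONSECUTIVE = 3

def t1d_onset_index(glucose_mg_dl):
    """Return the index of the measurement at which onset is called,
    or None if the criterion is never met."""
    run = 0
    for i, value in enumerate(glucose_mg_dl):
        run = run + 1 if value >= THRESHOLD_MG_DL else 0
        if run == CONSECUTIVE:
            return i
    return None

readings = [138, 145, 210, 260, 180, 255, 270, 301, 298]  # hypothetical
idx = t1d_onset_index(readings)
print(f"onset called at measurement {idx}" if idx is not None
      else "no onset")  # -> onset called at measurement 7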
ELISPOT analysis was used to quantify the frequencies of IFN-γ–producing GAD-reactive T cells and to assess the activation state of T cells in mice receiving anti–PD-1 or vehicle. In brief, spleens and draining lymph nodes from treated and control mice were processed into single-cell suspensions. ELISPOT plates (Mabtech, no. 3321-4APW-2) were blocked with PBS/10% FCS for 1 h at room temperature and then washed with PBS. Cells were plated (5 × 10^4/well) in complete RPMI 1640 medium containing 10% FCS (Sigma-Aldrich), 2 mM L-glutamine, 100 U/ml penicillin/streptomycin (BioWhittaker), and 50 µM 2-ME (Sigma-Aldrich). Control wells contained unstimulated cells or medium alone, whereas GAD Ag (10 µg/ml; Abcam, no. 206646) or Con A (5 µg/ml; Sigma-Aldrich) was added to stimulated wells. After a 48-h incubation at 37°C and 5% CO2, the plates were washed five times. Biotinylated detection mAb (R4-6A2–biotin, 1 µg/ml) was added, and the plates were left for an additional 2-h incubation at room temperature. Streptavidin–alkaline phosphatase (1:1000) was then added for 1 h at room temperature, followed by substrate solution (BCIP/NBT-plus) until distinct spots emerged (10-30 min). Color development was stopped by washing extensively in tap water, and the plates were left to dry. The resulting spots were counted on a computer-assisted ELISPOT image analyzer (Cellular Technology).

Light sheet microscopy

Tissue clearing. One naive mouse (blood glucose 5.8 mmol/l; 104 mg/dl) and one anti–PD-1 surrogate–treated mouse (blood glucose 30.7 mmol/l; >286.5 mg/dl) designated for light sheet microscopy were sacrificed on day 37 of the study. The entire pancreas (splenic and duodenal parts labeled) of both mice was dissected at necropsy and fixed for 24 h in 4% PFA at 4°C. The pancreas was cleared using the CUBIC (clear, unobstructed brain/body imaging cocktails and computational analysis) method. Briefly, the pancreas was incubated in CUBIC-1 solution on a shaker (90 rpm) at 37°C. CUBIC-1 solution was changed after approximately 12 h and then over 1-4 d, until the pancreas was completely cleared. The pancreas was then washed in PBS three to five times and placed in PBS overnight on a shaker (90 rpm) at 37°C. The pancreas was incubated in CUBIC-2 solution on a shaker (90 rpm) at 37°C for 2 d, with daily changes of CUBIC-2 solution, and then kept in CUBIC-2 solution at room temperature in the dark prior to immunostaining.

Immunohistochemistry. The cleared pancreas was immunostained for chromogranin A. All steps were performed in closed, fully filled tubes to prevent oxidation. The pancreas was incubated in permeabilization solution at 37°C for a maximum of 2 d, followed by incubation in blocking solution at 37°C for a maximum of 2 d. The pancreas was then incubated in primary Ab (1:100 polyclonal rabbit anti–chromogranin A [Novus Biologicals, no. NB120-15160]; 1:250 with DAPI [1:100] in PTwH [PBS/0.2% Tween 20 with 10 µg/ml heparin], 5% goat serum/3% DMSO) at 37°C for 4 d, followed by washing in PTwH four to five times until the next day. The pancreas was then incubated in secondary Ab (Alexa Fluor 488 donkey anti-rabbit [Invitrogen, no. A-21206] in PTwH/3% donkey serum) at 37°C for 4 d, followed by washing in PTwH four to five times until the next day. The pancreas was imaged in fresh CUBIC-2 solution using a LaVision ultramicroscope, and acquired images were further processed with arivis software.
DNA extraction, library preparation, and 16S rRNA gene sequencing

DNA was extracted from mouse fecal pellets with the QIAamp Fast DNA stool mini kit (51604) using a modified extraction method. Fecal pellets were weighed (average weight 0.058 ± 0.027 g [SD]) and added to a sterile 2-ml microcentrifuge tube containing one 3.5-mm glass bead, 0.1 ml of 1.0-mm zirconia/silica beads, and 0.1 ml of 0.1-mm glass beads (BioSpec, Bartlesville, OK). Then, 1 ml of Inhibitex buffer was added to the fecal samples, which were disrupted by bead beating in a Mini-Beadbeater-16 (BioSpec) at maximum speed (3450 strokes/min). Samples were disrupted for 1 min, incubated on ice for 1 min, disrupted a second time for 30 s, and placed back on ice before incubation at 95°C for 7 min. Samples were vortexed for 15 s and centrifuged at 13,300 rpm for 1 min to pellet the stool particles, after which 400 µl of the supernatant was added to a sterile 1.5-ml microcentrifuge tube containing 20 µl of proteinase K. Then, 400 µl of buffer AL was added to the tube, and the samples were vortexed for 15 s before incubation at 70°C for 10 min, after which 400 µl of ethanol was added to the lysate and thoroughly mixed by gentle pipetting. Lysate (650 µl) was added to the QIAamp spin column and centrifuged for 1 min at 13,300 rpm, and the filtrate was discarded. The remainder of the lysate was added to the QIAamp spin column, which was centrifuged again under the same conditions before continuing with the DNA extraction as per the kit manufacturer's instructions up to the DNA elution step. The DNA was eluted by applying 50 µl of buffer ATE to the QIAamp spin column membrane and incubating at room temperature for 3 min before centrifugation to elute the DNA (1 min at 13,300 rpm). This eluate was then applied to the QIAamp spin column membrane a second time and incubated and centrifuged as before to finally elute the DNA.

DNA quality and concentration were measured using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA), and the DNA was subsequently stored at -80°C prior to PCR amplification. Library preparation for 16S rRNA gene amplicon sequencing was performed following the Illumina (San Diego, CA) recommendations with some modifications. Briefly, aliquots of 30 ng of extracted genomic DNA were subjected to PCR amplification of the V3-V4 hypervariable region (341F, 5'-CCTACGGGNGGCWGCAG-3', and 805R, 5'-GACTACHVGGGTATCTAATCC-3') of the 16S rRNA gene in a total PCR reaction volume of 30 µl. The PCR primers were selected from Klindworth et al. (18). Illumina adapters containing overhang nucleotide sequences were added to the gene-specific sequences, which were used at a concentration of 0.2 µM. PCR amplification with Phusion High-Fidelity DNA polymerase (Thermo Scientific) was performed on a SimpliAmp thermal cycler (Applied Biosystems) under the following conditions: 98°C for 30 s; 25 cycles of 98°C for 10 s, 55°C for 15 s, and 72°C for 20 s; and a final cycle of 72°C for 5 min before cooling to 4°C. Pre- and post-PCR steps were performed in separate, designated areas of the building. Successful generation of the PCR amplicon was verified by observing the PCR product band in 1% agarose gels.
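The V3-V4 primers quoted above contain IUPAC degenerate bases (N, W, H, V), so each primer is really a small pool of concrete sequences. The short sketch below expands them; nothing here is specific to the study beyond the two primer strings themselves.

# Expand the degenerate 341F/805R 16S primers quoted above into their
# concrete sequence pools. Only standard IUPAC codes are involved.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "N": "ACGT", "W": "AT", "H": "ACT", "V": "ACG"}

def expand(primer):
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in primer))]

fwd_341 = expand("CCTACGGGNGGCWGCAG")       # 4 (N) x 2 (W) = 8 variants
rev_805 = expand("GACTACHVGGGTATCTAATCC")   # 3 (H) x 3 (V) = 9 variants
print(len(fwd_341), len(rev_805))           # -> 8 9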
After PCR, amplified products were purified using Agencourt AMPure XP magnetic beads (Beckman Coulter, Brea, CA) and eluted in 52.5 µl of EB buffer (Qiagen). After this initial purification, 5 µl of the purified amplicon PCR product was amplified in a second PCR to add the Illumina barcode sequences to the 16S gene-specific sequences. This was performed on a 2720 thermal cycler (Applied Biosystems, Foster City, CA) employing Nextera XT v2 index primer kits (Illumina). The PCR conditions were 98°C for 30 s; eight cycles of 98°C for 10 s, 55°C for 15 s, and 72°C for 20 s; and a final cycle of 72°C for 5 min before cooling to 4°C. A second purification step with Agencourt AMPure XP magnetic beads was carried out after the Nextera PCR. These 16S V3-V4 rRNA gene amplicons containing the Nextera indexes were finally eluted in 27.5 µl of EB buffer, and the concentration of each sample was measured using a Qubit 3 fluorometer (Invitrogen) with the Qubit dsDNA HS assay kit. Pooled libraries were created by combining 40 ng of each sample. Negative PCR controls and extraction kit negative controls were processed alongside study samples and were also sequenced. A sample of the final library pool was sequenced at the Teagasc next-generation sequencing facility (Teagasc Moorepark, Fermoy, Ireland) on an Illumina MiSeq, generating 2 × 300-bp paired-end reads.

Potential reagent contaminants were identified and removed using a frequency-based algorithm implemented in the decontam package (26). In total, 7285 unique RSVs were identified, of which 215 were found to be potential contaminants and were removed from further analysis. Samples with <5000 reads (n = 1) were excluded from further analysis. Next, data filtering was carried out to include RSVs present in >5% of samples with a percent abundance of >0.01. Except in the case of α diversity, this filtered RSV count table was used for all downstream bioinformatics analyses.

We excluded samples from animal number 359 (group 3) because we found the pathogenic genus Shigella already present in its baseline samples, pointing to early colonization. Although this genus did not have a strong impact on diabetes or anti–PD-1 treatment, we excluded this animal from our analyses to avoid introducing potential bias.

Imputed metagenomics

To investigate the functional potential of the murine microbiota after anti–PD-1 treatment, we inferred the abundance of Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (27) in the samples, based on the 16S rRNA RSV count table and representative sequences for each RSV, using Piphillin2 (28). Briefly, this tool predicts functional gene content from metataxonomic data with high accuracy by leveraging the most up-to-date version of the KEGG reference database.

Statistical analysis

All statistical analyses and graphical representations were performed in R using the CoDaSeq (29), zCompositions (30), vegan (31), ggplot2 (32), and ggpubr (33) packages. To account for the complex compositional nature of microbiome data, we used the CoDaSeq pipeline. In brief, we imputed zeros using the count zero multiplicative replacement method (cmultRepl, method = "CZM") implemented in the zCompositions package and applied a centered log-ratio (CLR) transformation of the data using the codaSeq.clr function from the CoDaSeq package.
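The compositional workflow just described (zero imputation, then a centered log-ratio transform, with Euclidean distance on CLR values equal to the Aitchison distance used later for PERMANOVA) is compact enough to sketch. The snippet below is a minimal Python illustration in which a simple pseudocount stands in for the paper's count-zero multiplicative (CZM) replacement, and the count matrix is invented.

# Minimal sketch of the compositional workflow described above: zero
# handling, centered log-ratio (CLR) transform, and Aitchison distance
# (Euclidean distance between CLR vectors). The paper used cmultRepl
# (method = "CZM") in R; a simple pseudocount stands in for it here.
import numpy as np

counts = np.array([[120, 30, 0, 850],    # invented RSV counts,
                   [200, 10, 5, 785],    # one row per sample
                   [90, 55, 2, 853]], dtype=float)

def clr(x, pseudocount=0.5):
    x = x + pseudocount                       # crude stand-in for CZM
    comp = x / x.sum(axis=1, keepdims=True)   # close to proportions
    logc = np.log(comp)
    return logc - logc.mean(axis=1, keepdims=True)

z = clr(counts)
d01 = np.linalg.norm(z[0] - z[1])   # Aitchison distance, samples 0 vs 1
print(z.round(2), d01.round(3), sep="\n")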
α diversity was determined using Chao1, the Shannon index, and the inverse Simpson index. Differences in α diversity within/between groups and between termination rounds (in group 3) were determined using a Kruskal-Wallis test. For principal component analysis, we performed a CLR transformation implemented in the CoDaSeq R package and calculated Euclidean distances, which correspond to Aitchison distances. Permutational multivariate ANOVA (PERMANOVA) (34) was performed on the Aitchison distances with 9999 permutations to test the significance of different clinical variables for mouse gut microbiota composition.

To find associations of microbial abundances with different clinical variables, we performed multivariate analysis by linear models (MaAsLin2 v1.1.1) in R (35). MaAsLin2 fits boosted, additive general linear models between metadata and microbial abundance; boosting of metadata and model selection were performed per taxon. Microbial abundances were CLR transformed at each taxonomic level to account for the compositional nature of the data. To identify microbial taxa and pathways associated with the different groups, we carried out 1) between-group analysis considering time point as a fixed effect and animal ID as a random effect, and 2) between-group analyses per individual time point. Similarly, to identify microbial taxa and pathways associated with the different termination rounds in group 3, we carried out 1) between-termination-round analysis considering time point as a fixed effect and animal ID as a random effect, and 2) between-termination-round analysis per individual time point. Multiple-testing correction was carried out via false discovery rate (FDR) estimation, and associations were considered significant below an FDR threshold of 0.10.

Next, we evaluated groupwise differences in the correlation between glucose level and CLR-transformed microbial abundance using repeated-measures correlation (rmcorr) analysis (36). With this approach, we accounted for compositional effects via CLR and for repeated measurements via rmcorr (CLR + rmcorr). Multiple-testing correction was carried out via the Benjamini-Hochberg method, and correlations were considered statistically significant below an adjusted p value cutoff of 0.10 (FDR < 0.10).

RESULTS

To evaluate the NOD model as a potential predictor of CIT-associated risks for autoimmunity, we treated NOD mice with a murine anti–PD-1 Ab as previously described (17) and integrated a longitudinal microbiota analysis as a novel component (Fig. 1A). We included parameters such as glucose levels and IFN-γ–producing GAD-specific splenocytes to monitor the incidence, onset, and severity of autoimmune diabetes at three time points (see Materials and Methods for details) in three different mouse strains. We also applied light sheet microscopy to demonstrate the heterogeneity of islets in the pancreas and collected fecal samples to evaluate the changes in the microbiota after treatment (group 3) and between different mouse strains (NOD/ShiLtJ, groups 1-3; wild-type BALB/c, group 4; and NOD-SCID, group 5).

Incidence of insulitis onset and glucose levels

The onset of T1D was diagnosed based on three consecutive elevated glucose levels (>14 mmol/l), after which animals were rapidly terminated for further analysis. Acceleration of T1D onset was observed only in group 3, with a total of 11 of 15 mice ultimately becoming diabetic (Fig. 1B).
For each termination round, an equal number of isotype-treated and naive mice were also sacrificed, except on day 17, when only two naive mice were euthanized. The first termination round of animals with high glucose levels occurred on day 17, for one third of the group 3 mice. The second termination round included five mice from group 3 on day 24, and the third termination round occurred on day 38, in which only one animal (no. 365) exhibited high glucose levels; the remaining animals (nos. 355, 357, 353, and 363) stayed nondiabetic until the end of the study.

GAD-reactive T cells and IFN-γ production

Detection of self-reactive T cells in lymphoid structures in the vicinity of the pancreas is a hallmark of the autoimmune process taking place in the prediabetic stages of T1D. Therefore, to better appreciate differences between treatment groups, we quantified the frequencies of GAD-reactive T cells and insulin-specific CD8+ T cells in the spleen (and pancreas-draining lymph nodes) of termination round 1 and 2 animals using IFN-γ ELISPOT and IGRP tetramer staining, respectively (Supplemental Fig. 1). Splenocytes from all three animal groups showed a comparable ability to produce high IFN-γ in response to Con A, a mitogenic stimulus. In contrast, unstimulated cells did not exhibit any detectable signal. Interestingly, anti–PD-1 treatment (group 3) significantly increased GAD-specific T cell numbers in the spleen and pancreas-draining lymph nodes relative to group 1 and group 2 mice (p < 0.05, Supplemental Fig. 1A). Within the anti–PD-1–treated group, weaker responses were detected after GAD65 than after Con A stimulation, likely reflecting the lower frequency of the responding population due to Ag-specificity restriction compared with lectin-reactive cells. Using tetramer technology to follow another autoantigenic population, insulin-specific CD8 T cell staining from spleen showed no significant differences between group 1 and group 3 (Supplemental Fig. 1B).

Three-dimensional pancreas reconstruction

After performing standard histological evaluation by H&E staining, we observed immune cell infiltration in all three NOD groups by day 38. However, we did not observe marked differences between the three NOD groups and asked whether infiltration of islets could be further evaluated by light sheet microscopy. This technology provides a three-dimensional reconstruction of the pancreas, allowing a better understanding of the distribution and volume of the islets and overcoming biases that might be introduced by performing histology on a slide-by-slide basis. Supplemental Fig. 2 illustrates islet staining of the pancreas of one animal each from group 1 and group 3. Owing to the low animal number, no statistics or further conclusions were possible; however, during processing we observed that tissue consistency differed between the control (Supplemental Fig. 2A) and anti–PD-1–treated (Supplemental Fig. 2B) animals, with the anti–PD-1–treated pancreas being soft and fragile compared with the solid pancreas of the control mouse. In addition, careful inspection of islet distribution and staining intensity pointed toward reduced islet number and staining intensity in the anti–PD-1–treated animal. This assessment is for illustration only, and further analysis with a larger sample size would be required.
Impact of microbiota on insulitis onset and progression

We longitudinally investigated the bacterial composition of the five groups of mice to understand the impact of insulitis, anti–PD-1 treatment, and murine genetic background on microbiota composition and diversity. We examined how the mouse gut microbiota composition changes in relation to genetic background, time point, treatment, and termination round. We found that the genetic background of the mouse accounted for significant between-sample microbiota variance (PERMANOVA: R² = 19%, p = 0.0001; Fig. 3A). On comparison of groups 1-3, we found a weaker but still significant shift in microbial composition (PERMANOVA: R² = 1.8%, p = 0.001; Fig. 3B), suggesting microbiota changes associated with anti–PD-1 treatment. Next, we examined the longitudinal development of the microbiota from NOD/ShiLtJ mice (groups 1-3) separately at each time point (Fig. 3C). As expected, we found no significant microbiota-associated changes between the three groups at the baseline time point. However, the microbiota composition differed significantly between group 1 and group 3 mice at day 17, and between group 1 and group 2 and between group 2 and group 3 mice at day 38 (posttreatment time points). We subsequently investigated microbiota-level changes with respect to the termination round of animals in group 3 (Fig. 3D). The microbiota community composition shifted significantly between termination rounds 1 and 3 and between rounds 2 and 3, indicating that animals terminated early because of elevated glucose levels had a microbiota composition different from that of animals terminated at a later stage.

We further measured within-sample α diversity using observed species (a community richness estimator) and the Shannon index (an estimator of community richness and evenness). We did not, however, find statistically significant changes between any groups at any time point compared with baseline samples (Fig. 4A). A comparison of α diversity between termination rounds in group 3 animals showed that species richness on day 11 was significantly lower than at baseline for termination round 1 (Fig. 4B; p = 0.05), indicating that animals treated with anti–PD-1 had lower community richness than untreated animals. For the Shannon index, we observed the same trend of decreasing diversity for termination round 1 animals at day 11, but statistical significance was not reached (p = 0.15).

We next identified 20 RSVs that differed significantly (q < 0.1) between animals from groups 1-3 (NOD/ShiLtJ; Fig. 5A). In particular, we identified a significantly reduced (q = 0.013) abundance of an RSV belonging to Bacteroides acidifaciens (Seq_0000004) in group 3 (anti–PD-1) compared with groups 1 and 2; this species has previously been reported to improve glucose intolerance and insulin resistance. Four RSVs belonging to Lachnospiraceae (Seq_0000166, Seq_0000277, Seq_0000360, Seq_0000394), previously associated with T1D pathogenesis (37), were significantly increased (q < 0.1) in group 3 compared with groups 1 and 2. Four RSVs from the Muribaculaceae family of the phylum Bacteroidetes (Seq_0000015, Seq_0000097, Seq_0000133, Seq_0000211) were significantly decreased in group 3 compared with group 1.
Parabacteroides goldsteinii has been associated with reduced gut inflammation and insulin resistance in type 2 diabetes (38), and an RSV from this species (Seq_0000123) was significantly decreased in group 3 compared with group 1. In addition, we found a significantly decreased abundance of an RSV belonging to Akkermansia muciniphila (Seq_0000152) in group 1 compared with group 3, suggesting its association with anti–PD-1 responses. Statistical comparison of RSV abundance between termination rounds in group 3 animals revealed three Lachnospiraceae RSVs (Seq_0000038, Seq_0000113, Seq_0000115) and one Alistipes RSV (Seq_0000008) as significantly (q < 0.1) different between termination rounds 1 and 3 (Fig. 5B). We also found a significantly (q = 0.079) higher abundance of the phylum Proteobacteria in termination round 1 compared with round 3 (data not shown). Pathway analysis using imputed metagenomics based on 16S rRNA sequences revealed tryptophan metabolism as significantly decreased (q = 0.045) in group 3 compared with group 1 (Supplemental Fig. 3); a similar decrease has been described in T1D patients and mouse models (39).

Finally, we correlated glucose levels with the abundance of microbial taxa over time using repeated-measures correlation analysis. Supplemental Fig. 4A shows taxa significantly associated (q < 0.1) with glucose level in animals from groups 1-3. At the phylum level, we found a significant positive correlation (r = 0.38, q = 0.057) between the abundance of Proteobacteria and high glucose levels in group 3; this was not the case for group 1 (r = 0.12, q = 0.85) or group 2 (r = 0.07, q = 0.82). On correlating the abundance of RSVs with glucose level measurements, we found a clear trend toward a negative association of B. acidifaciens (r = -0.44, q = 0.14) and P. goldsteinii (r = -0.41, q = 0.14) with elevated glucose levels in group 3 animals. When correlating RSV abundance with glucose level in group 3 animals according to their termination rounds, we found a clear trend toward negative associations of RSVs belonging to B. acidifaciens and P. goldsteinii and a clear trend toward a positive association of the A. muciniphila RSV in termination round 1 (Supplemental Fig. 4B).
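The repeated-measures correlations above pair longitudinal glucose readings with CLR-transformed abundances within each animal. A minimal sketch using the Python pingouin package follows (the study used the R rmcorr package; the column names and toy values here are hypothetical).

# Minimal sketch of a repeated-measures correlation between glucose and a
# CLR-transformed taxon abundance, per animal. The paper used the R rmcorr
# package; pingouin provides the same statistic in Python. The toy data
# frame below is hypothetical, not study data.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "animal_id": ["m1"] * 4 + ["m2"] * 4 + ["m3"] * 4,
    "glucose_mmol_l": [6, 8, 12, 16, 5, 7, 11, 15, 6, 9, 13, 18],
    "clr_b_acidifaciens": [2.1, 1.6, 0.9, 0.2,
                           2.4, 1.9, 1.1, 0.5,
                           2.0, 1.4, 0.8, 0.1],
})

res = pg.rm_corr(data=df, x="clr_b_acidifaciens",
                 y="glucose_mmol_l", subject="animal_id")
print(res[["r", "dof", "pval"]])  # expect a strongly negative r here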
DISCUSSION

Our study confirms an accelerated onset of T1D, increased GAD-reactive T cell frequency in the spleen, precipitated destruction of β cells, triggering of high glucose levels, and pancreatic islet reduction in susceptible NOD/ShiLtJ mice treated with a murine anti–PD-1 mAb. The commensal gut microbiota regulates the homeostasis and maturation of immune cells in the intestinal lamina propria and in the periphery. Mammals have coevolved with a specific consortium of gut microbes capable of mounting proinflammatory and anti-inflammatory immune responses (40). Segmented filamentous bacteria are powerful inducers of Th17 and Th1 cells, whereas Clostridia species and Bacteroides fragilis are regulatory T cell stimulators (41-44). Around 70% of immune cells are located in the intestine and, if in circulation, they can enter the gut via gut-homing markers such as α4β7 integrin. Given the central role of effector T cells in autoimmunity, it has long been suspected that alterations in the composition and function of the gut microbiota, particularly during the neonatal period, might affect the onset and severity of autoimmunity via the priming of effector T cells in the gut (45). Changes in the microbiota in human T1D and NOD mice are well studied; common findings include increased abundances of Bacteroides species and a deficiency of bacteria that produce short-chain fatty acids (46, 47). In addition, increased intestinal permeability and decreased microbial diversity after islet autoimmunity but before T1D diagnosis have been reported (48). NOD mice fed a specialized diet resulting in high bacterial release of short-chain fatty acids (acetate and butyrate) were almost completely protected from T1D (49). Overlaying T1D genetics, the diabetes-associated microbiome, anti–PD-1 treatment, and age-dependent effects on the microbiome, our first strong observation was the confirmation of common T1D outcomes, such as reduced diversity, an increased abundance of Bacteroidetes, and a decrease in Firmicutes and Tenericutes. Interestingly, the decreased abundance of B. acidifaciens and P. goldsteinii in group 3 suggests a contribution to the accelerated diabetes onset driven by anti–PD-1 treatment. Several reports have associated both species with enhanced intestinal integrity, reduced levels of inflammation, and improved insulin sensitivity in type 2 diabetes when mice were orally treated with living bacterial strains (38, 50). The decreased abundance of these two bacterial species at day 11 points to a prodromal role of the microbiome in the onset of T1D.
We found a higher abundance of Proteobacteria in termination round 1 (high glucose levels), consistent with an inflammatory state characterized by aerobic conditions, as is known for diabetes. We also identified elevated Lachnospiraceae abundances in termination round 1, further confirming an accelerated compositional shift compared with termination round 3 (low glucose levels), as Lachnospiraceae has been reported to actively impair glucose metabolism, leading to inflammation and promoting the onset of T1D (37). The decreased abundance of the genus Alistipes in termination round 1 compared with round 3 has not previously been described in NOD mice. However, Alistipes species have been linked to improved responses to anti–PD-1/PD-L1 therapies in tumor mouse models and cancer patients (51), pointing to proinflammatory properties via activation of innate and adaptive immune cells. These discrepancies might be due to differences in the mouse models and methodologies applied in this and other studies and will require further microbiome investigations, for instance complemented by metagenomic and metatranscriptomic sequencing for improved taxonomic resolution and insights into potential and actual function. A limitation of our study is the small number of animals included, and the significant taxa identified here may not fully align with findings of other studies on the role of anti–PD-1 treatment in microbiota changes.

Along those lines, checkpoint inhibitors play a pivotal role in the treatment of various advanced-stage cancers, as evidenced by improved immune responses with PD-1/PD-L1 and CTLA-4 therapies (52). Importantly, life-threatening side effects are frequently observed, such as cytokine release and immune-related adverse effects, including colitis, hepatitis, and diabetes. It is therefore important to accurately identify the subset of patients who benefit or suffer the most from checkpoint blockade. We found that A. muciniphila abundance increased in the anti–PD-1–treated group compared with naive NOD mice. This species has been associated with responders to anti–PD-1 therapies, in which CD8 T cells were increased in tumor and gut tissue when fecal microbiota transplantation was performed from human cancer patients into germ-free mice (16). Members of the genus Akkermansia are mucus degraders and are able to directly stimulate the intestinal epithelium and innate immune cells of the intestinal lamina propria. The increased abundance of this species confirms previous results in cancer mouse models and cancer patients; importantly, our results emphasize that this bacterial species is associated with accelerated diabetes in NOD mice, as naive and isotype-treated NOD mice did not show a significant increase. Moreover, we observed a trend toward a positive correlation between high glucose levels and increased abundance of A. muciniphila in group 3, pointing to an association of this mucin-degrading species with an accelerated or exacerbated autoimmune outcome after anti–PD-1 treatment. Further investigations are, however, necessary to fully understand whether the mucus layer changed, or gut permeability increased, due to the onset of diabetes, and whether A.
muciniphila abundance increased because of the loss of barrier function. Finally, it is tempting to ask whether patients should be monitored before and during therapy with checkpoint inhibitors for the abundance of these three species, to identify and stratify patients at high risk of developing diabetes. The prodromal role of bacterial species in T1D described in the current study provides the opportunity to adjust dosing, and thereby safety, in a personalized fashion when implemented in the clinic. It is also worth considering how to harness the prodromal role of other bacterial species in other mouse models of autoimmunity, such as the experimental autoimmune encephalomyelitis model, to investigate potential early microbial markers of the risk of inflammation of the nervous system.

In summary, our study shows that anti–PD-1 treatment has the potential to accelerate the onset and severity of T1D in susceptible NOD mice. Beyond the observation of immune cell infiltrates, GAD-reactive T cells, and high glucose levels, we describe, to our knowledge for the first time, longitudinal changes in the gut microbiota following anti–PD-1 treatment, identifying bacterial species as early markers before T1D onset. These results will pave the way for further investigations to enable the identification and stratification of susceptible patients at risk of immune-mediated adverse events or even autoimmunity in CIT based on early microbial markers. As such, our approach provides guidance for treatment regimens, potentially improving clinical outcomes.

FIGURE 1. Experimental study design and measurement of changes in glucose levels. (A) Stool collection was performed longitudinally from all five groups: pretreatment, directly after treatment cessation, at the first termination round (day 17), at the second termination round (day 24), and at the end of the study (day 38). All mice were 5 wk old at the start of treatment and were aged approximately 10 wk when terminated. Groups 1-3 are NOD/ShiLtJ (n = 15 per group); controls were BALB/c (group 4; n = 5) and NOD-SCID (group 5; n = 5). Glucose measurement was performed pretreatment, daily during treatment, and three times per week in the posttreatment observation period. Immunological analysis was done on three occasions based on high glucose levels in the posttreatment phase. (B) Changes in glucose levels were measured once daily while on treatment. Only the anti–PD-1–treated group (group 3) showed high glucose levels at three different time points, which resulted in three termination rounds. Control groups (BALB/c and NOD-SCID mice) were terminated at the end of the study on day 38.

FIGURE 4. Longitudinal changes in the mouse gut microbiota community diversity as measured by observed species and the Shannon index.

FIGURE 5. Boxplot representation of significantly different RSVs between groups. (A) Groups 1-3 (NOD/ShiLtJ; n = 15 per group) and (B) termination rounds for group 3 (NOD/ShiLtJ; n = 5 per termination round). Significant q values (FDR) are noted as follows: *q < 0.1, **q < 0.01, ***q < 0.001.
Production of ethanol from Jerusalem artichoke by mycelial pellets

Mycelial pellets formed by Aspergillus niger A-15 were used to immobilize the ethanol-producing yeast Saccharomyces cerevisiae C-15. Operating parameters such as agitation speed, temperature, and the mixing proportion of the strains were studied. The optimal adsorption rate of 66.9% was obtained at a shaking speed of 80 r/min, a temperature of 40°C, and a mixing proportion (mycelial pellets:yeast) of 1:10. With Jerusalem artichoke flour as substrate, 12.8% (v/v) ethanol was obtained after 48 h by simultaneous saccharification and fermentation using mycelial pellets, and the mycelial pellets could tolerate 19% (volume fraction) ethanol. These results show that this new technology is feasible and that it offers higher ethanol yield, long service life, repeated use, easy operation, and lower cost in ethanol production.

To date, there are no relevant reports on using immobilized pellets as the fermenting microbe; this is the first time this concept and technology have been adopted to combine the carrier with the fermenting microbe in ethanol production. Because A. niger A-15 secretes inulinase and forms mycelial pellets, it was used as the carrier. As saccharification is required before fermentation, mycelial pellets formed by A. niger A-15 were used to immobilize the ethanol-producing yeast S. cerevisiae C-15, and ethanol was produced by the mycelial pellets through simultaneous saccharification and fermentation, simplifying the process and improving the ethanol tolerance and ethanol yield of the yeast.

Results

Effect of temperature on the adsorption rate of yeast. The process diagram of the whole experiment is shown in Fig. 1. To determine how temperature affected the adsorption rate of yeast, pellets of A. niger A-15 were placed in 500-ml flasks containing 100 ml of yeast suspension. The yeast cells were then adsorbed on a rotary shaker operating at different temperatures (25, 30, 35, 40, 45, and 50°C) for 2 h, as shown in Fig. 2. The maximum adsorption rate of yeast was about 62.3 ± 0.1% at 40°C; therefore, 40°C was employed in the following experiments.

Effect of shaking speed on the adsorption rate of yeast. To determine how shaking speed affected the adsorption rate of yeast, pellets of A. niger A-15 were placed in 500-ml flasks containing 100 ml of yeast suspension. The yeast cells were then adsorbed on a rotary shaker operating at different speeds (40, 60, 80, 100, and 120 r/min) for 2 h; the results are shown in Fig. 3. The maximum adsorption rate of yeast was about 65.1 ± 0.1% at 80 r/min; therefore, a shaking speed of 80 r/min was employed in the following experiments.

Effect of mixing proportion on the adsorption rate of yeast. The effect of the mixing proportion of mycelial pellets to yeast (1:1, 1:3, 1:6, 1:10, 1:12) on the adsorption rate of yeast was investigated; the results are shown in Fig. 4. The mixing proportion refers to the ratio between the number of mycelial pellets and the number of yeast cells. As shown in Fig. 4, the proportion of the strains affected the formation of mixed mycelial pellets. The adsorption rate of the pellets increased with the concentration of yeast, and the maximum adsorption rate of yeast, about 66.9 ± 0.1%, was reached at 1:10. Any further increase in the concentration of yeast resulted in a decrease in the adsorption rate of the pellets.
This decreasing trend at high yeast concentrations can be partly attributed to the absence of remaining adsorption sites on the mycelial pellets. In addition to the electrostatic adsorption of surface charges, mycelial pellets also secrete a kind of extracellular polymer film on the surface of the mycelia, composed mainly of polysaccharides and proteins. This gives the surface of the mycelial pellets a greater adhesion force, making them more favorable for yeast adsorption than other biological carriers [32]. As can be seen in Fig. 5(a,b), yeast cells are adsorbed onto the mycelia in large quantities. In addition, unlike the microporous structure of particulate activated carbon, the inner space of mycelial pellets is larger, which is conducive to the entry and adsorption of yeast.

Comparison of different fermentation modes. A bulk of the literature has investigated the SSF process using Jerusalem artichoke as substrate for ethanol production [16-19]. However, as a new process, the feasibility and advantages of the ASSF process were studied first in this work. Moreover, many processes in the literature use different strains, so they cannot be compared with ASSF under the same conditions. Therefore, this study chose a more representative process reported in the literature [11] for comparison with ASSF. Table 1 shows the comparison of the different fermentation modes. After 48 h of cultivation, the ethanol yield with the pellets was approximately 1.19 times that obtained by conventional simultaneous saccharification and fermentation (12.8% versus 10.8%), with a lower residual sugar concentration (1.5 g/L versus 5.8 g/L). Considering the ethanol yield and residual sugar concentration, carrier fermentation was much more economical. Moreover, the chemical oxygen demand (COD) of the effluent produced after ethanol distillation of the fermentation liquid was reduced from the 54,612 mg/L of SSF to 27,641 mg/L, which is beneficial for reducing pollution at the source. In addition, given the importance of inulinase in the fermentation process, the effect of the ASSF process on inulinase production by A. niger was also investigated. Table 1 shows that the ASSF process could also promote inulinase production by A. niger. The specific reason might be that the yeasts and pellets bind more closely at the local scale, which alleviates the substrate inhibition of inulinase to some extent.

The stability of the pellet-reuse technology. The effect of the number of recycles on the fermentation was investigated, and the results are shown in Fig. 6. After recycling 10 times, ethanol production remained stable at a high level. Figure 5(c,d) shows SEM images of mycelial pellets after 1 and 10 cycles of reuse. The mycelial pellets also showed good stability of yeast adsorption. ASSF thus has the advantages of long service life, repeated use, easy operation, and low cost, giving carrier fermentation a good prospect for industrial production.

High ethanol tolerance test. High-concentration ethanol fermentation has been a hot research theme in recent years; the key is to obtain yeast tolerant of high osmotic pressure and with high ethanol yield. Improving the ethanol tolerance of yeast and relieving the inhibitory effect of high ethanol concentrations have therefore become a research focus.
The results showed that the ethanol tolerance of yeast depends not only on the inherent tolerance of yeast cells to different concentrations of ethanol, but also on the plasma membrane lipid composition of the yeast, its nutritional status, environmental conditions and the supplementation mode of the carbohydrate substrate, etc. 28. A critical ethanol concentration leads to cleavage of plasma membrane phospholipids. With appropriate conditions of the yeast plasma membrane, nutritional status or environmental factors, the yeast can resist ethanol toxicity. For example, as temperature increases, the phospholipid content of the plasma membrane decreases rapidly to maintain the fluidity of the plasma membrane and cell viability 30. Since A. niger is rich in lipoproteins, which have been shown to improve the structure of the yeast protoplasmic membrane, the fermentation activity and ethanol tolerance of yeast are significantly improved. The combination of A. niger A-15 and S. cerevisiae C-15 could significantly improve the ethanol tolerance of S. cerevisiae C-15, as shown in Table 2. As shown in Table 2, with increasing ethanol volume fraction, the growth of A. niger A-15 and S. cerevisiae C-15 was gradually inhibited. However, the mixed pellets enhanced the ethanol tolerance of both the yeast and the mould, and the mycelial pellets could tolerate 19% (volume fraction) ethanol.

Table 2. Ethanol tolerance of the strains.
Ethanol (volume fraction): 12%  14%  16%  18%  19%  20%
Mixed pellets:             +++  +++  +++  ++   +    −

*+++ indicates that the strains grow well; ++ indicates that the strains can grow; + indicates that some strains can grow; − indicates that the strains cannot grow.

Discussion

This technology has the following advantages: (1) It converts the fermentable sugars from the hydrolysate of inulin to ethanol simultaneously, preventing the sugar accumulation that inhibits inulinase secretion, avoiding substrate inhibition, effectively preventing bacterial contamination and increasing ethanol yield. (2) Because yeast cannot directly utilize inulin, inulin is first decomposed by the inulinase produced by A. niger, so the whole fermentation is a mixed fermentation of yeast and A. niger. Immobilizing the yeast with A. niger incurs no immobilization cost, so this technology avoids the high cost of cell immobilization. (3) Mycelial pellets can be easily recovered from the broth, and the technology offers long service life, repeated use, simple operation, low cost and ease of industrialization. (4) The mycelium of A. niger is rich in lipoprotein, which can improve yeast viability and ethanol tolerance; owing to the change in the microenvironment of S. cerevisiae after immobilization by the pellets, its tolerance to ethanol was enhanced. (5) Using JA as feedstock for ethanol fermentation is more economical. Compared with lignocellulosic biomass, JA requires neither technically difficult, sugar-consuming pretreatment nor expensive enzymes. Compared with starch materials, and owing to the low degree of polymerization of inulin, the hydrolysis of JA is easier than that of starch, which needs high-temperature liquefaction; fermentable sugar can be obtained by synchronous inulinase hydrolysis, giving a simpler process flow and lower energy consumption. Therefore, JA can be a green and economical option for ethanol production.

Mycelial pellets formed by A. niger A-15 were used to immobilize S. cerevisiae C-15, the ethanol-producing yeast. The operation parameters, such as agitation speed, temperature and mixed proportion of strains, were studied.
The optimal adsorption rate of 66.9% was obtained at a speed of 80 r/min, a temperature of 40 °C and a mixed proportion of 1:10. With JA flour as substrate, 12.8% (v/v) ethanol was obtained after 48 h by simultaneous saccharification and fermentation using mycelial pellets. Considering the ethanol yield and residual sugar concentration, carrier fermentation is much more economical in terms of time and energy. Moreover, the mycelial pellets could tolerate 19% (volume fraction) ethanol. These results indicate that this new technology is feasible for producing ethanol. This study mainly investigated the feasibility and advantages of ASSF as a new technology. The development of strains with high alcohol tolerance is a research hotspot in the ethanol industry; more detailed examinations, such as atomic force microscopy (AFM), will be used to explain the improved ethanol tolerance of the pellets in future research.

Materials and Methods

Microorganism and culture medium. A. niger A-15 (CMCC98003) was stored in our laboratory. S. cerevisiae C-15 was purchased from ANGEL YEAST CO., LTD. Jerusalem artichoke was obtained from the local market in Jinan, China. JA powder was prepared by cutting fresh JA into thin slices, drying them at 60 °C for 48 h to reduce the water content to about 5%, and then grinding them to 60 mesh.

Cell culture. S. cerevisiae C-15 was cultured in a 250 mL flask containing 100 mL seed medium with a 5% inoculum. The flask was cultivated on a rotary shaker at 30 °C and 100 r/min for 30 h. The broth was centrifuged at 5000 r/min for 5 min, and the cells were collected. Three replicates were carried out for each experiment. A. niger A-15 was mixed with sterile water to form spore suspensions. Mycelial pellets were grown in a 250 mL flask containing 100 mL ME medium with a 4% inoculum. The flask was cultivated on a rotary shaker at 30 °C and 40 r/min for 30 h. The broth was centrifuged at 5000 r/min for 5 min, and the mycelial pellets formed by A. niger A-15 were collected by filtration.

Adsorption test. Yeast cells were suspended in 100 mL distilled water in a 250 mL Erlenmeyer flask, to which 5 g (wet weight) of pellets were added. The yeast cells were adsorbed at different temperatures, shaking speeds and strain mixing ratios for 2 h. The cells were then filtered through two layers of gauze, the concentration of cells remaining in the filtrate was determined, and the adsorption rate was calculated by spectrophotometry.

The stability of the pellet-reusing technology. The broth was centrifuged at 1000 r/min for 15 min; the collected mycelial cells were washed with normal saline and centrifuged at 1000 r/min for 5 min, and all mycelial pellets were transferred into fresh fermentation medium.

High ethanol tolerance test. To assess the ethanol tolerance of the mycelial pellets, the pellets were cultured in YPD solid medium supplemented with various concentrations of ethanol (12%, 14%, 16%, 18%, 19% and 20%). Pellets were inoculated at an initial cell density of 2 × 10^8 cells/mL. Samples were collected, diluted and plated on YPD solid medium. After incubation at 37 °C for 2 days, the colonies appearing on the plates were counted.
Viability was expressed as the percentage of colony-forming units under the high-ethanol treatment relative to the control for each culture. A survival rate of 50-100% relative to the control indicated that the strains grew well; a rate of 20-50% indicated that the strains could grow; and a rate of 0-20% indicated that growth was difficult or absent.

Analytical methods. The numbers of yeasts and fungi were determined by plate counting. The ethanol concentration in the fermented medium was determined according to the method of Ge et al. 17. Residual sugars were determined by the 3,5-dinitrosalicylic acid (DNS) method 31. Total sugar was measured following the method reported previously 31: sample solution (1 mL) was mixed with 50 mL of 2.8% H2SO4 and heated at 100 °C to hydrolyze the carbohydrates; after 1 h, the solution was neutralized with 5 mol/L NaOH solution and made up to a final volume of 100 mL, and the reducing sugar in the solution was determined by the DNS method. Inulinase activity was determined according to the method described by Susana M. 17. One unit of inulinase activity (U/mL) was defined as the amount of enzyme producing 1 μmol of reducing sugar per minute at 55 °C and pH 5.4:

Inulinase activity = ρ(glucose) × dilution rate × 1000 / (180 × 10)    (2)

where 180 is the molecular weight of glucose (g/mol), 10 is the reaction time (min) and 1000 is the unit conversion factor. Scanning electron microscopy (SEM) was used to observe the structure and adsorption of the mycelial pellets; the specific operation steps followed the literature 32. The yield of ethanol was calculated as

Ethanol yield = m(ethanol produced) / m(decrease of total sugar in broth)    (3)

Chemical oxygen demand (COD) was determined by the potassium dichromate method 31.
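Equations (2) and (3) translate directly into code. The sketch below assumes ρ(glucose) is the measured glucose mass concentration in mg/L, matching the constants in Eq. (2); the variable names are ours.

```python
def inulinase_activity(glucose_mg_per_l: float, dilution_rate: float) -> float:
    """Inulinase activity (U/mL) per Eq. (2).

    One unit releases 1 umol of reducing sugar (as glucose) per minute
    at 55 degC and pH 5.4. 180 is the molecular weight of glucose
    (g/mol), 10 is the reaction time (min), 1000 converts units.
    """
    return glucose_mg_per_l * dilution_rate * 1000 / (180 * 10)

def ethanol_yield(ethanol_mass_g: float, sugar_decrease_g: float) -> float:
    """Ethanol yield per Eq. (3): mass of ethanol produced divided by
    the decrease of total sugar in the broth."""
    return ethanol_mass_g / sugar_decrease_g
```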
3,497.2
2019-12-01T00:00:00.000
[ "Agricultural And Food Sciences", "Engineering" ]
3D RECONSTRUCTION WITH A COLLABORATIVE APPROACH BASED ON SMARTPHONES AND A CLOUD-BASED SERVER

The paper presents a collaborative image-based 3D reconstruction pipeline to perform image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone's app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.

INTRODUCTION

Image-based approaches have become viral for 3D digitization in the last years. Requirements and needs for digital replicas change significantly according to the application field, steering the choice of equipment as well as software tools. In industrial metrology, accuracy and reliability are crucial factors, which imply the adoption of high-cost, professional-grade camera and lens systems, coupled with software applications fully manageable only by expert operators. In the geospatial domain, completeness of the results, accuracy of georeferencing, handling of huge amounts of data, reliability and speed of automatic procedures, and integration and homogenization of data from different sources are key topics. Research in the cultural heritage field specifically focuses, among other topics, on colour fidelity, geometric level of detail, and the handling, visualization and sharing of 3D models. Today, a range of economic activities, whose origin can be traced back to the beginning of the new millennium, is driving the digital economy all around the world, i.e. the creative industries (EY, 2015). Also referred to as 'creative and cultural industries' or 'creative and digital industries', they embrace thirteen subsectors: advertising; architecture; arts and antiques market; crafts; design; designer fashion; film and video; music; performing arts; publishing; interactive leisure and software; software and computer services; television and radio (Skillset, 2013). People working in the creative economy rely on their individual creativity, skill and talent to produce economic value. To answer the needs of this growing community, technologies and tools are rapidly developing and changing. Emblematic is the progress of 3D printers, more and more used to realise fully operational, market-ready products rather than quick and cheap prototypes (The Economist, 2011). Similarly, we are witnessing a 'democratization' and massive spread of 3D digitization techniques (Alderton, 2016; Nancarrow, 2016; Santos et al., 2017), with an increasing demand for hardware and software solutions that are economically accessible and easily understandable and manageable by almost anyone willing to express his or her creativity through 3D digital products.
The work described in this paper arises in this context and presents a collaborative image-based 3D digitization pipeline. Different users acquire images with their smartphones, simultaneously or in separate sessions, and the images are then processed in 3D via a cloud-based server. A smartphone app provides on-the-fly visual feedback about the 3D reconstruction to users co-involved in the digitization process. The idea is to (i) guide users during the image acquisitions and (ii) combine images from multiple devices from concurrent or disjoint acquisition sessions. The developed approach (Poiesi et al., 2017) and the achieved results, produced in real-world scenarios (i.e. a cultural heritage site and a city square), are compared against reference data produced employing a professional-grade reflex camera and state-of-the-art image processing software solutions.

RELATED WORKS AND MAIN INNOVATIONS

Image-based 3D reconstruction methods using mobile devices have been pioneered in the research domain (Tanskanen et al., 2013; Kolev et al., 2014; Muratov et al., 2016), and are starting to appear on app stores for smart devices (e.g., ItSeez3D, TRNIO). These methods implement very similar workflows, relying on Structure from Motion (SfM) and dense image matching (DIM) or Multi-View Stereo (MVS) algorithms, run either on the phone or on a server. Since the 3D reconstruction procedure is computationally intensive, a feasible solution is to split the process between the mobile device and a cloud-based server (Untzelmann et al., 2013; Locher et al., 2016c). In this case, the smartphone is used as an imaging device to capture images of the scene of interest, whereas the SfM and DIM steps are performed on the server. Current 3D reconstruction solutions running on smartphones only offer feedback to single users during image acquisition, and do not yet seamlessly include collaborative approaches with simultaneous feedback to multiple users. The most common solution for collaborative mapping, based either on Simultaneous Localization and Mapping (SLAM) or SfM approaches, is to produce separate maps that are finally fused together (Forster et al., 2013; Untzelmann et al., 2013; Morrison et al., 2016; Schmuck, 2017). The procedure presented in this paper is based on an incremental SfM approach (Schonberger and Frahm, 2016), which updates and augments the global sparse 3D point cloud when a new image is uploaded. From the videos acquired by different smartphones, only significant frames are selected, sent to the server and processed to increment the sparse 3D reconstruction. The updated results provide the user with visual feedback during the acquisition process and are accessible both in the mobile app and through a web-based visualization service developed on the server. While the sparse reconstruction of the scene is computed on the server and constantly updated as new images arrive via the SfM procedure, the DIM step produces dense point clouds, made available to the users in a web-based visualization window.

THE PROPOSED PIPELINE

The implemented approach is part of an image-based 3D reconstruction workflow under development within the EU-funded H2020 project REPLICATE (Nocerino et al., 2017, Fig. 1). A smartphone app handles the image acquisition phase, whereas the processing procedure is performed jointly on the smartphone and on a server (Locher et al., 2016c).
Image acquisition app and device-server communications. Each user running the smartphone app must first be authenticated by the cloud service. A unique smartphone identifier (ID) is assigned based on the user's account credentials and the device's manufacturer, model and operating system. The smartphone app is used to acquire the video stream, extract the best frames (Section 3.2) and send them to the server for the 3D reconstruction procedure (Section 3.3). Accelerometer measurements from the device's Inertial Measurement Unit (IMU) are transmitted together with the images to aid pose estimation and object reconstruction. Smartphone vibration is implemented as haptic feedback to help the user understand whether the images are being acquired correctly (i.e. the device motion is not too fast). Network communication between the reconstruction server and the device is bidirectional and asynchronous. The app offers the user the option to start a new acquisition session or to update past acquisitions with new images in the case of collaborative approaches (Section 3.4). To visualize updated point clouds as feedback, the smartphone sends periodic requests to the server.

Image selection from the smartphone's video stream. Images are selected in the smartphone app based on both their quality and their novelty. The selection relies on the computation of a frame's sharpness and the number of new features present (Sieberth et al., 2016). Hence, a 'content-rich' frame should be sharp (i.e. in focus and with no motion blur) and should contain new visual information about the object. Novelty is quantified by comparing current feature points with those extracted from previous frames. The overlap is quantified for pairs of frames using ORB keypoints (Rublee et al., 2011), and the image overlap is inferred by matching descriptors among adjacent frames based on the Hamming distance. If no frames have been selected for a certain minimum interval of time, a frame is transmitted anyway.
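The frame-selection rule above (sharp, content-rich frames) can be prototyped in a few lines with OpenCV. The sketch below is our reading of that rule, not the authors' implementation: the blur and overlap thresholds are placeholders, and the paper's actual sharpness measure (Sieberth et al., 2016) may differ from the variance-of-Laplacian proxy used here.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def sharpness(gray):
    # Variance of the Laplacian: low values indicate defocus/motion blur.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_frame(gray, prev_desc, blur_thresh=100.0, overlap_thresh=0.7):
    """Return (keep, descriptors): keep the frame if it is sharp and its
    ORB descriptors overlap little with the last selected frame."""
    if sharpness(gray) < blur_thresh:
        return False, prev_desc
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return False, prev_desc
    if prev_desc is None:
        return True, desc                     # first frame is always kept
    matches = matcher.match(desc, prev_desc)  # Hamming-distance matching
    overlap = len(matches) / max(len(desc), 1)
    if overlap < overlap_thresh:              # enough new content
        return True, desc
    return False, prev_desc
```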
Orientation and 3D reconstruction. The 3D reconstruction server adopts an incremental SfM algorithm followed by the DIM step, using multiple threads to process independent and asynchronous uploads of images from different users. Two pipelines are under testing: the first, described in Poiesi et al. (2017) and Nocerino et al. (2017), is based on approaches proposed by Sweeney et al. (2015), Schonberger et al. (2016), Locher et al. (2016a) and Locher et al. (2016b). The second procedure, presented hereafter, follows the SfM/DIM pipeline of Schonberger and Frahm (2016).

Collaborative approach. The developed method also includes collaborative 3D reconstruction, which allows the processing of images coming from multiple smartphone devices during concurrent or disjoint acquisition sessions. For each new image uploaded to the server, the algorithm matches its newly computed features to those from a subset of images acquired within the same acquisition job. This subset is composed of images already stored in the database featuring high similarity in image content with the new one (Poiesi et al., 2017). Relative image orientation is initially estimated via 2D-3D correspondences using feature points extracted on all the images, regardless of which smartphone they were captured from. If available, the nominal values of the interior orientation parameters are derived from EXIF metadata or extracted from the database of already registered devices. The essential matrix is then estimated using a five-point algorithm (Nistér, 2004). When the camera parameters are not available, the fundamental matrix is estimated using an eight-point algorithm and, subsequently, the essential matrix is inferred (Nistér and Stewenius, 2006). Subsequently, a Bundle Adjustment (BA) is applied. We are currently evaluating two approaches to efficiently handle video frames acquired by different devices and progressively process them on the cloud-based server. The first implementation, used in this paper (Section 4), entails an image-variant self-calibrating BA, i.e., for each image, a set of interior orientation parameters, comprising the principal distance, principal point coordinates and two radial distortion parameters, can be estimated. The second approach, presented in Poiesi et al. (2017), is based on a two-step procedure, where the interior and exterior orientation parameters are refined as follows. Images acquired in the same session and with the same device are forced to share the same camera calibration parameters in the adjustment procedure. A local bounded BA refines only the newly uploaded images with their associated points. Once the reconstruction has sufficiently grown, a full BA over all images and points is performed, taking into account the separate camera calibration groups. The implemented two-stage BA saves computation time and increases the stability of the BA optimization.
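For the two-view geometry step described above, a minimal sketch with OpenCV looks as follows. It mirrors the five-point/eight-point logic of the pipeline but is not the project's code; the actual server runs a full incremental SfM with bundle adjustment.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Calibrated case: interior orientation K is known (e.g. from EXIF
    or the device database), so the essential matrix is estimated with
    the five-point algorithm inside RANSAC and decomposed into R, t."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation and translation (up to scale)

def essential_from_fundamental(pts1, pts2, K):
    """Uncalibrated fallback: estimate the fundamental matrix with the
    eight-point algorithm, then infer the essential matrix as K^T F K."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    return K.T @ F @ K
```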
3D reconstruction preview and visualisation. All users involved in a collaborative acquisition can visualize their (simultaneous or joint) 3D reconstruction progress via a dedicated preview window in the smartphone app, as well as interact with the reconstruction session via a web page. The preview model in the app shows the user, while he/she is acquiring images, the sparse point cloud with the image positions from all concurrent users (Fig. 2). The preview window runs on a separate thread that periodically sends requests to the server to check for and, when available, display the updated scene reconstruction. When the user terminates the acquisition and all images are uploaded, the 3D reconstruction process is completed on the server. In a web browser, users can visualize the oriented images and the sparse point cloud, download the estimated camera parameters and access intermediate reconstructions.

EXPERIMENTS AND VALIDATION

The following section reports three experiments, performed in real-case scenarios, to showcase the capabilities of the proposed pipeline. The collected datasets (4.1 - Table 1) and reference data (4.2) are described, and the collaborative reconstruction results are shown together with a quality assessment (4.3). All experiments, acquired with different smartphones, were afterwards processed with 20 cores on an Intel Xeon 2.30 GHz computer with 128 GB of RAM.

Datasets. The experiments entail the acquisition of video streams collected using six different off-the-shelf Android smartphones in three different locations (Table 1). To the authors' knowledge, there are currently no datasets that involve multiple and different smartphones recording buildings or objects from different viewpoints. For this reason, our datasets are available for research purposes at http://tev.fbk.eu/collaborative3D. The first dataset (Saranta Kolones) features the 'Saranta Kolones' monument within the Pafos archaeological area in Cyprus. The site is ca. 16 × 16 × 5 m. Ten videos (at 30 Hz) were recorded by three different smartphones in different orientations (landscape and portrait). Due to network connection limitations at the Saranta Kolones site, the dataset was recorded using the video mode of the smartphones and post-processed later by the image selection algorithm (Section 3.2). A collaborative acquisition was simulated by interleaving the extracted frames from the different devices and transmitting them to the cloud-based server. The other two datasets were acquired using the smartphone app in the cathedral square of Trento, Italy: from the smartphone's camera video feed, frames were selected by the image selection algorithm during the acquisition (Section 3.2) and directly uploaded to the reconstruction server. The Piazza Duomo dataset features the north-facing facade of the cathedral (ca. 100 m wide and 30 m tall). The third dataset (Caffe Italia) focuses on the south-facing facade of a painted building in the same square; the facade is ca. 30 m wide and 15 m tall. Figures 3 to 6 show the results of the implemented 3D reconstruction procedure for the three datasets.

Figure 3: The shaded mesh model of the surveyed Saranta Kolones monument. The position and orientation of the extracted frames are shown as pyramids, with colours indicating the three employed devices (Table 1).

Figure 4: The sparse point cloud for the Saranta Kolones dataset. The points are coloured based on the smartphone they are triangulated from (Table 1); in grey the entire point cloud; in yellow the points triangulated from images belonging to multiple devices.

Figure 6: The sparse point cloud of the Piazza Duomo dataset. The points are coloured according to the smartphone they are triangulated from (Table 1); in yellow the points triangulated from images belonging to multiple devices.

Reference photogrammetric models. The reference (ground truth) datasets were acquired with a professional-grade digital single-lens reflex (DSLR) camera, processed using a state-of-the-art commercial software application and evaluated by computing the root mean square error (RMSE) on check points measured through classic topographic surveying. For the Saranta Kolones dataset, a Nikon D3X was equipped with a Nikkor 28 mm fixed focal length lens; 176 images were acquired and 20 points were used as check points, providing an RMSE better than 5 mm. The photogrammetric survey of the entire Trento cathedral square, comprising 359 images, was realised with the Nikon D3X camera coupled with two prime lenses, a Nikkor 35 mm and a Nikkor 50 mm. The RMSE on 18 check points was better than 10 mm.
Evaluation. To evaluate the metric potential of the implemented collaborative reconstruction pipeline, the dense point clouds of the three datasets acquired with different smartphones (Section 4.1) are compared against the ground truth dense point clouds obtained using a standard photogrammetric procedure (Section 4.2), as shown in Fig. 7. The evaluation analyses show that the greatest differences, up to 50 cm, are localised on the edges of the structures, where the poorer image quality of the cameras embedded in the smartphones is most evident. However, the global geometry of the structures in all three case studies features deviations of up to ten times the average point cloud resolution.

CONCLUSIONS

The paper presented a 3D acquisition and reconstruction pipeline where multiple users can collaboratively acquire images of a scene of interest to produce a 3D dense point cloud. The pipeline entails an app running on smartphones that automatically selects the best frames out of a video stream and uploads them to a cloud-based server, where the images are processed through SfM and DIM procedures. The users can concurrently visualize the camera poses and the joint 3D point cloud coming from other users / smartphones, either on the device or on a web-server page. The proposed procedure was evaluated through comparisons with reference data produced employing a standard photogrammetric acquisition and processing workflow. The analyses showed that the achieved results may suffice for the purposes of people involved in the creative industries. Future works will involve the implementation of Augmented Reality-based guidance for the user during image acquisition, based on device pose tracking and 3D reconstruction algorithms running on the smartphone. Moreover, a semi-automatic editing procedure to improve the dense point cloud quality is under development.

Figure 1: Part of the entire REPLICATE workflow (from Nocerino et al., 2017) jointly performed on smart devices and a cloud-based server (left) and the collaborative aspect of the 3D digitization procedures presented in this article (right).

3 http://www.replicateproject.eu, last accessed: Oct 2017.

The remote server handles user authentication, processes the images and generates updated results visualized by the device app and the web-based interface. The web page enables users to see the estimated camera positions and interact with the dense point cloud. The user can share the reconstruction job via an email option with other users, who become contributors. Contributors can then increment the reconstruction of an object by uploading more images from new acquisitions.

Figure 2: Example of on-the-fly visual feedback inside the smartphone's app (left) during a collaborative digitization process (here with two users involved) or on the web browser (right).

Figure 5: The shaded dense point cloud of the Piazza Duomo dataset. The position and orientation of the extracted frames are shown in different colours to indicate the device they were acquired from (Table 1).

Figure 6: The RGB dense point cloud of the Caffe Italia dataset. The position and orientation of the extracted frames acquired with three smartphones are shown using different colours according to the employed device (Table 1).
Figure 7: Quality evaluation of the proposed collaborative approach. On the left column, for each dataset, the dense point clouds from the reference 3D reconstruction and from the collaborative approach are shown. The right column shows the colour-coded map of the signed distances computed between the reference and the collaborative dense point clouds. The differences are given in meters.

Table 1. Main characteristics of the employed datasets. L stands for landscape and P for portrait.
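As a rough stand-in for the cloud-to-cloud comparison of Figure 7, per-point deviations can be approximated with nearest-neighbour distances, assuming both clouds are already registered and scaled in the same coordinate frame. This is only an unsigned approximation of the signed-distance maps shown in the figure, not the evaluation tool used by the authors.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference: np.ndarray, evaluated: np.ndarray):
    """Nearest-neighbour distance from each evaluated point to the
    reference cloud; both inputs are (N, 3) arrays in the same frame."""
    tree = cKDTree(reference)
    d, _ = tree.query(evaluated)
    return d

# Example usage with random stand-in clouds (not real survey data):
ref = np.random.rand(10000, 3)
ev = ref + np.random.normal(scale=0.01, size=ref.shape)
d = cloud_to_cloud_distances(ref, ev)
print(f"mean {d.mean():.4f} m, RMSE {np.sqrt((d ** 2).mean()):.4f} m")
```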
4,027.8
2017-11-14T00:00:00.000
[ "Computer Science", "Engineering" ]
SMART PROBABILISTIC ROAD MAP (SMART-PRM): FAST ASYMPTOTICALLY OPTIMAL PATH PLANNING USING SMART SAMPLING STRATEGIES

An asymptotically optimal path-planning algorithm guarantees an optimal solution if given sufficient running time. This research proposes a novel, fast, asymptotically optimal path-planning algorithm. The method uses five smart sampling strategies to improve the probabilistic road map (PRM). First, it generates samples using an informed search procedure. Second, it employs incremental search techniques on increasingly dense samples. Third, samples are generated around the best solution. Fourth, samples are generated around obstacles. Fifth, it repairs the found route. This algorithm is called the Smart PRM (Smart-PRM). The Smart-PRM was compared to PRM, informed PRM and informed rapidly-exploring random tree*-connect (informed RRT*-Connect). Smart-PRM can generate the optimal path for any test case, with the shortest distance between the start and goal nodes as the optimality criterion, and it finds the best path faster than the competing algorithms. As a result, the Smart-PRM has the potential to be used in a wide variety of applications requiring the best path-planning algorithm.

Another asymptotically optimal path-planning algorithm is the Informed Probabilistic Road Map (PRM) algorithm proposed by the author in [23]. Aria reported that by combining informed searching with the PRM algorithm, the performance of the proposed algorithm can be enhanced by up to 25%. Ongoing research continues to improve the performance of the PRM algorithm. Chen et al. [24] proposed a new PRM sampling strategy to generate configurations more suitable for practical applications. Ravankar et al. [25] suggested the use of a Layered Hybrid PRM with an Artificial Potential Field (APF), while Liu et al. [26] proposed combining the PRM and D* algorithms. This research proposes a new fast, asymptotically optimal path-planning algorithm called the Smart PRM (Smart-PRM) algorithm. The approach enhances the PRM algorithm through five smart sampling strategies. Test results demonstrate the Smart-PRM algorithm's ability to construct optimal paths across all scenarios, and the computational time required for Smart-PRM to generate optimal paths is lower than that of the PRM, informed RRT*-Connect and informed PRM algorithms. The Smart-PRM algorithm exhibits efficient convergence due to the incorporation of the five smart sampling strategies: generating samples using an informed search procedure, employing incremental search techniques on increasingly dense samples, generating samples around the best solution, generating samples around obstacles and repairing the found route using a wrapping procedure. The efficacy of each strategy is confirmed through testing, showcasing the Smart-PRM algorithm's potential for implementation in diverse robotic systems and autonomous vehicles.
While it is acknowledged that individual components of our proposed Smart-PRM algorithm draw upon existing techniques in motion planning, we contend that the integration and synergy of these strategies represent a novel and significant advancement in the field. Our approach synthesizes five distinct sampling strategies: an informed search procedure, incremental search techniques on increasingly dense samples, sample generation around the best solution, sample generation around obstacles and a route-repair mechanism using the wrapping procedure. This amalgamation of strategies not only distinguishes our work but also facilitates enhanced efficiency and performance compared to existing methods. Furthermore, our experimental results demonstrate a notable improvement in computational time and the ability to construct optimal paths across various scenarios when compared against the traditional PRM, informed RRT*-Connect and informed PRM algorithms. The efficiency gains achieved by our Smart-PRM algorithm are particularly noteworthy, surpassing existing methods in terms of convergence speed and solution optimality. This paper is organized as follows: Section 2 describes the design of the suggested Smart-PRM algorithm and the strategies used to improve the PRM's performance. Section 3 contains the findings and discussion: first, the effect of each recommended technique on improving PRM performance is investigated; then, the suggested Smart-PRM algorithm is compared to PRM, informed RRT*-Connect and informed PRM. Finally, Section 4 includes closing remarks.

PROPOSED ALGORITHM: SMART-PRM

The proposed algorithm enhances the PRM algorithm through five strategies. First, it generates samples using an informed search procedure. Second, it employs incremental search techniques on increasingly dense samples. Third, samples are generated around the best solution. Fourth, samples are generated around obstacles. Fifth, it repairs the found route using the wrapping procedure. Thus, the PRM algorithm is repeated for several iterations. In the iterations before a path solution is found, the second and fourth strategies are employed; after a path solution is found, the fifth, first, third and fourth strategies are used. Sub-sections 2.1 to 2.5 discuss each of these strategies, and sub-section 2.6 discusses the complete Smart-PRM algorithm.

First Strategy: Informed Search Procedure for Sample Generation

This informed search procedure for sample generation emulates the informed search procedure in the informed RRT* algorithm proposed by Gammell et al. [16]. If a path solution connecting the start and goal nodes is successfully found during an iteration, an area is formed to restrict sample generation. This area takes the shape of an ellipsoid, and its eccentricity depends on the length of the shortest-path solution found in that iteration. With this ellipsoidal area in place, sample generation in the next iteration is carried out only within it, concentrating the search on regions with the potential to improve the quality of the path solution. Gammell et al. have demonstrated that once this ellipsoidal area is established, generating samples outside it does not improve the quality of the path solution. If a shorter-path solution is found in the next iteration, the size of this ellipsoidal area decreases and the path search becomes more concentrated. Gammell et al. [27] claimed that using this method, the informed RRT* algorithm may obtain an optimal solution approximately 3.4 times faster than the RRT* algorithm.
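For reference, this informed sampling step can be sketched in 2D as follows, following the construction of Gammell et al.: points are drawn uniformly from the ellipse whose foci are the start and goal nodes and whose transverse diameter equals the current best path length c_best. The variable names are ours.

```python
import numpy as np

def sample_informed(start, goal, c_best, rng=np.random.default_rng()):
    """Draw one sample inside the 2D ellipse with foci `start` and
    `goal` and major-axis length `c_best` (the current best path cost)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)      # distance between the foci
    centre = (start + goal) / 2.0
    # Rotation aligning the x-axis with the start->goal direction.
    a1 = (goal - start) / c_min
    C = np.array([[a1[0], -a1[1]], [a1[1], a1[0]]])
    # Ellipse radii from the focal definition.
    r = np.array([c_best / 2.0, np.sqrt(c_best**2 - c_min**2) / 2.0])
    # Uniform sample in the unit disc, then stretch, rotate, translate.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rho = np.sqrt(rng.uniform())
    x_ball = rho * np.array([np.cos(theta), np.sin(theta)])
    return C @ (r * x_ball) + centre
```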
An illustration of the informed search procedure for sample generation in the PRM algorithm is shown in Figure 1. In the first iteration, sample generation is conducted randomly throughout the area (Figure 1a). Then, using the created sample nodes, Dijkstra's method [28] is used to find a path connecting the start and goal nodes. An example path successfully created by Dijkstra's algorithm is indicated by the red line in Figure 1a. Once a path solution is found, an area is established to constrain the sample-generation region, represented by the grey ellipsoid in Figure 1b. Subsequently, the sample-generation procedure is applied only within this ellipsoidal area in the following iterations, as shown in Figure 1c. If a shorter-path solution is found in a following iteration, the size of this ellipsoidal area decreases further and the path search becomes more concentrated, as depicted in Figure 1d. In the illustration of Figure 1, it can be observed that the optimal solution must pass through a narrow passage. Using this first strategy, a solution approaching this optimal path can be achieved by the 10th iteration, as seen in Figure 1d. Therefore, a second strategy for enhancing the PRM algorithm is required to improve the convergence speed, in which the search begins with a small-sized ellipsoidal subset.

Second Strategy: Incremental Search Techniques on Increasingly Dense Samples

These incremental search techniques on increasingly dense samples emulate the strategies employed within the Batch Informed Tree Star (BIT*) algorithm proposed by Gammell et al. in [27]. This second strategy is distinct from the standard informed RRT* algorithm: during the first iteration of the basic informed RRT* algorithm, no ellipsoidal area constrains the sample-generation region (as illustrated in Figure 1a). For the incremental search techniques on increasingly dense samples, sample generation is initially conducted randomly throughout the entire area. Then, during the first iteration, a small-sized ellipsoidal area is created so that only the samples within it are used by the Dijkstra algorithm to find a path connecting the start node with the goal node. If a path solution cannot be obtained by connecting the samples within that small ellipsoidal area, the ellipsoidal area is iteratively enlarged. As the ellipsoidal area grows, denser samples fall within it, and the Dijkstra algorithm uses more samples to find a path connecting the start node with the goal node. Once a path solution is found, a new ellipsoidal area, whose eccentricity depends on the length of that path solution, is formed. The samples outside this new ellipsoidal area are removed and relocated into it, making the samples within the new ellipsoidal area denser. Whenever a shorter-path solution is obtained, this ellipsoidal area is reduced and the samples outside it are condensed into the new area. Gammell et al. [27] reported that by employing these incremental search techniques on increasingly dense samples, the BIT* algorithm could achieve an optimal solution approximately 6.8 times faster than the RRT* algorithm.
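A minimal 2D sketch of this incremental densification is shown below; the names, growth factor and iteration cap are our assumptions. Only the pre-generated samples falling inside the current ellipse are handed to the planner, and the ellipse is enlarged until a path is found.

```python
import numpy as np

def in_ellipse(p, start, goal, c):
    # Focal definition of the ellipse: |p - start| + |p - goal| <= c.
    return np.linalg.norm(p - start) + np.linalg.norm(p - goal) <= c

def incremental_search(samples, start, goal, plan, tol=0.1, grow=1.2, max_rounds=50):
    """`samples` is an (N, 2) array of pre-generated nodes; `plan` runs
    Dijkstra on a node subset and returns a path or None. The ellipse
    starts just larger than the straight start-goal line and grows until
    the induced roadmap contains a path."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c = np.linalg.norm(goal - start) * (1.0 + tol)   # initial tolerance
    for _ in range(max_rounds):
        subset = [p for p in samples if in_ellipse(p, start, goal, c)]
        path = plan(subset, start, goal)
        if path is not None:
            return path, c
        c *= grow        # enlarge the ellipse, admitting denser samples
    return None, c
```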
An illustration of this second strategy is depicted in Figure 2. In the first iteration, sample generation is conducted randomly throughout the area. Following that, a small ellipsoidal area is created, as depicted in Figure 2a. The eccentricity of the ellipsoidal area constraining the sample-generation region is determined by a line connecting the start and goal nodes. Since the length of the path connecting the start and goal nodes is unknown in the first iteration, the line determining the eccentricity of the ellipsoid is based on an assumption: a straight line connecting the start and goal nodes is assumed, and a certain length tolerance is added to it. This ellipsoidal area restricts the search to the samples within it, which the Dijkstra algorithm uses to find a path connecting the start node with the goal node. If a path solution cannot be obtained by connecting the samples within this small ellipsoidal area, the ellipsoidal area is iteratively enlarged, as demonstrated in Figure 2b. With the growing ellipsoidal area, denser samples fall within it and the Dijkstra algorithm utilizes more samples to find a path connecting the start node with the goal node. The procedure of gradually increasing the eccentricity of this ellipsoidal area is repeated until a path connecting the start and goal nodes is obtained, as shown in Figure 2c. Once this path solution is discovered, the ellipsoidal area is not extended in subsequent iterations; instead, it is reduced if a shorter-path solution is obtained, as shown in Figure 2d.

Third Strategy: Sample Generation around the Best Solution

The Smart-PRM algorithm's third strategy focuses on strategically generating samples around the identified best solution during the algorithm's iterations. This approach aims to refine the obtained path further and leverage the knowledge gained from the informed search. The Smart-PRM algorithm commences the third strategy once a path solution connecting the start and goal nodes is successfully found. In this strategy, the algorithm uses 50% of the sampling points to exploit the area around this best solution, while the remaining 50% of the sampling points explore the area based on the informed search procedure described in the first strategy. By concentrating sampling efforts around the best solution, the Smart-PRM algorithm aims to identify alternative paths or variations that may contribute to a more optimal solution. This exploration has the potential to uncover paths that were not initially considered. The approach for exploiting the area around the optimum solution resembles the exploitation process in the RRT-ACS algorithm presented by Pohan et al. in [29]-[30]. An illustration of this third strategy can be seen in Figure 3. Initially, sample generation is conducted randomly throughout the area. Then, as shown in Figure 3a, Dijkstra's algorithm is used to find a path connecting the start and goal nodes using the generated sample nodes. After the path is obtained, some sampling nodes are relocated around the best path. As shown in Figure 3b, there are more sampling nodes around the obtained best path compared to Figure 3a. Therefore, using sampling nodes around the best path has the potential to yield a more optimal route, as demonstrated in Figure 3c.
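The exploitation step can be sketched as relocating sample nodes to Gaussian-perturbed points along the current best path. The 2D sketch below uses an assumed noise scale `sigma`; the 50/50 exploit/explore split mentioned above would be applied by the caller.

```python
import numpy as np

def sample_near_path(path, n_samples, sigma=0.5, rng=np.random.default_rng()):
    """Generate `n_samples` nodes near the current best path by adding
    Gaussian noise to points interpolated along the polyline (2D)."""
    path = np.asarray(path, float)
    # Pick random positions along the polyline, segment by segment.
    seg = rng.integers(0, len(path) - 1, size=n_samples)
    t = rng.uniform(size=(n_samples, 1))
    on_path = path[seg] * (1 - t) + path[seg + 1] * t
    return on_path + rng.normal(scale=sigma, size=on_path.shape)
```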
Fourth Strategy: Sample Generation around Obstacles

The fourth strategy in the Smart-PRM algorithm focuses on strategically generating samples around obstacles encountered in the environment. After encountering newly identified obstacles during the iterations, the Smart-PRM algorithm initiates the fourth strategy, systematically using several sampling points to explore and understand the areas around these obstacles. This strategy contributes to creating an optimal path, as optimal paths are often found around obstacles [31]. Strategic sampling around obstacles enhances the algorithm's flexibility and robustness, especially in scenarios where conventional approaches may face difficulties, such as environments with narrow passages. An illustration of this fourth strategy can be seen in Figure 4. When the algorithm detects samples near an obstacle (purple points in the white gap in Figure 4a), the sides of the obstacle are explored by more samples (as indicated by the three purple points in the white gap in Figure 4b). When a sufficient number of areas on the sides of obstacles have been explored by sample points (Figure 4c), there is the potential to discover a better path, as depicted in Figure 4d.

Fifth Strategy: Route Repair Using the Wrapping Procedure

The path-correction strategy using the wrapping process emulates the wrapping-based Informed RRT* algorithm discussed in [32]. This wrapping process aims to find a shorter path by creating new nodes close to obstacles. An illustration of this fifth strategy is shown in Figure 5. In the example case depicted in Figure 5, there is an initial red path consisting of four nodes. The wrapping process begins by creating a temporary node (Xtemp) at node Xi+1 (node X2). Node Xtemp is connected to node X1 with a blue line, as shown in Figure 5a. Then, the position of node Xtemp is advanced along the path connecting node Xi+1 to node Xi+2; the full step-by-step procedure is described in the caption of Figure 5.

Comprehensive Overview of the Smart-PRM Algorithm

The complete algorithm proposed is illustrated in Figures 6 and 7. The PRM algorithm consists of sample generation (lines 1-26 in Algorithm 1), roadmap construction (lines 30-37 in Algorithm 1) and path planning connecting the start node to the goal node through the generated sample nodes (lines 38-39 in Algorithm 1); the proposed algorithm uses Dijkstra's algorithm for the path-planning step. The second strategy of the Smart-PRM algorithm is implemented in lines 3-16 of Algorithm 1. Setting the initial ellipsoid-size parameter to its minimum creates a small-sized ellipsoid subset area. If a path solution in this small area cannot be found, the ellipsoid area is iteratively enlarged until a path solution connecting the start node to the goal node is found; the expansion of the ellipsoid area while no path has been found is shown in line 44 of Algorithm 1. The first strategy of the Smart-PRM algorithm is implemented in lines 17-25 of Algorithm 1 and in Algorithm 2. In Algorithm 2, samples are generated only in the ellipsoid area surrounding the current best path, with eccentricity depending on its length. Each time the algorithm finds a shorter path, this length is updated (line 42 of Algorithm 1), so the concentration of the path search increases.
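A coarse 2D sketch of the wrapping step, under our own discretisation of the continuous advance of Xtemp, is given below. Here `collision_free` is an assumed, environment-specific segment test, and the real procedure in [32] operates as described in the caption of Figure 5.

```python
import numpy as np

def wrap_path(path, collision_free, steps=20):
    """Shorten `path` by sliding a temporary node (Xtemp) forward and
    anchoring a new node wherever the line of sight from the current
    anchor breaks, in the spirit of the wrapping procedure.
    `collision_free(a, b)` must report whether segment a-b is obstacle-free."""
    path = [np.asarray(p, float) for p in path]
    wrapped = [path[0]]
    anchor = path[0]
    for i in range(1, len(path)):
        last_visible = None
        # Advance Xtemp along the segment path[i-1] -> path[i].
        for t in np.linspace(0.0, 1.0, steps):
            x_temp = path[i - 1] * (1.0 - t) + path[i] * t
            if collision_free(anchor, x_temp):
                last_visible = x_temp
            else:
                break
        if last_visible is None:
            # No line of sight at all: keep the original node as anchor.
            wrapped.append(path[i - 1])
            anchor = path[i - 1]
        elif not np.allclose(last_visible, path[i]):
            # Sight line broke mid-segment: anchor a node near the obstacle.
            wrapped.append(last_visible)
            anchor = last_visible
        # Otherwise the whole segment is visible from the anchor: skip node.
    wrapped.append(path[-1])
    return wrapped
```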
RESULTS AND DISCUSSION

Several tests were performed to validate the performance of the suggested path-planning algorithm. The first test aimed to verify the effectiveness of the first strategy of Smart-PRM, which generates samples using an informed search procedure. The second test was conducted to confirm the effectiveness of the second strategy, which employs incremental search techniques on increasingly dense samples. The third test aimed to verify the effectiveness of the third strategy, where samples are generated around the best solution. The fourth test was carried out to confirm the effectiveness of the fourth strategy, which generates samples around obstacles. The fifth test was conducted to verify the effectiveness of the fifth strategy, which repairs the found route using the wrapping procedure. Meanwhile, the sixth test was developed to compare the Smart-PRM algorithm to the PRM algorithm [33], informed RRT*-Connect [18] and informed PRM [23]. The computational time for each approach to attain the optimal result was measured as the performance metric. All tests were run 40 times independently with identical settings, and the comparison was based on each algorithm's average performance across the 40 tests. All tests were carried out on a PC with a Core i5 3.20 GHz CPU and 4 GB RAM running Windows 10 64-bit. The Smart-PRM algorithm and the comparative algorithms were built in LabVIEW 7.1 using the Robotic Path-planning LabVIEW Libraries [34].

Experimental Scenarios

The proposed Smart-PRM method is compared to existing algorithms to validate its convergence speed and optimality performance. The performance of the path-planning algorithms is evaluated using four common scenario cases: one with a single obstacle, one with narrow passages, one with a T-shaped obstacle and one with many randomly scattered obstacles. The testing scenario with a single obstacle is illustrated in Figure 8a; this scenario assesses whether an algorithm can produce an optimally convergent path, and Mashayekhi et al. [18] utilized a similar scenario to evaluate their proposed path-planning algorithm. The testing scenario in an environment with narrow passages is depicted in Figure 8b; it is employed to evaluate the effectiveness of path-planning algorithms when the goal node is hidden behind narrow passages, and Gammell et al. [16] and Mashayekhi et al. [18] used similar scenarios. The testing scenario in an environment with a T-shaped obstacle is shown in Figure 8c; it assesses the algorithm's effectiveness in environments where the generated path needs to navigate turns, and Islam et al. [35] used similar scenarios. The testing scenario in an environment with multiple randomly scattered obstacles is illustrated in Figure 8d; it is employed to evaluate the convergence speed of the path-planning algorithm, and Gammell et al. [16] used similar scenarios.
Verification of the First-strategy Effectiveness: Informed Search Procedure for Sample Generation

The first test aims to verify the effectiveness of the first strategy, namely informed sample generation. The test compares the basic PRM algorithm with the improved PRM algorithm using the first Smart-PRM strategy. Testing is performed on the four scenarios mentioned in sub-section 3.1, and the measured performance is the computation time of each algorithm to achieve the optimal path. The test results can be seen in Table 1, and an analysis of the average-percentage comparison of convergence time to reach the optimal path for both algorithms can be found in Table 2. Based on the data in Table 2, the improved PRM algorithm using the first Smart-PRM strategy is on average 5.49 times faster than the basic PRM algorithm. This result is consistent with the performance measurements of the informed RRT* algorithm (which employs the same enhancement strategy) reported by Gammell et al. in [16]: by limiting the sample-acquisition region to the subset ellipsoid area with eccentricity matching the length of the path solution found in that iteration, the informed RRT* algorithm becomes 3.4 times faster than the RRT* algorithm in achieving the optimal path. This result verifies the effectiveness of the first strategy, informed sample generation, in improving the performance of the PRM algorithm.

Verification of the Second-strategy Effectiveness: Incremental Search Techniques on Increasingly Dense Samples

The second test aims to verify the effectiveness of the second strategy. In this test, the first strategy is not included, so the enhancement of the PRM algorithm derives solely from the second strategy. The test compares the basic PRM algorithm with the improved PRM algorithm using the second Smart-PRM strategy. Testing is performed on the four scenarios mentioned in sub-section 3.1, and the measured performance is the computation time of each algorithm to achieve the optimal path. The test results can be seen in Table 3, and an analysis of the average-percentage comparison of convergence time to reach the optimal path for both algorithms can be found in Table 4. Based on the data in Table 4, the improved PRM algorithm using the second Smart-PRM strategy is on average 7.48 times faster than the basic PRM algorithm. This result is consistent with what Gammell et al. [27] reported for the BIT* algorithm (which employs a similar strategy to enhance the RRT* algorithm): by first sampling in a small-sized subset ellipsoid area, the BIT* algorithm can achieve an optimal solution 6.8 times faster than the RRT* algorithm. This result verifies the effectiveness of the second strategy, namely using incremental search techniques on increasingly dense samples.
Verification of the Third-strategy Effectiveness: Sample Generation around the Best Solution

The third test aims to verify the effectiveness of the third strategy. Neither the first nor the second strategy is included, so the enhancement of the PRM algorithm derives solely from the third strategy. The test compares the basic PRM algorithm with the improved PRM algorithm enhanced only by the third Smart-PRM strategy. Testing is performed on the four scenarios mentioned in sub-section 3.1, and the measured performance is the computation time of each algorithm to achieve the optimal path. The test results can be seen in Table 5, and an analysis of the average-percentage comparison of convergence time to reach the optimal path for both algorithms can be found in Table 6. Based on the data in Table 6, the PRM algorithm with the third strategy added is on average 8.94 times faster than the basic PRM algorithm. This result verifies the effectiveness of the third strategy, generating samples around the best solution, in improving the performance of the PRM algorithm.

Verification of the Fourth-strategy Effectiveness: Sample Generation around Obstacles

The fourth test aims to verify the effectiveness of the fourth Smart-PRM strategy. The first, second and third strategies are not included, so this test's enhancement of the PRM algorithm derives solely from the fourth strategy. The test compares the basic PRM algorithm with the improved PRM algorithm using the fourth Smart-PRM strategy. Testing is performed on the four scenarios mentioned in sub-section 3.1, and the measured performance is the computation time of each algorithm to achieve the optimal path. The test results can be seen in Table 7, and an analysis of the average-percentage comparison of convergence time to reach the optimal path for both algorithms can be found in Table 8. Based on the data in Table 8, the improved PRM algorithm using the fourth strategy is on average 6.22 times faster than the basic PRM algorithm. This result verifies the effectiveness of the fourth strategy, generating samples around obstacles, in improving the performance of the PRM algorithm.

Verification of the Fifth-strategy Effectiveness: Route Repair Using the Wrapping Procedure

The fifth test aims to verify the effectiveness of the fifth Smart-PRM strategy. The first, second, third and fourth strategies are not included, so this test's enhancement of the PRM algorithm derives solely from the fifth strategy. The test compares the basic PRM algorithm with the improved PRM algorithm using the fifth strategy. Testing is performed on the four scenarios mentioned in sub-section 3.1, and the measured performance is the computation time of each algorithm to achieve the optimal path. The test results can be seen in Table 9, and an analysis of the average-percentage comparison of convergence time to reach the optimal path for both algorithms can be found in Table 10. Based on the data in Table 10, the improved PRM algorithm using the fifth strategy is on average 11.82 times faster than the basic PRM algorithm. This result verifies the effectiveness of the fifth strategy, path refinement using the wrapping process, in improving the performance of the PRM algorithm.
Analyzing the Contribution of Each Sampling Strategy

Based on Tables 2, 4, 6, 8 and 10, a table illustrating the contribution of each sampling strategy can be constructed, as demonstrated in Table 11. Table 11 compares the convergence time using each strategy against the basic PRM algorithm across the various scenarios, allowing the relative contribution of each strategy to the overall algorithm performance to be evaluated. Upon examining the data, it is evident that the fifth strategy, route repair using the wrapping procedure, demonstrates the most significant contribution to achieving superior performance across the different scenarios.

Performance Comparison between the Smart-PRM Algorithm and Other Algorithms

The sixth test compares the Smart-PRM algorithm (which implements all five proposed techniques) to the informed RRT*-Connect and informed PRM algorithms. The test is run on the four scenarios described in sub-section 3.1, and the computation time of each algorithm to find the best path is assessed as the performance metric. Table 12 displays the test results, and Table 13 contains an analysis of the average-percentage comparison of convergence time to reach the optimal path. According to the statistics in Table 13, the Smart-PRM algorithm is on average 18.06 times faster than the informed RRT*-Connect algorithm and 3.47 times faster than the informed PRM algorithm. Therefore, the Smart-PRM algorithm requires less computational time to design an optimal path than the informed RRT*-Connect and informed PRM algorithms. The test results also show that the Smart-PRM algorithm can create an optimal path in all test scenarios.

Evaluating the Stability of the Smart-PRM Algorithm

According to Xue [36], a path-planning algorithm is considered stable if it consistently produces the same path when planning the same task. Therefore, we evaluate the stability of the Smart-PRM algorithm using the data provided in Table 14.

Example Application

As an example of an application requiring fast asymptotically optimal path planning, our algorithm, with its fast convergence, would be highly beneficial in the implementation of autonomous vehicles. Algorithms with fast convergence are paramount in traffic-safety contexts, where optimal path planning and rapid response to unforeseen situations are crucial. For instance, Figure 9 illustrates a scenario in which an autonomous vehicle encounters a curve in the road while pedestrians are crossing unexpectedly. In such situations, autonomous vehicles must be able to respond quickly to plan alternative safe routes and avoid potential accidents. This study can be used as a reference for current issues in vehicle automation, as discussed in previous studies [37]-[40].

Figure 9: An illustration in which an autonomous vehicle (green car) must be able to quickly plan alternative routes when sudden changes in environmental conditions are encountered, such as a sudden pedestrian crossing (illustrated by the red circle).
Figure 1. Illustration of the information-based sample generation process in the Smart-PRM algorithm: (a) Initial random sample generation, (b) Establishment of a constraint area based on the initial path solution, (c) Subsequent sample generation within the constrained area and (d) Decrease in constraint-area size with successive iterations, leading to a concentrated path search.

Figure 2. Illustration of the incremental search technique on increasingly dense samples in the Smart-PRM algorithm: (a) Initial sample generation with a small ellipsoidal area, (b) Iterative expansion of the ellipsoidal area to include denser samples, (c) Finalization of the ellipsoidal area with a path solution and (d) Adjustment of the ellipsoidal area based on path optimization.

Figure 3. Illustration of sample generation around the best solution in Smart-PRM: (a) Pathfinding using Dijkstra's algorithm and initial sample generation, (b) Relocation of sampling nodes around the best path and (c) Potential optimization of the route with sampling nodes around the best path.

Figure 4. Illustration of sample generation around the obstacles in Smart-PRM: (a) Detection of samples near obstacles and initial exploration, (b) Increased exploration of obstacle sides by additional samples, (c) Sufficient exploration of the areas around obstacles by sample points and (d) Potential discovery of better paths around obstacles.

Figure 5. The temporary node Xtemp is advanced along the path connecting node Xi+1 to node Xi+2, as in Figure 5b. The light blue area indicates the path covered by the blue line connecting X1 to Xtemp. The position of node Xtemp continues to advance until an obstacle obstructs the blue line connecting X1 to Xtemp, as shown in Figure 5c. The position where the blue line meets the obstacle is marked as a new node X2'. In the next iteration, the position of Xtemp is advanced again, but because the new node X2' has been found, the blue line now connects Xtemp to X2', as depicted in Figure 5d. The position of Xtemp continues to advance until it reaches node Xi+2 (node X3). Once node X3 is reached, Xtemp is further advanced along the path connecting node Xi+2 to Xi+3 (node X3 to X4); this process is shown in Figure 5e. If the blue line connecting node X2' to Xtemp encounters an obstacle, the position where the blue line meets the obstacle is marked as a new node X3'. This iteration continues until Xtemp reaches the destination node Xgoal, as shown in Figure 5f. Figure 5g compares the initial path with the path produced by the wrapping operation: the red line is the original path, the blue line is the corrected/improved path resulting from the wrapping process, and the green nodes are new nodes created during wrapping. Illustration of the wrapping process to optimize the generated path. The red line represents the initial path, while the blue line represents the repaired/improved path: (a) Creation of the temporary node Xtemp and its connection to X1, (b) Advancement of Xtemp along the path between nodes Xi+1 and Xi+2, (c) Identification of an obstacle obstruction and creation of the new node X2', (d) Continued advancement of Xtemp towards node Xi+2 (X3), now connected to X2', (e) Further advancement of Xtemp along the path towards node Xi+2 (X3), with potential creation of the new node X3', (f) Completion of the wrapping process when Xtemp reaches the destination node Xgoal and (g) Comparison of the initial and improved paths resulting from the wrapping operation.
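The wrapping (route-repair) procedure narrated in the Figure 5 caption can be summarized in code. The sketch below is a simplified reading of that description, assuming a 2-D polyline path and a caller-supplied segment collision test; names such as wrap_path and the fixed step size are illustrative and not taken from Algorithm 1:

```python
import numpy as np

def wrap_path(path, segment_blocked, step=0.05):
    """Greedy 'wrapping' repair: advance a temporary node X_temp along the
    remaining path, keeping line of sight from the current anchor; where the
    sight line breaks on an obstacle, fix a new node (X2', X3', ...).

    path:            list of 2-D waypoints [X1, X2, ..., Xgoal]
    segment_blocked: callable(p, q) -> True if the straight segment p-q
                     intersects an obstacle (assumed supplied by the planner)
    """
    path = [np.asarray(p, dtype=float) for p in path]
    repaired = [path[0]]                 # start from X1
    anchor = path[0]
    for i in range(len(path) - 1):
        a, b = path[i], path[i + 1]
        # Advance X_temp along segment a->b in small increments.
        for t in np.arange(step, 1.0 + step, step):
            x_temp = a + t * (b - a)
            if segment_blocked(anchor, x_temp):
                # Line of sight broken: the last visible point becomes a new node.
                new_node = a + (t - step) * (b - a)
                repaired.append(new_node)
                anchor = new_node
    repaired.append(path[-1])            # finish at Xgoal
    return repaired
```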
Figure 7. Sample-generation strategy in the Smart-PRM algorithm.

Figure 8. Testing scenarios: (a) environment with a single obstacle, (b) environment with narrow passages, (c) environment with a T-shaped obstacle and (d) environment with multiple randomly scattered obstacles.

Line 11 of Algorithm 2 implements the Smart-PRM algorithm's third strategy. Lines 27-29 of Algorithm 1 execute the fourth strategy, and line 41 of Algorithm 1 implements the fifth strategy.

Figure 6. Smart-PRM algorithm.

Algorithm 2.

Table 1. Comparison of the improved PRM algorithm using the first strategy against the basic PRM algorithm (in seconds).

Table 2. Comparison of the average convergence time of the improved PRM algorithm using the first strategy against the basic PRM algorithm.

Table 3. Comparison of the improved PRM algorithm using the second strategy against the basic PRM algorithm (in seconds).

Table 4. Comparison of the average convergence time of the improved PRM algorithm using the second strategy against the basic PRM algorithm.

Table 5. Comparison of the improved PRM algorithm using the third strategy against the basic PRM algorithm (in seconds).

Table 6. Comparison of the average convergence time of the improved PRM algorithm using the third strategy against the basic PRM algorithm.

Table 7. Comparison of the improved PRM algorithm using the fourth strategy against the basic PRM algorithm (in seconds).

Table 8. Comparison of the average convergence time of the improved PRM algorithm using the fourth strategy against the basic PRM algorithm.

Table 9. Comparison of the improved PRM algorithm using the fifth strategy against the basic PRM algorithm (in seconds).

Table 10. Comparison of the average convergence time of the improved PRM algorithm using the fifth strategy against the basic PRM algorithm.

Table 11. Comparison of convergence time using each strategy against the basic PRM algorithm across various scenarios.

Table 12. Comparison of the Smart-PRM algorithm against the informed RRT*-Connect and informed PRM algorithms (in seconds).

Table 13. Comparison of the average convergence time of the Smart-PRM algorithm against the informed RRT*-Connect and informed PRM algorithms.

Table 14. Comparison of algorithm stability across various benchmark scenarios; the best results are highlighted for each section. Table 14 summarizes the statistical results of performance measurements obtained by Smart-PRM and the other algorithms in the various benchmark scenarios. Performance measurements include the best path length, worst path length, average path length and standard deviation. A smaller standard deviation indicates that the cost values of the paths generated in each iteration are more consistent. As shown in Table 14, the standard deviation of the Smart-PRM algorithm is the smallest, or relatively small, compared with that of the other algorithms in each benchmark scenario, suggesting that the Smart-PRM algorithm tends to be more stable than the other available algorithms.
Employment in the post-pandemic period: problems and prospects (regional aspect)

The COVID-19 pandemic has fundamentally changed the employment situation, both globally and within individual states. Several tens of millions of people worldwide were left without work. The unemployment rate, both statistically confirmed and hidden, has risen significantly. Only the most developed and richest countries were able to restrain the rapid growth in the number of unemployed through budget transfers. At the same time, the era of social distancing contributed to the revision of work standards in many industries and changed the conditions of employment and employers' requirements of employees. The forced transition of thousands of institutions to remote work was a catalyst for the digital economy and led to the emergence and rapid growth of new clusters of professions of the future. The world after the pandemic will no longer be the same, above all in the field of employment.

Introduction

COVID-19 has dramatically changed the job market and the workplace. According to statistics, tens of millions of people worldwide have lost their jobs due to the coronavirus. However, economists are confident that the official unemployment figures do not reflect the real state of affairs. Hidden unemployment is also growing significantly. It has always existed, but the shadow army of citizens who are not employed yet not counted as unemployed has now grown, because when assessing the labor market on the basis of monthly population surveys, statisticians count as unemployed only those who are actively looking for work and ready to start immediately. The statistical office of the European Union has admitted that the indicators of employment and unemployment, as defined by the International Labour Organization (ILO), do not fully reflect what is happening in this particular situation. There is a direct relationship: where the volume of budgetary spending on anti-crisis support is higher, the losses in the labor market are smaller. This statement is supported by ILO estimates. At the same time, there are signs of improvement in employment. The pandemic accelerated the digitalization of all spheres of society and sectors of the economy, highlighted the need for digital skills and led to the active formation of new clusters of professions of the future.

Materials and Methods

It should be noted that unemployment is the situation of actively looking for employment while not being currently employed. Eurostat defines an unemployed person as someone aged 15 to 74 (in Italy, Spain, the UK, Iceland and Norway: 16 to 74), without work during the reference week, available to start work within the next two weeks (or who has already found a job to start within the next three months) and who has actively sought employment at some time during the last four weeks [1]. According to Eurostat, the unemployment rate in the EU increased by a quarter from March to August of the current year [1]. The situation has temporarily stabilized, but so far there is no hint of an increase in employment, and since the growth rate of COVID-19 cases is not decreasing, there is unfortunately no reason for optimism. As Statista states [2], this indicator more than tripled in the USA in April 2020 after the outbreak. The situation is gradually improving; however, the epidemiological situation in the country is still tense, so a rapid reduction in the unemployment rate should not be expected.
According to the ILO, the number of hours worked globally during the peak of the first wave of COVID-19 fell by the equivalent of 500 million jobs, and earnings fell by 10% [3]. Experts say that $12 trillion was spent worldwide on subsidies to the population and businesses, and most of these funds were provided by the governments of developed countries. As the ILO confirms, had it not been for these astronomical injections, mass layoffs would have punched a $3.5 trillion hole in family budgets. But money is rapidly running out even in developed countries, subsidies are melting away, and all this promises a surge in unemployment. First of all, this may affect European states, where out of 190 million employees one in three received state support to one degree or another [4]. According to economists, real unemployment in France, Italy, Spain and Great Britain is 3-5 percentage points higher than the official rate. By their calculations, one in five Europeans whose salaries were shifted onto taxpayers' shoulders during the lockdown risks being left without a job when the authorities cut off anti-crisis funding. Companies on both sides of the Atlantic confirm these concerns, announcing one round of cuts after another. They are doing this only now, six months after the start of the pandemic, because their internal reserves are depleted, a second wave of coronavirus looms on the horizon, and the authorities are gradually curtailing state support. Dozens of the largest companies reported mass layoffs and sent workers on indefinite unpaid leave in October 2020. For example, the Walt Disney Company announced the layoff of 28,000 theme park workers in the US, where the parks now stand nearly empty. The European oil and gas concern Royal Dutch Shell will cut 9,000 positions, and the German company Continental plans to cut or outsource 30,000 employees worldwide. A catastrophic situation has developed in the air transportation industry. For example, in the United States, more than 150,000 airline workers have already been fired or sent on leave without pay. At the end of October 2020, United announced its readiness to cut 12,000 jobs [1]. And that is just big international business. The growth of hidden unemployment is also quite natural. Pensioners, students, and those who have despaired of finding a job and abandoned their search are not counted as part of the economically active population and are not taken into account. During the pandemic, the share of the latter increased sharply, because people in many professions soberly assess their prospects when looking at boarded-up shop windows and empty airports. And if they are not looking for work, they are not considered unemployed. This embellishes the official statistics, but not for long. A good example is the USA. In America, unlike in Europe, employees on forced leave are immediately classified as unemployed, and therefore unemployment at the peak of the pandemic increased there not by a fraction of a percent but soared from a historically insignificant 3.5% to 14.7%, or almost 23 million people. By the fall it had halved, partly because the economy revived and partly because Americans who were looking for work in April but found nothing over the summer had lost hope by September and dropped out of the so-called economically active population, against which unemployment is calculated.
As a result, with the official September count of 12.6 million unemployed, the real number of Americans who have lost their earnings due to the virus exceeds 20 million, according to the calculations of specialists from the Economic Policy Institute [5]. Many are still employed, but with reduced hours and reduced wages. Others, having lost a permanent job with a good salary, get by on various temporary earnings as couriers, taxi drivers or tutors. The number of jobs is growing, but the number of workers is still the same, and their income is even lower. Taking into account those who lost their salaries or switched to part-time work, more than 30 million people became victims of the pandemic in the United States. State statisticians themselves do not hide the fact that their bulletins do not reflect the real state of affairs on the labor market during the pandemic. According to Eurostat economists, in the second quarter of 2020, which saw the first wave of coronavirus infections and lockdowns, the gap between labor supply and demand in the EU was 14%, while official unemployment did not exceed 6.5% [1]. The longer people remain out of the labor market, the harder it is for them to return to it as competitive workers: the longer the break in a career, the lower the chances of finding a job at the same level. As a result, inequality is growing, fraught with social tension and political instability, not only within nations but also internationally, between rich countries and poorer ones. According to Allianz economists' calculations, the virus has made 13 million Brazilians and almost two million Chileans unemployed, and real unemployment is 10 percentage points higher than the official rate in Brazil and 15 points higher in Chile. In Spain, according to the country's central bank, unemployment has reached 20% and risks staying at this level for at least two more years, since the country has suffered more than its neighbors not only from the virus but also from the lockdown, because its economy relies more heavily than those of other European countries on the restaurant and hotel business. In the second quarter of 2020, 195 million people were employed in the EU. 'Professionals', a group encompassing people highly skilled in domains such as science and engineering, health, teaching, business and administration, information and communications technology, and legal, social and cultural fields, is the most common occupational group among EU workers (21% of total employment). They are followed by 'technicians and associate professionals' (17%), 'service and sales workers' (16%), 'craft and related trade workers' (12%) and 'clerical support workers' (10%). At the other end of the scale, fewer than one in ten workers had an 'elementary occupation' (e.g. cleaners and helpers, labourers, food preparation assistants, street and related sales and service workers) (8%), or were working as 'plant and machine operators and assemblers' (7%), 'managers' (5%) or 'skilled agricultural, forestry and fishery workers' (4%) [1]. In the second quarter of 2020, the fall in the number of temporary contracts corresponded to 80.5% of the total decrease in employment for this age group. Youth employment has been particularly affected: from the last quarter of 2019 to the second quarter of 2020, the share of temporary contracts in employment fell from 46.2% to 42.7% for young people aged 15-24, while it decreased only from 11.6% to 10.2% for the population aged 20-64.
The decline in employment with temporary contracts was visible in all EU Member States apart from Lithuania and Denmark. The most substantial contractions were found in Latvia, Bulgaria, Malta, Slovenia, Estonia, Greece and Slovakia, where the decrease exceeded 20%. The number of people employed part-time also fell over the same period, dropping from 33.8 to 31.0 million, a decrease of 2.8 million among people aged 20-64. Note that the group of people with temporary contracts overlaps with the group of part-time workers. Declines in part-time employment were also recorded in 20 out of the 26 EU Member States for which data are available, with drops of more than 10% in Ireland, Spain, Malta, Portugal and Latvia. However, part-time employment increased by more than 10% in Hungary (27.7%) and Bulgaria (10.9%). In the USA, according to the Robert Half special report, workers are adapting and feel supported: 77% of employees have been working from home since the pandemic emerged, 63% of professionals realize their job is doable from home, and 97% of workers said their manager has been a source of support during this challenging time [5]. Nevertheless, the labor market is showing signs of improvement. Bright spots bring optimism:

1. More than 18 million unemployed professionals who were temporarily laid off due to work slowdowns or business closures expect to return to work.
2. The unemployment rate for college-degreed workers aged 25 and older, under 7%, is below the national unemployment rate, which is near 11%.
3. Many companies have learned that remote work is a viable option, and employees enjoy the flexibility: 79% said their job allows "windowed work", the ability to block the day into chunks of business and personal time, and 73% of those workers said it leads to greater productivity.
4. Workers have become more comfortable using technology for remote work, and 60% said the lack of a commute has improved their work-life balance.
5. Human resources leaders reported that the majority of organizations are using new virtual technology to interview candidates due to the COVID-19 pandemic.
6. Small business owners expect the recession to be short-lived, and nearly 40% anticipate better business conditions in the next 6 months.

Employers must pay attention to employees' post-pandemic expectations:
• 74% of professionals would like to work remotely more often than before the outbreak; more parents (79%) expressed this preference;
• 55% would like staggered work schedules;
• 79% think their companies should have better cleaning procedures;
• 52% feel their employers should require workers to wear masks;
• 46% want their employers to change the office layout in an effort to maintain social distancing.

The situation is no better in Russia, where underemployment has increased dramatically. At large and medium-sized enterprises, part-time employment was 3.8% in the first quarter of 2020 and already 6.3% in the second quarter. The most difficult situation has developed in Yakutia, the Samara region and the Perm Territory [6]. A new trend for Russia, as in developed countries, is the rapid growth of registered unemployment, observed for the first time since the 1990s. Registered unemployment increased five-fold by the end of August: at the end of February 2020 it was less than 1%, but it is now already 4.8%. The reasons are an increase in the level of benefits to the subsistence minimum and sharply simplified registration, which involves less paperwork and can be completed online.
It is estimated that about 40% of the registered unemployed had no previous legal job. Registered unemployment grew fastest in Moscow, St. Petersburg and the Moscow region, increasing 7-8-fold. But there the pre-crisis level was minimal, only 0.4-0.6%, and in August 2020 it already ranged from 3 to 3.5%. This is a consequence of the crisis, primarily in the service sector. The level of registered unemployment is twice as high (7-8%) in industrial regions with a large decline in industrial production. However, the leaders, as usual, are the republics of the North Caucasus and Tuva. The number of regions in which the unemployment rate was 10% or higher increased over the year from six to 13, as evidenced by the data of the RIA Novosti research [7]. The unemployment rate in Russia at the end of June-August 2020 amounted to 6.3%. This summer, a job search took an average of 5.9 months, and by August, 18.5% of the unemployed had been looking for work for 12 months or more, according to the survey data. In September, unemployment continued to rise: at the end of the month, the number of officially registered unemployed had increased 5.5-fold compared with September of the previous year and reached 3.7 million people. The RIA Novosti research is based on data from Rosstat. It compared the number of unemployed, by the standards of the International Labour Organization, with the size of the labor force, and also determined the average time needed to find a job. According to the Ministry of Labour, by the end of 2021 there will be 71.9 million employed people in Russia.

Results and Discussion

Even before COVID-19, workers and workplaces around the world were facing the potential for unprecedented levels of disruption. The tech revolution and the need for a green revolution have created an urgent need to increase access to the skills, tools and financial services workers need for the jobs of tomorrow. Technology has also exacerbated gender gaps, with the largest gaps in roles that rely heavily on disruptive tech skills: the share of women across cloud, engineering and data jobs is below 30%. The pandemic has accelerated the transition to a digital economy and, for those who are able to work from home, the need for digital skills. Those who cannot work from home, such as workers in healthcare, emergency services or the food supply chain, have faced difficult decisions about whether to put themselves and their families at risk by continuing to work, if they even have a job to go back to. COVID-19 has also had a disproportionate impact on workers in the informal economy, whose incomes dropped by as much as 81% in the first month of the pandemic in some regions, as well as on women, who account for 70% of health and social workers globally. Disruption to workplaces was happening well before COVID-19, but the pandemic increased the need for large-scale, informed and collaborative action to prepare for the future of work. The World Economic Forum's Future of Jobs Report 2020 finds that 84% of employers "are set to rapidly digitize working processes which includes a significant expansion of remote work - with the potential to move 44% of their workforce to operate remotely" [9]. For industries where working from home is not an option, investments are needed to protect workers and jobs. Most urgently, this includes providing workers with PPE, as a recent Forum survey of manufacturing companies found, and possibly rethinking workspaces entirely.
Then, employers need to turn to challenges like reskilling, professional development and technology adoption. The Ministry of Labor of the Russian Federation notes that the Russian labor market is gradually recovering and the number of offers from employers is growing; more than 1.47 million vacancies are already presented on the «Work in Russia» portal [10]. At the same time, as job search agencies note, employers are still looking for workers for project-based or part-time employment, often experimenting with remote work. According to experts, millions of citizens may leave salaried employment in the coming years and start working for themselves, and the self-employment regime makes this easy to do. Since the launch of the «My Tax» application, which allows users to register quickly and easily as self-employed, more than a million people have signed up. According to the director of the National Guild of Freelancers of Russia, the potential of the freelance market is 10 million people, although other experts cite figures of 5 to 15 million, and the number of self-employed may rise to these levels. This difference in assessments is explained by the "shadow" nature of self-employment. According to experts, every tenth working citizen has an additional source of income that is not declared in any way. Many people have side jobs, but for various reasons they do not come out of the shadows: some simply do not know about the self-employment regime and the simplicity of its registration, while others are not ready to officially advertise that they moonlight. Since the launch of the «My Tax» application, the self-employed have earned 143 billion rubles and paid more than 3.5 billion rubles to the budget. These numbers will gradually increase as the market for additional income is legalized. The most common activities in the «My Tax» application are taxi driving, delivery of goods, apartment rentals, tutoring and renovation work. Marketing and IT services are also popular. Designers, programmers, analysts, copywriters and other intellectual and creative professions make up no more than 10-15% of the self-employed, and their number is expected to grow. According to the Future of Jobs Report 2020, we can already see the emergence of clusters of professions of the future, such as data and artificial intelligence, engineering and cloud computing, and so on. Employers in the COVID-19 era are faced with a host of new challenges: keeping their businesses running amid economic uncertainty, managing remote workers and maintaining staff morale. There are several ways to boost employee morale: show workers they are valued, focus on employee wellness and pay top performers well. In an age of social distancing and teleworking, hiring is not getting easier, but there are some tips for finding new talent in the COVID-19 environment. Businesses will be flooded with applicants who have been laid off during the pandemic, and sorting through hundreds or potentially thousands of resumes to find the needle in the haystack can be overwhelming. Employers should be specific about their must-have requirements in the job description to discourage underqualified applicants. Even in a good economy there are no extra hours to invest in the hiring process; today employers have even less time, because they have fewer people on staff and more challenges in keeping the business running.
It is therefore necessary to block out the calendar during less busy times of the week to limit interruptions and focus on hiring tasks. The fact that many jobs can now be done remotely means that employers can extend their candidate search beyond geographic boundaries. A salary range must take into account the requirements of the job and the market the candidate lives in. While there are many more job seekers out of work than there were several months ago, employed professionals remain a key segment of the candidate supply; working with a recruiter widens the net to include these passive job seekers. The best candidates may already be on the payroll, which is why employers should consider opportunities to advance these workers and hire new employees with fresh perspectives to backfill their vacancies.

Conclusions

1. Remote work is now the new norm as many people remain sheltered in place, and remote hiring and onboarding may also be here to stay.
2. Employees have specific expectations about the post-pandemic workplace.
3. Hiring is not getting any easier, despite a larger pool of available talent.
4. The COVID-19 pandemic has had a major impact on skills and distance learning.
5. The economic consequences of the pandemic will see inequality accelerate.
6. The jobs of the future will be a combination of technical and human skills.

Taking into account all types of unemployment and its real scale, an extremely dangerous economic chain emerges:
• a reduction in the number of employed leads to a drop in the income of the population;
• people spend less, and the engine of the economy, consumption, begins to sputter;
• government revenues also fall, because receipts from taxes on employees shrink;
• government spending, on the contrary, grows, since the unemployed need social support in the form of benefits and housing;
• wage growth stops and is replaced by decline due to the surplus of labor;
• there is a very real threat of deflation.

As a result, the prospect of normalizing monetary policy, raising interest rates and increasing interest in savings as a source of investment, through which the economy develops and grows, becomes very illusory and certainly distant.
Species-targeted sorting and cultivation of commensal bacteria from the gut microbiome using flow cytometry under anaerobic conditions

There is a growing interest in using gut commensal bacteria as "next generation" probiotics. However, this approach is still hampered by the fact that there are few or no strains available for specific species that are difficult to cultivate. Our objective was to adapt flow cytometry and cell sorting to detect, separate, isolate, and cultivate new strains of commensal species from fecal material. We focused on the extremely oxygen sensitive (EOS) species Faecalibacterium prausnitzii and the under-represented, health-associated keystone species Christensenella minuta as proof-of-concept. A BD Influx® cell sorter was equipped with a glovebox that covered the sorting area. This box was flushed with nitrogen to deplete oxygen in the enclosure. Anaerobic conditions were maintained during the whole process, resulting in only minor viability loss during sorting and culture of unstained F. prausnitzii strains ATCC 27766, ATCC 27768, and DSM 17677. We then generated polyclonal antibodies against target species by immunizing rabbits with heat-inactivated bacteria. Two polyclonal antibodies were directed against F. prausnitzii type strains that belong to different phylogroups, whereas one was directed against C. minuta strain DSM 22607. The specificity of the antibodies was demonstrated by sorting and sequencing the stained bacterial fractions from fecal material. In addition, staining solutions including LIVE/DEAD™ BacLight™ Bacterial Viability staining and polyclonal antibodies did not severely impact bacterial viability while allowing discrimination between groups of strains. Finally, we combined these staining strategies with additional criteria based on bacterial shape for C. minuta and were able to detect, isolate, and cultivate new F. prausnitzii and C. minuta strains from healthy volunteers' fecal samples. Targeted cell sorting under anaerobic conditions is a promising tool for the study of fecal microbiota. It gives the opportunity to quickly analyze microbial populations, and can be used to sort EOS and/or under-represented strains of interest using specific antibodies, thus opening new avenues for culture experiments.

Microbiota research has expanded considerably in recent years, and countless associations have been reported between microbiota composition and specific health conditions. This is especially true for the human gut ecosystem, for which microbial signatures have been associated with metabolic syndrome, inflammatory bowel diseases (IBD), and response to cancer immunotherapy, to mention just a few. This offers new fundamental and applied research avenues, with the ultimate goal of developing new, complementary tools for treating these conditions [1-3]. In particular, 16S rRNA gene amplicon or shotgun metagenomic analyses conducted on fecal samples collected from cohorts of patients vs. controls have highlighted the decreased occurrence of several commensal bacterial species in pathological conditions [4]. There is thus a growing interest in using cultured, well-characterized strains to complement deficiencies in the gut microbiota, referred to as "next-generation probiotics" (NGP) [5].
This is the case for Faecalibacterium prausnitzii, which accounts for about 5-10% of the dominant microbial communities within the healthy gut microbiota [6] and has been associated with a number of favorable outcomes in various pathologies, including a lower risk of postoperative recurrence of ileal Crohn's disease [7] and an improved response to immune checkpoint blockers [8,9]. The phylogeny of F. prausnitzii is complex, comprising at least 3 different phylogroups, and possibly represents several species that remain to be described taxonomically [10-12]. Relative proportions of the different phylogroups in one and the same individual seem to vary depending on the specific disease condition, with phylogroup IIb strains being depleted in Crohn's disease patients [13,14]. It has consequently been proposed to use the corresponding relative abundances as a disease biomarker [15]. Other NGP candidates can be found within the family Christensenellaceae [16]. The relative abundance of these heritable bacteria is inversely correlated with host body mass index, and the type species Christensenella minuta has been demonstrated to reduce weight gain in germ-free mice colonized with fecal microbiota collected from obese individuals [17]. Recently, it has also been reported that C. minuta DSM33407 protected against diet-induced obesity and regulated associated metabolic markers such as glycemia and leptin in a diet-induced obesity mouse model [18]. In this context, and knowing that specific biological properties of gut bacteria, including host-beneficial properties, can vary significantly from one strain to another [19], there is major interest in building collections of different commensal strains of target species of interest identified via NGS studies. However, this approach is still hampered by the fact that retrieving target species from clinical samples (usually fecal material) can be difficult. Extreme oxygen sensitivity (EOS) (F. prausnitzii) or under-representation of the target species in the community (C. minuta) can be important limitations, and specific nutritional requirements can additionally render target species difficult to cultivate in synthetic media. Flow cytometry (FCM) coupled with cell sorting has the potential to circumvent most if not all of these limitations. With constantly improving technical performance, FCM can be used to analyze bacterial or even viral cell populations, with or without subsequent sorting [20,21]. With the objective of using FCM and cell sorting to analyze, sort, and cultivate bacterial species of interest from fecal samples, we adapted a cell sorter and the associated workflow to conduct sorting experiments under strictly anaerobic conditions. We then evaluated the impact of sorting as well as of non-specific and specific staining methods on the viability of several representative strains of the EOS species F. prausnitzii. To test the feasibility of specifically targeting bacteria using anaerobic sorting, we isolated new F. prausnitzii strains from fecal samples using antibodies raised against available reference strains. Finally, we applied this method to sort and cultivate novel strains of C. minuta, which is more tolerant to oxygen exposure but usually found at low abundance (in contrast to F. prausnitzii), demonstrating the potential of this approach.

Evaluation of polyclonal antibodies against reference strains of F. prausnitzii and Christensenella spp.

A recent publication suggested that the species F.
prausnitzii comprises at least two phylogroups whose contributions to human health might differ [10]. Therefore, we used representatives of both phylogroups to generate polyclonal antibodies: strain DSM 17677 (A2-165, phylogroup IIb) and a mix of the closely related strains ATCC 27766 and ATCC 27768 (phylogroup I). Antibodies raised against the ATCC strains were consistently effective in detecting > 90% of target bacteria. They showed cross-reactivity with strain A2-165, but the intensity of the staining was much lower, thus allowing the definition of a phylogroup-specific gating region (Fig. 1 and Additional files 1 and 3). Staining efficiency was more variable for antibodies directed against strain A2-165, with the proportion of stained bacteria ranging from 50% to > 90% depending on the experiment. These antibodies also reacted slightly with the ATCC strains, but here again a phylogroup-specific gating region could easily be defined (Fig. 1 and Additional files 1 and 3). We did not observe any fluorescence of unstained F. prausnitzii cells when exciting with the Violet 405 (450/50 nm) or Red 670 (670/30 nm) lasers, whereas significant auto-fluorescence was observed for the 3 different strains when exciting with the Blue 540/30 nm laser (Additional file 1). However, in this case the signal was still 1 to 2 logs lower than the signal used for gating live cells after staining with the SYTO 9 contained in the LIVE/DEAD™ kit (Additional file 3), so auto-fluorescence in that channel is not likely to interfere with the viability staining. We also generated polyclonal antibodies by injecting rabbits with heat-inactivated bacteria of the publicly available C. minuta strain DSM 22607. These antibodies were then tested against a pure culture of the parental strain and the related species "Christensenella massiliensis" and "Christensenella timonensis". Nearly all cells within the pure culture of C. minuta were indeed stained (Fig. 2 and Additional files 2 and 3). Limited cross-reactivity was observed with "C. massiliensis", for which, depending on the experiment, 1.2 to 14.3% of the bacteria were stained by the C. minuta antibodies (Fig. 2 and Additional file 2). There was no cross-reactivity with "C. timonensis".

Impact of staining and anaerobic sorting on F. prausnitzii and C. minuta viability

We then tested the effect of the sorting procedures on the viability of the EOS species F. prausnitzii [22] by comparing two conditions: (i) sorting performed under anaerobic conditions and (ii) sorting performed under normal atmosphere [23]. The recovery of unlabeled F. prausnitzii after sorting under anaerobic conditions was approximately 20% for the three tested strains, while no colony was observed when sorting was performed under normal atmosphere (Fig. 3). We then evaluated the effect of different staining methods (SYTO 9 and propidium iodide from the LIVE/DEAD™ BacLight™ Bacterial Viability Kit and the 4 strain-specific antibodies) on cultivability compared with unstained bacteria (see sorting gates in Additional file 3). These experiments showed no significant impact of any of the tested stains on F. prausnitzii cultivability during sorting under anaerobic conditions (Fig. 4). Several colonies were still observed after anaerobic sorting and cultivation of the propidium iodide (PI)-stained fraction, corresponding to approximately 0.1 to 1% of cultivable bacteria in this fraction (Fig. 4). The cultivability of C. minuta was very high and remained unaffected after LIVE/DEAD™ and antibody staining.
Unstained and pre-immune controls

We did not detect any auto-fluorescence in the Red 670/30 nm channel for samples collected from healthy volunteers (HV) 1 to 4. The additional signal observed when the same samples were stained with the pre-immune fluorescent antibodies was limited, with percentages of stained events ranging from 0.01 to 0.21% (Additional file 4). The signal detected after staining with Alexa Fluor™ 647-antibodies directed against the ATCC strains of F. prausnitzii was clearly distinguishable from the background (Additional file 4, HV1 and HV4). The situation was markedly different in the 450/50 nm channel, with auto-fluorescent events being clearly detected as distinct populations accounting for 0.03 to 1.96% of the events in the "bacteria" gate (Additional file 4). However, samples in which events were detected after staining with Alexa Fluor™ 405-antibodies directed against F. prausnitzii A2-165 displayed populations that could clearly be distinguished from the background auto-fluorescence of unstained bacteria (Additional file 4, HV1 and HV4).

Evaluation of antibodies' specificity and enrichment rates for fractions sorted after staining with polyclonal antibodies and for the 450 nm auto-fluorescent fraction

As shown in Fig. 5A, we observed a very substantial enrichment in the sorted fractions, with 91.6% of the 16S rRNA gene amplicon sequences being annotated as F. prausnitzii in the fraction sorted using the antibody directed against strains ATCC 27766 and ATCC 27768, and 75.4% in the fraction sorted using the antibody directed against strain A2-165. The enrichment rates calculated on the basis of the normalized sequences were of the order of 100 for both antibodies, and there was no significant enrichment of other zOTUs in the F. prausnitzii-sorted fractions (Fig. 6A and B). Since auto-fluorescent events were detected in samples collected from HV4 when exciting with the 405 nm laser, we also sorted this fraction from the HV2+HV4 mix and performed 16S rRNA gene amplicon sequencing and analysis. When excluding zOTUs to which less than 0.05% of the normalized sequences were affiliated in the original fecal material, the most enriched bacteria in the sorted fraction were affiliated to the Methanobacteriaceae (Figs. 5A and 6D). Several zOTUs affiliated to the genera Bacteroides, Eubacterium and Faecalibacterium, and to the Christensenellaceae R-7 group, were also enriched by more than 5 times in the sorted fraction compared with the original pooled material (Fig. 6D).

Fig. 4 Fractions of bacteria recovered in culture after various stainings. Fresh cultures of the three F. prausnitzii and C. minuta reference strains were stained with SYTO 9 and propidium iodide, and with specific antibodies, and were then sorted in increasing amounts onto mGAM plates to calculate the percentage of cultivability. Experiments were performed twice in triplicate (see Additional file 5 for sorting gates).

For the fraction sorted from the pool of 2 samples spiked at approx. 2% with the C. minuta collection strain, 71.7% of the sequences were indeed annotated as C. minuta (Fig. 5B). The enrichment rate calculated on the basis of the normalized sequences was 16.8, and it was 16.7 for a zOTU affiliated to the genus Faecalibacterium (Fig. 6C), suggesting that these antibodies can potentially cross-react with species other than C. minuta.
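To make the enrichment-rate computation used above explicit, here is a hedged sketch: the enrichment of a zOTU is taken as the ratio of its relative abundance (from normalized sequence counts) in the sorted fraction to that in the original material, discarding zOTUs below the 0.05% threshold mentioned in the text. The counts and zOTU names below are invented placeholders, not the study's data:

```python
def relative_abundance(counts):
    """Convert raw (normalized) sequence counts per zOTU to relative abundances."""
    total = sum(counts.values())
    return {zotu: n / total for zotu, n in counts.items()}

def enrichment_rates(original_counts, sorted_counts, min_abundance=0.0005):
    """Return enrichment rate per zOTU, ignoring zOTUs below min_abundance
    (e.g., 0.05%) in the original material, as done in the text."""
    orig = relative_abundance(original_counts)
    sort = relative_abundance(sorted_counts)
    return {zotu: sort.get(zotu, 0.0) / ab
            for zotu, ab in orig.items() if ab >= min_abundance}

# Placeholder example: a target zOTU at 0.8% in feces and 75.4% after sorting
# yields an enrichment of the order of 100, consistent with the rates above.
original = {"zOTU_Faecalibacterium": 80, "zOTU_Bacteroides": 7000, "other": 2920}
sorted_fx = {"zOTU_Faecalibacterium": 7540, "zOTU_Bacteroides": 300, "other": 2160}
print(enrichment_rates(original, sorted_fx))
```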
Use of polyclonal antibodies for sorting and cultivation of new F. prausnitzii strains from fecal material

Based on the results observed with pure cultures of F. prausnitzii collection strains, we chose to combine LIVE/DEAD™ staining with specific polyclonal antibodies to perform sorting and cultivation experiments from frozen fecal material.

Fig. 5 ... Bacteria presenting auto-fluorescence in the 450/50 nm channel after excitation with the 405 nm laser were also sorted and analyzed by sequencing (A). One million bacteria were sorted for each population to be sequenced, and a 100% identity threshold corresponding to the zero-radius OTU (zOTU) definition was used to delineate OTUs [24]. These experiments were performed once for F. prausnitzii (A) and once for C. minuta (B).

In this series of experiments, performed with fecal samples collected from 5 healthy volunteers, live (i.e., SYTO 9-positive and PI-negative) bacteria ranged from 27.5 to 36.5%. Similarly to what was observed in the preliminary experiments, auto-fluorescent bacteria accounted for 0.22 to 1.18% of the events for 3 of the 5 volunteers when exciting with the 405 nm laser (Fig. 7 and Table 1). Interestingly, these 3 volunteers were also those for which OTUs affiliated to the Methanobacteriaceae family were detected in the 16S rRNA gene amplicon repertoire (Fig. 8). As in the preliminary experiments, we did not observe any significant auto-fluorescence when exciting with the Red 640 nm laser (data not shown), and there was only limited staining with the pre-immune antibodies conjugated with Alexa Fluor™ 647 (Table 1). The percentage of bacteria stained with the anti-F. prausnitzii A2-165 antibodies represented up to 0.69% of total bacteria for HV5, falling to 0.38% when taking into account only live bacteria (Table 1). Percentages were higher for the anti-F. prausnitzii ATCC 27766 + ATCC 27768 antibodies, with up to 3.04% of total and 1.29% of live bacteria being stained for HV7. For every fecal sample, 240 events stained with anti-F. prausnitzii A2-165 or anti-F. prausnitzii ATCC 27766 + ATCC 27768 antibodies were sorted and plated on mGAM-CRI plates, resulting in variable numbers of colonies depending on the fecal samples and on the gated regions. Forty-two colonies presenting morphologies compatible with F. prausnitzii (i.e., excluding large colonies that appeared in less than 48 h) were screened with species-specific primers, resulting in a total of 10 PCR-positive colonies isolated from 4 different donors among the Ab 66/68-gated events and 5 PCR-positive colonies isolated from 3 different donors among the Ab A2-165-gated events (Table 1). To assess the heterogeneity of the new isolates, we screened the colonies that were positive in the species-specific PCR using RAPD, which allows discrimination of closely related strains of the same species [25]. We were able to distinguish 7 isolates presenting different RAPD profiles, which were further analyzed using 16S rRNA gene sequencing to confirm their identity. Six of the seven isolates were confirmed as F. prausnitzii, whereas the single isolate collected from HV6 was assigned to Ruthenibacterium lactatiformans, which belongs to the Oscillospiraceae family along with F. prausnitzii (Additional files 6 and 7).

Use of polyclonal antibodies for sorting and cultivation of new C. minuta strains from fecal material

Since C. minuta is usually present in only very low amounts in fecal material compared to F. prausnitzii [6,16], we first performed an experiment in which C. minuta was spiked in different amounts for better delineation of the sorting gate.
Taking advantage of the very small size of the cells of this bacterial species, we adjusted the gating strategy, which consisted of selecting antibody-stained bacteria among live bacteria presenting FSC/SSC parameters similar to those of C. minuta DSM 22607 (Fig. 9). Twenty-five of 30 (83%) and 105 of 107 (98%) colonies recovered from the 0.01% and 0.1% spiked material, respectively, were confirmed as C. minuta by species-specific PCR, thus validating the selection strategy. We therefore used it to isolate C. minuta strains from fresh fecal samples from 8 healthy volunteers (HV10-HV17).

Table 1 Results of FCM analysis, sorting and cultivation when targeting F. prausnitzii in samples HV5 to HV9.

Fig. 8 16S rRNA gene amplicon analysis of fecal samples used to sort and cultivate F. prausnitzii (HV5 to HV9) or C. minuta (HV10 to HV17). Only the genera with which the species we focused on in this study are affiliated are shown for clarity.

As expected, events potentially corresponding to C. minuta were scarce, ranging between nearly 0 and 1.0% of the live bacteria among the different samples. Colonies with C. minuta-compatible morphology (Fig. 9) that developed 48 to 72 h after plating were subjected to PCR analysis using species-specific primers. Between 13 and 64% of these colonies from 4 out of 8 samples were indeed positive in this assay (Table 2). Ten different RAPD profiles were observed, but three of the isolates actually corresponded to species other than C. minuta, as demonstrated by partial 16S sequencing. We therefore ended up with 7 different C. minuta isolates, collected from 3 different donors and presenting 6 different RAPD profiles (Additional files 8 and 9). It should be noted that 16S gene amplicon repertoire sequencing detected C. minuta only for the HV10 and HV17 fecal samples, representing 0.67 and 0.01% of OTUs, respectively (Fig. 8).

Fig. 9 ... were considered (C, black circle). Bacteria stained with anti-C. minuta antibodies were then selected from these regions and directly sorted onto cultivation plates (E and F; antibody-stained frequencies relate to total events). Colonies circled in green tested positive in the C. minuta-specific PCR. These experiments were performed once for each spiked dilution.

Table 2 Results of FCM analysis, sorting and cultivation when targeting C. minuta in samples HV10 to HV17.

Correlating sequencing results with 405 nm auto-fluorescence results

In order to better assess the relevance of auto-fluorescence measurement for detecting archaea, we compared the relative abundance of auto-fluorescent events detected by cytometry with the relative abundance of Methanobacteriaceae calculated from 16S rRNA-encoding gene sequencing results for the 13 donors (HV5 to HV17) for which both data sets were available. This resulted in an R² value of 0.68, with a systematic under-estimation by sequencing compared with the FCM results (Fig. 10).
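As a hedged illustration of this comparison, the coefficient of determination between the two relative-abundance measurements can be computed as follows; the per-donor abundances below are invented placeholders, not the study's data:

```python
import numpy as np

# Placeholder per-donor relative abundances (%) for the 13 donors.
fcm_autofluo = np.array([0.0, 0.9, 0.3, 1.2, 0.0, 0.7, 0.2, 0.0,
                         0.5, 1.0, 0.1, 0.8, 0.4])
seq_methano = np.array([0.0, 0.5, 0.1, 0.7, 0.0, 0.4, 0.1, 0.0,
                        0.2, 0.6, 0.0, 0.5, 0.2])

# R^2 of a least-squares linear fit of sequencing abundance on FCM abundance.
slope, intercept = np.polyfit(fcm_autofluo, seq_methano, 1)
pred = slope * fcm_autofluo + intercept
ss_res = np.sum((seq_methano - pred) ** 2)
ss_tot = np.sum((seq_methano - seq_methano.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}; a slope below 1 ({slope:.2f} here) would reflect "
      f"the systematic under-estimation by sequencing noted in the text")
```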
Discussion

The recent description of FCM and bacterial cell sorting under anaerobic conditions [23] prompted us to explore this technology for targeted enrichment and culture of species of interest from fecal material. We first focused on the commensal species F. prausnitzii, which is usually found in significant numbers in fecal samples but is very sensitive to oxygen exposure. A first batch of antibodies was raised against the type strain A2-165, which belongs to F. prausnitzii phylogroup IIb, and a second batch was raised against a mix of the closely related strains ATCC 27766 and ATCC 27768, which belong to phylogroup I, recently proposed as "Faecalibacterium moorei" [10]. Interestingly, polyclonal antibodies generated with type strains from one phylogroup presented only limited reactivity against type strains of the other phylogroup. This could be due to lower-affinity binding or to the limited presence of common epitopes. In addition, the presence of extracellular compounds that mask specific epitopes cannot be excluded [26]. The presence of such an extracellular matrix may depend on the growth phase, which could also explain the observed variations in staining efficiency against strain A2-165. In a similar way, polyclonal antibodies generated using the type strain C. minuta DSM 22607 were only poorly reactive or even completely unreactive against the type strains of "C. massiliensis" and "C. timonensis", respectively. For both the three F. prausnitzii strains and the C. minuta strain, bacteria sorted after antibody staining remained cultivable, and LIVE/DEAD™ staining correlated well with cultivability, with only a few colonies obtained from bacteria stained by PI. We recently reported similar cultivability results after LIVE/DEAD™ staining for a limited number of anaerobic commensal species, suggesting that this labeling has real value for enriching anaerobic commensal bacteria to be sorted for cultivation [27]. However, one should remember that exceptions have also been reported [28] and that staining efficacy can potentially be affected if bacteria form endospores, which is the case for a variety of gut commensal species [29]. Since the ultimate goal of the sorting experiments was to cultivate stained bacteria rather than to measure their relative abundance in the samples, we chose to use a stringent gating strategy common to all samples. Sorting and sequencing experiments confirmed the good species specificity of the polyclonal antibodies directed against the two F. prausnitzii phylogroups, for which non-specific enrichment was almost absent. The fact that we were unable to culture F. prausnitzii strains from the fecal samples collected from HV5 and HV6, for which F. prausnitzii OTUs were detected by sequencing, could be due to the specificity of the two antibodies, which probably do not cover the whole variety of possible strains. Since stained events were detected with both antibodies, especially for HV5, another explanation could be that some strains require specific nutrients that were not present in our complemented mGAM medium. Concerning the two pairs of F. prausnitzii strains isolated from donors HV7 (strains 281 and 282) and HV8 (strains 275 and 276) using the two different polyclonal antibodies, it was interesting to note that they presented different but related RAPD profiles (Additional file 6) yet 100% 16S rRNA-encoding gene identity with each other (Additional file 7). Whether the differences observed in the RAPD profiles reflect evolution of one commensal strain through the gain and/or loss of several genes [30] or are due to technical biases will be clarified when the complete genomes are sequenced. These results also call into question the specificity of our polyclonal antibodies: it cannot be excluded that the phylogroup specificity observed with polyclonal antibodies directed against type strains will be challenged by new strains that react with both antibodies.
Autofluorescence of methanogenic archaea has already been reported: it is due to the redox cofactor F420 and has proved useful for fast and reliable quantification of methanogenic archaea in biogas digesters using flow cytometry [31]. Although performed on a limited number of samples, our work tends to confirm that it could also be used to monitor methanogenic archaea in fecal samples. This could be of interest for the development of microbiota-based biomarkers, since methanogenic archaea are considered major contributors to carbohydrate metabolism and their absence or presence in various amounts has been reported to be associated with several phenotypes, including severe acute malnutrition [32] and a lean phenotype [33,34], to mention just a few. Interestingly, the presence of members of the Christensenellaceae family in the gut microbiota has been reported to be associated with the presence of methanogenic archaea [17], with both groups being late colonizers of the gut ecosystem [35], which could be due to syntrophy via interspecies hydrogen transfer between Christensenella and Methanobrevibacter species [36]. In our limited number of samples, we did not observe any correlation between the presence of Christensenellaceae and the presence of methanogenic archaea, whether evaluated by FCM or by 16S repertoire sequencing. C. minuta has been reported as a keystone species that comprises on average 0.01% of the fecal microbiota [16]. This low abundance could explain why culture studies that used non-specific methods to cultivate a large diversity of gut commensal species did not succeed in cultivating C. minuta strains [11], thus highlighting the need for methods that can enrich cultivated fractions with specific species of interest. Such methods include antibodies, but also a number of additional stains such as fluorescent analogs of glucose, which have recently been shown to be taken up by members of the Christensenellaceae R-7 group [37]. In conclusion, this proof-of-concept study confirms that FCM is well adapted to complex bacterial microbiota studies. When used in conjunction with appropriate staining and the associated controls, it gives a general overview of microbiota composition and its variations in longitudinal studies [38], including bacterial load, which is an important piece of information [39]. In addition, the use of more specific staining such as antibodies is a promising strategy to target, sort and cultivate species of interest from these complex ecosystems. Recent studies demonstrated that such antibodies can be generated using a reverse-genomics approach [40], which opens important avenues since approximately 70% of gut microbial species still lack cultured representatives [41]. Due to the lack of detailed knowledge of their reactivity, it remains difficult to use polyclonal antibodies such as those described in this study for analysis of the relative abundance of specific species of interest. However, in the future, better-characterized monoclonal antibodies or antibody cocktails may offer an interesting alternative to molecular biology-based methods for longitudinal monitoring of commensal species of interest.
This should be accompanied by specific technological developments in the field of FCM to allow simple, commercially available solutions enabling routine sorting experiments under controlled-atmosphere conditions, which will be of strong interest for commensal bacteria but also for cell biology applications necessitating oxygen conditions that are close to in vivo conditions [42,43]. Polyclonal antibodies Rabbit polyclonal antibodies (pAb) were produced in New Zealand rabbits using a standard 53-day protocol (Covalab). Rabbits were immunized with a 50/50 mix of the phylogenetically related strains F. prausnitzii ATCC 27766 and ATCC 27768, or with pure cultures of F. prausnitzii A2-165 or C. minuta DSM 22607. At day 0, animals received an intradermal injection of 0.5 ml of a 1 × 10⁹ suspension of heat-inactivated bacteria plus 0.5 ml of complete Freund's adjuvant. At day 14 and day 39, they received subcutaneous injections of 0.5 ml of a 2 × 10⁹ suspension of heat-inactivated bacteria plus 0.5 ml of incomplete Freund's adjuvant. Serum samples collected at day 39 were tested for immunoreactivity. In case of good reactivity (which was the case for F. prausnitzii A2-165), animals were sacrificed at day 53 for serum collection. If the reactivity of the day-39 sera was too low (F. prausnitzii ATCC 27766 + ATCC 27768 and C. minuta DSM 22607), animals were boosted at days 60, 74 and 88 with subcutaneous injections of 0.5 ml of a 2 × 10⁹ suspension of heat-inactivated bacteria plus 0.5 ml of incomplete Freund's adjuvant before sacrifice at day 108. After rabbit bleeding, sera were harvested and IgGs were purified on protein A and labeled with Alexa Fluor™ 647 or Alexa Fluor™ 405 using the protein labeling kit (Thermo Fisher Scientific) as recommended by the manufacturer. IgGs from a non-immunized rabbit were also coupled with Alexa Fluor™ 647 as a negative control. Viability staining procedure Viability staining was performed using the LIVE/DEAD™ BacLight™ Bacterial Viability Kit (SYTO 9/propidium iodide, Thermo Fisher Scientific) as recommended by the manufacturer. After staining, bacteria were washed in PBS and then analyzed within 30 min using an Influx® (Becton Dickinson) cell sorter equipped with a 200 mW 488 nm laser, a 120 mW 640 nm laser, and a 100 mW 405 nm laser. Anaerobic sorting of single strains The BD Influx® cell sorter used for anaerobic sorting has been described by Thompson et al. [23]. Briefly, anaerobic sorting was achieved by eliminating oxygen from the sort stream and cell deposition areas of the cell sorter using a customized glove box. Paraffin oil was used to cover samples in both conditions to prevent oxygen from entering the sample during the transfer from the anaerobic chamber, where the samples were prepared, to the flow cytometer. Durations of the sorting steps were normalized to 30 min. The oxygen concentration in the glove box was monitored using a ToxiRAE PRO detector (RAE, France). For sorting experiments, reduced mGAM-CRI plates were transferred from the anaerobic chamber to the cell sorter glove box in sealed bags. Nitrogen was then injected, and anaerobic sorting experiments were started once the measured oxygen concentration fell below 0.7%. Analysis and sorting followed by cultivation of stained and unstained bacteria were used to evaluate the impact of the process on F. prausnitzii and C. minuta cultivability. Bacteria used for sorting experiments were cultivated anaerobically for 48 h at 37 °C on mGAM-CRI plates. One colony was then sub-cultivated in mGAM-CRI broth for 24 h at 37 °C.
One milliliter of bacterial culture was then washed in reduced PBS and diluted 1:100 before staining with the LIVE/DEAD™ BacLight™ Bacterial Viability Kit as described above. Polyclonal antibodies generated using the mix of F. prausnitzii ATCC 27766 and ATCC 27768 bacteria were conjugated with Alexa Fluor™ 647, whereas those generated with F. prausnitzii A2-165 were conjugated with Alexa Fluor™ 405. Polyclonal antibodies generated using C. minuta DSM 22607 bacteria were conjugated with Alexa Fluor™ 647. Staining with antibodies diluted 1:100 (final concentration: 10 μg/ml) and with the viability kit was performed under anaerobic conditions for 30 min in the dark, and the bacteria were then washed in reduced PBS before analysis. After washing, stained bacteria were suspended in reduced PBS containing 0.5 mg/l resazurin, 2.1 mM sodium sulfide, and 2.8 mM L-cysteine HCl, and the suspensions were covered with 750 μl of paraffin oil to prevent oxygen exposure. The tubes were taken out of the anaerobic chamber, and the bacteria were analyzed and sorted within 30 min. The sorting speed was adjusted to 1000 events per second, and four series of 1, 3, 10, 30, 100, 300, or 1000 events were sorted onto a single spot for each tested condition. Once the sorting experiments were completed, the plates were re-introduced into sealed bags and transferred to the anaerobic chamber, where they were incubated at 37 °C for 2 to 3 days before observation. Percentages of recovery were calculated by taking into account the smallest number of sorted bacteria resulting in the growth of colonies visible to the naked eye, using the following formula: percentage of recovery = (n/N × 1/B) × 100, where n is the number of colonies counted per row (or per 2 rows when only one bacterium was deposited), N is the number of sorted spots (8 for the first 2 rows, for which only one bacterium was deposited per spot; 4 for the other rows, for which higher numbers of bacteria were deposited), and B is the number of bacteria sorted on each spot.
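The recovery formula defined above can be made concrete with a small worked sketch; the colony counts below are invented for illustration and are not the study's data:

```python
# Percentage of recovery = (n / N x 1 / B) x 100, as defined above.

def percent_recovery(colonies: int, spots: int, bacteria_per_spot: int) -> float:
    return (colonies / spots) * (1 / bacteria_per_spot) * 100

# Rows where 10 bacteria were deposited on each of 4 spots and
# 27 colonies grew in total across those spots:
print(percent_recovery(colonies=27, spots=4, bacteria_per_spot=10))  # 67.5

# Single-bacterium rows: 8 spots (2 rows x 4 spots), 5 visible colonies:
print(percent_recovery(colonies=5, spots=8, bacteria_per_spot=1))    # 62.5
```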
Unstained and pre-immune controls We performed control experiments by analyzing a series of 4 frozen fecal samples collected from healthy volunteers (HV) 1 to 4. Samples were left unstained to evaluate potential autofluorescence, or were stained with pre-immune antibodies conjugated with the Alexa Fluor™ 647 dye to evaluate non-specific staining. The same 4 samples were also stained with both anti-F. prausnitzii antibodies in a separate tube. Briefly, washed bacteria were incubated in PBS with SYTO 9/PI plus conjugated antibodies diluted 1:100 (final concentration: 10 μg/ml) for 30 min. Bacteria were then washed once with reduced PBS, covered with paraffin oil to protect them from oxygen, and analyzed by flow cytometry. Sorting and DNA extraction from antibody-enriched fractions To test the specificity of the F. prausnitzii antibodies, we sorted antibody-stained fractions collected from the pool of frozen fecal samples HV2 and HV4. Because bacteria stained by the polyclonal antibodies directed against C. minuta were almost undetectable in the mix, we spiked the same pool of 2 fecal samples with C. minuta DSM 22607 at approximately 2% relative to the bacterial counts measured by flow cytometry. This allowed sorting of sufficient material for subsequent sequencing and evaluation of antibody specificity. Events presenting autofluorescence when excited with the 405 nm laser were also sorted for further identification. One million events were sorted for each of the 3 antibodies as well as for the autofluorescent events. DNA was extracted from the sorted cells using the mericon™ DNA Bacteria Kit (QIAGEN) with the following adjustments. Bacterial cell pellets were resuspended in 20-40 μl Fast Lysis Buffer, depending on the volume of the pellet, by brief, vigorous vortexing. The samples were placed into a thermal shaker (800 rpm) set to 100 °C for 10 min. Samples were then allowed to cool at room temperature for 2 min before centrifugation (13,000 × g, 5 min). 20-40 μl of the supernatant were transferred to a 1.5 ml microcentrifuge tube and purified using the NucleoSpin™ Gel and PCR Clean-up Kit (Macherey-Nagel) according to the manufacturer's instructions. Library preparation and 16S rRNA gene amplicon data analysis For native fecal samples, library preparation and sequencing were performed with 24 ng template DNA as described in detail previously [44] using a robotized platform (Biomek 4000, Beckman Coulter). For the samples extracted with the mericon™ DNA Bacteria Kit (QIAGEN), 1-8 μl template DNA were used for PCR. The V3-V4 region of 16S rRNA genes was amplified in duplicate for 25 cycles with DNA from fecal samples, or for 35 cycles with DNA from cell-sorted samples, following a two-step protocol [45] using primers 341F-785R [46]. After purification using the AMPure XP system (Beckman Coulter), sequencing was carried out with pooled samples spiked with 25% (v/v) PhiX standard library in paired-end mode (PE300) using a MiSeq system (Illumina, Inc.) according to the manufacturer's instructions. Raw reads were processed using an in-house pipeline (www.imngs.org) [47] based on UPARSE [48]. In brief, sequences were demultiplexed and trimmed to the first base with a quality score < 3. Pairing, chimera filtering and clustering into operational taxonomic units (OTUs; 97% sequence identity) were done using USEARCH 11.0 [49]. Sequences shorter than 350 or longer than 500 nucleotides, or with an expected error > 2, were excluded from the analysis. The remaining reads were trimmed by ten nucleotides at each end to avoid GC bias and non-random base composition. Only OTUs occurring at a relative abundance > 0.25% in at least one sample were kept. To highlight differences in the specificities of the antibodies raised against F. prausnitzii, which cover different phylogroups, a zero-radius OTU approach [24] using the UNOISE algorithm (USEARCH 11.0) [50] was chosen, increasing the taxonomic resolution with which the molecular species in the sorted bacterial populations were delineated. Sequence alignment and taxonomic classification were conducted with SINA v1.6.1, using the taxonomy of SILVA release 128 [51]. Downstream analysis was performed in the R programming environment using Rhea (https://lagkouvardos.github.io/Rhea/) [52]. OTU and ZOTU tables were normalized by dividing each count by its sample's total and then multiplying by the number of reads in the smallest sample, to account for differences in sequencing depth.
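A minimal sketch of this depth normalization, assuming a small invented count table (pandas is used here for convenience; the published downstream analysis was done with Rhea in R):

```python
# Each OTU count is divided by its sample's total read count and rescaled
# to the read depth of the smallest sample. Sample and OTU names are invented.
import pandas as pd

otu = pd.DataFrame(
    {"HV2": [1200, 300, 50], "HV4": [800, 150, 20]},
    index=["OTU_1", "OTU_2", "OTU_3"],
)

min_depth = otu.sum(axis=0).min()                 # reads in the smallest sample
normalized = otu.div(otu.sum(axis=0), axis=1) * min_depth
print(normalized.round(1))
```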
Isolation of bacterial DNA from stool samples DNA was isolated using a modified protocol according to Godon et al. [53]. Snap-frozen samples were mixed with 600 μl stool DNA stabilizer (Stratec Biomedical), thawed, and transferred into autoclaved 2-ml screw-cap tubes containing 500 mg of 0.1 mm-diameter silica/zirconia beads. Next, 250 μl of 4 M guanidine thiocyanate in 0.1 M Tris (pH 7.5) and 500 μl of 5% N-lauroyl sarcosine in 0.1 M PBS (pH 8.0) were added. Samples were incubated at 70 °C and 700 rpm for 60 min. A FastPrep® instrument (MP Biomedicals) fitted with a 24 × 2 ml cooling adaptor filled with dry ice was used for cell disruption. The program was run 3 times for 40 s at 6.5 m/s, and the cooling adapter was refilled with dry ice after each run. An amount of 15 mg polyvinylpyrrolidone (PVPP) was added, and samples were vortexed and then centrifuged for 3 min at 15,000 × g and 4 °C. Approximately 650 μl of the supernatant were transferred into a new 2 ml tube, which was centrifuged again for 3 min at 15,000 × g and 4 °C. Subsequently, 500 μl of the supernatant were transferred into a new 2 ml tube, and 50 μg of RNase was added. After 20 min at 37 °C and 700 rpm, gDNA was isolated using the NucleoSpin® gDNA Clean-up Kit (Macherey-Nagel) according to the manufacturer's protocol. DNA was eluted from the columns twice using 40 μl elution buffer, and the concentration was measured with a NanoDrop® spectrophotometer (Thermo Scientific). Samples were stored at
Challenges and requirements of exchanging Product Carbon Footprint information in the supply chain. The reduction of greenhouse gas (GHG) emissions is of high importance to society. Companies therefore have an increasing interest in understanding and reducing the GHG emissions of their supply chains and in generating data to track and prove this, for example by calculating product carbon footprints (PCFs). Besides serious gaps in PCF data within companies and in LCA databases, experience and knowledge on how to consistently prepare and exchange these data are still missing. Based on our experience as LCA practitioners in industry, we discuss the key challenges and requirements of PCF data, such as data formats, data quality, confidentiality concerns and comparability issues. Aiming to contribute practical recommendations to ongoing initiatives working to enable PCF exchange along value chains, we scope approaches that match industry requirements. Motivation The frame for reducing GHG emissions as a company is set by the Paris Agreement [1], and several green deals have been announced globally, striving to push the impact of humanity into the safe zone of our planetary boundaries. The European Green Deal [2], for example, stimulates markets towards climate neutrality and circularity, with policies and programs underpinned by life cycle assessment (LCA). The ultimate goal is to empower public and private consumers by introducing digital "product passports" containing environmental performance indicators such as the product carbon footprint (PCF) or information on recycled content and substances of concern. This initiative will require the individual actors in the value chain to collaborate and exchange the relevant data. The accounting dilemma The generation of meaningful LCA data is an effort along the value chain. Reliable information is mostly available only to the company running the respective process, and knowledge of up-/downstream processes is limited (cf. Fig. 1). These external processes in most cases contribute the largest part of a footprint. Thus, companies acting as isolated entities have little chance to generate PCFs with a reasonable level of data certainty. LCA results and PCFs currently rely on a large number of assumptions, estimations, and multiple data sources, commonly representing industrial averages rather than supply chain specifics. Consumers as well as companies understandably hesitate to base their decisions on these indicators. To overcome this obstacle and produce coherent footprint information, the exchange of reliable information across industry players is key. With this paper, the authors aim to illustrate a promising pathway towards trusted PCF sharing mechanisms. In a first step, the current state regarding the variety of existing guidance documents for PCF assessments, and the challenges resulting from it, is presented. This is complemented by a discussion of the trade-offs between transparency and confidentiality inherent to different data exchange formats. In a second step, a future PCF sharing approach and its key factors for increasing trust while maintaining confidentiality, thereby circumventing current challenges, are described. This approach reflects the point of view of a cross-industrial panel of LCA experts within the International Sustainability Practitioners Network (ISPN) [3]. For this contribution, a qualitative analysis of inputs from the broader industrial and academic network of the ISPN was performed.
Two criteria were defined for selecting relevant information: 1) aspects describing the drawbacks of current, as well as the needs of future, PCF sharing practices; and 2) key attributes of promising IT solutions. The existing basics The basic standards and rules for managing sustainability information and performing carbon footprint calculations have already been set. For instance, the ISO has developed, and is still developing, standards [4][5][6] that support practitioners in valuable data generation, as illustrated in Figure 2 (Fig. 2: Standard landscape). In addition, further guidance documents have been published, such as the Environmental Footprint (EF) method [7] or the Pathfinder framework [8]. The EF has been introduced by the European Commission for improving the validity and comparability of environmental performance evaluation and for sharing results via a digital product passport (cf. the Sustainable Products Initiative (SPI) of the European Commission). The Pathfinder has been developed by the World Business Council for Sustainable Development (WBCSD) [8] with a focus on refining the methodology for assessing and sharing product carbon footprint information. A need for further harmonization of the current guidance documents remains, as comparability of assessment results is not guaranteed. This requires the commitment of industry players to develop these rules under such a framework. Data exchange options Successful performance of LCAs, and making use of their results, hinges on the availability of good-quality data. In the wake of large projects such as the Product Environmental Footprint (PEF) and Environmental Product Declaration (EPD) programs, as well as industry and NGO initiatives such as Catena-X [9] and WBCSD's Pathfinder (also cf. Section 6), the need for a solid data foundation becomes a fundamental requirement. The question for the projects at hand is how this data exchange may be standardized for future LCAs. An important aspect is the level of granularity required to be consistent, acceptable for all parties and effective. In LCAs, several levels of dataset granularity and transparency exist (cf. Fig. 3): unit processes, with the highest possible detail on processes; aggregated processes, which contain all information needed to conduct LCIA; and impact indicator results, which are the most 'compact' datasets. A known obstacle in exchanging LCA data is the need to protect confidential, business-relevant information, up to and including intellectual property (IP), to secure competitive advantages. LCA data have been suspected of enabling retro-analysis of the underlying processes, thus violating this requirement. Unit processes have the drawback of allowing direct insights into the IP of companies. Such datasets can be integrated in further models and modified flexibly, enabling transparency and detailed assessments; the added complexity can be perceived as a drawback or a benefit. Aggregated processes obscure most details of the processes they are based on. However, they can be used in any LCIA calculation, e.g. with experimental or customized impact assessment methods. As a drawback, they are rather verbose and require precise matching of the elementary flows with the complementing LCIA datasets, or else one may derive aberrant indicator results. The least detail is contained in calculated indicator results.
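To make these granularity levels concrete, the sketch below contrasts a unit-process dataset with the single indicator figure that would be shared at the lowest level of detail; all process names, quantities and factors are invented for illustration and come from no real database:

```python
# A unit process exposes inputs and direct emissions (highest detail),
# whereas only the final single figure is shared as an indicator result.

unit_process = {
    "name": "polymer_granulate_production",             # hypothetical process
    "inputs": {"electricity_kWh": 1.8, "naphtha_kg": 1.1},
    "direct_emissions": {"CO2_kg": 0.9, "CH4_kg": 0.002},
}

# Characterization factors for direct emissions (kg CO2e per kg) and
# upstream emission factors for inputs (kg CO2e per unit) -- illustrative.
gwp = {"CO2_kg": 1.0, "CH4_kg": 29.8}
upstream = {"electricity_kWh": 0.4, "naphtha_kg": 0.5}

pcf = sum(q * gwp[f] for f, q in unit_process["direct_emissions"].items()) \
    + sum(q * upstream[f] for f, q in unit_process["inputs"].items())

# At the lowest granularity, only this single figure leaves the company:
print(f"PCF: {pcf:.2f} kg CO2e per declared unit")
```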
The simplicity of such single-figure indicator results enables interpretation by non-experts and gives the least opportunity for retro-analysis, thus protecting contributors' sensitive/confidential information. This lack of transparency reduces the reluctance to share information along the value chain; however, the missing transparency itself has to be overcome. Outlook on future data exchange in networks An exemplary (non-exhaustive) poll among the wider life cycle management community, taken at the LCM conference 2021, reveals the current opinion regarding the most important aspects of meaningful PCF sharing along the value chain. The findings are represented by a word cloud in Fig. 4 (Fig. 4: Poll results for important aspects of meaningful PCFs at the LCM conference 2021). In summary, the community considers collaboration the key activity towards successfully defining the right level of transparency needed to generate trust. Based on these findings, a variety of measures can be taken, as described in the following paragraphs. One vision to overcome the lack of transparency in sharing calculated indicator results is to establish a PCF data exchange infrastructure that guarantees fast and at the same time safe data transfer, as illustrated in Fig. 5. Ideally, the respective IT ecosystems are connected to internal accounting systems and allow efficient performance measurement, standardized calculation and reporting, supplier engagement and certification. Such systems will need well-defined interfaces and data exchange formats, including sector- or even industry-wide product-specific unique identifiers to allow precise data mapping. This allows direct connection of data points to ensure seamless data updates across supply networks, e.g. data on energy suppliers used across value chains. The data would have to follow standardized formats for the level of aggregation, emission flows and applied impact methodologies. Standards and product category rules should provide clear guidance to PCF modelers, users and verifiers on the application of methods. Results and the underlying decisions on system boundaries, allocations and other scoping parameters, as well as quality indicators, have to be provided on a mandatory basis in a machine-readable format (see the illustrative sketch below). This enables representative use of upstream PCFs as emission factors for downstream assessments in large quantities. An efficient trust mechanism, including a certification scheme based on regular 3rd-party audits, could overcome the lack of transparency when exchanging calculated indicator results. Dedicated infrastructures/ecosystems (distributed or central) for the exchange of PCFs and verification of 3rd-party audits would then allow the use of supplier PCFs as emission factors in one's own assessments with ease. Current initiatives for exchange of PCF impact indicators Various initiatives for sharing PCF information along the value chain have been started recently. Fig. 6 names just an exemplary selection of programs for sharing impact indicators that were known to members of the ISPN forum at the time of writing. These initiatives all strive to enable or facilitate the effective exchange of carbon footprints in one way or another, but differ in what they focus on, indicated under "Main Focus" in Fig. 6. The figure only shows projects and programs from associations/non-profit organizations, not ones leveraging business models on data. There are many more proprietary solutions from individual stakeholders, which are partly listed further below.
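As one illustration of the machine-readable records argued for above, the sketch below shows what a minimal PCF exchange record might look like. The field names are assumptions made for this example rather than a published schema; initiatives such as Pathfinder or Catena-X define their own formats:

```python
# A minimal, machine-readable PCF record carrying the result together with
# the scoping and quality metadata discussed above (all values invented).
import json

pcf_record = {
    "product_id": "urn:example:gtin:04012345678901",   # hypothetical identifier
    "declared_unit": "1 kg",
    "pcf_kg_co2e_per_unit": 2.23,
    "system_boundary": "cradle-to-gate",
    "allocation": "mass",
    "impact_method": "GWP100",
    "primary_data_share": 0.65,                        # quality indicator
    "verification": {"scheme": "3rd-party audit", "valid_until": "2024-12-31"},
}
print(json.dumps(pcf_record, indent=2))
```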
One of the main challenges for practitioners working with PCFs or emission factors provided by suppliers is to remain consistent in methodology and to make use of the received data in a representative way. The first, most obvious option to guarantee this is to define strict rules and limit a practitioner's freedom to make their own decisions to an absolute minimum by further standardizing the methodology. Catena-X (automotive industry), Together for Sustainability (chemical industry) [10] and the Pathfinder initiative (WBCSD) focus on this. The second option is to request more descriptive, predefined metadata from suppliers for their PCFs. This creates transparency about the assumptions and methodology in use, thus allowing the receiving practitioners to make informed decisions on how to, or whether to, include a supplier's PCF in their own PCF calculation as an emission factor. This can be augmented with analytics-based indicators/quality values to support practitioners receiving PCFs from suppliers in judging whether a provided PCF is compatible with their methodology or level of ambition in terms of quality. ESTAINIUM [11] and the Asset Administration Shell (AAS) project of the ZVEI [12] focus on this. The first option is tempting, since it guarantees a high level of compatibility of PCFs within one PCF sharing scheme. The second option allows the combination of different sharing schemes and the use of PCFs from existing corporate PCF programs and environmental product declarations ("downward compatibility"). A balanced approach or combination is yet to be found; the first drafts of frameworks have only recently been circulated within the community. Another challenge is the efficient and safe exchange of PCFs and PCF certifications along the value chain based on digital infrastructures. Catena-X, ESTAINIUM and the AAS have a strong focus on the sharing infrastructure, and the other initiatives might have it on their agenda. However, a digital infrastructure approach has not been published yet. It will determine the ease with which the exchange of PCFs can be communicated and to what degree we will be able to trust them. Whether and how PCFs shall be certified has a distinctive impact on the sharing mechanism and the necessary data formats. Some EPD programs or industry-specific programs have already certified large numbers of footprints, but this still does not scale to the degree where these certification schemes could serve as trust mechanisms for entire supply chains across several industries. Several initiatives work on this end as well. Conclusion The pressure for coherent PCF data from regulators, investors and consumers has increased, and the need for specific data, as compared to industrial averages, has grown tremendously. As shown above, many activities are ongoing, pointing towards more coherent PCF data and a more network-like exchange. We have to learn from the wide variety of initiatives and quickly converge on compatible data formats and methods to provide comparability of results. To that end, sharing technologies and verification schemes must be agreed upon to facilitate convenient exchange. Researchers and research institutions might take a mediating role or a reviewing/control function for initiatives set up by industry. Industry has to agree with policy makers on a common level of ambition that works for different sectors as well as for large and small companies.
Aspergillus flavus grown in peptone as the carbon source exhibits spore density- and peptone concentration-dependent aflatoxin biosynthesis Background Aflatoxins (AFs) are highly carcinogenic compounds produced by Aspergillus species in seeds with high lipid and protein contents. It has been known for over 30 years that peptone is not conducive to AF production, although the reasons for this remain unknown. Results In this study, we showed that when Aspergillus flavus was grown in peptone-containing media, higher initial spore densities inhibited AF biosynthesis but promoted mycelial growth, while in glucose-containing media, more AFs were produced when initial spore densities were increased. This phenomenon was also observed in other AF-producing strains, including A. parasiticus and A. nomius. Higher peptone concentrations led to inhibited AF production, even in cultures with a low spore density; high peptone concentrations did, however, promote mycelial growth. Spent-medium experiments showed that the inhibited AF production in peptone media was regulated in a cell-autonomous manner. mRNA expression analyses showed that both regulatory and AF biosynthesis genes were repressed in mycelia cultured with high initial spore densities. Metabolomic studies revealed that, in addition to inhibited AF biosynthesis, mycelia grown in peptone media with a high initial spore density showed suppressed fatty acid biosynthesis, reduced tricarboxylic acid (TCA) cycle intermediates, and increased pentose phosphate pathway products. Addition of TCA cycle intermediates had no effect on AF biosynthesis, suggesting that the inhibited AF biosynthesis was not caused by depleted TCA cycle intermediates. Conclusions We demonstrate here that Aspergillus species grown in media with peptone as the sole carbon source are able to sense their own population densities and peptone concentrations and to switch between rapid growth and AF production. This switching ability may offer Aspergillus species a competitive advantage in natural ecosystems, producing AFs only when the self-population is low and food is scarce. Background Aflatoxins (AFs) are a group of polyketide metabolites produced by several toxigenic species of Aspergillus, such as A. flavus and A. parasiticus, after infection of seeds with high protein and lipid contents, e.g. peanut, corn and walnut [1][2][3]. AFs are toxic and carcinogenic, posing serious threats to both animal and human health [4]. Extensive studies carried out in A. flavus and A. parasiticus led to the identification of a 70 kb DNA cluster consisting of two specific transcriptional regulators (aflR and aflS) and 26 co-regulated downstream metabolic genes in the AF biosynthetic pathway [5][6][7][8]. Expression of aflR and aflS is further regulated by global regulators such as the CreA transcription factor and the VelB/VeA/LaeA complex, and possibly by a cell surface-localized G-protein coupled receptor complex [2,9,10]. Various nutritional and environmental factors, including carbon sources [11], nitrate [12], light [13], temperature [14,15], pH [14,16], and oxygen availability [17][18][19], affect AF production and the expression of AF biosynthesis-related genes [9,20,21]. It has long been known that sugars and related carbohydrates support both fungal growth and AF production. However, peptone, a mixture of protein degradation products, is a preferred carbon source for fungal growth, but not for AF production [11,[22][23][24][25].
Many studies have been carried out to elucidate how various carbon sources affect AF biosynthesis. Transition from peptone mineral salts (PMS) medium to glucose mineral salts (GMS) medium leads to AF biosynthesis, a process requiring de novo transcription and translation [24]. Comparisons of a large collection of carbon sources reveal that sugars normally oxidized through the hexose monophosphate or glycolytic pathway, such as glucose, raffinose and mannose, are efficient carbon sources for AF production [23], while lactose and most amino acids, excluding aspartate, are considered unsuitable carbon sources for AF production [11,26]. AFs are usually produced in parallel with fatty acid biosynthesis following the rapid growth and sugar utilization phase, as common precursors such as acetyl-CoA and malonyl-CoA derived from glucose catabolism are utilized in both pathways [18]. As many carbohydrates are able to induce AF production, it has been proposed that utilization of readily metabolized carbohydrates may result in an elevated energy status, which in turn induces AF biosynthesis [23]. Wiseman and Buchanan (1987) noted that, although mycelia grow well in media with low concentrations of suitable sugars, AFs are produced only when sugar concentrations are higher than 0.1 M, under which conditions reduced mycelial growth and inhibited TCA cycle activity are observed [27]. Addition of TCA cycle intermediates inhibits AF production, suggesting that glucose may regulate AF production through inhibition of the TCA cycle [25,26]. Recent studies have revealed cell density-dependent sclerotium formation and AF production in media with glucose and sorbitol as the carbohydrate sources, which is regulated through non-cell-autonomous factors [28,29]. In nature, seeds with high protein and lipid content, such as peanut and cotton, are more susceptible to high AF production than starchy seeds like rice and sorghum [1]. It has also been shown in maize that mycelial growth and AF production occur primarily in the embryo and the aleurone layer, where mainly storage proteins and lipids are accumulated [30,31]. Removal of oil from ground cotton seeds greatly enhances AF production, suggesting that lipids are not essential for optimal AF biosynthesis [32]. Fatty acids may stimulate or inhibit AF production through the presence of various oxidation-derived oxylipins [33][34][35][36]. The influence of protein and peptone on AF biosynthesis remains largely unknown. In this study we investigated how AF production by Aspergillus was influenced when peptone was used as the sole carbon source. Contrary to expectations, we observed spore density- and peptone concentration-dependent AF production with peptone as the sole carbon source. AFs were only produced in the PMS medium when initial spore densities were 10⁴ spores/ml or lower. In contrast, mycelia cultured in the PMS medium with higher initial spore densities or with increased peptone concentrations grew rapidly but without AF production. Spent-media experiments showed that no inhibitory factors were released into the culture media. Metabolomic analyses revealed that, in addition to inhibited AF biosynthesis, mycelia grown in peptone media with high initial spore densities showed enhanced sugar utilization and repressed lipid biosynthetic metabolism. Results Spore density-dependent AF production in PMS media PMS has long been considered a non-conducive medium for AF production in both A. flavus and A. parasiticus [23][24][25].
To investigate the mechanism underlying peptone's influence on AF biosynthesis, the well-studied A. flavus A3.2890 [37][38][39] from the China General Microbiological Culture Collection Center (CGMCC) was used in our experiments. It was indeed the case that A. flavus did not produce AFs when cultured at the commonly employed initial spore density of 10⁵ or 10⁶ spores/ml. However, when various spore densities of A. flavus were tested to initiate cultures, a density-dependent AF production was observed. When the initial spore density was gradually decreased, increasing amounts of AFs were detected in the media after 3-day culture, as shown by thin-layer chromatography (TLC) and high-pressure liquid chromatography (HPLC) analyses (Figure 1B & D). At 10¹ spores/ml, the amount of AFs produced was significantly lower, comparable to that of the 10⁴ spores/ml culture. The maximal AF production was observed in the PMS medium inoculated with 10² spores/ml. This differs from GMS cultures, where increasing amounts of AFs were produced when initial spore densities were increased from 10¹ to 10⁶ spores/ml (Figure 1A & C). We also observed that in GMS media AFB1 was the major toxin (Figure 1C), while in PMS media AFG1 was the primary toxin produced (Figure 1D). These data suggest that AF biosynthesis is regulated differently in these two media. Since most A. flavus strains produce only AFB1 [40][41][42], we examined whether the A3.2890 strain used was indeed A. flavus. Using the protocol developed by Henry et al. (2000) [43], fragments of the internal transcribed spacer (ITS) region of rRNA and of the β-Tubulin and Calmodulin genes from the A. flavus A3.2890 strain were amplified, sequenced and compared with the corresponding sequences in GenBank, confirming that A3.2890 is indeed A. flavus (see Additional files 1, 2, 3 and 4). It is very likely that the strain we used belongs to type IV A. flavus, which produces both AFBs and AFGs, as reported recently [44]. The time course of AF production To assess the production and possible degradation of AFs during the culture period with various initial spore densities, we examined AFG1 contents in the PMS medium during a five-day culture period, with 10⁶ or 10⁴ spores/ml. We observed that, in the culture initiated with 10⁴ spores/ml, a significant amount of AFG1 was detected on day two, reached the maximum level on day three, and subsequently decreased gradually. In contrast, almost no AFs were detected in the culture initiated with 10⁶ spores/ml during the entire five-day culture period (Figure 1E). It has been shown previously that peptone from different suppliers may induce different enzyme activities in Candida albicans [45]. The peptone initially used in this study was purchased from Beijing Aoboxing Biotech. To ensure that the observed result is a general phenomenon, peptone from Sigma and from the Shuangxuan Microbe Culture Medium Products Factory was tested, and the same results were observed (see Additional file 5). To examine whether cultures with high initial spore densities led to a similar AF accumulation in mycelia, we used the TLC method to analyze AF contents in mycelia cultured for three days in either PMS or GMS media, with 10⁴ or 10⁶ spores/ml. The results showed greatly reduced AF content in mycelia from cultures initiated with 10⁶ spores/ml, similar to the AF content of the media.
In contrast, increased AF production was observed in mycelia cultured in GMS media with 10⁶ spores/ml, as compared to that with 10⁴ spores/ml (see Additional file 6). High initial spore density in PMS media led to rapid mycelial growth To exclude the possibility that the reduced AF production in PMS media initiated with high initial spore densities was caused by inhibited fungal growth, mycelium dry weights were determined during a five-day culture period. A. flavus cultured in GMS media with an initial density of 10⁴ or 10⁶ spores/ml showed a similar growth curve, with a continuous increase in dry weight during the five-day incubation. A higher initial spore density led to slightly faster mycelial growth and an increased mycelium dry weight (Figure 2A). A. flavus cultured with 10⁴ spores/ml in PMS media showed a growth curve similar to that in GMS media with the same spore density (Figure 2B). However, a much sharper exponential growth phase was observed in the first two days in PMS cultures initiated with 10⁶ spores/ml (Figure 2B). The mycelium dry weight reached the maximum level on the 4th day and decreased significantly afterwards, indicating no inhibition of growth in the high-density PMS culture. Instead, A. flavus cultured in PMS media with a high initial spore density grew faster and degenerated earlier (Figure 2B). No inhibitory factor was released from the high-density culture into the media We examined whether inhibitory factors were released into the media by A. flavus grown in PMS media with high initial spore densities. The experiment was performed by adding filter-sterilized spent media, collected from 3-day cultures with 10⁴ or 10⁶ spores/ml, to fresh GMS media inoculated with 10⁶ spores/ml. Filter-sterilized fresh PMS or GMS media were used as controls. The addition of 1 ml fresh PMS medium (P0) to GMS cultures enhanced production of both AFB1 and AFG1, as compared to the addition of fresh GMS medium (G0) (Figure 2C), which is in agreement with a previous report [46]. As shown in Figure 2C, addition of 1 ml spent media from both the high-density (without AF production) and the low-density (with AF production) cultures to the GMS culture promoted AF production, and no significant difference in AF production was observed for the high-density culture. The experiment was extended further by adding 5 ml spent media from the high-density (P6) and low-density (P4) cultures. If inhibiting factors were present in the spent media, we would expect to see reduced AF production as compared to the addition of 1 ml spent media. However, we observed that more AFs were produced in both P4 and P6 cultures, and no significant difference was observed between the P4 and P6 samples (Figure 2D). Lower levels of AFs were produced in cultures with spent PMS media than in those with fresh PMS media (Figure 2C & D), which could be explained by nutrient consumption during the three-day incubations. Together, these data show that no inhibitory factor appears to be released from the high-density culture into the media. Increased peptone concentrations inhibited AF production To examine whether the lack of AF production in PMS media with high initial spore densities is caused by rapid mycelial growth and consequent depletion of nutrients, the peptone concentration in media was increased from the original 5% to 15% to see if AF production could be restored.
We observed, conversely, that mycelia cultured with increased peptone concentrations showed greatly reduced AF production, regardless of initial spore densities (10⁴ or 10⁶ spores/ml) (Figure 3A, P4+ and P6+). [Figure 2 legend: spent media, prepared from 3-day PMS cultures with initial spore densities of 10⁴ (P4) or 10⁶ (P6) spores/ml, were added to GMS media inoculated with 10⁶ spores/ml; AF contents were measured after culture at 28 °C for 3 days; all data are the mean ± SD of 3 HPLC measurements from three pooled independent samples.] We then examined mycelial growth in media with 5%, 10% and 15% peptone, and observed increased mycelium dry weights when the peptone concentrations were increased (Figure 3B), suggesting that high concentrations of peptone promoted mycelial growth and at the same time inhibited AF biosynthesis. For each of the peptone concentrations, cultures with higher initial spore densities showed increased mycelial growth. Taken together, these studies revealed that high concentrations of peptone promoted mycelial growth but inhibited AF production, suggesting that A. flavus grown in peptone medium is able to sense the peptone concentration and to shift between fast growth and AF production. It has been reported previously that carbon sources affect the pH of culture media [14,16]. We therefore examined whether AF production in the media correlates with pH changes during incubation. We found that, as reported by Buchanan and Lewis (1984) [25], the pH of cultures in GMS media decreased (Figure 3C, G4 and G6), while the pH of cultures in PMS media increased during the 55-hr cultures (Figure 3C, P4 and P6). Higher initial spore densities led to faster acidification or alkalization of the GMS and PMS media, respectively (Figure 3C). Interestingly, we observed that when the peptone concentration was increased to 15%, the pH of the media increased in the same way as in the 5% peptone media (Figure 3C, P4+ and P6+). AF production was inhibited in the medium with 15% peptone, while AF production was active in the medium with 5% peptone, which suggests no direct connection between AF production and pH changes in our incubation system. High initial spore densities in PMS media repressed the expression of AF biosynthesis-related genes To further study how initial spore densities affect AF production in A. flavus, the expression of AF biosynthesis-related genes was examined by quantitative reverse transcription PCR (qRT-PCR) in mycelia initiated with 10⁴ or 10⁶ spores/ml and cultured for two days. We observed that the expression levels of the two transcriptional regulators (aflR and aflS) and three AF biosynthesis genes (aflO, cypA and ordA) from the AF biosynthesis gene cluster were substantially lower in mycelia initiated with 10⁶ spores/ml, as compared to those initiated with 10⁴ spores/ml (Figure 4A).
We noted that nadA, which is involved in the conversion of AFG1 [47], showed increased expression in the culture initiated with , PMS media with the initial spore density of 10 4 spores/ml; P6, PMS media with the initial spore density of 10 6 spores/ml; G4, cultured in GMS media with the initial spore density of 10 4 spores/ml; G6, cultured in GMS media with the initial spore density of 10 6 spores/ml; P4+, PMS media with 15% peptone, cultured with the initial spore density of 10 4 spores/ml; P6+, PMS media with 15% peptone, cultured with the initial spore density of 10 6 spores/ml, St, AF standard. (B) Higher concentrations of peptone promoted mycelial growths. The total mycelium dry weights were measured after a 3-day culture, with initial spore densities of 10 4 or 10 6 spores/ml. (C) No direct correlations between AF productions and pH changes. In GMS media the pH was gradually decreased during the 55-hr culture, where a higher initial spore density led to faster acidification of the medium. In PMS media the pH was increased during culture, where a higher initial spore density led to rapid alkalization of the medium. Note that increased peptone concentrations did not cause a significant change in the pH of PMS media (P6 and P6+). 10 6 spores/ml, compared to those initiated with 10 4 spores/ml on the day three ( Figure 4B). The density effect was present in most Aspergillus strains tested To elucidate if the density effect is a general phenomenon in AF-producing strains, we obtained A. flavus NRRL 3357, A. parasiticus NRRL 2999 and A. nomius NRRL 13137 from the Agricultural Research Service (ARS) culture collection in United States Department of Agriculture (USDA), and performed experiments in parallel with A. flavus A3.2890. Fresh spore suspensions were prepared in the same way as for A. flavus A3.2890, and inoculated in PMS or GMS liquid media with initial spore densities from 10 2 spores/ml to 10 6 spores/ml. After three-day cultures, AFs were extracted from media and analyzed by TLC. As shown in Figure 5, in GMS media, all strains showed increased AF productions when initial spore densities were increased from 10 2 to 10 6 spores/ml, excluding A. flavus NRRL 3357. As reported previously, only AFB1 and AFB2 were produced by A. flavus NRRL 3357 [48], while for all other strains AFB1 and AFG1 were the major AFs produced. In PMS media, similar to what was showed above in A. flavus A3.2890, we observed that high initial spore densities inhibited AF biosynthesis in A. parasiticus NRRL 2999 and A. nomius NRRL 13137, especially when initial spore densities were 10 5 spores/ml or higher ( Figure 5). However, no AF biosynthesis was observed in A. flavus NRRL 3357 in PMS media, no matter the initial spore density. It seems somehow the A. flavus NRRL 3357 strain has lost the density sensing machinery in evolution. Mycelia grown in PMS media with high initial spore densities showed reduced TCA cycle intermediates and fatty acid accumulations, but enhanced PP pathway products To determine metabolic differences in A. flavus grown in PMS media with high or low initial spore densities, metabolites in mycelia cultured for 2, 3, 4 and 5 days were analyzed by gas chromatography time-of-flight mass spectrometry (GC-Tof-MS) using methods described previously [49,50]. Multi-variate analyses showed that mycelia inoculated with 10 4 spores/ml clustered separately from mycelia inoculated with 10 6 spores/ml, suggesting evident metabolic differences between these two cultures ( Figure 6A & B). 
The density effect was present in most Aspergillus strains tested To elucidate whether the density effect is a general phenomenon among AF-producing strains, we obtained A. flavus NRRL 3357, A. parasiticus NRRL 2999 and A. nomius NRRL 13137 from the Agricultural Research Service (ARS) culture collection of the United States Department of Agriculture (USDA) and performed experiments in parallel with A. flavus A3.2890. Fresh spore suspensions were prepared in the same way as for A. flavus A3.2890 and inoculated into PMS or GMS liquid media at initial spore densities from 10² to 10⁶ spores/ml. After three-day cultures, AFs were extracted from the media and analyzed by TLC. As shown in Figure 5, in GMS media all strains except A. flavus NRRL 3357 showed increased AF production when initial spore densities were increased from 10² to 10⁶ spores/ml. As reported previously, only AFB1 and AFB2 were produced by A. flavus NRRL 3357 [48], while for all other strains AFB1 and AFG1 were the major AFs produced. In PMS media, similar to what was shown above for A. flavus A3.2890, we observed that high initial spore densities inhibited AF biosynthesis in A. parasiticus NRRL 2999 and A. nomius NRRL 13137, especially when initial spore densities were 10⁵ spores/ml or higher (Figure 5). However, no AF biosynthesis was observed in A. flavus NRRL 3357 in PMS media, regardless of the initial spore density; this strain appears to have somehow lost the density-sensing machinery during evolution. Mycelia grown in PMS media with high initial spore densities showed reduced TCA cycle intermediates and fatty acid accumulation, but enhanced PP pathway products To determine the metabolic differences in A. flavus grown in PMS media with high or low initial spore densities, metabolites in mycelia cultured for 2, 3, 4 and 5 days were analyzed by gas chromatography time-of-flight mass spectrometry (GC-Tof-MS) using methods described previously [49,50]. Multivariate analyses showed that mycelia inoculated with 10⁴ spores/ml clustered separately from mycelia inoculated with 10⁶ spores/ml, indicating evident metabolic differences between these two cultures (Figure 6A & B). Striking differences in levels were observed for 24 metabolites on the 3rd day (Figure 6C & D, and Table 1). In PMS cultures initiated with 10⁶ spores/ml, a condition without AF production, three TCA cycle intermediates, namely malic acid, fumaric acid and succinic acid, accumulated to significantly lower levels than in cultures initiated with 10⁴ spores/ml, suggesting that the TCA cycle was more active in the high-density culture. Similarly, the levels of four fatty acids, palmitic acid, stearic acid, oleic acid and linoleic acid, were reduced in cultures initiated with the high spore density (Table 1), indicating that fatty acid biosynthesis was generally inhibited in the high-density culture. In contrast, many sugar metabolites, including ribitol, glucopyranoside, gluconolactone-6-P, glycerol, butanediamine, ethylamine and galactose, accumulated to higher levels in the high-density cultures (Table 1), suggesting that the PP pathway was active. In addition, nucleotides and compounds involved in amino acid metabolism were less abundant in cultures initiated with the high spore density (Table 1), which may be a consequence of the rapid mycelial growth. Addition of TCA cycle intermediates did not affect AF biosynthesis To test whether reduced TCA cycle intermediates in mycelia are the primary cause of the reduced AF biosynthesis in the high initial spore density culture, malic acid, fumaric acid and succinic acid were added to the PMS medium at concentrations of 0.5 mM or 5 mM, and 0.5 or 5 mM NaCl was added to the culture as a control; liquid incubations were then performed at final spore densities of 10⁴ or 10⁶ spores/ml using freshly prepared A. flavus A3.2890 spore suspensions. [Figure 4 legend: High initial spore densities repressed the expression of AF biosynthesis genes in A. flavus. qRT-PCR was used to analyze the expression of the AF production regulators (aflR and aflS) and AF biosynthesis genes (aflO, cypA, ordA and nadA) in A. flavus A3.2890 cultured in PMS media with 10⁴ or 10⁶ spores/ml for 2 (A) or 3 (B) days. Relative expression was quantified against the expression level of the β-Tubulin gene. Note that the expression of nadA was not repressed in the high initial spore density culture.] TLC analyses were performed on AFs extracted from the media. We observed that none of these treatments had any significant effect on AF production, and no AF production was observed in any of the high initial spore density cultures (Figure 7). These results suggest that the inhibited AF biosynthesis in high initial spore density cultures was unlikely to be caused by reduced levels of TCA cycle intermediates.
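The metabolite comparisons above rest on peak areas normalized to the heptadecanoic acid (C17:0) internal standard, as described in the Methods. A minimal sketch of that normalization step, with invented peak areas:

```python
# Each metabolite peak area is divided by the internal-standard peak area
# from the same run, giving abundances comparable across samples.

raw_areas = {"malic acid": 5.2e6, "palmitic acid": 3.1e6, "ribitol": 9.8e5}
internal_standard_area = 2.0e6   # heptadecanoic acid (C17:0) peak, invented

normalized = {m: a / internal_standard_area for m, a in raw_areas.items()}
for metabolite, value in normalized.items():
    print(f"{metabolite}: {value:.2f} (relative to C17:0)")
```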
Discussion As a group of highly toxic natural compounds, AFs in nature are produced mainly in seeds with high lipid and protein content [1,3]. Previous reports show that peptone is not a suitable carbon source for AF production [23][24][25]. Our present study demonstrates that peptone was in fact conducive to AF production, as long as the initial spore density of A. flavus was reduced. Mycelia grown in peptone media responded not only to the initial spore density, but also to the peptone concentration: higher initial spore densities and higher concentrations of peptone inhibited AF biosynthesis. We also showed that no AF biosynthesis inhibitor was released into the media in the culture with the higher initial spore density. qRT-PCR analyses revealed that culture with a high initial spore density repressed the expression of both the transcriptional regulators and the biosynthesis genes in the AF pathway gene cluster. Metabolomic studies showed that, in high-density cultures, the TCA cycle and PP pathway were active, while the fatty acid biosynthesis pathway was repressed. Spore density- and peptone concentration-dependent AF biosynthesis in PMS media In nature, many organisms, especially fungal species, are able to produce compounds that suppress the growth of other organisms in their neighborhood [51]. Regulated production of these compounds is expected to confer physiological and ecological advantages on these organisms. It has been shown previously that low glucose content supports fungal growth but not AF accumulation, suggesting that the first priority of the fungus is growth when food availability is low [27]. In our study we observed that mycelia of A. flavus grown in peptone media showed spore density- and peptone concentration-dependent AF production. A high initial spore density or a high peptone concentration promoted rapid mycelial growth without AF biosynthesis, which may allow the fungus to prioritize propagation when the competition pressure is low and sufficient food is available. In contrast, active AF production was observed in cultures initiated with lower spore densities and lower concentrations of peptone. Additional comparative studies using several AF-producing strains, including A. flavus, A. parasiticus and A. nomius from the USDA ARS culture collection, showed that the density-dependent AF biosynthesis in PMS media was present in all strains tested except A. flavus NRRL 3357. This particular strain did not produce any AFs in PMS media, as reported previously [52]. Furthermore, a positive correlation between AF production and initial spore density in GMS media was also not observed for this strain, implicating a different regulatory mechanism. In natural ecosystems where the self-population density is low and food is scarce, AF production may confer competitive advantages through inhibition of the growth of other organisms. It would be interesting to examine whether other fungal species also employ this survival strategy. Using spent-medium experiments, we showed that no soluble AF biosynthesis inhibitor was released from the high spore density culture into the media, suggesting that A. flavus A3.2890 is somehow able to sense the population density and adjust its growth and AF production through cell-autonomous machinery. Unlike Candida albicans and Dictyostelium, where density factors are diffusible into the media [53][54][55], we hypothesize that A. flavus may use a cell surface component to perceive such culture density and nutrient signals. The possible role of G protein-mediated signaling [56] in this process is worth exploring. Alternatively, it has been reported that oxidative stress is a prerequisite for AF production [57]. [Figure 6 legend: Metabolites with different contents in cultures initiated with high or low spore densities. (A) A PLS scores plot, generated using SIMCA-P V11.0, for metabolites extracted from mycelia cultured for 2, 3, 4 and 5 days in PMS media with initial spore densities of 10⁴ (black) and 10⁶ (gray) spores/ml, with 3 replicates per treatment. (B) Scatter loading plots obtained from PLS analyses of the entire GC-Tof-MS dataset. (C and D) Total ion chromatograms of metabolites extracted from mycelia of A. flavus grown in PMS media for 3 days with initial spore densities of 10⁴ (C) and 10⁶ (D) spores/ml; metabolites with significant differences in quantity between (C) and (D) are labeled.]
It is plausible that the rapid growth in PMS media with high initial spore densities may lead to reduced intracellular oxygen availability and subsequently decreased oxidative stress, which could prevent AF production. It will be interesting to examine why this density-sensing machinery is active only when peptone, not glucose, is used as the carbon source. High initial spore densities repressed expression of AF biosynthesis-related genes including aflS and aflR Transferring A. parasiticus mycelia from PMS to GMS media results in AF production, which is inhibited by cycloheximide or actinomycin D treatments, suggesting that both de novo transcription and translation are required for AF biosynthesis [23,24]. In this study, we observed that high initial spore densities promoted mycelial growth but inhibited AF production, which is similar to high-temperature cultures in GMS media, where no AFs are produced [58]. [Figure 7 legend: Additions of TCA cycle intermediates cannot restore AF biosynthesis in high initial spore density cultures. To A. flavus A3.2890 mycelia grown in PMS media initiated with 10⁴ and 10⁶ spores/ml, 0.5 mM or 5 mM of the TCA cycle intermediates fumaric acid, malic acid and succinic acid was added at the beginning of the culture; AFs were extracted from the media and analyzed by TLC after 3-day cultivations.] High-temperature culture (37 °C) specifically represses the expression of AF biosynthesis genes without affecting expression of the transcriptional regulators aflR and aflS in the AF pathway gene cluster [20,59,60]. However, we found that high initial density cultures inhibited the expression of both the transcriptional regulators (aflR and aflS) and the downstream AF biosynthesis genes simultaneously, suggesting a different manner of regulation. Further study is needed to elucidate whether the density-dependent AF biosynthesis is regulated through antagonistic signaling pathways that coordinate vegetative growth, conidiation and AF production [2]. Cultures with high initial spore densities in PMS media trigger a metabolic shift from AF production to sugar metabolism Although primary and secondary metabolism share common transcriptional and translational machinery, secondary metabolism often commences during the idiophase, when normal growth and development have ceased [61]. The present study revealed a novel low-cell-density-dependent metabolic switch toward AF production in A. flavus. We observed that AF and lipid biosynthesis were active in mycelia initiated with a low spore density in the PMS medium. In contrast, the TCA cycle was inhibited, as shown by the accumulation of TCA cycle intermediates in low spore density cultures, which is in agreement with previous results showing that the TCA cycle is repressed during active AF biosynthesis, allowing a greater acetyl-CoA shunt toward AF biosynthesis [26]. By adding three TCA cycle intermediates to cultures, we showed that increased TCA cycle intermediates did not restore AF biosynthesis in the high initial spore density culture, nor did additional TCA cycle intermediates promote AF biosynthesis in the low spore density culture, suggesting that the spore density-regulated AF biosynthesis in the PMS medium is not likely influenced by TCA cycling directly. The enhanced TCA cycling might be a consequence of inhibited AF biosynthesis in the high spore density culture. Since AF production shares a subset of biosynthetic steps with fatty acid metabolism, accumulation of AFs and lipids often occurs in parallel [18,62].
This parallel biosynthesis trend was observed in our metabolomic studies: all four fatty acids detected, palmitic acid, stearic acid, oleic acid and linoleic acid, accumulated in the low spore density culture, together with AF biosynthesis. The density-dependent metabolic switch, from active TCA cycling in high initial spore density cultures to active AF biosynthesis in low initial spore density cultures, may represent a shift in metabolic priority that allows A. flavus to produce AFs in a protein-rich environment only when its own population density is low. Conclusions Our studies demonstrate that A. flavus grown in media with peptone as the carbon source is able to detect its own population density and nutrient availability, and is able to switch between fast growth and AF production. High initial spore densities or high peptone concentrations led to rapid mycelial growth and inhibited AF production, while low initial spore densities or low peptone concentrations promoted AF biosynthesis. Inhibited AF biosynthesis in the high initial spore density culture was accompanied by active TCA cycling and rapid mycelial growth. Supplementation with TCA cycle intermediates did not restore AF biosynthesis, suggesting that the inhibited AF biosynthesis was not caused by depletion of TCA intermediates. Our spent-medium experiments showed that the density-sensing factor regulates AF biosynthesis in a cell-autonomous manner. Expression analyses showed that the density factor acts at the transcriptional level to regulate the expression of both the aflR and aflS transcription regulators and the downstream AF biosynthesis genes. Interestingly, most Aspergillus strains tested, including A. parasiticus and A. nomius, exhibited density-dependent AF biosynthesis in PMS media. Only A. flavus NRRL 3357 did not, suggesting that a different regulatory machinery has evolved in this strain. We believe that a cell density- or peptone availability-dependent metabolic switch may provide A. flavus with a competitive advantage in natural ecosystems. Whether or not the perception of population density and that of peptone availability are regulated through the same signaling pathway will require further study. Fungal strain and growth conditions The primary strain used in this study, A. flavus A3.2890, was obtained from the CGMCC, located in the Institute of Microbiology, Chinese Academy of Sciences. The A. flavus NRRL 3357, A. parasiticus NRRL 2999 and A. nomius NRRL 13137 strains were obtained from the ARS culture collection of the USDA. The GMS medium was prepared as previously described [63] and contains 50 g/L glucose, 3 g/L (NH₄)₂SO₄, 2 g/L MgSO₄, 10 g/L KH₂PO₄, and 1 ml/L trace element mixture; the pH was adjusted to 4.5 before autoclaving. The PMS medium was identical to GMS except that the glucose was replaced by 5% peptone, and the pH was adjusted to 5.2, as described previously [24]. All cultures were prepared following Park's protocol [64] with minor modifications. Sixty μl of A. flavus spore suspension stored at −80 °C in glycerol was pre-cultured on potato-dextrose agar plates at 37 °C for 4 days. Mature spores on the surface were harvested, resuspended in sterile distilled water containing 0.05% Tween 20 (Sigma, St. Louis, USA), and diluted to a series of spore densities after counting with a haemocytometer. Five ml of spore suspension of the desired density was added to 45 ml of PMS or GMS liquid medium and cultured on a shaker (180 rpm) at 28 °C in the dark.
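The inoculation scheme just described (5 ml of spore suspension into 45 ml of medium) implies a fixed 1:10 dilution, so the working suspension prepared after haemocytometer counting must be ten times the target culture density. A small sketch of this arithmetic, with an invented stock count:

```python
# Working-suspension density needed so that a 5 ml inoculum in a 50 ml
# culture yields the target initial spore density.

def working_density(target_per_ml, inoculum_ml=5.0, total_ml=50.0):
    return target_per_ml * total_ml / inoculum_ml

stock = 2.4e7  # spores/ml counted with the haemocytometer (hypothetical)
for target in (1e2, 1e4, 1e6):
    need = working_density(target)
    print(f"target {target:.0e}/ml -> suspension {need:.0e}/ml "
          f"(dilute stock {stock / need:,.0f}-fold)")
```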
The pH of the culture media was measured at different time points following inoculation, over a 55-h culture period. The three brands of peptone used in this study were purchased from Sigma (…).

Determinations of fungal dry weights and AF contents

For the determination of fungal dry weights, mycelia grown in 50 ml of media were harvested at different time points (48, 72, 96 and 120 h after inoculation) by filtration through two layers of filter paper, washed with sterilized water and freeze-dried before weighing. The filtrate was sterilized by passage through a 0.22 μm membrane and used for the spent-media experiments and AF quantifications. For extraction of AFs from media, an equal volume of chloroform was added and the mixture was vortexed and extracted ultrasonically for 15 min. After centrifugation for 6 min at 11,498.6 × g, the organic (lower) phase was filtered through a 0.22 μm membrane, dried under nitrogen gas flow and re-dissolved in a fixed volume of chloroform. The extracts were analyzed by TLC as previously described [65], except that the developing solvent was changed to CHCl3:H2O (9:1, v/v). AF levels were quantified by HPLC (Agilent 1200, Waldbronn, Germany) equipped with a reverse-phase C18 column (150 mm length, 4.6 mm internal diameter, 5 μm particle size; Agilent), using gradient elution: a mixture of 25% methanol, 20% acetonitrile and 55% water for 3 min, changed over 0.1 min to a 38% methanol aqueous solution, then elution with 38% methanol for 2.9 min, with detection by a DAD analyzer at 360 nm. Quantification was performed by calculating the amount of AF in samples from a standard calibration curve. For the detection of AFs in mycelia, dried mycelia were ground to a powder and extracted with acetone at a solid-to-liquid ratio of 1:10 (g/ml) for 30 min; the extract was analyzed by TLC as described above.

Metabolomic analyses by GC-ToF-MS

Mycelia harvested from the 2nd to the 5th day at 24-h intervals were lyophilized and extracted by ultrasonication for 40 min with 1.5 ml of a mixed solvent of methanol, chloroform and water (5:2:1, v/v/v), to which 100 μl of 1 mg/ml heptadecanoic acid (C17:0, Sigma, St. Louis, USA) was added as an internal standard. After centrifugation at 11,000 × g for 10 min, 1 ml of supernatant was transferred to a tube with 400 μl chloroform and 400 μl water, vortexed for 15 s and centrifuged at 11,498.6 × g for 10 min; 400 μl of the chloroform phase was then transferred to a new glass vial and dried under nitrogen gas flow. The pellet was re-dissolved in 50 μl of 20 mg/ml O-methylhydroxylamine hydrochloride (Sigma, Steinheim, Switzerland) in pyridine, vortexed and incubated at 37°C for 120 min. Afterwards, 100 μl of N-methyl-N-(trimethylsilyl)trifluoroacetamide (Sigma, Steinheim, Switzerland) was added immediately, and the mixture was vortexed and incubated at 37°C on a shaker (150 rpm) for 30 min. After cooling to room temperature, the silyl-derivatized samples were analyzed on an Agilent 6890 gas chromatograph coupled to a LECO Pegasus IV GC-ToF-MS (LECO, USA) with EI ionization. The column was a VF-5 ms (30 m length, 250 μm internal diameter, 0.25 μm film thickness; Varian, USA). The MS was operated in scan mode (start after 4 min; mass range: 50–700 m/z; 2.88 s/scan; detector voltage: 1400 V), with helium as the carrier gas (1 ml/min, constant flow mode), a split injector (340°C, 1:50 split) and a flame ionization detector (340°C).
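Two small calculations recur in this workflow: inverting the HPLC standard calibration curve to get AF concentrations, and normalizing metabolite peak areas to the internal standard. The sketch below is our own illustration under assumed numbers, not the authors' script; the standard concentrations and peak areas are hypothetical.

```python
# Minimal sketch (our own, not the authors' analysis): AF quantification from
# HPLC peak areas via a linear calibration curve, and normalization of
# GC-ToF-MS metabolite areas to the heptadecanoic acid internal standard.
import numpy as np

# Hypothetical AF standards: concentration (ng/ml) vs. peak area at 360 nm.
conc = np.array([5.0, 10.0, 50.0, 100.0, 500.0])
area = np.array([120.0, 236.0, 1190.0, 2370.0, 11900.0])

slope, intercept = np.polyfit(conc, area, 1)  # least-squares calibration line

def af_conc(sample_area: float) -> float:
    """Invert the calibration line to get concentration from a peak area."""
    return (sample_area - intercept) / slope

print(f"sample area 3500 -> {af_conc(3500.0):.1f} ng/ml")

# Metabolomics: divide each metabolite's peak area by the internal standard's
# area in the same run, so abundances are comparable across samples.
metabolite_areas = {"citrate": 5.2e5, "malate": 3.1e5, "palmitic acid": 8.8e5}
istd_area = 4.0e5  # heptadecanoic acid (C17:0) peak area in this run
normalized = {m: a / istd_area for m, a in metabolite_areas.items()}
print(normalized)
```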
The samples were subjected to a column temperature of 100°C for 3 min, raised to 150°C at 10°C/min, then to 250°C at 5°C/min and finally to 360°C at 10°C/min, and held at 360°C for 15 min. Sample components were identified by comparing retention times and mass spectra with reference compounds and by matching against the NIST mass spectral database. Metabolite peak areas, representing the abundance of metabolites, were normalized to the internal standard (heptadecanoic acid). Multivariate analysis was performed using SIMCA-P V11.5 (Umetrics, Sweden) [66,67]. All GC-ToF-MS analyses were conducted with three replicate cultures, pooled before extraction, and measured three times to obtain average contents.

Expression analyses using qRT-PCR

Mycelia were harvested, frozen and ground in liquid nitrogen. Total RNA was extracted from the mycelia using Trizol (Invitrogen, USA), and polyA mRNAs were purified using the PolyATtract mRNA Isolation System (Promega, Madison, WI) according to the manufacturers' instructions. All cDNAs were synthesized by reverse transcription with ReverTra Ace (Toyobo, Japan) at 42°C for 1 h, and then 85°C for
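Relative expression from qRT-PCR experiments like these is commonly summarized with the 2^−ΔΔCt method. The sketch below is our own illustration of that calculation, not the authors' analysis pipeline; the reference gene and Ct values are hypothetical.

```python
# Minimal sketch of the 2^-ΔΔCt method for relative expression, e.g. aflR in a
# high initial spore density culture relative to a low-density control.
# All Ct values below are hypothetical, not from the paper.

def ddct_fold_change(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    d_treated = ct_target_treated - ct_ref_treated   # ΔCt, high-density culture
    d_control = ct_target_control - ct_ref_control   # ΔCt, low-density culture
    return 2.0 ** -(d_treated - d_control)           # fold change vs. control

# A fold change well below 1 would indicate repression, consistent with the
# reported inhibition of aflR/aflS at high spore densities.
print(ddct_fold_change(ct_target_treated=30.1, ct_ref_treated=21.0,
                       ct_target_control=25.2, ct_ref_control=21.1))  # ~0.03
```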
8,643.6
2012-06-13T00:00:00.000
[ "Biology", "Environmental Science" ]
Impact of Foreign Direct Investments on Serbian Industry

Empirical research to date on the impact of foreign direct investment (FDI) on host countries has either focused on the overall macroeconomic impact of FDI on the host country's exports, analyzed the direct contribution of foreign affiliates, or looked for spillover effects. Using a mixed model, the method of least squares (OLS) with fixed effects, the paper shows that FDI had a positive effect on the growth of Serbian exports. FDI led to productivity growth in a large number of companies in the economy. At the same time, research conducted at the company level indicates the existence of horizontal and vertical spillover effects. Through vertical spillover effects, FDI had a positive effect on major suppliers, while also leading to significant displacement of main competitors from the market.

Introduction

When a multinational company decides to enter a new market through direct investment in a particular country, it directly transfers modern technology to its affiliate. Modern technology, in the form of modern machinery and equipment and intangible productive assets, is essential for the foreign affiliate as a comparative advantage over domestic enterprises, which have better business contacts and more information about the domestic market. At the same time, the presence of foreign affiliates of multinational companies in the host country accelerates the pace of technological change and the adoption of technological knowledge in an indirect way, through spillover effects that arise from the diffusion of technology from affiliates to domestic companies. Recent research on developed and transition economies is inconclusive about the influence of FDI on the competitiveness of the host country and its export performance, as well as about FDI spillover effects.

In this paper, the influence of FDI on Serbian manufacturing is investigated through research at the macro and micro levels. At the macro level, the level of industry, the research aims to identify the impact of FDI on the exports of the Serbian manufacturing industry. It covers the period from 2006 to 2013, with data collected from the databases of the Statistical Office of the Republic of Serbia and the National Bank of Serbia. What factors besides FDI had an impact on exports? Through which channels, and to what extent, did foreign investments affect exports? These are some of the questions the macro-level research should answer.

Research at the micro level, the enterprise level, aims to identify horizontal and vertical spillover effects, as well as the development trends of privatized enterprises and green-field investments. It covers the period from 2000 to 2013, with data collected from the Serbian Business Registers Agency (SBRA). Is there growth in net profit after an acquisition? What happened to main competitors after a foreign affiliate entered the market: were they driven out? Did major suppliers benefit from cooperation with foreign affiliates? These are some of the questions the micro-level research should answer.
The scientific goal of the research is the explication of the relationship between FDI and the Serbian manufacturing industry, with elements of scientific prediction. The aim of the research is to identify the impact of FDI on the exports of the Serbian manufacturing industry as well as horizontal and vertical spillover effects.

Literature Review

Empirical research on the impact of FDI on host countries may be divided into studies that focus on the overall macroeconomic impact of FDI on the host country's exports and studies that analyze the direct contribution of foreign affiliates or look for spillover effects.

When it comes to the overall macroeconomic impact of FDI on host-country exports, the results of available empirical research are inconclusive. Horst (1972) concluded that US FDI had a negative impact on US manufacturing exports to Canada. Using annual data for 1970–1998, Sharma (2003) analyzed Indian exports and found no statistically significant evidence of an impact of FDI on exports. On the other hand, some studies found a positive effect of FDI on the export performance of host countries, as reported by O'Sullivan (1993) for Ireland and Blake and Pain (1994) for the United Kingdom.

A large number of studies in recent years have focused on the impact of FDI on the productivity growth of local enterprises through spillover effects. Econometric research using cross-sectional ("comparative") data and panel data has covered developed countries, developing countries and transition economies. Early studies (Caves 1974; Globerman 1979; Blomstrom and Persson 1983) used cross-sectional data and mostly concluded that the effects were positive. However, apparent positive spillover effects may have arisen simply because multinational companies invest in industries with high productivity. The main objection to this research is that industry-specific effects and time effects were not taken into consideration.
The availability of panel data allowed researchers to address the shortcomings of cross-sectional data. Among other things, panel data enabled researchers to take into account the time lag required for domestic companies to absorb spillover effects. Results obtained using panel data differed considerably from the initial cross-sectional results, tending toward negative or negligible effects (Aitken and Harrison 1997; Djankov and Hoekman 1996; Konings 2000). Some panel-data research found positive effects conditional on certain factors, such as absorptive capacity (Kinoshita 2001; Girma 2005) and the technological gap between foreign and domestic companies (Kokko, Tansini and Zejan 1994; Basile, Castellani and Zanfei 2003). In a detailed review of the research on FDI-related spillover effects, Gorg and Strobl (2004) concluded that productivity-spillover results do not depend on whether industry-level or company-level data were used, but on whether cross-sectional or panel methods were used. Of the 40 studies reviewed, 19 reached statistically significant conclusions of positive spillover effects, 15 found no significant spillover effects and 6 found evidence of negative spillover effects. As an explanation, some researchers believe that many studies used data with excessive levels of aggregation, making spillover effects much more difficult to detect, which does not mean they do not exist. Spillovers may also simply depend on characteristics of the host country and on the type of FDI prevailing there, leading to different results for different countries.

The lack of positive horizontal spillover effects in panel data pushed researchers to search for possible vertical spillover effects related to FDI. These studies are based on the belief that domestic companies vertically linked (upstream or downstream) to a foreign affiliate benefit from this cooperation. Variables for detecting vertical spillover effects are constructed using input-output tables. Some studies found evidence of positive vertical spillover effects (Schoors and van der Tol (2002) for Hungary; Javorcik, Saggi and Spatareanu (2004) for Lithuania; Blalock and Simon (2009) for Indonesia). Other studies, however, reached different results. Tytell and Yudeva (2005), who studied Russian industrial enterprises, found negative vertical effects both downward and upward. Merlevede and Schoors (2007) concluded that upward vertical spillovers were positive, while downward effects (to suppliers) were positive only in export-oriented sectors.

The literature overview suggests that the initial cross-sectional studies concluded that horizontal spillovers were positive, while most panel-data research finds negative or negligible spillover effects. Based on the results of previous research, it can be concluded that the evidence for positive spillover effects is very weak.
Methodology

The impact of FDI on Serbia's exports is investigated on the basis of the model created by Vuksic (2005), which, given the variables and lags described below, can be written as:

lnEX_jt = α_j + β1·lnPD_jt + β2·lnULC_jt + β3·lnREER_t + β4·lnI_j,t−1 + β5·lnFDI_j,t−1 + ε_jt

The variables are as follows: the dependent variable lnEX is the natural logarithm of real exports; the independent variables are the natural logarithms of the productivity index (lnPD), unit labour costs (lnULC), the real effective exchange rate (lnREER), domestic investment (lnI) and the stock of FDI (lnFDI). The constant α_j denotes industry-specific fixed effects, where j = 1, ..., 21 indexes industries and t denotes years (2006 to 2013 in this study).

As can be seen from the model, domestic investment and the stock of FDI enter with a one-year time lag. This reflects the fact that it takes some time for new investments to become effective. In the case of FDI, using lagged data should also help avoid the problem of simultaneity between exports and FDI. Using the stock of FDI rather than annual FDI inflows should also contribute to solving this problem.

The FDI stock should also better capture the importance of the presence of foreign capital in a particular branch of industry, which matters because it can generate technology spillover effects. If the inflow of foreign capital were used as a variable, a significant inflow at the beginning of the period followed by no new inflows would give the variable a value of 0 in all subsequent years, entirely ignoring the strong presence of already-invested foreign capital, which is a potential source of significant positive spillover effects.

One potentially significant variable is not included in the model: export market demand. It is left out for a simple reason: it is very difficult to measure. Using the GDP growth of export-destination countries turned out to be insignificant, because this indicator does not capture the different export tendencies of the various export sectors.

These model specifications modify and extend the aggregate, macroeconomic-level estimates presented by Sun (2001) and Zhang and Song (2000). Both papers used the natural logarithm of real exports as the dependent variable and the logarithm of the FDI stock with a one-year lag. Sun (2001) also used domestic investment, and the real effective exchange rate was included in both papers as an independent variable. Productivity and unit labour costs are added as variables because they are expected to be important indicators of the export competitiveness of industry (Vuksic, 2005, p. 20).
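To make the estimation concrete, the sketch below is our own minimal illustration (not the authors' code) of a fixed-effects OLS of this kind, implemented as a least-squares dummy variable (LSDV) regression in which industry dummies stand in for the α_j fixed effects. Column names are illustrative; `df` is assumed to hold one row per industry-year with the logged variables already constructed.

```python
# A minimal sketch of the fixed-effects export model via LSDV in statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

def fit_export_model(df: pd.DataFrame):
    df = df.sort_values(["industry", "year"]).copy()
    # One-year lags of domestic investment and FDI stock, within each industry.
    df["lnI_lag"] = df.groupby("industry")["lnI"].shift(1)
    df["lnFDI_lag"] = df.groupby("industry")["lnFDI"].shift(1)
    model = smf.ols(
        "lnEX ~ lnPD + lnULC + lnREER + lnI_lag + lnFDI_lag + C(industry)",
        data=df.dropna(subset=["lnI_lag", "lnFDI_lag"]),
    )
    return model.fit()

# res = fit_export_model(df); print(res.summary())
# In a log-log specification the slope on lnFDI_lag reads as an elasticity:
# a coefficient of 0.035 means a 1% rise in lagged FDI stock is associated
# with a 0.035% rise in exports.
```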
The Impact of FDI on Serbian Exports

The study of the impact of FDI on the exports of Serbia's manufacturing industry uses data for the period 2006 to 2013. The data refer to 23 branches of the processing industry according to the National Classification of Economic Activities (NCEA). Data on exports, average monthly gross wages, gross value added (GVA), gross investment in fixed assets, number of employees and the producer price index were collected from the database of the Statistical Office of the Republic of Serbia, while data on the real effective exchange rate and the stock of FDI were obtained from the National Bank of Serbia. The labour productivity index (P) for each branch of industry is calculated as the ratio of GVA to the number of employees (E) in the industry. The unit labour cost index (ULC) is constructed as in Carstensen and Toubal (2004), as the product of the average monthly gross wage (BP) and the total number of employees (E), divided by GVA, in industry i in period t: ULC_it = (BP_it × E_it) / GVA_it.

Gross investment in fixed assets had an upward trend in 2006–2008. It decreased with the onset of the global economic crisis; the decline was interrupted in 2010, after which there was steady growth, particularly noticeable in 2012. Productivity tended to increase over the reporting period (with a particularly noticeable increase in 2008), with the exception of 2009 and 2011, when slight decreases were recorded. Productivity growth should have a positive impact on export growth. The real effective exchange rate (defined so that an increase in the index means real depreciation) declined in 2007 and then grew slightly (dinar depreciation following the onset of the global economic crisis), a trend interrupted in 2011. Labour costs showed fairly volatile movements over the reporting period; in 2013 an increase in the index was recorded, which should have a negative impact on the export trend. Table 1 presents data on exports, the FDI stock, productivity, unit labour costs and gross investment by industry. Average values of exports, FDI stock and gross investment are in millions of RSD, while productivity and unit labour costs are index values.

From the table, it can be seen that only one branch of industry (basic metals) recorded negative growth in real exports during the period. In this industry, both the mean value and the increase in the cumulative level of FDI were higher than in most other industries, so we can conclude that in this industry FDI did not have a positive impact on export growth (although these trends were almost certainly affected by world market developments and the suspension of production at the Smederevo steelworks). It is notable that this industry also recorded the highest growth in unit labour costs.
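As a quick illustration of the two index constructions just described, the sketch below is our own; the industries and figures are invented for demonstration only.

```python
# Minimal sketch (our own) of the productivity and unit labour cost indices:
# P = GVA / E and ULC = (BP × E) / GVA, per industry and year.
import pandas as pd

data = pd.DataFrame({
    "industry": ["food", "food", "metals", "metals"],
    "year":     [2006, 2007, 2006, 2007],
    "GVA":      [100.0, 110.0, 80.0, 72.0],   # gross value added
    "E":        [50, 48, 40, 41],             # number of employees
    "BP":       [1.0, 1.1, 0.9, 1.0],         # average monthly gross wage
})

data["P"] = data["GVA"] / data["E"]                  # labour productivity
data["ULC"] = data["BP"] * data["E"] / data["GVA"]   # unit labour costs
print(data[["industry", "year", "P", "ULC"]])
```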
The highest growth in real exports was recorded in motor vehicles and trailers (70.67%), the manufacture of tobacco products (34.04%) and metal production (24.65%). Motor vehicles and trailers also recorded the second-largest increase in the FDI stock (an average of 134% per year), so in this case we can speak of a positive impact of FDI on export growth. In addition, the highest average growth in the cumulative level of FDI was recorded in the manufacture of coke and refined petroleum products (where there was moderate average growth in real exports of 13.24%) and in the manufacture of other transport equipment (which recorded average export growth of 24.43%). The highest average value of the FDI stock was recorded in the production of food and drink, an industry with the second-highest average value of total exports.

The highest average productivity growth was recorded in the production of motor vehicles and trailers (36.93%) and in the manufacture of coke and refined petroleum products (34.98%). Those two industries recorded the highest average growth of the FDI stock, so it is clear that foreign investment had a positive impact on productivity. Negative average productivity growth was recorded in only one branch of industry (food and drink), which has a quite high FDI stock.

The highest average growth in unit labour costs was recorded in the manufacture of basic metals and in the textile yarn and fabrics industry. Both sectors recorded relatively high average growth of the FDI stock. On the other hand, the greatest average reduction in unit labour costs, 15.48%, was observed in the industry with the highest average growth of the FDI stock: the production of coke and refined petroleum products.

Three branches of industry recorded negative average growth in gross investment: food products and beverages, chemicals and chemical products, and rubber and plastics. With the exception of food products and beverages, the other two industries had relatively low growth in the FDI stock. The highest average growth in gross investment was recorded in motor vehicles and trailers which, as already noted, also recorded the second-largest increase in the average level of the FDI stock.
Appendix Table 2 shows the correlation coefficients between the variables analyzed. The maximum value of the correlation coefficient, 0.619, is between exports and the FDI stock (a strong positive correlation), which indicates that FDI contributed to the increase in exports. Exports were also strongly positively correlated with employment and gross investment, which suggests that industries with significant exports had more employees and higher levels of gross investment than other industries. The FDI stock is in relatively strong positive correlation with the number of employees and more weakly positively correlated with the level of gross investment. Even more noteworthy is the correlation between gross investment and employment, which is intuitively clear (higher gross investment means higher employment). The other correlation coefficients have negligible values. Considering the correlation coefficients, it can be concluded that FDI had a positive effect on the growth of Serbian industrial exports and on increases in employment and gross investment. At the same time, there is no evidence that FDI contributed to a reduction in unit labour costs.

The positive impact of FDI on exports is also manifested through productivity when FDI and gross capital formation are not included in the model (model 1). A 1% increase in productivity leads to a 1.294% increase in exports, significant at the 1% level. The positive impact of FDI on productivity is in line with the results of research conducted by Vuksic (2005) and Škudar (2004). Productivity seems to be the channel through which FDI improved the export performance of Serbian industry.

The real effective exchange rate did not have a major impact on exports: in all models, its effect is statistically insignificant. Unit labour costs were statistically significant in the model in which FDI is not included; a 1% increase in unit labour costs leads to a 0.021% reduction in real exports.

The obtained results are in line with those obtained by Vuksic (2005) in his research on the impact of FDI on the exports of the Croatian processing industry. In the case of Croatia, a 1% increase in FDI led to a 0.09% increase in exports, a somewhat stronger impact than in the case of Serbia. In Croatia, unit labour costs also proved to be a more important determinant of exports than in Serbia.

5. The Effect of FDI: Spillover Effects at the Micro Level

Blomstrom and Kokko (1997) identified four channels through which spillovers materialize: the demonstration effect, vertical linkages, the training effect and the competition effect. The demonstration effect is reflected in the stimulation of domestic companies to improve their production methods through exposure to the superior technology of multinational companies. Vertical linkage is the establishment of direct relations between companies engaged in complementary activities that go beyond pure market transactions (Lall, 1980). Training and skill upgrading of employees at all levels, together with the research and development efforts of foreign affiliates and workforce mobility, are a very important source of potential spillovers.
The entry of a foreign affiliate into the domestic market raises the level of competition and exerts competitive pressure on domestic companies to introduce new technological solutions and improve production efficiency in order to maintain their market position and survive.

The investigation of the influence of FDI at the macro level was complemented by research at the micro level, the level of the enterprise. In a random sample of 40 foreign-owned companies in Serbia, a survey was conducted to identify their main competitors and suppliers. After identifying them, the total sample consisted of 70 companies: 40 foreign-owned companies, 15 main competitors and 15 major suppliers. For the selected sample, data were collected on sales, cost of sold goods, cost of materials, net profit, total equity, total assets and number of employees over the period 2000–2013. The data were seasonally adjusted, with 2000 used as the base year. Data were collected from the Serbian Business Registers Agency (SBRA). The goal of the data collection was to explore trends in the number of employees, sales revenue, productivity, net profit, return on equity and return on assets for companies purchased by foreign owners (acquisitions), newly built production facilities (green-field investments), the main competitors in the market and major suppliers.

Figure 3 shows the average percentage change in the number of employees for acquisitions, green-field investments, the main competitors in the market and major suppliers. The base year (t) is the year of acquisition by foreign owners (for acquisitions), the year in which production started in new plants (for green-field investments), and the year of acquisition or production start (for main competitors and major suppliers).

As can be seen from Figure 3, in the case of acquisitions a declining trend in the number of employees started even before the acquisition (three years prior, companies in the sample had on average 20% more employees than in the year of acquisition) and continued after the purchase by a foreign owner. Purchased companies had on average 13% fewer employees after three years and 24% fewer after six years. Only 23% of purchased companies had more employees six years on than in the year of acquisition, while most saw a drastic drop in the number of employees. For green-field investments the impact on employment is clear, as the opening of new plants creates jobs, and with growing production volumes the number of employees increases over the years, as the sample shows. Already after the first year, the number of employees had increased 2.3 times on average, followed by steady but slower growth (six years from the start of production, the number of employees was on average 3.13 times higher than in the base year). Only one company in the sample recorded fewer workers six years after the start of business operations than its initial number of employees.
In the case of the main competitors, after a slight initial drop in the period before the foreign investors appeared in the market, there was a sharp decline in the number of employees in the years following the foreign companies' entry. On average, three years before the appearance of a foreign company, competitors had 9.6% more workers than in the base year, while only one year after the entry they had 20% fewer employees than in the base year. In the sixth year after the entry of foreign companies into the market, main competitors retained only 51.46% of their base-year employees. On the basis of these findings it can be said that there were negative horizontal spillovers in the form of staff reductions at main competitors.

In the case of the major suppliers, there is an increasing trend in the number of employees. This trend existed in the years prior to the entry of foreign companies into the market (three years before the acquisition, suppliers had on average 54.5% of their base-year workforce) and continued at a similar pace afterwards. Three years after the acquisition, the number of workers had increased by 36.3%, and by the sixth year by 64.7%, compared to the base year. Only one supplier in the sample had a slight decrease in the number of employees in the sixth year following the acquisition compared to the base year. So we can speak of positive vertical spillover effects, although it is difficult to fully disentangle them from the trend that existed before the foreign investor's entry.

It can be concluded that acquisitions saw a decrease in the number of employees; green-field investments created new jobs, with employment rising along with production volumes; main competitors experienced negative horizontal spillover effects; and major suppliers showed mild positive vertical spillover effects.

Figure 4 shows the average percentage change in sales revenue for acquisitions, green-field investments, the main competitors in the market and major suppliers.

As can be seen from Figure 4, in the case of acquisitions there is an increasing trend in sales that existed even before the acquisition. Three years before the acquisition, average sales revenue was only 28% of the revenue in the year of acquisition; two years after the acquisition it was 87% higher than in the base year, and in the third year it was 2.14 times higher. Only 9% of the sample did not have higher sales in the third year after acquisition compared to the base year. In the following years, sales stabilized, with a tendency toward slight increase. In the sixth year after the acquisition, sales revenue was 2.24 times higher than in the base year.
In the case of green-field investments, average sales growth compared to the base year was drastic, which is understandable given the great potential to improve the production process, productivity, product quality and characteristics, and distribution channels, as well as the possibilities to penetrate new markets, strengthen position in existing markets and better inform customers about the company's products. Already one year after the start of production, sales revenues were 15.3 times higher than in the base year; in the third year, 21.8 times higher; and six years after the start of production, still 14.88 times higher than in the base year.

In the case of the main competitors, there was a tendency of moderate sales growth up to three years after the foreign companies' entry into the market. Three years before the entry, average sales revenues were 60% of the base-year level; three years after the entry, revenues were 53% higher than in the base year (only 25% of the sample had lower sales in the third year than in the base year). Although this sales growth was slower than for acquisitions, it suggests that the main competitors still had enough market space and opportunities for growth and development. After the third year from the entry of foreign companies into the market, average sales income began to decline, and by the sixth year it was only 37% higher than in the base year (37.5% of the sample had lower sales than in the base year). For this reason we can speak of negative horizontal spillover effects, manifested as the displacement of main competitors from the market.

In the case of the major suppliers, there is a trend of continuous increase in sales revenue, slightly slower than for acquisitions. Three years before the entry of foreign companies, suppliers' average sales income was 64% of the base-year level; one year after the entry it was 28% higher, and in the third year 53% higher than in the base year (the same as for the main competitors). Unlike the competitors, whose sales revenues declined after the third year, suppliers continued to increase sales revenue. In the sixth year after the entry of foreign companies into the market, their sales were 2.2 times higher than in the base year (all companies in the sample had higher sales in the sixth year than in the base year).

[Figure 5: The average percentage change in productivity. Source: author's calculations based on data provided by the SBRA.]

Thus, it can be concluded that acquisitions, green-field investments and major suppliers show a trend of sales growth, while main competitors experienced negative horizontal spillovers, reflected in displacement from the market and a drop in sales revenue after the third year from the foreign companies' entry.

Figure 5 shows the average percentage change in productivity for acquisitions, green-field investments, the main competitors in the market and major suppliers.
As can be seen from Figure 5, in the case of acquisitions there is a sharp improvement in average labour productivity. This trend was present even before the acquisition: three years before acquisition by a foreign company, productivity was only 22% of the base-year level. A decline in productivity was recorded only in the first year after the acquisition, which can be explained by organizational changes and changes in the production process (restructuring). In the second year after the acquisition, productivity was 10 times higher than in the base year, and in the sixth year as much as 26 times higher (only one company in the sample recorded lower productivity than in the base year). This suggests that the specific resources foreign companies possess allowed privatized enterprises to improve their productivity drastically. Part of the productivity growth can also be explained by better organization of work processes and the reduced number of employees.

In the case of green-field investments, there is also a growing productivity trend. This is particularly noticeable after the second year of production, when labour productivity was 37% higher than in the base year, and in the fifth year, when it was 4.3 times higher than at the beginning. In the sixth year there was a slight drop in productivity compared to the previous year, but only 18% of the sample had lower productivity in that year than in the base year.

In the case of the main competitors, there is likewise an upward trend in labour productivity. In the years preceding the entry of foreign companies into the market, competitors registered negative productivity (negative value added: sales revenues were lower than the sum of the cost of materials and the cost of sold goods), but after the foreign companies' entry, competitors' productivity began to grow dramatically. In the second year after the entry, labour productivity was 2.4 times higher than in the base year, and in the sixth year 3.6 times higher. So we can speak of positive horizontal spillover effects.

In the case of the major suppliers, average productivity also grew, though moderately in relation to acquisitions, green-field investments and main competitors. Three years before the foreign companies appeared in the market, average productivity was 88% of the base-year level; three years after the entry it was 16% higher than in the base year, and in the sixth year 2.06 times higher. In this case we can speak of positive vertical spillovers.

Based on the above findings, it can be concluded that acquisitions, green-field investments, main competitors and major suppliers all recorded growth in labour productivity. The highest growth was recorded for acquisitions, as a result of the specific resources foreign companies possess, such as modern organizational methods and improved production processes. For the main competitors there were positive horizontal spillover effects, and for the major suppliers positive vertical spillover effects.
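As a complement, the sketch below is our own illustration (not the paper's code) of how the firm-level indicators used in this section can be computed from the SBRA items listed earlier. Value added follows the definition implied above (sales minus cost of sold goods and cost of materials), and all numbers are hypothetical.

```python
# Minimal sketch (our own) of the firm-level indicators tracked in this section.

def value_added(sales: float, cost_of_goods: float, cost_of_materials: float) -> float:
    """Value added as implied above: sales minus cost of sold goods and materials."""
    return sales - cost_of_goods - cost_of_materials

def labour_productivity(sales: float, cost_of_goods: float,
                        cost_of_materials: float, employees: int) -> float:
    return value_added(sales, cost_of_goods, cost_of_materials) / employees

def roe(net_profit: float, total_equity: float) -> float:
    return 100.0 * net_profit / total_equity   # return on equity, %

def roa(net_profit: float, total_assets: float) -> float:
    return 100.0 * net_profit / total_assets   # return on assets, %

def index_to_base(value: float, base_value: float) -> float:
    """Express a year's value as a percentage of the base year (year t)."""
    return 100.0 * value / base_value

print(labour_productivity(sales=200.0, cost_of_goods=80.0,
                          cost_of_materials=60.0, employees=10))  # 6.0 per worker
print(index_to_base(value=136.3, base_value=100.0))  # 136.3% of base, i.e. +36.3%
print(roe(net_profit=21.2, total_equity=100.0))      # 21.2, an 'attractive' level
```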
Figure 6 shows the average percentage change in net profit for acquisitions, green-field investments, the main competitors in the market and major suppliers. In the case of acquisitions, three years before the acquisition average net profit was 66% of the base-year level, and a year before the acquisition it was 10% higher than in the base year. In the first year after the acquisition there was a decline in net profit, to only 35% of the base-year level, but thereafter there was a strong upward trend; in the sixth year net profit was as much as 2.17 times higher than in the base year. The growth in the net profit of foreign companies has implications for the repatriation of profits and for the income account of the current account balance.

In the case of green-field investments, there is also a trend of net profit growth. The high increase in net profit is due to improved distribution channels, penetration of new markets, better positioning in existing markets and better-informed customers. In the first year after starting production, average net profit was 98% higher than in the base year; in the fourth year it was as much as three times higher, and in the sixth year 3.89 times higher than in the base year.

At the same time, the main competitors show the opposite trend to acquisitions and green-field investments: a trend of decreasing net profit. In the years preceding the entry of foreign companies into the market, net profit was growing (average net profit was initially negative, with most companies in the sample recording losses), but this growth stopped with the foreign companies' entry. One year after the entry, competitors achieved only 55% of their base-year net profit. After stabilizing over the next two or three years, there was a sharp drop in net profit in the fifth and sixth years, when negative average net profit was again recorded (55.5% of the sample recorded a loss). From these facts it can clearly be concluded that there were negative horizontal spillovers, reflected in displacement from the market and reduced net profit.

In the case of the major suppliers, there is a trend of moderate growth in net profit that existed before the foreign companies' entry into the market. Three years before the entry, suppliers' average net profit was 47% of the base-year level; three years after the entry it was 25% higher than in the base year, and in the sixth year 55% higher (only 25% of the sample had lower profits in the sixth year than in the base year). In this case we can speak of moderate positive vertical spillover effects.

Based on the above, it can be concluded that acquisitions and green-field investments recorded significant growth in net profit, while main competitors recorded a fall in net profit (negative horizontal spillovers) and major suppliers recorded moderate growth in net profit (moderate positive vertical spillovers).

Figure 7 shows the average change in the rate of return on equity for acquisitions, green-field investments, the main competitors in the market and the major suppliers. This is also the first graph in which a direct comparison between the different types of investment makes sense.
From Figure 7 it can be seen that in the case of acquisitions no clear trend can be observed, since the movement of the average rate of return on equity (ROE) is quite volatile. A year before the acquisition, average ROE was 17.7; in the year of acquisition it reached its lowest value in the reporting period, 1.1. In the first three years after the acquisition, ROE had a growing trend, peaking in the third year at an average of 21.2 (companies with such ROE are considered attractive for investment), with over 70% of the sample having a higher ROE than in the base year. The average ROE of green-field investments was high relative to the other companies, with a value of over 40, suggesting very fast-returning businesses.

In the case of the main competitors, there is a decreasing trend in ROE, especially after the entry of foreign companies into the market, after which it is generally negative. The major suppliers, on the other hand, show a moderate but steady increase in ROE, whose highest value, 23.1, was recorded in the sixth year. Based on the above, it can be concluded that green-field investments and major suppliers had the highest return on equity, with acquisitions somewhat weaker, while main competitors recorded losses, especially after the third year from the entry of foreign companies into the market.

[Figure 8: The average change in the rate of return on assets.]

In the case of the average change in the rate of return on assets (ROA), the results differ somewhat from the ROE comparison. The major suppliers had the best result: relative to their assets (which are quite small compared to the companies with which they are compared), they earned higher profits than green-field investments and acquisitions. As with ROE, main competitors had the lowest ROA values.

Conclusion

The study showed that FDI had a positive effect on the growth of Serbian exports. When all the variables are included in the model, a 1% increase in the FDI stock leads to a 0.035% increase in exports. When gross investment is excluded from the model, the impact of FDI is even stronger: a 1% increase in the FDI stock leads to a 0.149% increase in exports. The results of the correlation analysis indicate a strong positive correlation between FDI and exports (Pearson correlation coefficient of 0.619).

The positive impact of FDI on exports is also manifested through productivity when FDI and gross capital formation are not included in the model. A 1% increase in productivity leads to a 1.294% increase in exports. Productivity thus seems to be the channel through which FDI improved the export performance of Serbian industry. This is confirmed by the micro-level research, in which acquisitions, green-field investments, main competitors and major suppliers all recorded productivity growth.

Gross fixed capital formation had a strong impact on exports (a 1% increase in gross investment leads to a 0.240% increase in exports, significant at the 1% level), while the real effective exchange rate did not have a major impact on exports, and unit labour costs were statistically significant only in the model in which FDI is not included.
The research conducted at the company level indicates the existence of horizontal and vertical spillover effects. Positive spillover effects include a slight increase in the number of employees at major suppliers, sales growth at major suppliers, labour productivity growth at main competitors and major suppliers, and increases in the net profit, return on equity and return on assets of major suppliers. It can be concluded that, through vertical spillover effects, FDI had a positive effect on major suppliers.

Negative spillovers are reflected in the reduction in the number of employees at main competitors, their crowding out of the market, and drops in their sales revenue, net profit, return on equity and return on assets. It can be concluded that FDI led to significant displacement of main competitors from the market.

Thus, FDI had a positive impact on the growth of Serbian exports and labour productivity, as well as a positive impact on major suppliers. In addition, the micro-level research showed, for acquisitions, a tendency toward a decrease in the number of employees alongside growth in productivity, sales revenue, net profit, return on equity and return on assets. This suggests that the change of ownership and the availability of the specific resources foreign companies possess had a positive impact on privatized companies. For green-field investments, growth was recorded in the number of employees, productivity, sales revenue, net profit, return on equity and return on assets. This gives sufficient grounds to accept the general hypothesis that FDI has had a positive impact on the Serbian manufacturing industry.

For decision makers, it is important that the research indicates that the export performance of the Serbian manufacturing industry can be improved by attracting more FDI into this sector. Achieving this requires more active investment-promotion policy measures. Policy makers should also try to target export-oriented green-field FDI specifically and implement other measures that make positive spillover effects more probable. Host countries may condition FDI incentives on mandatory measures or use incentives to encourage investors to behave in a certain way. Performance requirements may include the export orientation of production, as already mentioned, but may also relate to the training of local workers and technology transfer. The most important measures are those that strengthen the host country's own capabilities: only countries with a high level of human capital have enough absorptive capacity to profit from the high technology and procedures disseminated by foreign investors.

The contribution of the research can be summarized as follows: the analysis of the influence of FDI on Serbian manufacturing was conducted at both the macro and micro levels. At the macro level, a mixed model was used, with the method of least squares with fixed effects, on a sample of 23 branches of manufacturing over the period 2006–2013. At the micro level, the sample comprised 40 of the largest foreign companies in manufacturing, in order to identify horizontal and vertical spillover effects.
Regarding the limits of the research, one can be pointed out: there is no clear answer or precise solution for how to channel FDI toward its highest-quality forms, e.g. green-field investments in manufacturing. A possible topic for further investigation is therefore the main factors for attracting FDI into manufacturing. It would also be important to consider the non-economic factors behind FDI, such as historical, cultural, political and social factors.

Figure 3. Average percentage change in the number of employees
Figure 4. Average percentage change in sales revenue
Figure 6. Average percentage change in net profit
Figure 7. The average change in the rate of return on equity
Table 1. The average values of variables according to industry
Table 2. Correlations between variables. Source: author's calculation
Table 3. The results of model assessment
9,781.8
2017-01-01T00:00:00.000
[ "Economics" ]
Bird communities in two fragments of Cerrado in Itirapina, Brazil

The Cerrado domain is a mosaic of vegetation types at the local scale, and this environmental heterogeneity leads to high regional bird diversity. We therefore aimed to survey, quantitatively and qualitatively, the bird fauna of two fragments of Cerrado and to compare them with an adjacent protected area (Estação Ecológica de Itirapina), in order to assess the heterogeneity of bird diversity in the region. The study was conducted over 12 months, from October 2006 to September 2007, in the municipality of Itirapina, Southeastern Brazil. Altogether we recorded 210 bird species. Fifty-six of them had never been detected in the Estação Ecológica de Itirapina, and eleven species are new records for the whole Itirapina region. The list also includes six species that are endangered in Sao Paulo State and five species endemic to the Cerrado domain. Most species were recorded in less than 50% of the visits and exhibited low relative abundance. Primarily insectivorous species were the most common, followed by omnivores. Frugivorous birds were poorly represented. Carnivores were more abundant than usually observed in fragments. The similarity between the fragments was higher than between the fragments and the protected area. Considering the vegetation heterogeneity in the Cerrado domain, our results reinforce the importance of conserving fragments in order to sample this diversity.

Introduction

Detecting environmental quality and monitoring biodiversity are important for the conservation of natural areas, and bird inventories are commonly used for this purpose (Vielliard, 2000; Marini, 2001; Antunes, 2005; Piratelli et al., 2008). Conservation strategies for natural areas require knowledge of the distribution and abundance of species, and studies of bird communities can provide suitable data (Naeve et al., 1996).

Studies of the Cerrado demonstrate high biodiversity (Klink and Machado, 2005). The domain is formed of a mosaic of vegetation forms that varies from grasslands ('campo limpo') to forests ('cerradão'), including gallery forests (Coutinho, 2006). This environmental heterogeneity is important for maintaining high levels of species richness and endemism (Machado et al., 2004). However, the Cerrado is one of the world's threatened biodiversity hotspots (Myers et al., 2000). About 60% of its vegetation has already been removed (Machado et al., 2004) and the remaining areas are isolated in forest fragments (Durigan et al., 2007).

While grasslands are predominant in EEI (Motta-Junior et al., 2008; Granzinolli, 2009), more forested areas prevail in these two fragments. This paper therefore aimed to carry out qualitative and quantitative bird surveys of the two fragments and to compare the results with those from EEI. Such outcomes could be used in future wildlife monitoring and to better understand the influence of fragmentation on bird communities (Palmer et al., 2008). The study also proposes some local management strategies for biodiversity conservation.
Study area

We studied two fragments, A (22° 13' S, 47° 48' W) and B (22° 14' S, 47° 49' W), inside the Estação Experimental de Itirapina. This area is mainly used for the extraction of Eucalyptus spp. (Myrtaceae) and of resin from Pinus spp. (Pinaceae). Fragment A, of 150 ha, is composed predominantly of 'cerrado sensu stricto', 'cerradão' (a tall woodland) and gallery forest (Delgado et al., 2004). Fragment B, of 350 ha, has more heterogeneous vegetation. It is more forested than A, being composed of 'cerradão', 'cerrado sensu stricto', gallery forests (Delgado et al., 2004), Eucalyptus spp. with native understory, and regeneration areas with native and exotic species (pers. obs.). Fragments A and B are about 1.5 km and 2 km, respectively, from EEI, which covers 2,300 ha. Both fragments are surrounded by pine and eucalyptus plantations and are 750 m apart from each other.

The Itirapina region has a Cwa climate according to the Köppen system (Zanchetta, 2006), with average annual rainfall of 1,459 mm. The rainy season usually lasts from October to March and the dry period from April to September. The average annual temperature is 21.9 °C. The hottest months are January and February and the coolest June and July. The altitude of the region is about 740 m (Zanchetta, 2006).

Methods

Both qualitative and quantitative methods were used. The qualitative survey was conducted to generate the fullest possible list of species occurring in the area (Vielliard and Silva, 1990) and ran for 12 months (October 2006 to September 2007). We walked pre-existing tracks during mornings (6:00 to 11:00 AM), evenings (3:00 to 6:00 PM) and nights (6:00 to 9:00 PM), recording all bird species through auditory and/or visual contacts, without standardizing by vegetation type. Sampling effort during the qualitative survey was higher in fragment B (61 visits) than in fragment A (30 visits) because of its size and vegetation heterogeneity. Rarefaction curves were plotted using the statistical package EstimateS 8.0 (Colwell, 2006). The same program was used to estimate fragment richness using the first-order Jackknife richness estimator (Heltshe and Forrester, 1983). Species composition in the fragments, compared with that of EEI, was analysed using Jaccard's similarity index (Krebs, 1999). Bird lists for EEI were taken from the data review by Motta-Junior et al. (2008). Species frequency of occurrence (FO) was calculated following Vielliard and Silva (1990), and the analysis of trophic structure used species groups based on Motta-Junior (1990) and Sick (1997).

The quantitative survey was also carried out over 12 months (October 2006 to September 2007) using the point-count method developed by Blondel et al. (1970) and adapted by Vielliard and Silva (1990) for Neotropical studies. Eighteen sampling points were chosen in each fragment along pre-existing tracks (each track containing four or five points), with a minimum distance of 200 m between points (Vielliard and Silva, 1990). At each sampling point, all auditory and/or visual contacts with individuals within a 100 m radius were recorded over 20 minutes. Points were visited in the morning (6:00 to 11:00 AM), starting minutes before sunrise. On each day, eight to ten random points were visited in each fragment, totalling 216 samples in each area (18 points visited over 12 months).
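For readers who want to reproduce these metrics outside EstimateS, the sketch below is our own minimal illustration of the quantities used here: the first-order Jackknife richness estimator, Jaccard's similarity index, frequency of occurrence (FO) and a point abundance index (PIA, contacts per point-count sample). The example values echo figures reported in the Results, but the code and the assumption that singleton species (those found in a single visit) correspond to Q1 are ours, not part of the study.

```python
# Minimal sketch (our own, not the authors' EstimateS workflow) of the
# community metrics used in this study.

def jackknife1(s_obs: int, q1: int, n: int) -> float:
    """First-order Jackknife: S_obs + Q1 * (n - 1) / n,
    where Q1 is the number of species found in exactly one of n samples."""
    return s_obs + q1 * (n - 1) / n

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two species lists."""
    return len(a & b) / len(a | b)

def fo(visits_with_species: int, total_visits: int) -> float:
    """Frequency of occurrence, as a percentage of field visits."""
    return 100.0 * visits_with_species / total_visits

def pia(contacts: int, samples: int = 216) -> float:
    """Point abundance index: contacts per point-count sample."""
    return contacts / samples

# 158 species observed, 30 singletons, 30 visits -> ~187, close to the
# 186 ± 7.22 reported for fragment A.
print(jackknife1(s_obs=158, q1=30, n=30))
print(jaccard({"T. leucomelas", "P. sulphuratus"}, {"T. leucomelas"}))  # 0.5
print(round(pia(277), 3))  # 1.282, the highest PIA reported for fragment A
```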
Results

During the qualitative survey, 210 bird species, distributed in 51 families, were recorded (see Table 1). Over 91 days of field visits (476 hours of observation), 158 species were recorded in fragment A and 201 in fragment B (see Table 1). The first-order Jackknife estimated a richness of 186 ± 7.22 species for fragment A and 223.61 ± 5.3 species for fragment B. The rarefaction curves of both fragments indicate a similar pattern (see Figures 1 and 2). The similarity in species composition between the fragments was 70.95%, while the similarity between the fragments and EEI was 53.66%.

Species frequency of occurrence ranged from 3.1% (one contact) to 96.9% (31 contacts) in fragment A and from 1.5% (one contact) to 90.8% (59 contacts) in B (see Table 1). In both fragments, most species were recorded in up to 50% of the field visits (see Table 2): 119 species in A (75.3% of the total recorded in the fragment) and 158 species in B (78.61%). Some species were recorded in only one visit: 30 species (18.9%) in A and 22 (13.92%) in B. Species with a frequency of occurrence over 75% were considered residents of the studied areas (Almeida et al., 1999); this applied to 19 species (12.03%) in fragment A and 14 species (6.97%) in B (see Table 1).

In both fragments, results were similar with respect to the distribution of trophic categories (see Table 3). Predominantly insectivorous birds were the most representative in both A and B (49.37% and 48.26%, respectively), followed by predominantly omnivorous birds (17.72% and 16.42%, respectively). The other categories (in order of prevalence: piscivores, detritivores, malacophages, carnivores, nectarivores and granivores) were poorly represented and together summed only 27.21% of birds in fragment A and 29.36% in B. Predominantly frugivorous birds were represented by only 5.70% of species in A and 5.97% in B.

During the quantitative surveys, 111 bird species were recorded in A and 120 in B, in 3,647 and 3,474 contacts, respectively (see Table 1). PIA ranged from 0.005 (one contact) to 1.282 (277 contacts) in A and from 0.005 (one contact) to 1.023 (221 contacts) in B (see Figures 3 and 4). In A, 26 species (23.42% of the total registered during the quantitative survey) had PIA values lower than 0.019 (fewer than four contacts); the same occurred with 36 species (29.75%) in B, where values lower than 0.014 represent fewer than three contacts.

The lower similarity between the fragments and EEI, compared to the similarity between the fragments themselves, may be related to size and vegetation differences among the areas, and also to the presence of a large number of exclusive bird species in each of them. The fragments have 56 exclusive species, while 77 species were recorded only in EEI. Among the birds apparently sensitive to environmental change is a group of Cerrado endemics and/or threatened species occurring in more open vegetation forms, such as Rhea americana, Culicivora caudacuta, Alectrurus tricolor and Polystictus pectoralis. Differences in study duration, visit frequency and sampling method may also explain the disparity of results; for example, Willis's survey (2004) was conducted over 21 years, in contrast to the 12 months of this study.

Frequency of occurrence

The high percentage of species with low FO is commonly found in other studies (Almeida et al., 1999; Donatelli et al., 2007). Almeida et al.
(1999) attribute such results to the occurrence of wandering, occasional or migratory species. In the fragments studied, especially given the considerable number of species recorded in up to 25% of visits, this explanation can be applied to some groups, such as diurnal raptors, represented by nine of the 11 species with low frequency in A and 13 of the 14 species in B. Low FO values can also be related to the reduced detectability of certain species. This was true of some hummingbird species: nine of the 11 species showed low frequencies in fragment A (seven species with up to 25% frequency), and the same happened with 13 of the 14 species recorded in B (nine species below 25% frequency). Species with inconspicuous vocalisations (Conirostrum speciosum, Eucometis penicillata, Piranga flava and Nemosia pileata) and/or recorded at low relative abundance in the study areas (Penelope superciliaris, Piaya cayana and Tapera naevia) may also explain these results (Aleixo and Vielliard, 1995). Only 26 species (see Table 1) are resident (FO values higher than 75%, according to Almeida et al., 1999) in the studied areas, and ten of them occur in both fragments. Turdus leucomelas, Pitangus sulphuratus, Patagioenas picazuro, Cyclarhis gujanensis and Zonotrichia capensis can be included in this group because they are species adapted to disturbed environments. Two of the registered species (Chiroxiphia caudata and Habia rubica) are bioindicators (Piratelli et al., 2008). According to these authors, both species have ecological characteristics (e.g. requirements for nesting and for foraging in mixed flocks) that lead them to prefer less altered areas.

Some species (n = 56) had never been registered in EEI (review by Motta-Junior et al., 2008) (see Table 1), though one of them (Leucochloris albicollis) had already been recorded during a brief visit to fragment B (Willis and Oniki, 2003). This number represents a considerable increase in the bird list for the protected areas (EEI and Estação Experimental de Itirapina), which now totals 287 species. Some species may not have been previously recorded because they are difficult to detect or occur in the region at low relative abundance (low FO and/or low PIA values). This is the case, for example, of the five hummingbird records (Anthracothorax nigricollis, Thalurania glaucopis, Hylocharis sapphirina, Heliomaster squamosus, Calliphlox amethystina).

However, some of the newly recorded species were found only in forested habitats, which are scarce in EEI (Granzinolli, 2009) and very important for birds in the Cerrado (Piratelli and Blake, 2006). These species would possibly not be detected in the more open vegetation areas that prevail in EEI. This group includes thirteen Atlantic Forest endemics (Brooks et al., 1999) (see Table 1). There were also other species that, while not restricted to them, are associated with forest habitats (Sick, 1997), like Eucometis penicillata, Habia rubica, Leptopogon amaurocephalus and Pachyramphus polychopterus. These results reinforce the importance of maintaining forest fragments and environmental heterogeneity for the conservation of local biodiversity, especially in the Cerrado, and contribute to bird conservation at a landscape scale (Bakker et al., 2002).
Within the list there are eleven species (see Table 1) registered for the first time for the whole Itirapina region (Willis, 2003a; Willis and Oniki, 2003; Motta-Junior et al., 2008): Rostrhamus sociabilis, Micrastur semitorquatus, Leptotila rufaxilla, Hylocharis sapphirina, Elaenia spectabilis, Legatus leucophaius, Pachyramphus castaneus, Cantorchilus leucotis, Polioptila dumicola, Eucometis penicillata and Haplospiza unicolor. Even when a specific area has been visited for a long time, recording new species is common in bird surveys, as shown by other authors (Rodrigues et al., 2005; Motta-Junior et al., 2008). Despite the intensive sampling effort, the rarefaction curves did not stabilise (see Figures 1 and 2), and the Jackknife estimates indicate that bird richness is probably higher than observed in both studied areas. In light of that, new visits to the fragments are likely to yield additional records.

Similarities among areas

The similarity between the bird communities of the fragments is probably explained by their similar vegetation and by the proximity between the areas, which may favour the displacement of individuals. The matrix can also play an important role, since the occurrence of a similar matrix in distinct areas selects similar communities (Sisk et al., 1997). In addition, a considerable portion of the species common to both areas is often found in forest fragments and also occurred at high frequencies in other studies (Donatelli et al., 2007).

Trophic categories

Almost 50% of the registered species were classified as predominantly insectivorous, and omnivores represented just over 15% of species (see Table 3). The percentage of birds considered frugivores was low in both fragments, corroborating studies conducted in fragments in Sao Paulo State (Motta-Junior, 1990; Manica et al., 2010). Frugivores are among the groups most sensitive to forest fragmentation (Willis, 1979) because they need large areas to meet their food requirements (Pizo, 2001). The higher percentage of predominantly insectivorous birds was also recorded by Motta-Junior (1990), Donatelli et al. (2004, 2007) and Manica et al. (2010). However, Motta-Junior (1990) and Donatelli et al. (2007) found frugivores to be the second most representative category. Omnivores were more abundant than frugivores, as observed by Motta-Junior (1990) and Manica et al. (2010), and this appears to be a common pattern in fragments (Willis, 1979); apparently, a more varied diet (e.g. omnivory) tends to be favoured in disturbed environments.

Although the values were low, the importance of carnivores in both fragments was high when compared to other studies (Motta-Junior, 1990; Donatelli et al., 2007; Manica et al., 2010). Raptors have large ranges (Del Hoyo et al., 1994), and the proximity to EEI probably allowed most of these species to be recorded. These birds must therefore use both Itirapina protected areas, which again highlights the importance of the fragments to the local avifauna. The presence of 17 carnivore species in the fragments can indicate their quality: as top predators (Sick, 1997), they need an environment with structured trophic chains.
Granivores ranked fourth, probably because both fragments have large amounts of exotic herbaceous species, which are used by these birds (pers. obs.). Nectarivores occupied fifth place, which can be partially attributed to the presence of Pyrostegia venusta (Ker Gawl.) Miers (Bignoniaceae), whose flowers are visited by this category (mainly hummingbirds). However, the importance of this plant species for nectarivorous birds remains to be confirmed in other areas. Flowers of Eucalyptus spp. were also apparently important for these birds, although this likewise requires further observation; Willis (2002, 2003b) has already reported the importance of this resource. Nectarivores are important in natural areas because some species are flower pollinators, and impacts on pollinator populations can affect the structure of entire communities (Murcia, 1996).

Ponctual index of abundance

The PIA values showed a pattern similar to that of other studies that used the same methodological standards (e.g. time spent at each sampling point) (Vielliard and Silva, 1990; Aleixo and Vielliard, 1995; Almeida et al., 1999): few species with high values and many species with medium to low values. Some birds had high and similar values in both fragments, such as Patagioenas picazuro, Tangara cayana, Cyclarhis gujanensis, Turdus leucomelas, Thraupis sayaca and Pitangus sulphuratus, species common in disturbed environments (Sick, 1997) and with long or constant vocalisations (Almeida et al., 1999). The PIA of P. picazuro was higher than 1.0, a result also found by Almeida et al. (1999) and Donatelli et al. (2007). This species has been expanding in Sao Paulo State, aided by deforestation (Willis and Oniki, 1987), and is currently one of the most common birds in eastern Brazil (Oniki and Willis, 2000). There are differences in the PIA values of some species when the fragments are compared (see Table 1). This is the case, for example, of Tolmomyias sulphurescens, Thamnophilus caerulescens, Euphonia chlorotica, Turdus rufiventris, Conopophaga lineata and Lathrotriccus euleri, which occurred at higher relative abundance in B, and of Coryphospingus cucullatus, Casiornis rufa, Colaptes campestris, Synallaxis frontalis and Tyrannus savana, which occurred at higher relative abundance in A. These differences may be related to the occurrence of some birds in the forest vegetation present in B and of species associated with more open environments, such as those with the highest relative abundance in A. This highlights the importance of keeping the Cerrado mosaic, since species are tied to certain vegetation types (Boecklen, 1986).

Final Considerations and Recommendations

To improve the quality of the studied areas, this paper suggests the establishment of an ecological corridor between the fragments, given the short distance that separates them and the similarity between their bird communities. The corridor should include native plant species that produce fruits. Frugivorous birds, including the largest ones, still occur within Sao Paulo State, but they need large areas to meet their food requirements; in addition to connecting the fragments, such a corridor could therefore also attract frugivorous bird species, currently rare, to these fragments. Moreover, these fragments can act as a buffer zone for the Estação Ecológica de Itirapina, reducing disturbance in this area.
This paper also proposes discussing the removal of the native understory in Pinus spp. plantations, since this understory is used by birds. Certain plantation areas adjacent to the fragments retain a native understory that is, at least apparently, important for certain bird populations. Testing this hypothesis is therefore suggested; in case of positive results, retaining the native vegetation could be a simple management measure in areas planted with pine species.

Conclusion

Our results reinforce the importance of forest fragments for certain bird populations and therefore for the conservation of birds locally. Environmental heterogeneity is an important factor for the conservation of biodiversity, and the maintenance of forest fragments is therefore essential in areas of activities affecting soil cover. Moreover, the results also highlight that forest fragments are important for research, since they contribute to the understanding of species distribution at a landscape scale, providing data for management strategies.

Figure 1. Rarefaction curve from fragment A during the qualitative survey conducted between October 2006 and September 2007 in Itirapina, Sao Paulo State, Brazil.

Figure 2. Rarefaction curve from fragment B during the qualitative survey conducted between October 2006 and September 2007 in Itirapina, Sao Paulo State, Brazil.

Figure 3. Ponctual Index of Abundance (PIA) in descending order of the bird species recorded during the quantitative survey conducted between October 2006 and September 2007 in fragment A in Itirapina, Sao Paulo State, Brazil.

Figure 4. Ponctual Index of Abundance (PIA) in descending order of the bird species recorded during the quantitative survey conducted between October 2006 and September 2007 in fragment B in Itirapina, Sao Paulo State, Brazil.

Table 2. Frequency of occurrence (FO) classes percentage of bird species in fragments A and B, Itirapina, Sao Paulo State, during 91 visits to the areas in 12 months of study (October 2006 to September 2007).

Table 3. Number of species and relative percentage of nine trophic categories for the fragments studied (A and B) and analysed
Solvability of an infinite system of nonlinear integral equations of Volterra-Hammerstein type

Abstract: The purpose of the paper is to study the solvability of an infinite system of integral equations of Volterra-Hammerstein type on an unbounded interval. We show that such a system of integral equations has at least one solution in the space of functions defined, continuous and bounded on the real half-axis with values in the space l1 consisting of all real sequences whose series is absolutely convergent. To prove this result we construct a suitable measure of noncompactness in the mentioned function space and we use that measure together with a fixed point theorem of Darbo type.

Introduction

Integral equations form a very significant part of nonlinear analysis and applied mathematics ([1-4]). Many researchers, not only mathematicians, are interested in the study of the solvability of integral equations and in the applicability of such equations to different problems arising in the description of real world events ([2, 3, 5-9]). The results obtained in the theory of integral equations are useful and widely utilized in many branches of the technical sciences, such as mechanics or engineering, and of the exact sciences, such as physics.

In the theory of integral equations a special and exceptional branch is formed by infinite systems of integral equations. On the one hand, such systems are a very interesting subject of study for researchers specialized in the theory of integral equations; on the other hand, systems of integral equations play a crucial role in applications. In this paper we deal with an infinite system of nonlinear integral equations of Volterra-Hammerstein type. In [10] we showed that such a system has a solution belonging to the space BC(R+, c0) of functions defined, continuous and bounded on the real half-axis and with values in the sequence space c0. In the present paper we prove a stronger result. Namely, we show that the infinite system of integral equations of Volterra-Hammerstein type has at least one solution in the space BC(R+, l1) consisting of all functions defined, continuous and bounded on the interval R+ with values in the sequence space l1. Of course, each such solution belongs to the space BC(R+, c0) considered in [10]. Let us mention that paper [10] was the first one in which the study of the solvability of infinite systems of integral equations defined on an unbounded interval was carried out. All results known up to now have been obtained in spaces of functions defined on a bounded interval (see [11-14], for example).

Notation, definitions and auxiliary facts

In this section we recall some facts which will be utilized in the paper. Let us start by establishing some notation. The symbols R and N stand for the sets of real and natural numbers, respectively. Moreover, we put R+ = [0, ∞). The letter E denotes a Banach space with the norm || · ||_E and zero vector θ. The symbol B(x, r) denotes the closed ball in E centered at x with radius r. In the special case when x = θ we write B_r instead of B(θ, r). Moreover, if X is a subset of E then we denote by X̄ the closure of X and by Conv X the closed convex hull of the set X. The symbols X + Y and λX (λ ∈ R) stand for the usual algebraic operations on subsets X and Y of E. For a nonempty and bounded set X ⊂ E we denote by diam X the diameter of the set X.
The symbol ||X||_E will stand for the norm of the set X ⊂ E, i.e., we have ||X||_E = sup{||x||_E : x ∈ X}.

The fundamental notion that we use in this paper is the concept of a measure of noncompactness. We now recall the definition of a measure of noncompactness which was introduced in the monograph [15]. To this end let us denote by M_E the family of all nonempty and bounded subsets of E and by N_E its subfamily consisting of all relatively compact sets.

Definition 2.1. A function µ : M_E → R+ will be called a measure of noncompactness in E if it satisfies the following conditions:
(i) The family ker µ = {X ∈ M_E : µ(X) = 0} is nonempty and ker µ ⊂ N_E.
(ii) X ⊂ Y implies µ(X) ≤ µ(Y).
(iii) µ(X̄) = µ(X).
(iv) µ(Conv X) = µ(X).
(v) µ(λX + (1 − λ)Y) ≤ λµ(X) + (1 − λ)µ(Y) for λ ∈ [0, 1].
(vi) If (Xn) is a sequence of closed sets from M_E such that X_{n+1} ⊂ X_n (n = 1, 2, . . .) and lim_{n→∞} µ(Xn) = 0, then the intersection set X∞ = ∩_{n=1}^∞ Xn is nonempty.

The set ker µ defined in axiom (i) is called the kernel of the measure of noncompactness µ. Let us observe that the intersection set X∞ appearing in axiom (vi) is a member of the family ker µ [15]. This simple observation plays an essential role in our further considerations.

Now we present some properties of measures of noncompactness [15]. We say that µ is sublinear if it satisfies the following additional conditions: (vii) µ(λX) = |λ|µ(X) for λ ∈ R; (viii) µ(X + Y) ≤ µ(X) + µ(Y). Moreover, we say that a measure of noncompactness µ has the maximum property if (ix) µ(X ∪ Y) = max{µ(X), µ(Y)}. If the measure of noncompactness µ is such that (x) ker µ = N_E, then it is called full. A sublinear measure of noncompactness which is full and has the maximum property is said to be a regular measure of noncompactness [15].

Let us mention that the first measure of noncompactness was defined in 1930 by Kuratowski [16] in the following way:

α(X) = inf{ε > 0 : X can be covered by a finite family of sets X1, X2, . . . , Xm such that diam Xi ≤ ε for i = 1, 2, . . . , m}.

The measure α(X) is called the Kuratowski measure of noncompactness. It is known (see [15]) that the Kuratowski measure of noncompactness is a regular measure. In a similar way the Hausdorff measure of noncompactness was defined ([17, 18]):

χ(X) = inf{ε > 0 : X can be covered by a finite number of balls of radius ε}.

It can be shown that the measure χ(X) is also regular and that both measures α(X) and χ(X) defined above are equivalent. But despite these similarities, it turns out that the Hausdorff measure of noncompactness χ is more convenient in applications than the Kuratowski measure. The main reason is that in some classical Banach spaces we can find explicit formulas describing the Hausdorff measure of noncompactness, while we do not know such formulas for the Kuratowski measure of noncompactness in any Banach space [15].

And now, taking into account our further investigations, we recall the formula expressing the Hausdorff measure of noncompactness in the space l1 (cf. [15]). Let us recall that the space l1 consists of all real sequences x = (xn) whose series is absolutely convergent, i.e., Σ_{n=1}^∞ |xn| < ∞. It is normed by the norm ||x||_{l1} = Σ_{n=1}^∞ |xn|. Then we have the following formula for the Hausdorff measure of noncompactness of an arbitrary bounded set X ∈ M_{l1}:

χ(X) = lim_{n→∞} { sup_{x∈X} Σ_{k=n}^∞ |xk| }.   (2.1)

To prove our main result we will also need a fixed point theorem involving a measure of noncompactness. The basic theorem on this subject is the well-known fixed point theorem proved by Darbo [19]. We will use a modified version of the Darbo theorem, formulated below (cf. [15]).

Theorem 2.2. Let µ be an arbitrary measure of noncompactness in the Banach space E. Assume that Ω is a nonempty, bounded, closed and convex subset of E and Q : Ω → Ω is a continuous operator such that there exists a constant k ∈ [0, 1) for which µ(QX) ≤ kµ(X) for an arbitrary nonempty subset X of Ω. Then the operator Q has at least one fixed point in the set Ω.
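As a brief worked illustration of formula (2.1) (our addition, not part of the original argument), consider the set X = {e_1, e_2, . . .} of standard unit vectors in l1. For every fixed n, any e_m with m ≥ n has tail sum equal to 1, so the supremum in (2.1) equals 1 for each n:

% Worked illustration of (2.1): the unit vectors of l^1 form a bounded,
% noncompact set whose Hausdorff measure of noncompactness equals 1.
\[
  \chi(X) \;=\; \lim_{n\to\infty}\,\sup_{x\in X}\,\sum_{k=n}^{\infty}|x_k|
  \;=\; \lim_{n\to\infty} 1 \;=\; 1,
\]

which is consistent with the fact that the sequence (e_n) has no norm-convergent subsequence in l1.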
Measures of noncompactness in the space BC(R+, l1)

In [10] we investigated measures of noncompactness in the space BC(R+, E) consisting of all functions defined, continuous and bounded on R+ with values in a fixed Banach space E. Let us point out that the space BC(R+, E) is a generalization of the well-known and often used classical Banach space BC(R+, R); therefore measures of noncompactness in the space BC(R+, E) must be more complicated than the known measures in BC(R+, R).

We now recall some basic facts about the space BC(R+, E) and measures of noncompactness in this space. Let us start by assuming that E is a given Banach space with the norm || · ||_E, whereby we will assume that E is an infinite dimensional space. Consider the Banach space BC(R+, E) equipped with the supremum norm ||x||∞ defined in the standard way: ||x||∞ = sup{||x(t)||_E : t ∈ R+}.

In [10] we defined three formulas for measures of noncompactness in the space BC(R+, E), and each such formula is a sum of three components. The first and the second component are the same in each formula, and we present them first. So, let us fix a set X ∈ M_{BC(R+,E)} and numbers T > 0 and ε > 0. For any function x ∈ X we define the modulus of continuity ω^T(x, ε) of the function x on the interval [0, T] by the classical formula

ω^T(x, ε) = sup{||x(t) − x(s)||_E : t, s ∈ [0, T], |t − s| ≤ ε}.

Next, let us define ω^T(X, ε) = sup{ω^T(x, ε) : x ∈ X} and ω^T(X) = lim_{ε→0} ω^T(X, ε), and finally we put ω_0(X) = lim_{T→∞} ω^T(X). Notice that both of the above limits exist (for details see [10]). The quantity ω_0(X) is the first component of each of the mentioned measures of noncompactness in BC(R+, E).

Next, to obtain the second component, assume that γ = γ(X) is a measure of noncompactness in the space E. Fix a number t ∈ R+ and denote by X(t) the so-called cross-section of the set X at the point t:

X(t) = {x(t) : x ∈ X}.

Further, for a fixed T > 0 let us put

γ^T(X) = sup{γ(X(t)) : t ∈ [0, T]},   (3.1)

γ^∞(X) = lim_{T→∞} γ^T(X).   (3.2)

Notice that the above limit exists (see [10]). The obtained quantity γ^∞(X) is the second component of all three formulas for the measure of noncompactness in the space BC(R+, E).

Now we introduce the third component of the measure of noncompactness in BC(R+, E), which describes the behaviour of the set of functions at infinity. We can do this in three ways, and we will describe each of them. So, for a fixed T > 0 let us define

a^T(X) = sup{ sup{||x(t)||_E : t ≥ T} : x ∈ X }.   (3.3)

Next, notice that there exists the limit

a∞(X) = lim_{T→∞} a^T(X),   (3.4)

and the quantity a∞(X) is considered as the third component of the measure of noncompactness in the space BC(R+, E). The two remaining variants of the third component are given by formulas (3.5) and (3.6).

And now let us consider the functions γa, γb, γc defined on the family M_{BC(R+,E)} by formulas (3.7), (3.8) and (3.9) as sums of the above components (see the sketch below). In [10] we proved that under some assumptions on γ the functions γa, γb, γc are measures of noncompactness in the space BC(R+, E). More precisely, we have the following result (Theorem 3.1), whose estimates hold for an arbitrary set X ∈ M_{BC(R+,E)}, where χ denotes the Hausdorff measure of noncompactness in the space BC(R+, E). For other properties of the above introduced measures of noncompactness we refer to [10]. We recall only that Theorem 3.1 remains valid if, in the construction of the component γ^∞, we replace the Hausdorff measure of noncompactness χ by an arbitrary regular measure of noncompactness µ equivalent to the Hausdorff measure χ [10].

Now, we are going to present formulas (3.7), (3.8) and (3.9) in the special case E = l1. The space l1 is one of the Banach sequence spaces, and we deal with this space (and, in general, with sequence spaces) because it is strictly connected with the form of solutions of infinite systems of integral equations (see Theorem 4.2). Therefore, we will work in the Banach space

BC(R+, l1) = {x : R+ → l1 : x is continuous and bounded on R+}.
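The explicit displays (3.5)-(3.9) are not reproduced above. The following LaTeX sketch records the shape of the first of these measures in a form consistent with the verbal description and with the way formula (3.7) is used in Section 4; the variants γ_b and γ_c replace a_∞ by the alternative third components (3.5) and (3.6):

% Sketch of the measure (3.7) in BC(R_+, E), consistent with the verbal
% description above; gamma is a given measure of noncompactness in E.
\[
  \gamma_a(X) \;=\; \omega_0(X) \;+\; \gamma^{\infty}(X) \;+\; a_{\infty}(X),
  \qquad X \in \mathcal{M}_{BC(\mathbb{R}_+,E)}.
\]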
The fundamental fact in our investigations is that every function x ∈ BC(R+, l1) can be regarded as a function sequence x(t) = (xn(t)) for t ∈ R+, where for any fixed t the sequence (xn(t)) is a real sequence belonging to the space l1. Obviously, this means that Σ_{n=1}^∞ |xn(t)| < ∞ for every t ∈ R+. According to the formula for the norm in the space BC(R+, E) given earlier, we have

||x||∞ = sup{||x(t)||_{l1} : t ∈ R+} = sup{ Σ_{n=1}^∞ |xn(t)| : t ∈ R+ }.

Now, we are going to present explicitly the consecutive components ω_0(X), χ^∞(X) and a∞(X) of the measure of noncompactness χa(X) for any set X ∈ M_{BC(R+,l1)}. Let us start with ω_0(X). Fix arbitrary numbers T > 0 and ε > 0. For any x = x(t) = (xn(t)) ∈ X we have

ω^T(x, ε) = sup{ Σ_{n=1}^∞ |xn(t) − xn(s)| : t, s ∈ [0, T], |t − s| ≤ ε }.

Hence, taking the suprema and limits described above, we finally obtain

ω_0(X) = lim_{T→∞} lim_{ε→0} sup{ω^T(x, ε) : x ∈ X}.   (3.10)

Next, we define the second component occurring in the formula for the measure χa(X). To this end let us assume that X ∈ M_{BC(R+,l1)} and that t ∈ R+ is arbitrarily fixed. Using (2.1) we have

χ(X(t)) = lim_{n→∞} sup_{x∈X} Σ_{k=n}^∞ |xk(t)|.

Next, for a fixed T > 0, utilizing (3.1) we get χ^T(X) = sup{χ(X(t)) : t ∈ [0, T]}. Finally, in view of (3.2) we obtain the following expression:

χ^∞(X) = lim_{T→∞} sup{ χ(X(t)) : t ∈ [0, T] }.   (3.11)

And now let us write the third component of the measure χa(X). Fix an arbitrary number T > 0. Then, on the basis of (3.3), we have

a^T(X) = sup{ sup{ Σ_{n=1}^∞ |xn(t)| : t ≥ T } : x ∈ X }.

Next, by (3.4) we obtain

a∞(X) = lim_{T→∞} a^T(X).   (3.12)

Finally, based on Theorem 3.1 we get that the function

χa(X) = ω_0(X) + χ^∞(X) + a∞(X)

is a measure of noncompactness in the space BC(R+, l1), where ω_0(X), χ^∞(X) and a∞(X) are given by formulas (3.10), (3.11) and (3.12), respectively. Observe that, keeping in mind formulas (3.5) and (3.6) together with Theorem 3.1, we obtain that the functions χb and χc are also measures of noncompactness in the space BC(R+, l1).

Theorem on the existence of solutions of infinite systems of integral equations on the real half-axis

Let us consider the following infinite system of nonlinear quadratic integral equations of the Volterra-Hammerstein type, denoted by (4.1), for t ∈ R+ and for n = 1, 2, . . . (a sketch of its form is given below, just before the first step of the proof). In [10] we proved that system (4.1) has at least one solution in the space BC(R+, c0) := {x : R+ → c0 : x is continuous and bounded on R+}. In this paper we prove a different result: namely, we show that system (4.1) has at least one solution in the space BC(R+, l1). For convenience, from now on the space BC(R+, l1) will be denoted by BC. At the beginning we provide a lemma (Lemma 4.1) which we will utilize in the proof of our main result.

Proof. The proof is an immediate consequence of the Cauchy condition for real sequences.

In what follows we investigate the solvability of system (4.1) in the space BC under the assumptions listed below.

(i) The function sequence (an(t)) is an element of the space BC such that lim

(iii) There exists a constant K > 0 such that

(vi) There exists a function l : R+ → R+ such that l is nondecreasing on R+, continuous at 0, and the following condition is satisfied.

(viii) The operator g transforms the space R+ × l1 into l1 and is such that the family of functions {(gx)(t)}_{t∈R+} is equicontinuous at every point of the space l1, i.e., for each arbitrarily fixed x ∈ l1 and for a given ε > 0 there exists δ > 0 such that ||(gy)(t) − (gx)(t)||_{l1} ≤ ε for every t ∈ R+ and for any y ∈ l1 such that ||y − x||_{l1} ≤ δ.

(ix) The operator g defined in assumption (viii) is bounded on the space R+ × l1. More precisely, there exists a positive constant G such that ||(gx)(t)||_{l1} ≤ G for any x ∈ l1 and for each t ∈ R+.

(x) There exists a positive solution r0 of the inequality A + F G K + G K r l(r) ≤ r such that G K l(r0) < 1, where the constants F, G, K were defined above and the constant A is defined in the following way.

Now we can present our main result concerning the solvability of the infinite system of integral equations (4.1). Our proof will be conducted in several steps.
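The displays of system (4.1), of Lemma 4.1, and of the constant A are not reproduced above. The following LaTeX sketch records a form of the system consistent with assumptions (i)-(x) and with the operators F and V used in the proof below; the coordinatewise product structure and the formula for A are our reading of the text, not verbatim quotations:

% Sketch of the quadratic Volterra-Hammerstein system (4.1) and of the
% operators F, V, Q used in the proof; Q acts as the coordinatewise
% product (Qx)(t) = a(t) + (Fx)(t)(Vx)(t) in the Banach algebra BC.
\[
  x_n(t) = a_n(t) + f_n\big(t, x_1(t), x_2(t), \ldots\big)
           \int_0^t k_n(t,s)\, g_n\big(s, x_1(s), x_2(s), \ldots\big)\, ds,
  \qquad t \in \mathbb{R}_+,\ n = 1, 2, \ldots,
\]
\[
  (Fx)(t) = \big(f_n(t, x(t))\big)_{n\ge 1}, \qquad
  (Vx)(t) = \Big(\int_0^t k_n(t,s)\, g_n(s, x(s))\, ds\Big)_{n\ge 1}, \qquad
  A = \sup_{t\in\mathbb{R}_+} \sum_{n=1}^{\infty} |a_n(t)|.
\]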
At the beginning we show that the operator Q transforms the space BC into itself. Thus, let us take x = x(t) = (xn(t)) ∈ BC. Obviously this means that Σ_{n=1}^∞ |xn(t)| < ∞. Then, for arbitrary t ∈ R+, using assumption (vi), we obtain the estimate (4.2), from which we have that (Fx)(t) ∈ l1. Moreover, from (4.2) we infer that the function Fx is bounded on the set R+. Further, we are going to show that the function Fx is continuous on the interval R+. To this end, let us take an arbitrary t0 ∈ R+ and ε > 0. It follows from the continuity of the function x that the condition (4.4) given below is satisfied. Thus, let us choose δ1 > 0 according to (4.4); then, for t ∈ R+ such that |t − t0| ≤ δ1, we obtain a suitable bound. Next, let us choose a number δ2 > 0 according to assumption (v). Thus, taking δ = min{δ1, δ2}, we deduce that the estimate (4.5) holds for any t ∈ R+ with |t − t0| ≤ δ, which means that Fx is continuous on R+. Linking the facts established above, we obtain that the operator F transforms the space BC into itself.

Next, we are going to show that the operator V maps the space BC into BC. So, let us take an arbitrary function x = x(t) = (xn(t)) ∈ BC; we are going to prove that Vx ∈ BC. We start by showing the boundedness of the function Vx on R+. To this end observe that for an arbitrary number t ∈ R+, using assumptions (iii) and (ix), we obtain the estimate (4.6), which holds for any t ∈ R+. This means that the function Vx is bounded on R+.

To prove the continuity of the function Vx on the interval R+, let us fix ε > 0, T > 0 and t0 ∈ [0, T). In virtue of (4.4) we can choose a number δ > 0 according to the continuity of the function x = x(t) at the point t0. Next, take t ∈ [0, T) satisfying |t − t0| ≤ δ (without loss of generality we can assume that t > t0). Then, keeping in mind assumptions (iv), (viii) and (ix) and using the Lebesgue monotone convergence theorem [20], we arrive at the estimates (4.7), where ω^T_k(ε) denotes a common modulus of continuity of the function sequence t → kn(t, s) on the set [0, T] (according to assumption (ii)). Obviously ω^T_k(ε) → 0 as ε → 0. Hence, on the basis of (4.7) we derive the inequality

||(Vx)(t) − (Vx)(t0)||_{l1} ≤ ω^T_k(ε) G T + K G ε,

which means that the function Vx is continuous on [0, T). The number T was chosen arbitrarily; therefore we deduce that the function Vx is continuous on the real half-axis R+.

Further on, we are going to prove that the operator Q maps the space BC into itself. In order to prove this fact, notice that we can treat the space BC as a Banach algebra with respect to the coordinatewise multiplication of sequences. Therefore, take any function x ∈ BC and consider the function Qx. Keeping in mind the definition of the operator Q and the established facts that the functions Fx and Vx are continuous on R+, we obtain that the function Qx is also continuous on R+. Similarly, taking into account the boundedness of the functions Fx and Vx on the set R+, we infer that Qx is also bounded on R+. In order to show that Qx : R+ → l1, let us notice that, using assumption (i) and the above estimates, for any fixed t ∈ R+ we obtain that (Qx)(t) ∈ l1, and hence Qx : R+ → l1. Finally, linking all the above established properties of the function Qx, we derive that the operator Q transforms the space BC into itself.

In what follows we show the existence of a number r0 > 0 such that Q transforms the ball B_{r0} in the space BC into itself.
For arbitrary t ∈ R+, utilizing estimates (4.2) and (4.6) as well as assumptions (x) and (vi), we obtain a bound on ||(Qx)(t)||_{l1}. This yields the estimate ||Qx||∞ ≤ A + F G K + G K ||x||∞ l(||x||∞). Taking into account the last inequality and assumption (x), we deduce that there exists a number r0 > 0 such that the operator Q transforms the ball B_{r0} into itself.

In what follows we show that the operator Q is continuous on the ball B_{r0}. In order to prove this fact, fix arbitrarily x ∈ B_{r0} and ε > 0, and take a function y ∈ B_{r0} such that ||x − y||_{BC} ≤ ε. Fix t ∈ R+. Then, in view of assumption (vi), we can estimate ||(Fx)(t) − (Fy)(t)||_{l1}, which shows that F is continuous on the ball B_{r0}. Further, let us consider the function δ = δ(ε) defined for ε > 0 in the following way:

δ(ε) = sup{|gn(t, x) − gn(t, y)| : x, y ∈ l1, ||x − y||_{l1} ≤ ε, t ∈ R+, n ∈ N}.

This allows us to prove the continuity of the operator V on the ball B_{r0}. Now, linking the continuity of the operators F and V on the ball B_{r0} and keeping in mind the representation of the operator Q written at the beginning of the proof, we infer that the operator Q is continuous on B_{r0}.

And now we come to the last step of our proof, in which we show that the inequality from Theorem 2.2 is satisfied for any set X ⊂ B_{r0} and for the measure of noncompactness χa defined by formula (3.7) for γ = χ_{l1}. To this end take a nonempty subset X of the ball B_{r0} and fix numbers ε > 0 and T > 0. Choose t, s ∈ [0, T] such that |t − s| ≤ ε and consider a function x = x(t) = (xn(t)) ∈ X. Then, proceeding similarly as in (4.5), we obtain the estimate (4.8), in which ω(f, ε) denotes the corresponding modulus of continuity. Obviously, in view of assumption (v), we have that ω(f, ε) → 0 as ε → 0. Taking the supremum with respect to t, s ∈ [0, T], |t − s| ≤ ε on the left side of (4.8), we obtain an estimate of the modulus of continuity of the set FX. Next, let us take t, s as above. Assuming additionally that t > s and following the estimate (4.7), we obtain an analogous bound for the operator V, where the function ω^T_k(ε) was introduced earlier. Now, take any function x ∈ X and t, s ∈ R+. Keeping in mind the representation of the operator Q, we derive the corresponding estimate for Qx, where we denote a(t) = (an(t)). Finally, taking the limit as T → ∞, we get

ω_0(QX) ≤ G K l(r0) ω_0(X).   (4.11)

Notice that we have obtained the estimate for the first component ω_0(X) of the measure of noncompactness χa(X) expressed by formula (3.7). In what follows we obtain two consecutive estimates, for the second and the third component of the measure of noncompactness χa(X). To this end, similarly as before, fix a set X ⊂ B_{r0} and a function x ∈ X. Take an arbitrary number T > 0 and fix t ∈ [0, T]. Then, for any n ∈ N, utilizing estimates (4.2) and (4.6) (for the series from i = n to ∞), we obtain a bound on the tail Σ_{i=n}^∞ |(Qx)_i(t)|. Further, taking the supremum over all x = (x_i) ∈ X, we derive the corresponding evaluation. Passing to the limit as n → ∞ and utilizing assumptions (i), (vii) and Lemma 4.1, we estimate χ(QX(t)). Finally, taking the supremum over t ∈ [0, T] on both sides of the above inequality and letting T → ∞, in view of formula (3.11) we deduce the estimate

χ^∞(QX) ≤ G K l(r0) χ^∞(X).   (4.12)

Now, we are going to estimate the third component a∞(X) of the measure of noncompactness χa(X). Assume, as earlier, that X ⊂ B_{r0}, x ∈ X and T > 0. Moreover, take t ≥ T. Then, keeping in mind inequalities (4.2) and (4.6), we bound Σ_{n=1}^∞ |(Qx)_n(t)|. Taking the supremum over t ≥ T and x = (xn) ∈ X, we obtain a bound on a^T(QX). Letting T → ∞ and utilizing assumptions (i) and (vii), we derive the inequality

a∞(QX) ≤ l(r0) G K a∞(X).   (4.13)
Finally, combining (4.11), (4.12), (4.13) and formula (3.7) for γ = χ, we get the inequality

χa(QX) ≤ G K l(r0) χa(X).

Hence, in view of the previously established fact that the operator Q is a continuous self-mapping of the ball B_{r0}, and since G K l(r0) < 1 by assumption (x), utilizing Theorem 2.2 we conclude that the infinite system of Volterra-Hammerstein integral equations (4.1) has at least one solution x(t) = (xn(t)) in the space BC = BC(R+, l1). Obviously x ∈ B_{r0}. The proof is complete.

Hence we infer that the functions fn (n = 1, 2, . . .) satisfy assumption (vi) with the function l(r) defined by the equality l(r) = γ max{1, r}. Further on, we are going to verify assumption (viii). To this end, let us first notice that for a fixed natural number n the function gn = gn(t, x1, x2, . . .) defined by (5.2) on the set R+ × R^∞ takes real values (n = 1, 2, . . .). Thus we have proved that assumption (viii) is satisfied. Finally, gathering the constants A, F, G, K obtained above and taking into account the function l(r) = γ max{1, r} indicated in the above calculations, we conclude that the inequality from assumption (x) has the form

β + γ π r max{1, r} ≤ r.   (5.4)

Further, let us assume that we are looking for a solution r of inequality (5.4) such that r ≤ 1. In such a case inequality (5.4) takes the form

β + γ π r ≤ r.   (5.5)

Assuming that γ < 1/π, we infer that, for example, the number r0 = β/(1 − γπ) is a solution of inequality (5.4) provided β < 1 − γπ. It is easily seen that in this case we have G K l(r0) < 1, which proves that assumption (x) is thoroughly satisfied. Now, applying Theorem 4.2 we deduce that the infinite system of integral equations (5.1) has at least one solution x = x(t) = (xn(t)) in the space BC = BC(R+, l1).
Lived Experience, Historical Consciousness and Narrative: A Combinatory Aesthetics Ethic

In this paper I introduce the conception of an "aesthetics ethic" conditioning historical consciousness and writing. The aesthetics ethic is a concept that touches on epistemological, cognitive, aesthetic, experiential, linguistic and ontological qualities that are very much in accord in both historiography and historical novelization. By way of this synthesis, I posit a strong, binding amalgamation that links these two genres. There are a number of transacting ideas and methodological approaches in this work. The focus of this paper is the aesthetics ethic proper. The aesthetics ethic is a dynamic, densely deliberative field comprising individual and community historical experience, embedded within profoundly aesthetic and conscious contexts, in which history is first lived, and historical writing by historians and historical novelists is then composed. The aesthetics ethic constitutes an environmentality studded with manifold elements of subjectivity, objectivity and intersubjectivity; imagination and artfulness; intentionality and actualization; enunciation and circumscription; reference and contrivance; experience and conjecture; intellection and apperception; contingency and modality. The aesthetics ethic will, I hope, prove to be a useful map revealing details about how historians and historical novelists perceive (one of the source meanings of aesthetic) common facets of historical consciousness amidst a true kinship (one of the source meanings of ethic) of overlapping interests, methods and aims. By way of this overall amalgamation, the synthesis I have referred to is effected, linking the writings and interpretations of historians and historical novelists in important ways. I refer to a number of important analysts in this study, perhaps most importantly John Dewey, Hayden White and Frank Ankersmit. As well, an important phase of analysis is my study of Daniel Wickberg's theory of "histories of sensibilities".

Introduction

IN THIS PAPER I will introduce the conception of an "aesthetics ethic" conditioning historical consciousness and composition. The aesthetics ethic constitutes an environmentality studded with manifold elements of subjectivity, objectivity and intersubjectivity; imagination and artfulness; intentionality and actualization; enunciation and circumscription; reference and contrivance; experience and conjecture; intellection and apperception; contingency and modality. It is a dynamic, densely deliberative field and dappled experiential and communicative ground comprising individual and community historical experience embedded within profoundly aesthetic and conscious contexts, in which history is first lived, and in which two unique but related varieties of historical writing are then constructed: that of historians and that of historical novelists.
The aesthetics ethic will, I hope, prove to be a thoroughgoing map revealing details about how historians and historical novelists perceive (one of the source meanings of aesthetic) common facets of historical consciousness amidst a true kinship (one of the source meanings of ethic) of overlapping interests, methods and aims. The audiences of these writers return the favor of this visionary, historically-attuned accord, and the sum of these attitudes, perceptions, assumptions and beliefs synchronize in fruitful ways. By way of this synthesis, I will, in a turn perhaps reminiscent of the work of Hayden White, posit a strong, binding amalgamation (I will as often as not refer to historians and historical novelists as a single group of "historical writers") - and this is in fact an essential motivation of this essay. I hope that we will see the aesthetics ethic as the threshold where historical experience and historical narrative muster, transgress, interlard and ultimately effectuate a transactive blend of interpretation and meaning. This idea is distilled and adumbrated in the thoughts of Robert Scholes and Robert Kellogg, who wrote that "Meaning, in a work of narrative art, is a function of the relationship between two worlds: the fictional world created by the author and the 'real' world, the apprehendable universe". (Note 1)

This aesthetics ethic framework, I think, accords with the thoughts of historian Johan Huizinga (1872-1945), who wrote that historical sense is a reticular but methodical synthesis that "proves anew its close connection with the forms of thought of ordinary human life, which also would be impossible without general categories into which intelligence organizes phenomena". (Note 2) Huizinga added that "by reason of its natural bent historical sense always inclines toward the particular, the graphic, the concrete, the unique, the individual". (Note 3) Huizinga's conceptions here fit with his research methodology, which held that "no knowledge of the particular is possible without its being understood within a general frame" and, yet more poetically, "Every historical fact opens immediately into eternity". (Note 4) Huizinga's thoughts seem to rehearse those of my model, with its "general" philosophical/phenomenological frame of historical consciousness, experience and understanding, alongside larger moral and ethical aims and outcomes in human communities, conditioned by "particular" aesthetic and narrative details emerging out of conscious and compositional processes and structures.

The Aesthetics Ethic: Aesthetic Contours in Lived Experience

HISTORICAL EXPERIENCE AND REPRESENTATION are the veritable mortise and tenon of my examination. As Frank Ankersmit once wrote, "representation is the birthplace of meaning - and whoever is interested in the nature of meaning can do little better than to closely investigate representation". (Note 5) Representation and meaning: conceptions that cut to the heart of the crossroads of lived experience, narrated history and historicized narrative, which I will examine by way of the aesthetics ethic.
I turn here to two facets of historical representation and link these conceptions to the above ideas. We might say that historical writers are in one sense representatives of their communities, who by way of their representations speak for those communities, interpreting and expounding lived experience in all of its epistemological, ontological, moral/ethical, aesthetic and referential plenitude. As community representatives, historical writers "are socioculturally mediated, that is, they are fundamentally connected to the cultural and social background of an agent, to a group practice, and, not least, to an intersubjective situation". (Note 6) In terms of this social background, group practice and intersubjectivity, these writers are immersed in transacting Habermasian "situation definitions" in the lifeworld, with human communication and interaction hinging on validity claims through which consent is negotiated by way of relationships among the objective/purposive, normative/pragmatic and subjective/individual. (Note 7)

In terms of representation, some will say that the two genres considered here are wholly different, with historians indeed representing, or re-representing, the past, and novelists doing no such thing - in fact doing something more like misrepresenting the past in their narratives (they are liars). I take the opposing view, and posit at the highest level that the representational value and general validity of representation - this species, this class of communicative action - is in sum equal across the two genres. To offer a slightly heated illustration, claiming they are wholly distinctive would be like claiming that peoples with varying tints and hues of skin colour are substantively, essentially different. There are differences on the outside of course, and the cultures of these different "peoples" also evince their own unique qualities - but their lifeblood, their DNA, their very existence and interaction, functioning and outcomes, are of the same ilk. A related point is how some would claim that the assumptions, interpolations and interpretations of historians are valid, while those of historical novelists are not. But in the same way as above, this is a difference of degree, not of kind. To illustrate the common interpretive ground beneath historical novels and historiography, I select a few examples from Daniel Jonah Goldhagen's Hitler's Willing Executioners. In the text we are given interpretive guidance and license by way of diction such as: "Even if the interpretation put forward here of the Einsatzgruppen's exact order is wrong … the order was still genocidal;" "Although it is not known for sure, it is most unlikely that Hitler decided to annihilate Soviet Jewry without at the same time deciding…;" "Our knowledge of police battalion activities during the war is fragmentary and partial.… An overview … however, can be constructed;" "such genocidal opportunities were available to the men of many police battalions, and it is probable - though it is not known…;" "The men of these nine battalions form a sample sufficient to generalize…;" "Another member of the battalion … explains why they (presumably also Police Battalion 101 member Erwin Grafmann) did not have any moral qualms about what they were doing". (Note 8) Even if, it is not known for sure, most unlikely, knowledge fragmentary and partial, constructed overviews, it is probable, sufficient to generalize, presumably - such interpretive authorization and modal methodology looks like an ideal ground on which to construct historical
novelization with its concomitant re-imagining of historical incident - and yet it is also part and parcel of the empirical historiographic enterprise. Observe here another illuminating example from Goldhagen. Examining life in one concentration camp that had an equal number of male and female guards who got along together very well - to the point of forming love relationships - Goldhagen darkly frames several questions: "The Germans made love in barracks next to enormous privation and incessant cruelty. What did they talk about when their heads rested quietly on their pillows, when they were smoking their cigarettes in those relaxing moments after their physical needs had been met? Did one relate to another accounts of a particularly amusing beating that she or he had administered or observed, of the rush of power that engulfed her when the righteous adrenaline of Jew-beating caused her body to pulse with energy?" (Note 9) Indeed, we wonder, what would they have thought and felt and talked about…. Goldhagen does not attempt to reconstruct these dialogs, but some historians would, and to do this they would of course refer to historical source materials - letters, diaries, wills, albums, receipts, and the like - exactly as a historical novelist would. As Gore Vidal wrote of his historical novel Lincoln, which was written on the basis of extensive research into authentic historical source materials: "All of the principal characters really existed, and they said and did pretty much what I have them saying and doing". (Note 10) Reconstruction like this, interpreting and answering questions about historical experience and outcomes like these, is an enterprise in which historians and historical novelists are "presented with different but overlapping opportunities," (Note 11) and we find that these varieties of historical writing become modes in a single transactive paradigm. (Note 12)

To delve deeper into the aesthetic contours that rib the ethical model I am constructing: historian Jerzy Topolski has written that "It is not logic but imagination that generates more or less concretized mental images constituting a background onto which the historian, 'playing' with basic information, imposes some content, occasionally modifying the ground (an effect of idealization) in one way or another". (Note 13) Though Topolski specifically refers to the historian, let everyone be his own historian, and let Topolski's ideas be an introduction to the idea of aesthetics as an organic constituent of lived experience, with our perception (and associated imagination) virtually the essence of narrative/aesthetic consciousness, which in turn constitutes the veritable quiddity of the lived-apprehended-interpreted-incorporated-participatory-synergetic life world - our history, the story and record of all we are. What we are describing is an expansive view, a view onto human ontology, taking in worlds of experience, action, perception, apperception and communication, coursing reciprocally from interior subjectivity, to exterior objectivity, to communal intersubjectivity and back again. To support this view, in the following I will primarily turn to one thinker, who himself takes such an encompassing view of life and letters: the great John Dewey. John Dewey's analyses in Art as Experience, with their views on the aesthetic bases and contours of lived experience, human action, creativity and communication - all environmentally conditioned, recursively employed, emotively expressed, aesthetically germinated, adaptively accorded and temporally
consummated - are where I shall begin. One of Dewey's principal ideas is a transactive doing and undergoing, experiences caparisoned with manifold aesthetic intricacies. For Dewey, subjectivity, objectivity and intersubjectivity necessitate and engender an initial aesthetic impulse that yields an artful "doing" by agents (in our analysis, historical writers) that is in turn "undergone" by their peers (fellow writers, and readers). This is a combinatory effort of creation and interpretation, aesthetic to the core, welding subject and object into a dynamic quicksilver: "The uniquely distinguishing feature of esthetic experience is exactly the fact that no such distinction of self and object exists in it, since it is esthetic in the degree in which organism and environment cooperate to institute an experience in which the two are so fully integrated that each disappears". (Note 14) As to some of the particulars of this aesthetics ethic, Dewey goes on that "As an organism increases in complexity, the rhythms of struggle and consummation in its relation to its environment are varied and prolonged, and they come to include within themselves an endless variety of sub-rhythms. The designs of living are widened and enriched. Fulfillment is more massive and more subtly shaded." (Note 15) To sum up, Dewey's aesthetic world and experience become:

an everlastingly renewed process of acting upon the environment and being acted upon by it, together with institution of relations between what is done and what is undergone. Hence experience is necessarily cumulative and its subject matter gains expressiveness because of cumulative continuity.… Things and events experienced pass and are gone. But something of their meaning and value is retained as an integral part of the self. (Note 16)

This very Husserlian, time-conscious field is a fertile expanse that traverses the fictional and non-fictional, the performed and eventuated, the imagined and experienced. Such aesthetic conditioning within lived experience is an astonishingly vital, wholly ecological, hyper-responsive, intricately temporal and blazingly imaginative cross-fertilization of community and individual historical consciousness, commitment and communication, an eyes-wide-open trek across potential toward consummation. For Dewey, aesthetics in lived experience is nothing less than a "unique transcript of the energy of the things of the world" by way of which we "reach to the roots of the esthetic in experience" and then, coming out on the other side as it were, achieve "a transformation of interaction into participation and communication". (Note 17) Linking these ideas to historical apprehension and narrative, Dewey continues that aesthetics in lived experience is "a manifestation, a record and celebration of the life of a civilization," and also "the means for entering sympathetically into the deepest elements of remote and foreign civilizations". (Note 18) Dewey's reference here to "a manifestation, a record … of the life of a civilization" and our aesthetically-conditioned ability and aim to "enter sympathetically into the deepest elements of remote and foreign civilizations" remind us that, as the ground of all historicity, the aesthetics ethic is a veritable window opening onto the historical experience of peoples and ages past - and from there the source of historical writing.
These varied ideas can be applied to my aesthetics ethic, a framework comprising an amalgamation of human awareness, existence and narrative enterprise, insinuating itself into a truly encompassing historical view and consciousness, steeped in intersubjectivity and phenomenological intentionality, with an essential aesthetic thrust emerging from human consciousness and entering into the flourishing communicative endeavors of historiography and historical fiction. We live, perceive, enact, historicize, commune, narrate, know, understand, engender, characterize and develop within this environment, with all of this activity forming a mighty current that sweeps us along in genuinely aesthetic, narrative lived experience toward the denouement of narrated history - to repeat, the story and record of all we are.

The Aesthetics Ethic: A Social Ethic

AS ALREADY INDICATED, the aesthetics ethic that I am proposing is a deeply social construct, with aesthetic contours that contribute to ethical and moral outcomes in human communities through communicative action that is at once subjective/visionary, intersubjective/correlative and objective/material (all terms which, I should add, are inherently transactive and textual). It is a fertile world of human collaboration, creation and concert, a richly collective, correspondent and consensual milieu that becomes a wider aesthetic field encompassing areas of society, community, convention, learning, expectation, composition and ideology - all central intellectual, emotional, conscious, cognizant, community and, in sum, ethical elements of the aesthetics ethic.

As introduced above, Jerzy Topolski discusses how the historian's aim is to forge an imaginative and cognitive accord with readers that "yields to the pressure of conventions functioning in society or, more precisely, in the community of historians.… There is often a plurality of conventions characteristic of different schools of historiography, related to particular political, religious, and ideological views". (Note 19) William Cronon (Frederick Jackson Turner Professor of History, Geography, and Environmental Studies, University of Wisconsin-Madison) reminds us once again of the historian's situation and role in this widened communal field and medium of lived experience when he writes that "We historians write as members of communities, and we cannot help but take those communities into account as we do our work". (Note 20) As I have noted, audiences reciprocate in this environment, and are "prepared to accord the historian the exorbitant right to know other minds". (Note 21) Karsten R.
Stueber (professor and chair in philosophy, College of the Holy Cross) adds that "historians … appeal to large-scale and supra-individual facts such as general cultural habits prevalent at a time, or structures and norms of various institutions". (Note 22) Stueber also writes, referring to Jane Heal, professor of philosophy, University of Cambridge, that "it is very unlikely that we possess any general theory that allows us to decide which of the myriad beliefs we and other people have are relevant to consider in a particular situation. Our only option is to use our own cognitive capacities and to put ourselves imaginatively in their shoes in order to grasp their thoughts as their reasons". (Note 23) Not for the last time, we see the overlay and interaction of aesthetic possibilities (imagination) and mentation (cognitive capacities, thoughts as their reasons, beliefs), revolving around community experience and background resources (cultural habits, institutions, supra-individual facts, beliefs, situations). In the end, our aim, with the help of historical writers, is to effect joint action and communication by "putting ourselves in the shoes of others," helping us to fully apprehend the aesthetics ethic and the associated historical apprehension at individual and community levels.

Let us examine a few examples of intersubjective and community elements at play in historical writings. The narrative in Saul Friedländer's The Years of Extermination is illuminating. The historiography is told almost entirely by way of the virtually unmediated, wholly personal points of view of those who lived and died during the Shoah, by way of diaries, letters and other personal records - a "microlevel" history, as Friedländer calls it. (Note 24) Though this is foremost uniquely experiential and expedient access to historical occurrence, it also becomes a rich dialectic, with individual voices always conditioned by community encounter, such that attitudes, reactions, viewpoints, expressions, illusions, rumors, interpretations, intuitions, beliefs, commitments and values are at once individual and collective, subjective and intersubjective, personal and communal. (Note 25) The findings of a historical work built around the plaintive but uniquely informed "individual voice," (Note 26) given that such a voice is itself something of a combination of the factual and the fictional, are such that they "are not subject to the usual rules of historical evidence," and "cannot be proven untruthful in the usual way that specific factual statements in an historical account might turn out to be false". (Note 27) Rather, this model is best judged by way of a more flexible and inclusive stable of "diverging emotions, tastes, intuitions, philosophies, and identities" (Note 28) - which is a view not only of the possibilities of fictionalized history, with its unique ways with "the rules of historical evidence," "specific factual statements" and a "flexible and inclusive stable" of meaning creation, but even more importantly is at one with Daniel Wickberg's histories of sensibilities as exemplars of how an aesthetics ethic can be employed in historical analysis. I will examine Wickberg's work below. Turning to a fictional example, Erich Maria Remarque (1898-1970) in his All Quiet on the Western Front evinced a vibrant community ethic in his matchless and deeply empathetic way. Speaking of his fellow soldiers, the novel's protagonist Paul Bäumer reflects,

At once a new warmth flows through me. These voices, these few quiet words,
these footsteps in the trench behind me recall me at a bound from the terrible loneliness and fear of death by which I had been almost destroyed. They are more to me than life, these voices, they are more than motherliness and more than fear; they are the strongest, most comforting thing there is anywhere: they are the voices of my comrades.

I am no longer a shuddering speck of existence, alone in the darkness; - I belong to them and they to me, we all share the same fear and the same life, we are nearer than lovers, in a simpler, harder way; I could bury my face in them, in these voices, these worlds that have saved me and will stand by me. (Note 29)

To turn to an example from non-fiction historiography, James M. McPherson, in Battle Cry of Freedom, highlights a "mutual salutation and farewell" that rings of solidarity in the most difficult conditions, during the surrender of the Confederate armies in April 1865:

First in line of march behind General John B. Gordon was the Stonewall Brigade, five regiments containing 210 ragged survivors of four years of war. As Gordon approached at the head of these men with "his chin drooped to his breast, downhearted and dejected in appearance," Joshua L. Chamberlain gave a brief order, and a bugle call rang out. Instantly the Union soldiers shifted from order arms to carry arms, the salute of honor. Hearing the sound General Gordon looked up in surprise, and with sudden realization turned smartly to Chamberlain, dipped his sword in salute, and ordered his own men to carry arms. These enemies in many a bloody battle ended the war not with shame on one side and exultation on the other but with a soldier's "mutual salutation and farewell". (Note 30)

These historical passages sound the optimistic notes that seem intuitively to attend community ethics, intersubjectivity and historicality as we have examined them - but we should pause to note a dark reverse to these ideas. For we find in many histories not the brighter notes of community sounded just above, but views of solidarity, social context, norms and community gone wrong, and desperate efforts to right them. In short we find in much narrated history that the ethics and purposeful action we have examined so far can be blasted to pieces by pitiless historical incident and outcome. No doubt we find this in Holocaust histories, and it is also a central theme of James M. McPherson's Battle Cry of Freedom, with his examination of the internecine carnage that marked the American Civil War, when the deaths of large numbers of people from one municipality "could mean sudden calamity for family or neighborhood". (Note 31) As well, the war's tragic parricide is a theme of the book, with Robert E. Lee stating that "I cannot raise my hand against my birthplace, my home, my children," (Note 32) and McPherson also writing that the war in Kentucky was "literally a brothers' war. Four grandsons of Henry Clay fought for the Confederacy, and three others for the Union. One of Senator John J.
Crittenden's sons became a general in the Union army and the other a general in the Confederate army. The Kentucky-born wife of the president of the United States had four brothers and three brothers-in-law fighting for the South". (Note 33)

In a spectacular exemplification of these historical facts, Michael Shaara (1928-1988) in The Killer Angels, a novelization of the same period about which McPherson writes non-fiction, portrays the relationship of Generals Lewis Addison Armistead and Winfield Scott Hancock. The two had been bosom friends and fought together in the United States Army before southern secession pulled them apart, and they found themselves facing each other across the lines at the Battle of Gettysburg. Armistead blanched at the possibility that he would be required to fire on his dear mate. After General James Longstreet's loss at Little Round Top on July 2, and as the southerners are preparing for the next day's massed charge at the Union center, Armistead and Longstreet confer: "'You hear anything of Win Hancock?'" Armistead asks "old Pete". Longstreet answers with darkly bemused resignation, "'Ran into him today. He's over that way, a mile or so.'" (Note 34) As Armistead recalls the news three years before that the Union was breaking up, and the various officers were realizing they could end up antagonists, he continues of Hancock:

"Well, the man was a brother to me. You remember. Toward the end of the evening…it got rough. We all began, well, you know, there were a lot of tears. Well, I was crying, and I went up to Win and I took him by the shoulder and I said, 'Win, so help me, if I ever lift a hand against you, may God strike me dead.' I've not seen him since. I haven't been on the same field with him, thank God.

It…troubles me to think on it. Can't leave the fight of course. But I think about it. I meant it as a vow, you see. You understand, Pete?"
(Note 35) Armistead and Hancock's relationship, though truthful and genuinely revealing of the tragic fratricide of the war, is not addressed by McPherson, and we are fortunate to have Shaara's truth-cum-fiction examination. (Note 36)

Another example from Remarque-of the destruction of the very intersubjectivity that is a principal source of historicality and historical consciousness, and of a superhuman effort to reconstruct it-is powerful. In his depiction of the encounter of Paul Bäumer and Gérard Duval in the trenches in World War I, Remarque orders a brilliant tableau of everything we hope history will not become. As Paul sinks to the ground to protect himself from machine gun fire, another body leaps into the trench, and Paul strikes "madly home" at the man with his knife, immobilizing him. (Note 37) In his boiling wrath, he rages at the slumped, murmuring figure, "I want to stop his mouth, stuff it with earth, stab him again, he must be quiet, he is betraying me". (Note 38) He spends the night with the unconscious man, and in the morning, when he sees he has opened his eyes, with "an extraordinary expression of flight," his humanity returns and he whispers to him, "'No, no. I want to help you, Comrade, camerade, camerade, camerade.'" (Note 39) Paul laments that "This is the first man I have killed with my hands," and he tries to restore the injured man's status as a fellow human being, even noting that his eyes are "brown, his hair is black and a bit curly at the sides". (Note 40) Paul tries to enlarge and repair this battered intersubjectivity and thinks, "No doubt his wife still thinks of him," and then "Does she belong to me now? Perhaps by this act she becomes mine". (Note 41) He continues of the astonishingly thin line between constructive affinity and fearful loss, "I wish Kantorek was sitting here beside me. If my mother could see me … if Kemmerich's leg had been six inches to the right; if Haie Westhus had bent his back three inches further forward-". (Note 42) Bäumer finds photographs of the dying man's wife and a girl in his wallet, and some letters; he tries to decipher the French, and then enfolds himself in what he hopes can be a germinal beneficence: "This dead man is bound up with my life," he thinks, "therefore I must do everything, promise everything, in order to save myself; I swear blindly that I mean to live only for his sake and his family". (Note 43) In the somber denouement to this scene, Paul finds the man's name and tries to renovate the now-ravaged life connections, fantasizing an intersubjective traversal and union: "I have killed the printer, Gérard Duval. I must be a printer". (Note 44) The grim consummation of experiences like this-and the reader no doubt knows they veritably saturate All Quiet on the Western Front-is that Bäumer and his fellows "are forlorn like children, and experienced like old men, we are crude and sorrowful and superficial-I believe we are lost". (Note 45)

History comes alive in Remarque's text, and here it speaks to us across fictional and non-fictional limits-"more vivid, more immersive than a work of history," as Marie-Laure Ryan has written. (Note 46) In penetrating and disturbing ways Remarque's reporting casts in a different light the above discussion of community, social processes, a given humanitarian "mission and duty," the imaginative placement of oneself in another's shoes, irreducible social practices, social contexts, forms of solidarity, shared action, and a now-seemingly vain "celebration of the life of a civilization". Such a dialog is no
doubt necessary, for surely these are experiences we do not want to find ourselves "doomed to repeat". (Note 47)

In sum, we see in the above examples common filaments of historical apprehension, meaning and interpretation running like threads through an "arras web" (to borrow from Hayden White) of fictional and non-fictional historical literatures, becoming, in light of Wolfgang Iser's (1926-2007) brilliant literary analysis, a true "product of interconnection," a "referential field" with "viewpoints switching between perspective segments". (Note 48) These filaments interweave within and without historical texts, with common voicings and analyses found first at textual levels, and from there into readers' worlds.

To continue this discussion, I want to further link my "ethic" to the immediately related discipline of "ethics" and the adjective "ethical"-in sum the adjuratory, affective admixture with all of its associated experience, deliberation, interaction, obligation, warrant, estimation, consent and evaluation. These factors and functions constitute the pith of the tenets, codes and customs used to formulate and condition advice and consent in the steering and orderly interaction of societies, as well as virtuous individual behavior, humanistic concern, and constructively principled conscious awareness. In short, the community/compositional model I am describing is a moral universe, with all of these factors the veritable "ethic" of my "aesthetics ethic". Hayden White sums up these transacting ideas this way:

The historical past is "ethical" in that its subject-matter (violence, loss, absence, the event, death) arouses in us the kinds of ambivalent feelings, about ourselves as well as about the "other," that appear in situations requiring choice and engagement in existentially determining ways. In order to deal with these kinds of events, which interest or should interest modern publics, appeal should be made to ethically rich traditions of literary expression. (Note 49)

As you read the following, keep in mind that without question the best historians and historical novelists are bound by a deep commitment to morals and ethical standards stemming from personal probity, professional standards, and, as we have discussed, performance of constructive inter-communicative tasks within the communities they are part of. Admittedly, there can be quite a bit of freedom to bend these moral rules and ethical principles, but the essential truths remain, particularly for the "best" writers. As William Styron (1925-2006) wrote of The Confessions of Nat Turner, "the reader may wish to draw a moral from this narrative"-and we may say the same of many another historical novel and factual history, as we shall see in the following. (Note 50)

At a high aesthetic level we can read these conceptions down into communal (historical) experience and communicative endeavor (the narrative, the aesthetic). Recall that the word moral simply means "custom," which I offer we may interpret as sets of decision-making practices and outcomes in human communities, as described just above. Hayden White again folds moral ingredients into the compote of historical narrative when he writes that "If every fully realized story, however we define that familiar but conceptually elusive entity, is a kind of allegory, points to a moral, or endows events, whether real or imaginary, with a significance that they do not possess as mere sequence, then it seems possible to conclude that every historical narrative has as its latent or manifest
purpose the desire to moralize the events of which it treats". (Note 51) John Tosh, meanwhile, pragmatically reminds us that "historical interpretation is a matter of value judgments, moulded to a greater or lesser degree by moral and political attitudes," (Note 52) while William Cronon writes that "our historical narratives … remain focused on a human struggle over values," and that "Within the field of our historical narratives we too-as narrators-are moral agents". (Note 53) Charles Maier, Leverett Saltonstall Professor of History, Harvard University, in a similar moral/narrative move linked to aesthetic contours of composition, comments on the vitally important narrative interpretation of moral and ethical issues when he posits the importance of "moral narratives" as overall organizing paradigms in historical writing. Such narratives importantly adumbrate, underlie and guide both historical reflection and apprehension, and moral decision-making and action in human communities (decision-making that is in large part informed by these historical narratives). Frank Ankersmit deepens this analysis when he notes that for historians to attempt to "cut themselves out of the moral continuum" (by adhering to a disinterested positivist stance in their narrative interpretations) is to perform "a gesture of subjectivity of truly monstrous proportions". (Note 54)

To sum up, in a moral/ethical and aesthetic milieu like that presented here, historical writers find they can "labor, thirsting for light upon the situation which confronts them," and then "emerge filled with joy when clarity is achieved, to experience the dissolution of the anxiety of the moral consciousness into the serenity of truth". (Note 55) If a bit high-flown, thoughts like these express what I think are the valuable philosophical and pragmatic foundations of the aesthetics ethic, ultimately yielding a constructive and applicable corpus of historical writing.

Like Jürgen Straub, Dewey called these varied elements a "storehouse of resources," (Note 56) while F.A. Olafson (emeritus professor of philosophy, University of California, San Diego) called this stockpile "a corpus of norms, interpretive principles, and background beliefs of a great variety of kind". (Note 57) Jürgen Habermas referred to "the lifeworld as represented by a culturally transmitted and linguistically organized stock of interpretative patterns," (Note 58) and Noël Bonneuil (Institut National des Études Démographiques and Ecole des Hautes Études en Sciences Sociales, Paris) writes, with a dash of temporal apprehension, of this decisional/historical/aesthetic universe that "people do make decisions under the pressure of present or anticipated constraints, and thus permanently modify their own history, their 'trajectory' in the space of possible states. … Such decisions yield attainable states satisfactory to the group; technically speaking, they are the decisions, if they exist, that drive the group within the boundaries of the set of survival constraints". (Note 59) All of these factors and circumstances I think we may interpret as "customary" (moral) in human communities, and include them within the aesthetics ethic.
In terms of historical fiction, William Styron's The Confessions of Nat Turner is instructive. Styron's work is not only a brilliant reconstruction of historical reality in early nineteenth century slaveholding Virginia and the bloody slave rebellion led by Nat Turner, but also a sustained attack on the institution of slavery and the barbaric treatment and inhuman devaluation of black people in the United States. I think that the book's moral and ethical themes can be interpreted not only as windows onto the past, but also as decidedly hortatory, in terms of their applicability to the present and future (from the 1960s, when the book was published, during the height of civil rights activism in the U.S., and onward). In these ways, the book "reproduces the much more complex and ramifying totality with historical faithfulness". (Note 60) In Styron's work, Nat Turner's white owner Samuel Turner reviles slave-owning humanity, raging against their brutality and denouncing them as nothing more than vermin, and establishing primary anti-slavery and anti-degradation themes of the novel:

"Surely mankind has yet to be born. Surely this is true! For only something blind and uncomprehending could exist in such a mean conjunction with its own flesh, its own kind. How else account for such faltering, clumsy, hateful cruelty? Even the possums and the skunks know better! Even the weasels and the meadow mice have a natural regard for their own blood and kin. Only the insects are low enough to do the low things that people do-like those ants that swarm on poplars in the summertime, greedily husbanding little green aphids for the honeydew they secrete. Yes, it could be that mankind has yet to be born". (Note 61)

Genuine historical materials covering the slavery era in the United States-historiography, memoirs, letters and diaries, journalism accounts-have taken similarly denunciatory moral positions. Charles Ball wrote in his Slavery in the United States: A Narrative of the Life and Adventures of Charles Ball, a Black Man that "the entire white population is leagued together by a common bond of the most sordid interest, in the torture and oppression of the poor descendents of Africa", (Note 62) and he described his life as "one long waste, barren desert, of cheerless, hopeless, lifeless slavery; to be varied only by the pangs of hunger, and the stings of the lash". (Note 63) Nat Turner in his confession referred to white people as "the Serpent," (Note 64) while in his narrative Frederick Douglass asked "why have these wicked men the power thus to trample upon our rights, and to insult our feelings?" (Note 65) In McPherson's Battle Cry of Freedom-as often as not viewed as a straightforward, balanced, veritably "scientific" historical narrative-the immorality of slavery and the treatment of blacks, as well as other historical data, are also moralistically conveyed through carefully chosen diction and imagery: Slavery was a "cancer" (Note 66) in the U.S.
South, the floundering southern economy functioned like "Alice in Wonderland," (Note 67) and the hellion avenger John Brown had "the glint of a Biblical warrior in his eye". (Note 68) McPherson writes how Harriet Beecher Stowe-who had "breathed the doctrinal air of sin, guilt, atonement, and salvation since childhood"-condemned slavery in her influential Uncle Tom's Cabin, but then he himself conjectures: "or perhaps it was God" who did so. (Note 69) This is colourful, aestheticized moralism at work in straight historiography-but this should not surprise us. McPherson even challenges the reader to come down on one side or the other of one of the ultimate moral questions of the Civil War when he asks in the conclusion of the book, "Was the liberation of four million slaves and the preservation of the Union worth the cost of more than 620,000 dead? That question too will probably never cease to be debated". (Note 70)

Another way that moralism is conveyed by historical writers in this area is the reverse of what we have seen: through the presentation of the arguments of pro-slavery advocates-arguments that we denounce for their ludicrous posturing, illogicality, and hateful bias. In Battle Cry of Freedom, McPherson writes how pro-slavery defenders wrote and spoke of the manifold "blessings" of slavery-it had "civilized African savages and provided them with cradle-to-grave security," (Note 71) relieved whites of menial labour of all kinds, stabilized necessary and admirable class and caste systems, and created a refined upper class of Southern gentry who added much to American culture. Slavery had, in a word, done no less than created "a most safe and stable basis for free institutions in the world". (Note 72) To compare again to historical fiction, Styron put some of the uglier pro-slavery arguments in the mouths of characters in his novel, and we feel disgust with their odious chicanery. "My brother is as sentimental as an old she-hound," says Benjamin Turner after his brother's anti-slavery argument. (Note 73) "He believes slaves are capable of all kinds of improvement. That you can take a bunch of darkies and turn them into shop-owners and sea captains and opera impresarios and army generals and Christ knows what all. I say differently. I do not believe in beating a darky. I do not believe, either, in beating a dog or a horse. If you wish my belief … my belief is that a darky is an animal with the brain of a human child and his only value is the work you can get out of him by intimidation, cajolery, and threat". (Note 74) Similarly, the nasty ruminations and justifications of slavery by Nat Turner's legal adviser, Thomas Gray, yield the same results, with Gray at one point listing a number of slaves captured after the Turner revolt who had not been hanged, then announcing the simon-pure propriety of southern society, and topping it off with a detestable boast: "Dad-burned mealy-mouthed abolitionists say we don't show justice. Well, we do. Justice! That's how come nigger slavery's going to last a thousand years". (Note 75) In a similar light, Styron presents other then-current arguments about these issues, as when Gray considers the "meddlin' and pryin'" of non-violent Quakers "and other such moralistically dishonest detractors" who "so ignorantly decried" slavery's inherent "benevolence". (Note 76) To continue, Styron condemns, by way of the voice of Nat Turner, the white people of Virginia, who were "reptilian in spirit," and who mete out to blacks "blistering toil and deprivation, slights and slurs and
insults, beatings, chains, exile from beloved kin". (Note 77) In a deft narrative touch that allows his words to be read as either the thoughts of Nat Turner or as the exposition of an omniscient narrator, Styron also wrote of "the white man's wiles, his duplicity, his greediness, and his ultimate depravity". (Note 78) Those needing confirmation about whether or not this is "history"-and not simply the made-up fantasies of an over-imaginative writer-may simply refer to the works of McPherson, Ball, Turner and Douglass.

In another example, Richard Hofstadter, in The American Political Tradition, poses a beautifully oblique and erudite moral examination and critique of the thought of the American founding fathers. In chapter 1, "The Founding Fathers: An Age of Realism," he observes that these political men could ambiguously be "starkly reactionary" on the one hand, but possess "a statesmanlike sense of moderation" on the other. (Note 79) More critically, Hofstadter notes that "From a humanistic standpoint there is a serious dilemma in the philosophy of the Fathers, which derives from their conception of man," which was contradictory in that "while they thought self-interest the most dangerous and unbrookable quality of man, they necessarily underwrote it". (Note 80) For the Founding Fathers, mercantilist to the core, their conception of the best state sought less to shape it in a humanistic or even particularly fair-minded way than simply to "make it less murderous"-hardly a high-minded moral stance. (Note 81) Ultimately, and bringing the argument into the present day, "Modern humanistic thinkers who seek for a means by which society may transcend eternal conflict … can expect no answer" in the philosophy of the Founding Fathers. (Note 82) I can just about see a Gore Vidal or a Norman Mailer going to town on complex ideas like these, and fashioning them into challenging, creatively re-imagined fictional interpretations of received history….
To pull my focus back a bit, I turn to Hans Robert Jauss, who skillfully links a moral/ethical imperative to aesthetics and associated communal and communicative conceptions, drawing my aesthetics ethic perhaps into his aesthetics of reception when he writes, "The relationship between literature and reader can actualize itself in the sensorial realm as an incitement to aesthetic perception as well as in the ethical realm as a summons to moral reflection". (Note 83) And finally, Frank Ankersmit commandingly writes of the ties that bind historical writers and their creations to their communities, their sensibilities, and deeper and wider moral and ethical obligations:

"L'histoire se fait avec des documents" ("history is made with documents")-indeed, but also with historians. How historians relate to their own time, what are their innermost feelings and experiences, what have been the decisive facts in their own lives-these are all things that should not be distrusted and feared as threats to so-called historical subjectivity but cherished as historians' most crucial asset in their effort to penetrate the mysteries of the past. … They are absolutely indispensable for historians' being open to the experience of the past, which is, in turn, the bridge to the past for both the historians and their readers. The historians' own sentiments, their convictions and feelings, provide them with the fertile ground on which historical experience can flourish. (Note 84)

As the battery of thinkers and writers cited above makes clear, moral and ethical stipulations evince the very structure and content of human interaction and historicity. These analyses, I think, substantiate my aesthetics ethic, and we can see how they deeply condition the very lifeblood coursing through historical texts, with such teeming ideas and comment showing that "our moral judgments are made within a conceptual framework which is itself the creation of history". (Note 85)

This discussion has constituted a deep and wide experiential and analytical channel, packed with complexity and the occasional tentative hypothesis and/or speculative theorizing. It is my hope, however, that thoughts like these confirm my view that the aesthetics ethic comprises historical experience, awareness and communication evinced in and by individuals and communities, with all of their embedded dependencies, obligations, interfaces and aesthetic senses. This is, to be sure, a lot more than wie es eigentlich gewesen ("how it actually was"). Our target in this analysis is a moving one, and rather than a linear analysis, our strategy is veritably a climbing spiral staircase by way of which we will make our way forward and higher. Yes, we may find ourselves a bit dizzy at times, but not, I think, vertiginous. With this said, I turn to a more specific and pragmatic examination of how community and intersubjective considerations and factors within the aesthetics ethic can enter into historical analysis and narrative, yielding more complete, accurate and constructive narratives of historical experience: Daniel Wickberg's Histories of Sensibilities.
The Aesthetics Ethic and Histories of Sensibilities

DANIEL WICKBERG'S FOCUS on the importance of "histories of sensibilities" provides, I believe, an ideal platform for historical writing in terms of key elements of my aesthetics ethic. Wickberg, associate professor of Historical Studies/History of Ideas at the University of Texas at Dallas, discusses and defines "sensibilities" as "modes of perception and feeling, the terms and forms in which objects were conceived, experienced, and represented in the past," as well as "ideas, emotions, beliefs, values". (Note 86) With thoughts like these in support of his ideas, Wickberg goes on to provide meat to his theoretical bones when he writes of the importance of recovering and relating history by way of individual and group sensibilities. These terms and ideas can be directly linked to the aesthetics ethic and the other ideas and analysis I have cited, above and going forward.

Wickberg's model suggests the importance of what E.H. Carr called "the historian's need of imaginative understanding of the minds of the people with whom he is dealing" (Note 87)-in short, a proper and more complete, textured, varied and intricate picture of past historical experience (by way of "walking in their shoes"). Karsten Stueber, sounding exactly like Wickberg, writes that "in order to be able to grasp agents' thoughts as reasons for their actions we have to reenact their thoughts, beliefs, and desires in our own mind while being simultaneously appropriately sensitive to relevant differences between ourselves and the people whose actions we want to understand". (Note 88) Even further, historical writers' own sensibilities can be linked up to those of historical subjects and objects, yielding a transaction of varied points of view and characterizations, and ultimately bringing the past into view by way of a richly transgressive blend, which becomes "historians' most crucial asset in their effort to penetrate the mysteries of the past". (Note 89)

Wickberg's framework links back to the vagaries, complexities, subjunctivity and contingency of conscious lived experience within the aesthetics ethic, which are keys not only to historical apprehension, but are also experiential springs that skilled narrativists may be particularly apt at tapping into and fashioning into historical representation. Though to be sure both fictional and non-fictional writers can employ these ideas and methods, novels, even more than historiography, may be the optimal platform for presenting and representing the elaborate features of a people's sensibilities-as Doris Lessing wrote, "Novels give you a matrix of emotions, give you the flavour of a time in a way formal history cannot". (Note 90)

To be sure, human sensibilities, mentalities, intellection and consciousness are not easily accessible or crystal clear "sources" with which to interpret historical experience, and are "not organized in archives and conveniently visible for research purposes". (Note 91) Interestingly, in this respect these mindful factors may hearken to the "absences" that theory tells us pepper the past-and nobody is cowed by these factors; in fact, in terms of history in all its heterogeneous glory, they may, if at times ambiguously, provide something of a high road toward historical understanding. In any event, a people's sensibilities seem ideally to comprise the important points and factors we have been examining within the aesthetics ethic, and they become "a concept that lets us dig beneath the social actions and apparent
content of sources to the ground upon which those sources stand: the emotional, intellectual, aesthetic, and moral dispositions of the persons who created them". (Note 92)

Modern historians have recognized the value of an approach like Wickberg's. Perry Miller (1905-1963), in his masterful The New England Mind: The Seventeenth Century, emphasized that he was "seeking to delineate the inner core of Puritan sensibility," as well as the importance of describing "the temperamental bias behind Puritan thought". (Note 93) Kenneth Stampp (1912-2009) noted in The Peculiar Institution that "since there are few reliable records of what went on in the minds of slaves, one can only infer their thoughts and feelings from their behavior, that of their masters, and the logic of their situation". (Note 94) Paul Cohen in his History in Three Keys: The Boxers as Event, Experience, and Myth wrote that part 2 of his tome would "delve into certain facets of the experiential context" of the historical subjects, the "thought, feelings, and behavior of the immediate participants," "the motivational consciousness of the experiencer" of past events, and in turn the way they "made sense of world". (Note 95) We are all, Cohen writes, "experiencers ourselves, not of the past but of a past," and thus we can see the importance of individual experience merging into what Cohen calls "coalesced" historical intersubjectivity. (Note 96)

Examples like these show that Wickberg is not alone in his thinking. Paul Ricoeur wrote of a "complex interplay of superimposed intentionalities" in history and historical writing, (Note 97) and Lawrence Stone has written that historiography is now focusing on "man in circumstances" as opposed to "the circumstances surrounding man". (Note 98) Stone continues that historiography has seen a "growth of interest in feelings, emotions, behavior patterns, values, and states of mind" of peoples of the past, and adds that we need "to discover what was going on inside people's heads in the past, and what it was like to live in the past". (Note 99) Stone concludes with an important point in terms of our examination, noting that such questions "inevitably lead back to the use of narrative". (Note 100) In a related point, Peter Burke (Life Fellow, Emeritus Professor of Cultural History, Emmanuel College) highlights the "micronarrative," the "telling of a story about ordinary people in their local setting" (Note 101)-a model that has become critical in modern historical writing.

It is through these varied and delightfully intricate channels that "more and more people emerge into social and political consciousness, become aware of their respective groups as historical entities having a past and a future, and enter fully into history". (Note 102) Thus, the genuine experiences of past peoples can be located, examined and portrayed in these histories, with them envisioning and apprehending their lives within greater unfolding historical movement and change, beginning in the past, proceeding into the present, and portending future experience-in sum, the essence of historical temporality and consciousness.
Wickberg's valuable analysis provides a key understanding of history that links individual and community consciousness and experience as they interact and play their roles in the depth and breadth of historical experience, lived and narrated. I emphasize that these ideas go a long way toward showing us a best "thick" way to both show and tell history in all its profound, associative, synthetic and delightfully piebald abundance. My own approach may be less "history of sensibilities" than "history as sensibility," with the various terms-emotion, intellect, morality, ideas, beliefs, values, points of view, feelings, dispositions, perception, confidences, assumptions-and their associated behaviors, responses, interpretations, processes and undertakings emerging out of and then back into my aesthetics ethic, and then in turn linking to human conscious experience as a main conduit in the flow of historical experience, understanding and writing.

The intricacies I have examined comprise an almost endlessly granular, variegated, delightfully indeterminate human existence, an elaborate triptych, comprising that "corpus of norms" referred to by Olafson, in sum an "element of tradition," which "any society builds up over time and which it brings to new situations that arise and which are interpreted for purposes of action in terms of the affinities they show to one or another of the categories that are the precipitate of past experience". (Note 103) Such dimensions are the true marrow of lived historical experience, an "extended historicity" that is "of the greatest importance for a historian," and which seem to burst at the seams with narrative possibilities. (Note 104) Benedetto Croce adds depth and complexity to these descriptions, capturing and describing the origins and functions of our richly substantive and wholesome historical culture (culture understood as "cultivation of living material in prepared nutrient media" indeed seems to be the ideal word to use), (Note 105) when he, again temporally, writes:

Historical culture has for its object the keeping alive of the consciousness which human society has of its own past, that is, of its present, that is, of itself, and to furnish it with what is always required in the choice of the paths it is to follow, and to keep in readiness for it whatever may be useful in this way, in the future. (Note 106)

All of the above description, explication and analysis within the bounds of the proposed aesthetics ethic are the veritable source and ground of both fictional and non-fictional historical narrative. The model we have discussed is a living, breathing rhizome that constitutes a social/community framework with virtually universal aesthetic factors evincing the structural support and causeways of significance of a human "aesthetic gaze". Historical writers function in this environment in important ways, accessing manifold aesthetic/artful/compositional methods and features to be employed in their narratives. It is indeed largely for these reasons that, as Ankersmit has written, "the history of historical writing is … in the final analysis, a chapter in the book of the history of aesthetics". (Note 107) And I think that these channels of mood, feeling and experience expand outward magnificently, and may be the source of Ankersmit's experienced historical "sublimity". Ankersmit has informed us in this light that "moods and feelings" are the veritable "locus of" and "have a natural affinity with" historical experience, and that "one might well say that sublime historical
experience preferably makes itself felt in these moods and feelings". (Note 108) All of this must be the source of what Huizinga, in the same vein, called "the grace of historical experience". (Note 109)

Conclusion

IN TERMS OF THE aesthetics ethic, I hope I have effectively illustrated the important ways that our subjects condition historical apprehension, and shown how aesthetic experience can open our eyes to new depths in historical consciousness and composition. Some will accuse me of totalizing, but I think not, and I hope that our examination has highlighted not a few aporias, incertitudes and cognitive dissonances, some byways, shortcuts, roundabouts and culs-de-sac, a few useful on-ramps and off-ramps, with all of this complexity and retroflexion suggesting that ours is anything but a simple, straightforward model. In any case, and whatever the neatness of my model, I hope that I have shown some of the narrative essences, phenomenological sources and structural members common to fictional and non-fictional history writing, the indices of which lead us into more varied areas of experience, communication and interaction, and enable us to apprehend in more accurate and integrated ways the aggregate narrative formations of history and narrative. Perhaps, in this world, the aesthetic in life is "no intruder from without," but is virtually a "clarified and intensified development of traits that belong to every normally complete experience". (Note 110)

Ours has been a complex world, a world of perhaps infinite possibility, in which history happens, and out of which history is written, a contested zone between science and art, objectivity and subjectivity, reality and… irreality (to borrow from Paul Ricoeur). To conclude, such breadth leads us to Wolfgang Iser, whose richly inventive analysis captures the contours of our discussion at a wonderfully elevated level. He writes of transacting fictional and non-fictional histories with interwoven filaments of meaning and configuration, speaking with a common interpretive voice, found first at textual levels, and from there into responsive "reading moments". (Note 111) Iser seems almost to have forecast my aesthetics ethic when he wrote the following-but first I should give the reader some background. In his reference to "segments" in the following, Iser had in mind his textual blanks, which he had modified into "vacancies," which are "nonthematic segments within the referential field of the wandering viewpoint". (Note 112) These vacancies are "important guiding devices for building up the aesthetic object because they condition the reader's view of the new theme, which in turn conditions his view of the previous themes". (Note 113) This said, Iser's text continues, I hope redolently of the attributes of my aesthetics ethic, that this environment and communicative universe "enables the reader to combine segments into a field by reciprocal modification, to form positions from those fields, and then to adapt each position to its successor and predecessors in a process that ultimately transforms the textual perspectives, through a whole range of alternating themes and background relationships, into the aesthetic object of the text". (Note 114) I thank readers for their attention, and, to reverse our focus, I wish them wonderful futures.
(1986) As noted, the sum of my arguments extends beyond the positions that will be examined in this paper, and will be taken up in future analyses. For example, details of narrative human consciousness, frequently referred to in this paper, are not fully addressed. Additionally, specific aesthetic contours will not be examined in detail.

Note 7. The important term "transaction" is from John Dewey (1859-1952) and Arthur F. Bentley (1870-1957) in their Knowing and the Known (Boston: Beacon Press, 1949). Transactional analysis for Dewey and Bentley allows for "the seeing together, when research requires it, of what before had been seen in separations and held severally apart" (112). The two philosophers wrote that "The transactional is in fact that point of view which systematically proceeds upon the ground that knowing is co-operative and as such is integral with communication. By its own processes it is allied with the postulational. It demands that statements be made as descriptions of events in terms of durations in time and areas in space" (vi).

Note 12. Naysayers will say that the answers and interpretations constructed by historical novelists are not definitive, and of course this is true-as it is true with the work of historians (a glance at how many historians have been accused of misrepresenting the facts, or getting the data wrong, attests to this). Such conditions are well understood by sophisticated readers, who know that in any historical writing the accuracy of information and the credibility of interpretations must be weighed, examined and judged. Not for the last time do we encounter the same strictures and requirements in historiography and historical novelization.

Note 13. Jerzy Topolski, "The Role of Logic and Aesthetics in Constructing Narrative Wholes in Historiography," History and Theory, Vol. 38, No. 2 (May, 1999), pp. 198-210.

Note 47. In this way the above plot lines become useful and advantageous. By showing the symptoms of what we do not want to suffer, they may become a bitter antidote to the disease ("hair of the dog that bit you"), and point toward constructive community and affinity. Additionally, however, we could look at these ideas from another perspective, and rather than historical writers who report the erosion of constructive historical intersubjectivity and community ethics, we could examine those who effect such erosion. I mean here historical writers such as those who deny the Holocaust, or distort and revise history through lenses of extremist, inhumane and antagonistic political, cultural or ethical views.

These contours include: dense temporality; subject-, object- and intersubjectivity; intertextuality; narrative form and fettle; semantic/syntactic complexity; historical, moral, argumentative and aesthetic rhetoric; contingency and modality; the effort toward becoming; the function and results of aesthetic selection during composition; and the fluid construction and apprehension of truth in historical narrative. Here my focus will be limited, and I will endeavor to explain the aesthetics ethic, proper, as completely as possible.

Note 2. Johan Huizinga, "Historical Conceptualization," in The Varieties of History: From Voltaire to the Present, edited, selected and introduced by Fritz Stern (London and Basingstoke: MacMillan and Co. Ltd. Copyright The World Publishing Company, 1956, 1970), 291.

Note 3. Ibid., 298.

Note 4. Ibid., 299, 300.

Note 5. F.R.
Ankersmit, Sublime Historical Experience (Palo Alto: Stanford University Press, 2005), 96, emphasis in original.

Note 6. Jürgen Straub, Narration, Identity, and Historical Consciousness (New York: Berghahn Books, 2005), 52.
Self-Dual Gravity

Self-dual gravity is a diffeomorphism invariant theory in four dimensions that describes two propagating polarisations of the graviton and has a negative mass dimension coupling constant. Nevertheless, this theory is not only renormalisable but quantum finite, as we explain. We also collect various facts about self-dual gravity that are scattered across the literature.

Introduction

Self-dual gravity is a theory of gravity in four dimensions whose only solutions are Einstein metrics with a half of the Weyl curvature vanishing. Such metrics are also known in the literature as gravitational instantons. Self-dual gravity is analogous to self-dual Yang-Mills theory, the latter being a four-dimensional gauge theory whose only solutions are connections with a half of the curvature vanishing.

One purpose of this paper is to collect known facts about self-dual gravity. These are scattered across the literature spanning the last two decades. Some are reasonably well-known. Some others are contained in recent works of this author, in various degrees of explicitness, and are known less. We hope that collecting all these in one place will create a useful resource on the topic.

The other purpose of this text is to attract the attention of the community to the fact that self-dual gravity is an interacting theory that describes two propagating degrees of freedom (two polarisations of the graviton), is diffeomorphism invariant, has a coupling constant of negative mass dimension, and in spite of all these trouble-guaranteeing features gives rise to a perfectly well-behaved quantum theory. This theory is finite: there are no quantum divergences. So, it gives the only known example of a consistent theory of quantum pure gravity in four dimensions. Superstring theory, when appropriately compactified, does give rise to a consistent theory of quantum gravity in four dimensions, but it has (infinitely) many extra fields.

Of course, one could object that self-dual gravity can only describe metrics of either Riemannian or split signature (in Lorentzian signature vanishing of half of the Weyl implies vanishing of it all), and hence is not a physical theory.¹ But one should recall that the argument of power-counting non-renormalisability of GR has little to do with the metric signature; it is only based on the dimensionality of the coupling constant. In fact, all loop calculations that confirm non-renormalisability are done in the Euclidean setting. So, the striking fact here is that, as any other theory of gravity, self-dual gravity has a negative mass dimension coupling constant, but in spite of this manages to be not just renormalisable, but quantum finite.

The mechanism for how this is possible is instructive and can be explained in simple terms already in the Introduction. First, we shall see that the Lagrangian of self-dual gravity treats the two polarisations of the graviton on a different footing. Thus, let us choose our theory to describe metrics with vanishing self-dual (SD) part of Weyl. It would be more natural to call this theory anti-self-dual (ASD) gravity, because it is the ASD part of Weyl that is allowed to be non-zero, but we shall continue, for brevity, to refer to it as the SD theory. Then there is a polarisation of the graviton that respects the condition that the SD part of Weyl is zero. It is natural to refer to this polarisation as the negative one, because this graviton will have only the ASD part $W^-$ of Weyl non-zero.
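To make the helicity conventions of the preceding paragraph concrete, the following is a minimal sketch (standard four-dimensional facts, not a quotation from this paper's equations) of the splitting of the Weyl tensor and of the resulting notion of an instanton:

\[
W = W^+ + W^- , \qquad W^\pm = \tfrac{1}{2}\left( W \pm {*W} \right) ,
\]
\[
\text{gravitational instanton:} \qquad R_{\mu\nu} = \Lambda\, g_{\mu\nu} , \qquad W^+ = 0 ,
\]

where $*$ is the Hodge star, which squares to $+1$ on 2-forms in both Riemannian and split signatures, so that the decomposition is real. Linearised perturbations preserving $W^+ = 0$ are then the negative helicity gravitons in the terminology adopted here.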
As we shall see below, the Lagrangian of SD gravity is non-linear in the field that describes the negative helicity gravitons. It does, however, contain another field, used to describe the other (positive) helicity polarisation, and the Lagrangian is linear in this field. It is then not hard to see that a theory described by a Lagrangian of this sort is one-loop exact. Thus, at tree level it is only possible to construct diagrams with one positive external leg and as many negative legs as one desires. At one-loop level it is only possible to have diagrams with negative external legs. It is not possible to construct diagrams with more than one loop.

The theory being one-loop exact, the study of quantum divergences, if any, reduces to that at one loop. The one-loop effective action is given by the logarithm of the determinant of a certain first-order differential operator. This operator arises by linearising the theory around a fixed instanton background, and the effective action is a functional of the fields describing the background. For convenience of the computation, this operator can be squared and the determinant of the resulting second-order operator found by heat-kernel methods. As is usual in four dimensions, the logarithmic divergences, if any, are proportional to quantities constructed from the background curvature squared. One can then quickly convince oneself that on instanton backgrounds all possible quantities of this type reduce to topological invariants of the manifold. Thus, due to certain identities involving the curvature of instanton backgrounds, there are no possible counterterms and thus no divergences. This will be spelled out below.

The above argument can be compared to what happens with full GR at one loop [1]. In this case the possible logarithmic divergences are also integrals of curvature squared. One can then use the fact that the background is Einstein, as well as the identity expressing the Euler characteristic of the manifold as an integral of the curvature squared, to eliminate all divergences apart from the divergence proportional to the total volume of the space. This is the reason why quantum gravity is one-loop finite in flat space (or renormalisable on a constant curvature background). The additional simplification that occurs in the self-dual case is that the total volume of the manifold can also be expressed as a linear combination of the Euler characteristic and the signature of the 4-manifold. So, to summarise, in SD gravity there are no divergences (of the type that can contribute to the S-matrix) at one loop, and there are no higher loops, so the theory is quantum finite. This should be contrasted with full GR, which is one-loop finite in flat space [1], but requires the famous Goroff-Sagnotti counterterm [2] at two loops.

The paper is organised as follows. One of the striking facts about self-dual gravity is that it behaves in almost the same way as the better known self-dual YM theory. So, it is appropriate to start with a description of SDYM, which is what we do in Section 2. We describe the covariant formulation of this theory, a gauge-fixing, convenient for computations, that makes the arising operator of Dirac type, characterise the cubic interaction of this theory in terms of amplitudes, compute the Berends-Giele current, and show that all tree-level amplitudes for more than 3 particles vanish on shell. We also discuss the vanishing of quantum divergences and give the result for the one-loop scattering amplitudes.
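Before proceeding, the counterterm counting sketched above can be made explicit. The following is a sketch using the standard Gauss-Bonnet and Hirzebruch signature formulas; the normalisations are the textbook ones and are an assumption here, not taken from this paper:

\[
\chi(M) = \frac{1}{8\pi^2} \int_M \left( |W^+|^2 + |W^-|^2 - \tfrac{1}{2}\, |\mathrm{Ric}_0|^2 + \tfrac{1}{24}\, R^2 \right) , \qquad
\tau(M) = \frac{1}{12\pi^2} \int_M \left( |W^+|^2 - |W^-|^2 \right) .
\]

On an instanton background $\mathrm{Ric}_0 = 0$ and $W^+ = 0$, so $\int_M |W^-|^2 = -12\pi^2\, \tau(M)$ and, with $R = 4\Lambda$ constant, $\Lambda^2\, \mathrm{Vol}(M) \propto 2\chi(M) + 3\tau(M)$. Every available curvature-squared counterterm is thus a combination of topological invariants, which is the statement used in the argument above.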
We describe a relation to full YM, and sketch the twistor space description of SDYM. In Section 3 we proceed with self-dual gravity. We present the material in precisely the same order, to highlight the exact parallel between the two theories. We shall see that self-dual gravity behaves in exactly the same way as SDYM, the only differences being that SDGR is a bit more non-linear than SDYM (there is also a quartic vertex in gravity, while the only interaction of SDYM is cubic), and that SDGR has a negative mass dimension coupling constant. Thus, self-dual gravity follows the same pattern that has been emerging for the relation between full gravity and YM: in spite of its non-renormalisable interaction, gravity is much more closely related to Yang-Mills theory than could be anticipated by inspecting the corresponding Lagrangians. On this point, we refer the reader to works of Bern and collaborators, e.g. [3].

It is also worth pointing out that the formulation of SDGR on which this article is based, and which exhibits the strongest analogy with SDYM, is based on connections rather than metrics. The main rationale for considering the connection rather than the metric formulation is that the instanton condition, while second order in derivatives in the metric language (as a condition on the Riemann curvature), is a first-order condition at the level of connections. The price to pay for this is that the connection formulation requires a non-zero cosmological constant (of arbitrary sign).

Self-Dual Yang-Mills

We start this paper by describing what is known as the Chalmers-Siegel formulation [4] of self-dual Yang-Mills. We do this to showcase a strong analogy between self-dual Yang-Mills and self-dual gravity. SDYM in the covariant formulation [4] can be studied by the usual Feynman diagram techniques, and we describe a very useful way of gauge-fixing this theory. The kinetic operator of the gauge-fixed theory is just an appropriate Dirac operator. This gauge-fixing was used in [5], and also appears in the book [6], exercise VIB4.1c, in the context of the first-order formalism for full YM. We sketch the computation of the Berends-Giele current [7] and show that all tree-level amplitudes (on a trivial background) with more than 3 external legs vanish on shell. We explain why there are no quantum divergences in this theory, and state the result for the one-loop scattering amplitudes. The non-vanishing such amplitudes are those involving only negative helicity gluons. Bardeen [8] has suggested that the close relationship between the self-dual and full YM, as well as the integrability of the former, may be of help in understanding the latter theory. So far this idea has not been realised, possibly because no one has tried hard enough.

The theory

There are several non-covariant formulations of self-dual Yang-Mills; see [4] for a discussion of this. We will only present the covariant formulation, also from [4]. The Lagrangian of SDYM involves two fields. One is the usual YM connection field; the other can be referred to as the Lagrange multiplier field imposing the self-duality condition. The action is
\[
S[A, B^+] = \int \mathrm{tr}\, B^+ \wedge F . \qquad (1)
\]
Here $F = dA + A \wedge A$ is the curvature 2-form, and $B^+$ is a Lie algebra valued 2-form field that is required to be self-dual:
\[
{*B^+} = B^+ . \qquad (2)
\]
We work in either Riemannian or split signature. For both of these there is no imaginary unit in this formula. Trace everywhere stands for the matrix trace, with Lie algebra valued objects represented by matrices.
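As a quick check on the reconstructed form of the action (1), and anticipating the pair of field equations described next, the variation can be written out in one line (a standard computation under the conventions above):

\[
\delta S = \int \mathrm{tr} \left( \delta B^+ \wedge F + B^+ \wedge d_A \delta A \right)
         = \int \mathrm{tr} \left( \delta B^+ \wedge F - d_A B^+ \wedge \delta A \right) + \text{surface term} .
\]

Since $\delta B^+$ is constrained to be self-dual, the first term only sets to zero the self-dual part of $F$, while the second gives $d_A B^+ = 0$.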
The metric information enters (1) via the requirement (2) that $B^+$ is self-dual. The field equations are as follows. Varying the action with respect to $B^+$ we get
\[
F^+ = 0 , \qquad (3)
\]
which is the correct field equation for SDYM. It says that the field strength is anti-self-dual, which then implies the usual YM field equation $d_A^\mu F_{\mu\nu} = 0$. Connections satisfying (3) are called instantons. Field configurations satisfying (3) describe one (in our conventions negative) polarisation of the gluon. Varying with respect to the connection one gets
\[
d_A B^+ = 0 . \qquad (4)
\]
Here $d_A$ is the covariant derivative with respect to the gauge field. We will later identify this as the field equation for the other polarisation of the gluon.

Linearisation

The action (1) can be expanded around an arbitrary instanton background. Thus, let the background connection satisfy (3), while the background $B^+ = 0$. Denote the perturbations by $b \equiv \delta B^+$, $a \equiv \delta A$. We then have the following linearised Lagrangian
\[
\mathcal{L}^{(2)} = \mathrm{tr}\, b\, d_A a . \qquad (5)
\]
Here $d_A$ is the covariant derivative with respect to the background connection, and we have omitted the wedge product symbol for brevity. There is only the cubic interaction, and the interaction part of the Lagrangian reads
\[
\mathcal{L}^{(3)} = \mathrm{tr}\, b\, a\, a . \qquad (6)
\]

Linearisation around a non-trivial $B^+$ background

One can also take a more non-trivial background involving a non-zero $B^+$ satisfying (4), with $A$ satisfying (3). The interaction (6) is unchanged, but the kinetic term changes to
\[
\mathcal{L}^{(2)} = \mathrm{tr} \left( b\, d_A a + B^+ a\, a \right) , \qquad (7)
\]
the second term being a sort of mass term for the connection. This extra term changes the propagator of the theory, see below.

Spinor description

We will only give details for the $B^+ = 0$ backgrounds, making some comments about the more general case in 2.9. Given that $b$ is a self-dual two-form (with values in the Lie algebra), it is very convenient to write this object as a spinor. When this is done, one no longer has to keep in mind its self-dual property. Our spinor notations are explained in the Appendix.

We now translate (5), (6) into spinor notations. The connection perturbation field $a_\mu$ becomes an object $a_{MM'}$ (still Lie algebra valued). The self-dual 2-form field $b_{\mu\nu}$ becomes an object $b_{MN} = b_{(MN)}$ with a pair of symmetrised unprimed spinor indices. Thus, our self-dual field takes values in $S^2_+$. The free part of the Lagrangian becomes
\[
\mathcal{L}^{(2)} = \mathrm{tr}\, b^{MN} d_M{}^{M'} a_{N M'} . \qquad (8)
\]
Here $d_{MM'}$ is the operator of covariant derivative with respect to the background connection. The interaction becomes
\[
\mathcal{L}^{(3)} = \mathrm{tr}\, b^{MN} a_M{}^{M'} a_{N M'} . \qquad (9)
\]
We have absorbed any numerical coefficient that arises in the spinor translation into $b_{MN}$.

Gauge-fixing

The main idea of the gauge-fixing procedure introduced below is to combine the field $b_{MN}$ with the auxiliary field in the ghost sector, to obtain a Dirac operator. This was used in [5], but was also known to W. Siegel, see [6], exercise VIB4.1c. The first-order differential operator in (8) maps $S_+ \times S_-$ to $S^2_+$ and is thus degenerate. This degeneracy reflects the usual gauge invariance. To gauge fix it, we use the following gauge-fixing fermion
\[
\Psi = \mathrm{tr}\, \bar{c}\, d^{MM'} a_{MM'} , \qquad (10)
\]
where $\bar{c}$ is the anti-ghost and, as before, $d_{MM'}$ is the background covariant derivative. This is the usual gauge-fixing fermion for YM theory, except that we are using the Landau gauge rather than the Feynman gauge. The gauge-fixing Lagrangian is then
\[
\mathcal{L}_{\rm g.f.} = \mathrm{tr} \left( h_c\, d^{MM'} a_{MM'} - \bar{c}\, d^{MM'} D_{MM'} c \right) , \qquad (11)
\]
where $h_c$ is the ghost sector auxiliary field. As usual, the full covariant derivative $D_{MM'}$ (including the perturbation field) has appeared in the ghost term. What is convenient about the gauge-fixing (10) is that the first term in (11) can be combined with (8) by simply defining a new field
\[
\tilde{b}_{MN} = b_{MN} + \epsilon_{MN}\, h_c . \qquad (12)
\]
This new field takes values in $S_+ \times S_+$. Then (8) together with the first term in (11) take the same form as (8), but with $\tilde{b}$ in place of $b$.
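To see explicitly why the shift to $\tilde{b}$ absorbs the gauge-fixing term, note the following one-line check; it uses the reconstructed forms of (11) and (12) above, with the overall sign depending on the $\epsilon_{MN}$ conventions assumed:

\[
\tilde{b}^{MN} d_M{}^{M'} a_{N M'} = b^{MN} d_M{}^{M'} a_{N M'} + h_c\, \epsilon^{MN} d_M{}^{M'} a_{N M'}
= b^{MN} d_M{}^{M'} a_{N M'} \pm h_c\, d^{N M'} a_{N M'} .
\]

The antisymmetric ($\epsilon_{MN}$) part of $\tilde{b}$ is thus precisely the Lagrange multiplier imposing the Landau gauge condition, while the symmetric part is the original $b_{MN}$.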
We also remark that in (9) we can replace $b$ by $\tilde{b}$ if we add the symmetrisation on the $a$'s. We now drop the tilde on the $b$ for brevity, and write the full gauge-fixed Lagrangian as
\[
\mathcal{L} = \mathrm{tr} \left( b^{MN} d_M{}^{M'} a_{N M'} - \bar{c}\, d^{MM'} d_{MM'}\, c \right)
+ \mathrm{tr} \left( b^{MN} a_{(M}{}^{M'} a_{N) M'} - \bar{c}\, d^{MM'} [a_{MM'}, c] \right) . \qquad (13)
\]
The operator that appears in the bosonic kinetic part is just the (covariant) Dirac operator
\[
d_A : S_+ \otimes S_\pm \otimes \mathfrak{g} \to S_+ \otimes S_\mp \otimes \mathfrak{g} . \qquad (14)
\]
Here $\mathfrak{g}$ is the Lie algebra. This operator is elliptic and all the gauge has been fixed.

Amplitudes

A useful exercise is to characterise the Lagrangian of SDYM in terms of amplitudes. For doing this we pass to the split $(++--)$ signature, where null momenta as well as self-dual field configurations are possible. In this signature all spinor objects are real. We will also work around the trivial background connection (as well as $B^+ = 0$), as is appropriate in the amplitude context. The linearised (gauge-fixed) field equations are
\[
\partial_N{}^{M'} a_{M M'} = 0 , \qquad \partial^N{}_{M'}\, b_{MN} = 0 . \qquad (15)
\]
Note that these are just massless Dirac equations for "spinors" valued in $S_+ \times S_-$ and $S_+ \times S_+$ respectively. The Lie algebra index decouples in this linearised case. The Dirac operator acts on the second spinor index in each case. Applying the Dirac operator one more time, and using the fact that the Dirac operator squared is the Laplacian, we see (passing to momentum space) that the field equations imply that the momentum vector must be null. Such vectors, when written in spinor form, are products of two spinors of different types
\[
k_{MM'} = k_M\, k_{M'} . \qquad (16)
\]
Here $k_M$, $k_{M'}$ are real and independent (in the case of split signature) spinors. We can then immediately construct the polarisation vectors solving (15). The negative helicity polarisation vector is given by
\[
\epsilon^-_{MM'}(k) = \frac{q_M\, k_{M'}}{\langle q\, k \rangle} . \qquad (17)
\]
Here $q_M$ is an arbitrary reference spinor, whose presence here reflects the possibility of making gauge transformations. Physical amplitudes are $q$-independent. The denominator is necessary to make this polarisation spinor homogeneous of degree zero in $q$. The general solution of the first equation in (15) is a linear combination of plane waves weighted with such polarisation tensors. It is most convenient to take the reference spinor $q$ to be the same for all negative helicity gluons. The polarisation spinor in (17) is dimensionless. Similarly, for the positive polarisation, the general solution of the second equation in (15) is a combination of plane waves with the polarisation spinor
\[
\epsilon^+_{MN}(k) = k_M\, k_N . \qquad (18)
\]
This polarisation spinor has mass dimension one, as is appropriate for a field of mass dimension two.

We now evaluate the first term in the second line of (13) on shell. Thus, we take the states 1, 2 to be of negative helicity, and 3 to have positive helicity. We get for the colour-ordered 3-point amplitude
\[
\mathcal{A}(1^-, 2^-, 3^+) \sim \frac{\langle q\,3 \rangle^2\, [1\,2]}{\langle q\,1 \rangle \langle q\,2 \rangle} , \qquad (19)
\]
where our notation is that $(k_1)_{AA'} = (k_1)_A (k_1)_{A'} \equiv 1_A 1_{A'}$, the angle bracket stands for the contraction of unprimed and the square bracket of primed spinors. Using the momentum conservation $1_A 1_{A'} + 2_A 2_{A'} + 3_A 3_{A'} = 0$ to eliminate the reference spinor $q_A$, we get the familiar formula
\[
\mathcal{A}(1^-, 2^-, 3^+) \sim \frac{[1\,2]^3}{[1\,3][2\,3]} . \qquad (20)
\]
This is the only 3-point amplitude in this theory. As we shall see below, this is also the only non-zero amplitude at tree level (on the trivial background).

Berends-Giele current

We now compute the colour-ordered Berends-Giele current, which is defined as the sum of all colour-ordered Feynman diagrams with all but one leg on-shell. A convenient rule is that the off-shell leg is taken with the propagator on that leg. We take the on-shell legs to be those of negative polarisation, as this gives the most interesting current. The one-point current is just the polarisation tensor (17) itself. Anticipating what will happen, we write the general current as
\[
J_{MM'}(1, \dots, n) = q_M \left( \sum_{i=1}^n \langle q\, i \rangle\, (k_i)_{M'} \right) J(1, \dots, n) , \qquad (21)
\]
where $J(1, \dots, n)$ is now a scalar.
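As a consistency check of the reconstructed ansatz (21) against the polarisation (17) (the displays above are reconstructions, so this check is internal to them): for $n = 1$ the bracketed sum collapses to a single term, and, with the value of $J(1)$ quoted next,

\[
J_{MM'}(1) = q_M\, \langle q\,1 \rangle\, 1_{M'}\; J(1) = \frac{q_M\, 1_{M'}}{\langle 1\,q \rangle} \;\propto\; \epsilon^-_{MM'}(k_1) ,
\]

which reproduces, up to an overall sign that can be absorbed into normalisations, the statement that the one-point current is the polarisation spinor (17) itself.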
We have, at lowest order, $J(1) = 1/\langle q1\rangle$ (22). The second current is computed by putting two order-one currents (polarisations) into the cubic vertex and applying the final-leg propagator. It is easy to see that this gives

  $J(1,2) = \frac{1}{\langle q1\rangle\langle q2\rangle}\,\frac{[12]}{\langle 12\rangle} ,$   (23)

where we wrote the result in a suggestive form. To compute the third current we note that in forming a colour-ordered current we can either attach gluon number 3 to the current $J_{MM'}(1,2)$, or attach gluon number 1 to the current $J_{MM'}(2,3)$. We can then use the Schouten identity in the form

  $\langle q1\rangle\langle 23\rangle + \langle q3\rangle\langle 12\rangle = \langle q2\rangle\langle 13\rangle$   (24)

to notice that the numerator cancels the propagator in the denominator, and we obtain (25). The pattern is becoming clear. It is not hard to write down a recursion relation [7] for the general current and prove the closed-form result (26), whose denominator is the characteristic Parke-Taylor string $\langle q1\rangle\langle 12\rangle\langle 23\rangle\cdots$. This is essentially the famous Parke-Taylor formula [9] for the so-called MHV amplitudes. We thus see that the basic ingredient of this formula arises already in self-dual YM theory. This was noted by several authors, notably by [8]. The form of the current (21) together with the result (26) immediately shows that all the on-shell amplitudes with more than 3 gluons are zero. Indeed, first of all, we already know that at tree level there can only be amplitudes with at most one positive leg. So, let us compute these amplitudes by first computing the current with all negative on-shell legs, and then putting the off-shell leg on shell, inserting in it a positive polarisation gluon. In the process of doing this we must multiply the current by the propagator that, by our convention, was always included in the off-shell leg. However, since the current does not have any pole to cancel this propagator, and the propagator is zero on shell, the whole on-shell amplitude is rendered zero. Thus, on the trivial background $B^+ = 0$, $A = 0$, all tree-level amplitudes with $n > 3$ are zero. In order to avoid confusion we emphasise that, while the Berends-Giele current in SDYM is non-trivial and is given by (26), the tree-level amplitudes in SDYM are trivial (zero). In particular, in spite of the fact that one sees in (26) all the ingredients of the Parke-Taylor MHV formula, there are no non-zero $(++-\dots-)$ MHV amplitudes in SDYM. The reason why the current (26) resembles the Parke-Taylor formula so closely is that one can take the limit of the Parke-Taylor formula in which the momenta of two of the positive helicity gluons become collinear. This limit must be proportional to the current (26), and this is why the two formulas are so closely related. At first sight, it may seem that more amplitudes can be non-zero on a more general $B^+ \neq 0$ background. Thus, one can continue to take $A = 0$, but take a non-zero $B^+$ solving $\partial^{B}{}_{A'}\, B^+_{BA} = 0$. In this case, there is a second, mass-like term in the linearised Lagrangian (7). This changes the propagator, in that there is now also a $bb$ part of the propagator. There are now more diagrams that can be constructed at tree level. In particular, it may seem that the all-negative-helicity tree-level scattering amplitudes no longer have to vanish. However, this is an illusion, because we can equally well treat this correction to the propagator as a new $B^+ aa$ vertex. This vertex can connect two Berends-Giele currents computed by the procedure described above. However, when we attempt to put $B^+$ on shell the result will vanish, as we already know from the previous discussion. So, there are no new tree-level amplitudes one can generate on $B^+ \neq 0$ backgrounds, provided of course that $B^+$ satisfies its field equation.
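The spinor manipulations used above are easy to verify numerically. The following sketch (Python with NumPy; the parametrisation of the degenerate 3-point kinematics and all normalisations are our own choices, made only for illustration) checks the Schouten identity (24), the $q$-independence of (19), and its agreement with (20) up to an overall sign:

import numpy as np

rng = np.random.default_rng(0)
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])

def br(a, b):
    # spinor contraction (angle or square bracket): a^T eps b
    return a @ eps @ b

def rand_spinor():
    return rng.normal(size=2) + 1j * rng.normal(size=2)

# Schouten identity (24): <q1><23> + <q3><12> = <q2><13>
q, s1, s2, s3 = (rand_spinor() for _ in range(4))
print(abs(br(q, s1) * br(s2, s3) + br(q, s3) * br(s1, s2)
          - br(q, s2) * br(s1, s3)))  # ~0

# 3-point kinematics: all <ij> = 0, [ij] != 0.
# Unprimed spinors are proportional; primed spinors are fixed by
# momentum conservation 1_A 1_A' + 2_A 2_A' + 3_A 3_A' = 0.
e = rand_spinor()
c = rng.normal(size=3) + 1j * rng.normal(size=3)
u = [ci * e for ci in c]                      # unprimed spinors i_A
p1, p2 = rand_spinor(), rand_spinor()
p3 = -(c[0] * p1 + c[1] * p2) / c[2]          # enforces conservation
p = [p1, p2, p3]

def vertex(qref):
    # the expression (19): <q3>^2 [12] / (<q1><q2>)
    return br(qref, u[2])**2 * br(p[0], p[1]) / (br(qref, u[0]) * br(qref, u[1]))

amp = br(p[0], p[1])**3 / (br(p[0], p[2]) * br(p[1], p[2]))   # formula (20)
print(abs(vertex(rand_spinor()) - vertex(rand_spinor())))     # ~0: q-independence
print(vertex(rand_spinor()) / amp)                            # constant (here -1)

The last ratio being a pure constant reflects the fact that on these degenerate kinematics the reference-spinor dependence drops out completely, as it must for a physical amplitude.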
Quantum theory

The fact that one of the fields, namely $B^+$, enters the Lagrangian linearly immediately tells us that the theory is one-loop exact. This is because at tree level one can only construct diagrams with at most one external $B^+$ leg. The propagator of the theory takes $B^+$ into $A$, and so at one loop only diagrams with no external $B^+$ leg can be formed. No higher-loop diagrams exist. To study the theory at one loop one can use the background field method formalism, and evaluate the path integral of the theory linearised around an arbitrary background. We consider in detail the case of a $B^+ = 0$ background, but the general background can be treated by the same method; we make some remarks on this at the end. We follow the computation in [5], which in our case of the self-dual theory becomes much simpler. The action to study is (5). Once it is gauge-fixed, the problem reduces to computing the determinant of the operators appearing in the first line in (13). We perform this computation by passing to a second-order problem. Thus, we first rewrite the bosonic part of the linearised Lagrangian as a quadratic form (27) built from the chiral part of the Dirac operator and its adjoint operator. The operator $\mathcal{D}$ in (27) is then the usual Dirac operator acting on 4-component spinors, here with values in the appropriate vector space $V$. The one-loop (and thus all-loop) effective action is then given by the corresponding sum of logarithms of determinants (32), where the last term is the contribution from ghosts, i.e. the second term in the first line in (13). To compute the first determinant we square the Dirac operator, with $\mathcal{D}^2$ (33) being an operator of Laplace type.

Absence of quantum divergences

Logarithmic quantum divergences, if any, can now be computed by the standard heat kernel techniques, see e.g. [10]. For the operator $\mathcal{D}^2$ this computation has been performed in [5], in a much more involved set-up. But the final result of relevance for us here can be written down without any calculation. Indeed, the final result must be an integral of the background curvature squared, because only such terms arise in the heat kernel expansion at this order. We can then use the fact that

  $\mathrm{tr}\, F\wedge F \;=\; d\,\mathrm{tr}\Big( A\, dA + \tfrac{2}{3}\, A\, A\, A \Big)$   (34)

is a total divergence that does not contribute to the S-matrix. This quantity depends only on the fibre bundle, and (an appropriate multiple of it) is known as the first Pontryagin number. This means that, modulo surface terms, we can replace integrals of the ASD part of the curvature squared $(F_{M'N'})^2$ with integrals of the SD part of the curvature squared $(F_{MN})^2$. But on instanton backgrounds $F_{MN} = 0$ by definition, and so the integral of the full curvature squared reduces on instanton backgrounds to the topological term (34), which has the interpretation of measuring the number of instantons. The one-loop divergence is thus proportional to the instanton number, and does not contribute to the S-matrix. The theory is quantum finite. The coefficient in front of the divergence proportional to the instanton number can be extracted from the results in [5], see in particular formula (66) of that reference, with $A = B = 0$. We refrain from giving these unnecessary details here. Let us now discuss what seems to be the much more non-trivial case of a background with $B^+ \neq 0$. In this case there is a second term in the linearised Lagrangian, see (7). The computation is still possible following the same idea, namely squaring the arising first-order operator and then using the heat kernel technology. This computation is possible, and in fact has been done (even in greater generality) in [5].
The result of interest for us here can be extracted from formula (66) of [5] by putting the matrix $A$ to zero. Note that in the notations of that paper this matrix encodes terms quadratic in $b$; in particular, it has nothing to do with the connection. The result in [5] shows that, while terms of the $(B^+)^2$ type are present, they always appear together with factors of $A$. So, setting the matrix $A$ to zero also eliminates all $B^+$-dependent terms. The fact that the one-loop effective action on a $B^+ \neq 0$ background does not depend on $B^+$ can also be understood without any computation. Indeed, if there were some non-trivial dependence, we could expand the result in a power series in $B^+$. We would then obtain one-loop amplitudes with one or more external $b$ legs. As we know from the preceding discussion, this is not possible. So, we learn from this argument that the one-loop effective action cannot depend on $B^+$. It can therefore be computed on the $B^+ = 0$ background, which is the much simpler computation reviewed above. The fact, observed in (66) of [5], that for $A = 0$ the dependence on $B^+$ drops out is thus not a miracle; it had to happen if the calculation was done correctly. To conclude, even though a computation with $B^+ \neq 0$ is possible via the same technology, this much more involved computation is not needed, as the result cannot depend on $B^+$. The theory is proved finite by a simple argument on a $B^+ = 0$ background. In particular, no computation is required: possible divergences cannot contribute to the S-matrix, by a simple argument involving the Pontryagin number.

One loop amplitudes

As we have just seen, there are no one-loop divergences in SDYM (and no higher loops). However, the finite parts of the one-loop integrals are non-zero. Historically, people first computed one-loop amplitudes in full YM. It was then observed that for special helicity configurations, namely all-minus in our conventions, the full YM amplitudes coincide with those obtained directly within SDYM. The relevant references are as follows. The form of the same-helicity one-loop $n$-gluon amplitude was conjectured in [11] (see also reference [5] of that paper) using collinear limit arguments. Around the same time Mahlon [12] used the technology [7] of recursion relations to compute one-loop multi-photon amplitudes in QED. Mahlon then applied the same recursive technology to compute a multi-gluon amplitude from a quark loop [13]. Using supersymmetry, one can show that the same results apply to pure QCD amplitudes. In particular, the one-loop all-minus 4-point amplitude is a multiple of

  $\frac{[12][34]}{\langle 12\rangle\langle 34\rangle} .$   (35)

Note that this is a purely rational result that does not have any cuts. This is related to the fact that the on-shell amplitudes that could appear on the cut according to the Cutkosky rules are absent. A closed-form expression for an arbitrary number of negative helicity particles is also known, and is given in [11]. Cangemi [14], [15] and then Chalmers and Siegel [4] then showed that the same-helicity one-loop amplitudes in SDYM are the same as those in full YM. Cangemi's argument was an explicit computation, but using an action that differs from (1). The argument in [4] is more elementary, and follows from the relation between the full and SD YM Feynman rules, see below. Bardeen [8] suggested that the non-vanishing of the one-loop all-minus amplitudes in the SDYM theory may be related to an anomaly in the conservation of the symmetry currents responsible for the integrability of the theory.
So far, to the best of our knowledge, this suggestion has not been verified. It would be interesting to compute (35) directly from (1) using the Feynman rules described above. This could also shed light on the suggested [8] anomaly interpretation of the result (35).

Relation to the full Yang-Mills

The relation to the full YM theory arises by writing the YM action in the following form

  $S[A, B^+] = \int \mathrm{tr}\Big( B^+ F \,-\, \frac{g^2}{2}\, B^+ B^+ \Big) .$   (36)

Integrating out the auxiliary field $B^+$, which, unlike in the SDYM case, is possible here, sets $B^+ = F^+/g^2$, and one gets the YM action in the form of the integral of $(F^+)^2$, which agrees with $F^2$ modulo a surface term. The quantum theory for (36) is constructed completely analogously to what we have done for the self-dual case. The gauge-fixing is performed in an analogous way, except that now the Feynman gauge is more convenient, see [5], where it is explained that in this case it is also possible to absorb the auxiliary field for the ghosts into the $B^+$ field perturbation. The interaction is also unchanged as compared to the SDYM case. The only difference is that now the kinetic term has an additional contribution $g^2 b^2$ in it. This changes the propagator: apart from the propagator connecting $b$ and $a$, there is now also a propagator connecting $a$ with $a$. So it becomes possible to construct many more diagrams. In particular, the theory is no longer one-loop exact. At the level of the amplitudes, the field $a$ can now describe gluons of both helicities, but $b$ continues to describe only the positive helicity. So, when computing the full YM amplitudes involving $n$ negative helicity gluons, all polarisation spinors must be inserted into the $a$ legs of the vertices. At the same time, the structure of the Feynman rules is such that at tree level there is at least one external $b$ leg, while at one loop the minimal number of external $b$ legs is zero. These tree-level and one-loop diagrams with the minimal possible number of external $b$ legs are the same as in SDYM theory. It is then clear that the one-loop all-negative amplitudes in full YM are those that do not have any external $b$ legs, and are therefore the same as in SDYM.

Twistor space description

The integrability of SDYM theory is most easily seen in its twistor description, in which the non-linear SDYM equations are expressed as the compatibility condition of a certain linear system, see e.g. [16]. A twistor action for SDYM has been described in [17] and then in [18]. We review only the very basics of this description. The twistor space over the conformal compactification $S^4$ of $\mathbb{R}^4$ is the 2-component spinor bundle over $S^4$. The projective twistor space is then $PT = \mathbb{CP}^3$. The connection allows one to define a $(0,1)$ Lie-algebra-valued form $a$ on $\mathbb{CP}^3$. The field $b$ gets represented as a $(0,1)$-form that is Lie-algebra- and $O(-4)$-valued. The lift of (1) to the twistor space is

  $S[a, b] = \int_{PT} \Omega\wedge\mathrm{tr}\,( b\wedge f ) .$   (37)

Here $f = \bar\partial a + a\wedge a$ is the $(0,2)$ part of the curvature of the connection, and $\Omega$ is the $(3,0)$ holomorphic top form on $\mathbb{CP}^3$. It is of homogeneity degree $O(4)$, which makes the integrand in (37) of homogeneity degree zero and the integral well-defined. Integrating the above action over the fiber we get (1), with the self-dual 2-form $B^+$ arising as the Penrose transform of $b$ (38). We refer the reader to [18] for more details.

Colour/kinematics duality

The purpose of this subsection is to mention that the colour/kinematics duality [19], which is known to be true in full YM theory, can be shown to hold in self-dual YM by analysing the Feynman rules. This was first observed in [20] in a non-covariant version of the theory.
A covariant version of the argument is also possible. The computation that leads to this conclusion is very similar to the computation of the Berends-Giele current, except that in this case it is more natural to consider the full (not colour-ordered) amplitudes. Details of this are given in [21] and will not be repeated here. It would be interesting to try to understand the statement "gravity equals YM squared", which is an application of the colour/kinematics duality, in the self-dual setup and from the point of view of Feynman diagrams. We leave this for future research.

Self-Dual Gravity

We are now ready to proceed with similar constructions in the case of self-dual gravity (SDGR). What we mean by self-dual gravity is a theory of gravity in four dimensions whose equations force metrics (i) to be Einstein, $R_{\mu\nu} = \Lambda g_{\mu\nu}$; and (ii) to have one of the two chiral halves of the Weyl curvature vanishing, say $W^+_{\mu\nu\rho\sigma} = 0$. There are several non-covariant formulations of SDGR available in the literature. In particular, Plebanski [22] gave a description of gravitational instantons in the zero scalar curvature case. The Plebanski equation is nicely reviewed in [20]; this work also contains references to papers describing the light-cone action for self-dual gravity. There is a covariant description of SDGR in [6], exercise IXA5.6, see also [23] for a supersymmetric version. This description can be obtained by taking the "chiral" first-order action for GR, and rescaling the fields so as to remove the $AA$ term from $F = dA + AA$. This modifies the local gauge invariance of the chiral GR action from the chiral half $SO(3)$ of the Lorentz group to $U(1)^3$. The author does not know if a covariant gauge-fixing of this action is possible. In this paper we describe a different covariant formulation. It has been known for a long time that gravitational instantons (of non-zero scalar curvature) can be described very economically via $SO(3)$ connections rather than metrics. The first references on this we are aware of are the papers by Gindikin [24], [25]. These papers used $SL(2)$ connections rather than $SO(3)$, but contain the key idea of all later descriptions. The connection description was rediscovered by Capovilla, Dell and Jacobson in [26], in the context of their work [27] on "General Relativity without the metric". It was again rediscovered in [28]. Another relevant set of papers is that by Torre [29], [30], [31], in which the author describes the linearisation of the instanton condition in the connection language, as we also do below. The connection description of instantons also appears in [32]. Most of the above authors consider equations describing gravitational instantons in the language of connections, rather than an action that gives these equations as its critical points. So, it is not easy to pinpoint the first reference that contains this action. Given that, once the equations are known, the action with the right properties presents itself almost immediately, we restrict ourselves to just one recent reference [33] explicitly containing the action. One of the main attractive features of the description we are about to present is that the instanton condition is a first-order PDE on the basic field. This is similar to the YM case, but should be contrasted with the metric description, in which the instanton condition, being a condition on the curvature, is a second-order PDE. The other attractive feature is that in this description the gauge-fixing is based on a nice "instanton deformation complex", see the discussion around (54) below.
The action

Similar to (1), there are two types of fields in the action. One is the Lagrange multiplier field, in which the Lagrangian is linear; it will describe one of the two polarisations of the graviton. The field in terms of which the action is non-linear will be an $SO(3)$ connection field. We will explain its relation to the metric below. The action reads

  $S[A, \Psi] = \int \Psi_{ij}\, F^i\wedge F^j .$   (39)

Here $\Psi_{ij}$ is the Lagrange multiplier field, which is required to be tracefree: $\Psi_{ij}\delta^{ij} = 0$. The action (39) is clearly $SO(3)$ and diffeomorphism invariant. Note that no metric appears or is used in the construction of the action. The Euler-Lagrange equations following from (39) are as follows. By varying with respect to the Lagrange multiplier field we get

  $F^i\wedge F^j - \tfrac13\,\delta^{ij}\, F^k\wedge F^k = 0 .$   (40)

This equation says that the 4-form-valued matrix $F^i\wedge F^j$ has vanishing tracefree part, and thus has only the trace part proportional to $\delta^{ij}$; the proportionality coefficient follows by computing the trace of both sides of this equation. It is this equation that was observed in [24], [26] to describe gravitational instantons, as we shall review below. Note that (40) is a first-order differential equation on the basic (connection) field. The equation one obtains by varying with respect to $A$ is

  $d_A\Psi_{ij}\wedge F^j = 0 .$   (41)

Below we will see that this describes one of the two polarisations of the graviton propagating in this theory. We also note that one could use a more general action than (39), allowing an arbitrary symmetric matrix in front of $F^i\wedge F^j$, but requiring the trace of this matrix to be a constant, so that the extremisation is carried out only with respect to the tracefree part. The part of the action proportional to the trace part then gives rise to the Pontryagin number and does not change the field equations. This observation is related to the formalism for the connection description of gravity described in [33].

2-forms in four dimensions

We now need to explain why (40) describes instantons. To explain this, we need to start by reviewing the geometric fact that a generic triple of 2-forms in four dimensions defines a metric. Thus, let $B^i$ be a triple of 2-forms in four dimensions. First, we need to spell out the notion of genericity. A triple $B^i$ of 2-forms is called generic, or non-degenerate, if the $3\times 3$ matrix of wedge products $B^i\wedge B^j$ has non-zero determinant. Here, to check the condition, one divides the 4-form-valued matrix $B^i\wedge B^j$ by an arbitrarily chosen volume form. Clearly, the non-degeneracy condition does not depend on which volume form is used for this purpose. We assume that the manifold is orientable, so that a globally defined volume form exists. Another natural notion arising in this context is that of definite triples $B^i$. A non-degenerate triple of 2-forms $B^i$ is called definite if the matrix of wedge products $B^i\wedge B^j$ is definite, i.e. has eigenvalues of the same sign. Once again, to check definiteness one divides $B^i\wedge B^j$ by an arbitrarily chosen volume form $v$ and checks the definiteness of the arising symmetric $3\times 3$ matrix $B^i\wedge B^j/v$. A definite triple $B^i$ defines a natural orientation: this is the orientation given by the volume form $v$ such that the matrix $B^i\wedge B^j/v$ is positive definite. Now, let $B^i$ be a definite triple of 2-forms. It defines a metric by the following formula

  $g(\xi,\eta)\,\mathrm{vol}_g = -\,\tfrac{1}{6}\,\epsilon_{ijk}\; \iota_\xi B^i\wedge \iota_\eta B^j\wedge B^k .$   (42)

Here on the left we have the metric contraction of two vector fields $\xi, \eta$, multiplied by the volume form for this metric. The volume form is chosen in the same orientation as that defined by $B^i$. On the right we have the top form constructed from the triple of 2-forms $B^i$.
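The formula (42) is straightforward to test numerically. The sketch below (the index form of (42) that we use, and its overall normalisation, are our own assumptions; only the proportionality $g_{\mu\nu}\propto\delta_{\mu\nu}$ is meaningful) feeds the standard flat-space self-dual triple into the Urbantke-type expression and recovers the flat metric up to scale:

import numpy as np
from itertools import permutations

def levi_civita(n):
    # totally antisymmetric symbol in n dimensions
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        eps[perm] = np.linalg.det(np.eye(n)[list(perm)])
    return eps

eps3, eps4 = levi_civita(3), levi_civita(4)

# standard self-dual triple on R^4, as antisymmetric matrices:
# Sigma^1 = dx0^dx1 + dx2^dx3, Sigma^2 = dx0^dx2 + dx3^dx1,
# Sigma^3 = dx0^dx3 + dx1^dx2
B = np.zeros((3, 4, 4))
pairs = {0: [(0, 1), (2, 3)], 1: [(0, 2), (3, 1)], 2: [(0, 3), (1, 2)]}
for i, pp in pairs.items():
    for (m, n) in pp:
        B[i, m, n], B[i, n, m] = 1.0, -1.0

# index form of (42), up to an overall constant:
# g_{mn} ~ eps_{ijk} eps^{abcd} B^i_{ma} B^j_{nb} B^k_{cd}
g = np.einsum('ijk,abcd,ima,jnb,kcd->mn', eps3, eps4, B, B, B)
print(np.round(g / g[0, 0], 10))   # the identity matrix: the flat metric

The same code applied to a rescaled or $GL(3)$-rotated triple illustrates the homogeneity and invariance properties discussed next.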
The volume form $\mathrm{vol}_g$ can be computed by taking the determinant of both sides. This shows that the volume form is of homogeneity degree 2 in $B^i$, and the metric itself is of homogeneity degree one in $B^i$. The ultimate origin of the formula (42) is the geometry of 3-forms in 7 dimensions, see [34], but explaining this would take us too far from the main topic. It can be shown that the metric (42) is non-degenerate (has non-zero determinant) whenever $B^i$ is non-degenerate. Moreover, this metric is of signature all plus or all minus when the matrix of wedge products $B^i\wedge B^j$ is definite. The geometrical meaning of the formula (42) is that this metric makes the definite triple $B^i$ self-dual with respect to the Hodge star operator defined by this metric, in the orientation defined by $B^i$. The formula (42) also defines a sign of the triple $B^i$. Thus, when the metric (42) is of signature all plus we call $B^i$ positive definite, and when we get the signature all minus we call $B^i$ negative definite. It is clear that multiplying a positive definite triple by a minus sign one gets a negative definite triple. The sign of the triple will be identified with (minus) the sign of the cosmological constant. For completeness we also mention that when $B^i$ is real, non-degenerate but indefinite, then (42) gives a metric of split signature. It is also possible to get Lorentzian-signature metrics out of (42), but this requires complex $B^i$ satisfying the reality conditions $B^i\wedge (B^j)^* = 0$, together with a reality condition on the determinant of the matrix $B^i\wedge B^j$. All the statements in this subsection are those of linear algebra. They are proved by noticing that for a non-degenerate $B^i$ there is always a $GL(3)$ matrix $G^{ij}$ such that $B^i = G^{ij}\Sigma^j$, where the $\Sigma^i$ satisfy an orthonormality property. Thus, in the case of positive definite $B^i$, the matrix $G^{ij}$ can be taken to be the positive branch of the square root of the positive definite matrix $B^i\wedge B^j$, and then $\Sigma^i\wedge\Sigma^j \sim \delta^{ij}$. One then constructs the complex linear combinations $\Sigma^\pm = \Sigma^1 \pm i\,\Sigma^2$ that are simple, $\Sigma^\pm\wedge\Sigma^\pm = 0$, and thus decomposable. Therefore there exist complex one-forms $u, v$ such that $\Sigma^+ = u\wedge v$, $\Sigma^- = \bar u\wedge\bar v$. The bar here denotes complex conjugation. Because $\Sigma^+\wedge\Sigma^- \neq 0$, the 4 one-forms $u, v, \bar u, \bar v$ form a frame. The data in $\Sigma^\pm$ is not enough to fix this frame completely, because one can apply an $SL(2,\mathbb{C})$ transformation to $u, v$ without changing $\Sigma^\pm$. But using the fact that $\Sigma^3\wedge\Sigma^\pm = 0$, as well as $\Sigma^3\wedge\Sigma^3 = (1/2)\,\Sigma^+\wedge\Sigma^-$, together with the $SL(2,\mathbb{C})$ freedom in choosing $u, v$, we can achieve $\Sigma^3 = (i/2)(u\wedge\bar u + v\wedge\bar v)$. This fixes the frame completely. The metric (42) is then in the conformal class of $ds^2 = u\otimes\bar u + v\otimes\bar v$. The conformal factor is $(\det G)^{1/3}$. All the statements above follow from this construction. The case of split and Lorentzian signatures is proved using the same basic idea: one writes $B^i$ as a $GL(3)$ matrix acting on $\Sigma^i$ with the desired properties, and then these $\Sigma^i$ define some decomposable forms, which in turn define the metric with respect to which the $\Sigma^i$ are self-dual.

The connection description of instantons

We now apply (42) to the curvature 2-forms. The claim is that the metrics (42) built from triples of curvature 2-forms $B^i = F^i$ of connections satisfying (40) are anti-self-dual Einstein with non-zero cosmological constant. The sign of the cosmological constant is the negative of that of the triple $B^i$. This claim is proved in several steps. The first step is the following lemma.

Lemma 1. Let $B^i$ be a definite triple of 2-forms.
Then the equation

  $d_A B^i \equiv dB^i + \epsilon^{ijk}\, A^j\wedge B^k = 0 ,$   (43)

viewed as an algebraic equation for the connection $A^i$, has a unique solution. To prove this lemma one again writes $B^i = G^{ij}\Sigma^j$ and looks for the connection as the sum of the self-dual part of the Levi-Civita connection compatible with the metric (42) defined by $B^i$, and an extra term. This extra term can be computed and involves the covariant derivatives of $G^{ij}$ (with respect to the self-dual part of the Levi-Civita connection). Details can be found in e.g. [35], [36]. We shall call the connection in the above lemma compatible with the triple $B^i$. Note that when $B^i = F^i$ the equation (43) is the Bianchi identity and so is automatically satisfied. Still, (43) can be solved to find the components of the connection algebraically expressed in terms of the derivatives of the curvature. The second lemma describes the solution to (43) in the special case that the triple $B^i$ satisfies the equation (40); such triples can be called perfect.

Lemma 2. Let $B^i$ be a perfect definite triple. Then the compatible connection is the self-dual part of the Levi-Civita connection of the metric (42).

This lemma follows from the details of the proof of the previous lemma, as all terms involving the covariant derivative of $G^{ij}$ vanish in this case, and $A$ reduces to the self-dual part of the Levi-Civita connection. Combining these two lemmas we see that when the connection satisfies (40), it is the self-dual part of the Levi-Civita connection compatible with the metric (42) with $B^i = F^i$. However, by construction of the metric (42), the curvature 2-forms $F^i$ are self-dual with respect to it. We now use the well-known decomposition of the Riemann curvature, see e.g. Besse [37] page 51, to conclude that (i) the metric is Einstein; and (ii) the self-dual part of Weyl vanishes, so that it is a gravitational instanton. The first of these follows because the curvature of the self-dual part of the Levi-Civita connection is self-dual as a 2-form. The second follows because the equation (40) says that there is only the scalar curvature, but no self-dual part of Weyl. More details on the above material, together with proofs of many of the facts mentioned, can be found in e.g. [38]. All in all, in the description presented here, to find a gravitational instanton one has to find an $SO(3)$ connection satisfying (40), which is a first-order PDE on the connection. One then gets an instanton metric by the formula (42). The sign of the scalar curvature is determined by the sign of the triple of curvature 2-forms: if the resulting triple of curvature 2-forms $F^i$ is positive (negative) definite, see the previous subsection, then the scalar curvature is negative (positive). This reversal of the sign has to do with our usage of self-dual 2-forms rather than anti-self-dual. It may well be more natural to use anti-self-dual forms, but we have refrained from doing so in this paper in order to agree with the conventions of some of our earlier works.

Linearisation around an instanton

We now linearise (39) around an arbitrary instanton background. Let us denote by $\Sigma^i$ the triple of orthonormal self-dual 2-forms for the background instanton metric. This triple is defined so as to satisfy the following algebra

  $\Sigma^i\,\Sigma^j = -\,\delta^{ij} + \epsilon^{ijk}\,\Sigma^k ,$   (44)

where the 2-forms are viewed as endomorphisms, with an index raised by the background metric. Let us for definiteness consider a positive scalar curvature instanton. Let $\Lambda$ be the cosmological constant. We define a mass scale by $\Lambda/3 = M^2$. Then the curvature of our background connection can be written as

  $F^i = -\,M^2\,\Sigma^i ,$   (45)

where again the minus sign here is related to our usage of self-dual 2-forms. As in the case of SDYM, we first consider a background with $\Psi_{ij} = 0$. A more general background is also possible; we will give some comments on this below.
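As a consistency check, the background (45) indeed solves the field equations (40), (41). Assuming the wedge-product normalisation $\Sigma^i\wedge\Sigma^j = 2\,\delta^{ij}\,\mathrm{vol}$ that goes with the orthonormality of the triple:

\[
F^i\wedge F^j = M^4\,\Sigma^i\wedge\Sigma^j = 2\,M^4\,\delta^{ij}\,\mathrm{vol} ,
\]

which is pure trace, so (40) holds; $d_A F^i = 0$ is the Bianchi identity, and with $\Psi_{ij} = 0$ on the background equation (41) holds trivially. The background is thus a maximally symmetric gravitational instanton, with only the scalar curvature part of the Riemann tensor non-zero.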
Denoting the field perturbations by $\delta A = a$, $M^2\,\delta\Psi_{ij} = \psi_{ij}$, where we have rescaled the $\psi$ field to give it mass dimension two, we have the following linearised Lagrangian

  $\mathcal{L}^{(2)} = 2\,\psi_{ij}\,\Sigma^i\, d_A a^j .$   (46)

There is also a cubic and a quartic interaction

  $\mathcal{L}^{(3)} = \frac{1}{M^2}\,\psi_{ij}\, d_A a^i\, d_A a^j + \psi_{ij}\,\Sigma^i\,\epsilon^{jkl}\, a^k a^l ,$
  $\mathcal{L}^{(4)} = \frac{1}{M^2}\,\psi_{ij}\, d_A a^i\,\epsilon^{jkl}\, a^k a^l ,$   (47)

up to numerical coefficients. The wedge product of forms is everywhere implied. The quintic interaction vanishes, because it makes the wedge product of at least one pair of one-forms $a^i\wedge a^j$ contract in their Lie algebra indices. The appearance of $1/M^2$ in the first term in the cubic interaction tells us that we have a coupling constant of negative mass dimension, as is appropriate for a gravity theory.

Spinor description

As in the case of self-dual YM, the Lagrangian (46), (47) is most clearly understood by rewriting it in terms of spinors. The tracefree field $\psi_{ij}$ becomes a rank-4 symmetric spinor $\psi_{ABCD}$, thus valued in $S^4_+$. The connection perturbation has both its spacetime index and its internal index translated into spinors. The internal $SO(3)$ index becomes a pair of symmetrised spinor indices $(AB)$, while the spacetime index becomes a pair $MM'$. The identification of internal and spacetime spinor indices is carried out by the 2-forms $\Sigma^i$. Overall, the connection perturbation becomes an object $a_{MM'AB}$, thus valued in $S_+\times S_-\times S^2_+$. The linearised Lagrangian becomes

  $\mathcal{L}^{(2)} = \psi^{ABCD}\, d_A{}^{M'}\, a_{BCD\,M'} .$   (48)

Here we have absorbed some numerical factors that arise in the spinor conversion (including a factor of $\sqrt{2}$) into $\psi$. Details of our spinor conversion rules are spelled out in [39], Section 6.

Gauge

Let us discuss the gauge symmetries of (48). First, there is a complete symmetrisation of the $BCD$ indices of $a_{BM'CD}$, enforced by the complete symmetry of $\psi_{ABCD}$. This means that the linearised Lagrangian is independent of some of the components of the connection. We have the decomposition

  $S_+\times S_-\times S^2_+ = \big(S^3_+\times S_-\big)\oplus\big(S_+\times S_-\big) ,$   (49)

with the $2\cdot 2\cdot 3 = 12$ components on the left splitting into $8 + 4$ on the right. The last term here, which contains 4 components, is what is projected out in (48). It is not hard to show, see [39], that this is precisely the part of the connection perturbation that changes under diffeomorphisms. So, a very convenient way of gauge-fixing diffeomorphisms is simply to require the connection perturbation to have only the completely symmetric $S^3_+\times S_-$ component. This is an important point, as in the usual metric formulation the action of diffeomorphisms involves the first derivative of the gauge transformation parameter, a vector field. Therefore, one cannot disentangle the components of the metric that are pure gauge. They are usually made propagating, with appropriate ghosts added to the theory to project out the effect of the gauge modes. What happens in the connection formulation is different. The action of diffeomorphisms is algebraic, and does not involve derivatives of the gauge parameter. The components of the connection that are pure (diffeomorphism) gauge can be projected out from the start. However, there are also the usual $SO(3)$ gauge transformations, acting as $\delta a_{MM'}{}^{AB} = d_{MM'}\,\phi^{AB}$ (50). Substituting this into (48) produces a multiple of the background curvature contracted with the gauge parameter; this vanishes when contracted with $\psi^{AB}{}_{CD}$, which shows that (48) is $SO(3)$ gauge invariant.

Gauge-fixing

As we have already mentioned, a very convenient gauge-fixing of diffeomorphisms is to let only the $S^3_+\times S_-$ component of the connection propagate, fixing the other part of the connection to zero. This does not require ghosts, at least not at the order quadratic in the perturbations. The gauge-fixing of the $SO(3)$ symmetry is done as in the Yang-Mills case, by using the gauge-fixing fermion appropriate for the Landau (sharp) gauge. The only subtlety is that we want the gauge-fixing term to depend only on the $S^3_+\times S_-$ part of the connection, so that only this component propagates.
So, we project onto this part of the connection in the gauge-fixing term. The corresponding gauge-fixing fermion is

  $\Psi_{gf} = \bar c^{AB}\, d^{MM'}\, a_{M'(M\,AB)} ,$   (51)

where $a_{M'(MAB)}$ is the connection perturbation projected onto $S^3_+\times S_-$. We have also written the gauge-fixing fermion in a suggestive form. The part of the BRST variation of (51) contributing to the bosonic part of the action is

  $h^{AB}\, d^{MM'}\, a_{M'(M\,AB)} ,$   (52)

where $h^{AB}$ is the ghost-sector auxiliary field. As in the case of gauge-fixing the self-dual YM action, we can now combine the auxiliary field for the ghosts with the field $\psi_{ABCD}$, by adding to $\psi_{ABCD}$ an $\epsilon$-trace part built from $h_{AB}$, in complete analogy with (12) (53). The new field $\tilde\psi$ is no longer in $S^4_+$ but rather in $S^3_+\times S_+$. This makes the full gauge-fixed quadratic part of the Lagrangian equal to (48) with $\psi_{ABCD}$ replaced by $\tilde\psi_{ABCD}$. The tilde will now be dropped, with the understanding that the field $\psi_{ABCD}$ takes values in $S^3_+\times S_+$ and contains the ghost auxiliary field. The operator that appears in the kinetic term is now the Dirac operator $d_A{}^{M'}$ acting on the corresponding spinor spaces (54). This is the elliptic Dirac operator, and the gauge has been fixed. All this is exactly analogous to the self-dual YM story, see (14).

Amplitudes

We will now characterise the cubic interaction (47) in terms of the amplitudes that it produces. In order to do so, we need to describe the physical states. However, we encounter a complication, which is that our description of self-dual gravity only makes sense with non-zero scalar curvature. Thus, the backgrounds we can consider are not flat, but have radius of curvature $1/M$. One can take the limit $M\to 0$ to approach the flat case, but in this limit the coupling constant in (47) blows up, so the limit is singular. Fortunately, there is a systematic procedure for taking the flat limit, described in [39]. In this procedure one essentially works with flat space and the usual Fourier transform. However, some factors of $M$ are kept in intermediate answers, and cancel out in (most of) the final expressions. Taking the flat limit, the linearised field equations read

  $\partial_A{}^{M'}\, a_{BCD\,M'} = 0 , \qquad \partial^A{}_{M'}\,\psi_{ABCD} = 0 .$   (55)

The operators that appear here are the usual chiral Dirac operators. They convert a spinor index of one type into the other. Squaring these operators one gets the Laplacian, which implies, passing to momentum space, that the momentum is null. The momentum vector translated into spinors is thus a product of two spinors (16). We now describe the polarisation spinors. Both negative and positive polarisations will contain factors of $M$. The solution to the first equation in (55) is given by the following spinor

  $\epsilon^-_{ABC\,M'} = M\,\frac{q_A q_B q_C\, k_{M'}}{\langle qk\rangle^3} .$   (56)

The factor in the denominator is needed to make this polarisation spinor of homogeneity degree zero in the reference spinor $q$. The prefactor of $M$ is needed to give it mass dimension zero, as is appropriate for the polarisation spinor of a field of mass dimension one. A general solution to the first equation in (55) is then a linear superposition of plane waves with such polarisation spinors. The other polarisation provides a solution to the second equation in (55). It is

  $\epsilon^+_{ABCD} = \frac{k_A k_B k_C k_D}{M} .$   (57)

Again, we need a dimensionful prefactor here to get mass dimension one, as is appropriate for a mass dimension two field. We can now compute the cubic vertex on shell. We insert two negative polarisation spinors for the $a$'s and the positive polarisation spinor for $\psi$. The computation is simplified by noticing that in the last term in the first line of (47) the commutator $\epsilon^{jkl}a^k a^l$ vanishes on two spinors of the type (56), so this term does not contribute. Also, in the first term we only need to compute the ASD parts of $\partial a$, because the self-dual parts vanish by the linearised field equations. This ASD part is $M\, q_A q_B\, k_{A'} k_{B'}/\langle qk\rangle^2$.
We then get the following result

  $\mathcal{M}_3 \sim \frac{1}{M}\,\frac{\langle q3\rangle^4\,[12]^2}{\langle q1\rangle^2\langle q2\rangle^2} .$   (58)

The fraction here is just the square of (19), and so using momentum conservation we get a multiple of the square of (20):

  $\mathcal{M}_3 \sim \frac{1}{M}\left(\frac{[12]^3}{[13][32]}\right)^{2} .$   (59)

This is the familiar gravitational 3-point amplitude, apart from the fact that instead of the familiar $1/M_p$, $M_p$ being the Planck mass, we have $1/M$. This is because our gravitons are normalised to the scale $M$, not to the scale $M_p$. In other words, the usual gravitons are normalised so that the perturbative expansion of the metric starts as $\eta + M_p^{-1} h$, where $\eta$ is the flat metric. In the story above, the similar expansion starts as $\eta + M^{-2}\,\partial a$. So, our amplitudes need to be appropriately rescaled by factors of $M_p$, and in the case of the 3-point amplitude this replaces $M$ with $M_p$. These subtleties are discussed in more detail in [39] for the case of full GR.

Berends-Giele current

Similarly to the YM case, one can compute the Berends-Giele current, given by the sum of all Feynman diagrams with one off-shell leg. The most interesting current is the one whose on-shell legs are negative polarisation gravitons. As for the YM theory, we write the $n$-point current $J_{ABCM'}(1,\dots,n)$ in a convenient form (60), in terms of scalar functions $J(1,\dots,n)$. Note that the convention here is to include the final off-shell leg propagator. The first current is just the polarisation spinor (56) itself (61). Note that the form (60) implies that the $n$-th current remains anti-self-dual, in the sense that $\partial^{AM'} J_{ABCM'} = 0$. This is confirmed by the computation sketched below. We now start building more complicated higher-point currents from the lower-order ones, following the Berends-Giele procedure. At second order we need to insert two order-one currents into the cubic vertex. As we have already discussed in the previous subsection, we only need the first term in (47), because the connection commutator in the second term vanishes on states of the type (60). From the first term we only need the product of the anti-self-dual parts of $da$, as the self-dual parts vanish. So, inserting two negative polarisation spinors into the cubic vertex and applying the final leg propagator we read off the result

  $J(1,2) = \frac{1}{\langle q1\rangle^2\langle q2\rangle^2}\,\frac{[12]}{\langle 12\rangle} .$

For the third current we then add the contributions from all 3 channels. After extracting a common factor, there are terms of two types. In one type we have squares such as $\langle q1\rangle^2$; these terms can be written in a compact form (omitting the common factor of the final propagator times $1/\langle q1\rangle^2\langle q2\rangle^2\langle q3\rangle^2$). Even though more complicated than in the case of YM theory, the pattern is now becoming clear: the current is given by a sum over trees on $n$ points. A general expression can be found in e.g. [40]. It can also be seen that the current is given by the expansion of a certain (reduced) determinant, see [40]. Precisely the same arguments as in the case of self-dual YM theory show that all tree-level amplitudes with $n > 3$ vanish. This is because to get such amplitudes one would need to remove the final propagator that was part of the definition of the current. However, there is no pole to cancel it, and the resulting amplitudes are zero.

Quantum theory

The discussion in this subsection parallels that of the YM case almost verbatim, so we will be brief. Again, the fact that $\Psi$ enters the Lagrangian linearly immediately tells us that the theory is one-loop exact. To study the theory at one loop we again use the background field method formalism. As in the YM case, we first discuss the case of a pure instanton background, with the background auxiliary field $\Psi = 0$.
We then make comments about the more general backgrounds. The action to study is (48), gauge-fixed as we explained above. As in the YM case, we first rewrite the (bosonic part of the) Lagrangian as a quadratic form built from the chiral part of the Dirac operator and its adjoint operator. The operator $\mathcal{D}$ in (70) is the usual Dirac operator acting on 4-component spinors with values in $S^3_+$. Its determinant is computed by first squaring the operator, to convert it into a second-order operator of Laplace type, see (33), and then using the standard heat kernel technology. The effective action is obtained by adding the contributions from the bosonic sector and from the ghosts, as in (32).

Absence of quantum divergences

As in the case of SDYM, the logarithmic divergences are captured by the heat kernel coefficient that has the form of an integral of the curvature squared. To see if any divergence is possible on an instanton background, we need to consider the topological invariants available. In the gravity case there are two such invariants: the Euler characteristic and the signature. Both can be expressed as integrals of the appropriate curvature components squared. Decomposing the Riemann curvature into its irreducible parts, the self- and anti-self-dual Weyl curvatures $W^\pm$, the tracefree part of Ricci $Z$, and the scalar curvature $R$, the Gauss-Bonnet formula gives the Euler characteristic as (with standard normalisations, see e.g. [37])

  $\chi = \frac{1}{8\pi^2}\int \Big( |W^+|^2 + |W^-|^2 - \tfrac12\, |Z|^2 + \tfrac{1}{24}\, R^2 \Big)\,\mathrm{vol} ,$

where we wrote the Weyl squared as the sum of the squares of its self- and anti-self-dual parts. The other topological invariant we need is the signature

  $\tau = \frac{1}{12\pi^2}\int \Big( |W^+|^2 - |W^-|^2 \Big)\,\mathrm{vol} .$

Any invariant quadratic in the curvature can be decomposed into irreducible parts of the curvature squared. These parts are $W^\pm$, $Z$, $R$. On an instanton we have $Z = 0$, $W^+ = 0$, so there are only two possible invariants constructed from the curvature squared. But there are also two topological numbers, and thus both of these invariants are proportional to quantities that are topological. We have, on an instanton background,

  $\int |W^-|^2\,\mathrm{vol} = -12\pi^2\,\tau , \qquad \int R^2\,\mathrm{vol} = 192\pi^2\,\chi + 288\pi^2\,\tau .$

Thus, on an instanton background any logarithmically divergent quantity is some linear combination of $\chi$ and $\tau$. These are integrals of total derivatives, and cannot contribute to the S-matrix. The theory is quantum finite, at least on instanton backgrounds. All this is as in self-dual YM. We should now discuss more complicated backgrounds with $\Psi \neq 0$. A background of this sort modifies the operator $\mathcal{D}$ to be considered in (70) by introducing an off-diagonal term $\Psi_{ij}\, d_A a^i\, d_A a^j$. By integration by parts this term can be written as a term of the type $d_A\Psi_{ij}\, a^i\, d_A a^j$, which contains only the first derivative of $a^i$, plus a curvature term of the schematic type $\Psi a^2$. This modifies the operator $\mathcal{D}$ by also adding a first-order differential operator in the lower-diagonal position. This $\mathcal{D}$ then has to be squared, and converted into an operator of Laplace type, in order to use the standard heat kernel methods. The details of this have not yet been worked out. However, as in the YM case, one can also appeal to a general argument. Thus, we can argue that the one-loop effective action cannot depend on $\Psi$. Indeed, if there were such a dependence, it would imply that there are one-loop amplitudes with external $\psi$ legs, but we know this is not the case. Thus, the computation of the one-loop effective action can be reduced to that on the $\Psi = 0$ background. Then no computation is necessary, as we know before doing any computation that all possible divergences are topological. Self-dual gravity is thus quantum finite.

One loop amplitudes

As in YM, the all-negative-helicity one-loop amplitudes can be conjectured based on soft and collinear limit arguments [41].
One expects these amplitudes to be purely rational, because all the cuts vanish. They can be explicitly computed (at low $n$) by using supersymmetry to replace the graviton propagating in the loop with a massless scalar, as explained in [41]. Nobody has computed these amplitudes from self-dual gravity Feynman rules, in particular because no covariant formulation of SDGR was previously available. It should be possible to do this using the self-dual gravity Feynman rules described above, but this remains to be done. We now quote the result for the 4-point amplitude from [41], see formula (17) of this reference. We omit the numerical factors. The amplitude is

  $\mathcal{M}_4 \sim \Big( \frac{[12][34]}{\langle 12\rangle\langle 34\rangle} \Big)^{2}\,\big( s_{12}^2 + s_{13}^2 + s_{14}^2 \big) ,$   (80)

where $s_{ij} = (k_i + k_j)^2$ are the usual Mandelstam variables. The general-$n$ expression can be found in [41]. As is noted in this reference, the general-$n$ all-minus amplitudes in both self-dual YM and gravity exhibit the same structure: they are both built from the corresponding off-shell currents in essentially the same way, see formulas (16), (23) of this reference. As in the YM case, it can be conjectured that the non-vanishing of these amplitudes is related to a quantum anomaly in the conservation of the currents responsible for the integrability of self-dual gravity. It remains to be seen if this interpretation is correct.

Relation to full GR

We now describe the relation of the above self-dual gravity theory to full GR. As in the case of YM, the full theory is obtained by adding extra terms to the Lagrangian. Unlike the YM case, in the case of gravity one needs an infinite number of such terms. The action for full GR in this language reads (schematically, in a form to be expanded in powers of $\Psi$)

  $S[A, \Psi] = M_p^2 \int \Big( \frac{1}{M^2\,\mathbb{1} + \Psi} \Big)_{ij}\, F^i\wedge F^j .$   (81)

The first term in this expansion is topological. The second term is a multiple of our self-dual gravity action (39). The third and the following terms are new. The new terms modify the propagator by making the $aa$ propagator non-zero as well, as in the case of YM theory. The new terms also add an infinite number of new interaction vertices. We note that, as in metric GR, the vertices that follow from (81) contain at most two derivatives. The action (81) is non-polynomial in one of the fields, namely in $\Psi$. This is no different in the usual metric formulation of GR, where the action is non-polynomial in the metric, due to the inverse metric explicitly appearing in its construction. In this sense (81) is no worse than the usual Einstein-Hilbert action. The above action is normalised so as to coincide with the Einstein-Hilbert action (with a cosmological term) on Einstein metrics. In (81), $M^2 = \Lambda/3$ and $M_p^2 = 1/8\pi G$. A quick explanation of why (81) is the correct gravity action is as follows. It is very convenient to introduce the notation

  $\Sigma^i := \Big( \frac{1}{M^2\,\mathbb{1} + \Psi} \Big)^{ij} F_j .$   (82)

The field equations are then as follows. When varying with respect to $\Psi_{ij}$ we get

  $\Sigma^i\wedge\Sigma^j - \tfrac13\,\delta^{ij}\,\Sigma^k\wedge\Sigma^k = 0 .$   (83)

Varying with respect to the connection we get

  $d_A\,\Sigma^i = 0 .$   (84)

These are the familiar equations from our discussion of self-dual gravity. Both equations together imply that $A^i$ is the self-dual part of the Levi-Civita connection compatible with the metric defined by $\Sigma^i$ via (42), see Lemmas 1, 2 in subsection 3.3. Then (83) implies that the curvature of the self-dual part of the Levi-Civita connection is self-dual as a 2-form. This is equivalent to the Einstein condition, as follows from the decomposition of the Riemann curvature, see [37] page 51. The equation (83) also identifies $\Psi_{ij}$ as a multiple of the self-dual part of the Weyl curvature. The auxiliary fields $\Psi_{ij}$ can be integrated out from (81) to obtain a "pure connection" description of GR. The corresponding Lagrangian is analogous to the usual $F^2$ Lagrangian for YM.
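The expansion in powers of $\Psi$ mentioned above is worth displaying explicitly (a sketch based on the schematic form of (81) given above):

\[
\Big(\frac{1}{M^2\mathbb{1} + \Psi}\Big)_{ij} = \frac{1}{M^2}\Big( \delta_{ij} - \frac{\Psi_{ij}}{M^2} + \frac{(\Psi^2)_{ij}}{M^4} - \cdots \Big) ,
\]

so that

\[
S[A,\Psi] = \frac{M_p^2}{M^2}\int F^i\wedge F^i \;-\; \frac{M_p^2}{M^4}\int \Psi_{ij}\, F^i\wedge F^j \;+\; \cdots .
\]

The first term is a multiple of the Pontryagin number and is topological; the second reproduces (39), with $M_p^2/M^4$ fixing the normalisation; the higher terms supply the infinite set of new vertices that convert SDGR into full GR.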
The pure connection formulation of GR was first described in [42], and is reviewed from a more mathematical perspective in [38]. We note that the results of, in particular, [33] suggest that it may be more useful to work in the formulation (81), and not in the formulation with $\Psi$ integrated out, also for full GR. For self-dual gravity one does not have this option, as only the first-order description with the field $\Psi$ is possible. Because of the relation (81) between the self-dual and full gravity theories, we can immediately conclude that some of the amplitudes of full GR are correctly captured by the simpler self-dual gravity theory. It can then be anticipated, in analogy with [8], that the integrability of self-dual gravity may be of help in understanding the full gravity theory. Whether this point of view can be useful remains to be seen.

Another covariant formulation

There exists another covariant formulation of self-dual gravity [43]. It contains more fields, but allows one to describe the flat-space instantons as well, unlike the above connection formulation. However, this other formulation has the drawback of being much more non-linear than (39). In particular, it is no longer linear in one of the fields, and so the argument already used for one-loop exactness is no longer valid for this theory. So, it is not clear if this formulation is a good starting point for the construction of the quantum theory. We refrain from giving details here, referring the interested reader to [43].

Twistor space description

The fact that in our formalism the instanton condition is first order in derivatives suggests that there is a link to the twistor description, where the instanton condition takes the form of the Cauchy-Riemann holomorphicity equation, which is also first order. This turns out to be true: there is a direct link between the twistor space description of instantons and (39). Details of this relation have been worked out in [44], to which we refer the reader for more details. To describe it, we note that the connection $A^i$, which we will now view as an $SU(2)$ connection $A^{AB}$, defines a set of 1-forms on the $\mathbb{C}^2$ bundle over the space. Let $\pi^A$ be the (holomorphic) coordinates on $\mathbb{C}^2$. Then we have

  $D\pi^A = d\pi^A + A^A{}_B\,\pi^B .$

There are also the complex conjugate one-forms. In turn, the connection can be defined geometrically as the 4-dimensional distribution that is in the kernel of $D\pi^A$ and its complex conjugate. In twistor theory one works with the projective twistor space. To pass to this we consider the Euler vector fields $E = \pi^A\,\partial/\partial\pi^A$, $\hat E = \hat\pi^A\,\partial/\partial\hat\pi^A$. For our notation on the hat operator, as well as our spinor notations, see the Appendix. The forms in the total space of the $\mathbb{C}^2$ bundle that vanish when $E, \hat E$ are inserted descend to the projective twistor space. Such a form $\alpha$ is of a definite degree of homogeneity $O(n, m)$ if $\mathcal{L}_E\alpha = n\alpha$, $\mathcal{L}_{\hat E}\alpha = m\alpha$. Of the two components of the 1-form $D\pi^A$, which can be chosen to be $\hat\pi_A D\pi^A$ and $\pi_A D\pi^A$, only the second one descends to the projective twistor space. Thus, we define

  $\tau := \pi_A\, D\pi^A .$   (87)

It descends to a form of degree $O(2,0) \equiv O(2)$, as is easy to check. Another simple computation gives

  $d\tau = D\pi_A\wedge D\pi^A + \pi_A\pi_B\, F^{AB} ,$   (88)

where $F^A{}_B = dA^A{}_B + A^A{}_C\wedge A^C{}_B$ is the curvature of our connection $A^{AB}$. The first term in $d\tau$ does not descend to the projective twistor space, as is easy to see using the decomposition of the identity

  $\delta_A{}^B = \frac{1}{\langle\pi\hat\pi\rangle}\,\big( \pi_A\,\hat\pi^B - \hat\pi_A\,\pi^B \big) .$

Inserting this decomposition of the identity into the first term in (88) produces terms containing $\hat\pi_A D\pi^A$, which gets projected to zero in $PT$. Thus, $d\tau$ in $PT$ is just the curvature term. Note that $d\tau$ projected to $PT$ is a form of degree $O(2)$.
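For completeness, here is the short computation behind (88) (a sketch; we assume conventions in which $A_{AB} = A_{(AB)}$, so that $A_B{}^C\,\pi_C = -\,A^C{}_B\,\pi_C$):

\[
d(D\pi^A) = dA^A{}_B\,\pi^B - A^A{}_B\wedge d\pi^B = F^A{}_B\,\pi^B - A^A{}_B\wedge D\pi^B ,
\]

and therefore

\[
d\tau = d\pi_A\wedge D\pi^A + \pi_A\, d(D\pi^A)
= \big( d\pi_B + A_B{}^C\,\pi_C \big)\wedge D\pi^B + \pi_A\pi_B\, F^{AB}
= D\pi_A\wedge D\pi^A + \pi_A\pi_B\, F^{AB} .
\]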
One can then forget about the connection and consider $O(2)$-valued 1-forms on the projective twistor space $PT$. To define the notion of the degree of homogeneity one needs to assume that $PT = M\times S^2$. We can now write the following action, see [45],

  $S[\psi,\tau] = \int_{PT} \psi\wedge\tau\wedge d\tau\wedge d\tau .$   (91)

Here, in order for the integral over the fiber to make sense, $\psi$ must be a form of homogeneity degree $O(-6)$. Clearly $\psi$ must also be a 1-form in the total space. We do not have any pre-defined notion of a complex structure on our projective twistor space, and so we cannot require any holomorphicity properties of $\psi$. Varying (91) with respect to $\psi$ one gets

  $\tau\wedge d\tau\wedge d\tau = 0 .$   (92)

In view of (88), this is clearly equivalent to

  $\pi_A\pi_B\pi_C\pi_D\; F^{AB}\wedge F^{CD} = 0 \quad \text{for all } \pi , \qquad \text{i.e.} \qquad F^{(AB}\wedge F^{CD)} = 0 .$   (93)

This is precisely our condition (40) written in the spinor language. It is also clear that the above action corresponds to our spacetime action (39), with $\Psi$ given by the Penrose transform of $\psi$ (94). More details on this twistor description can be found in [45].

Acknowledgments

The author was supported by ERC Starting Grant 277570-DIGT, and is grateful to Evgeny Skvortsov for asking a question about self-dual gravity that led to this paper. The author is grateful to Yannick Herfray for many fruitful discussions about self-dual gravity, and for reading a draft of this paper. It is also important to acknowledge that the idea to expand in a power series in (81) belongs to Yannick. The author is grateful to W. Siegel for attracting his attention to [23].

Spinors

The aim of this Appendix is to establish our spinor notations. For concreteness, we only describe the Riemannian signature case. In this case the 4 coordinates of $\mathbb{R}^4$ can be collected into the matrix

  $x = x^0\,\mathbb{1} + i\, x^a\,\sigma^a ,$   (95)

where $\sigma^a$ are the standard Pauli matrices. It is clear that

  $\det(x) = (x^0)^2 + x^a x^a$

is the usual norm of a vector in $\mathbb{R}^4$. We can alternatively write the flat $\mathbb{R}^4$ metric as

  $ds^2 = \det(dx) .$   (97)

The group of rotations $SO(4)$, or rather its double cover $SU(2)\times SU(2)$, acts on $\mathbb{R}^4$ as

  $x \to g_L\, x\, g_R^\dagger , \qquad g_{L,R}\in SU(2) .$   (98)

There are then two types of spinors. We have the so-called unprimed $\lambda^A$ (primed $\lambda^{A'}$) spinors, which transform in the fundamental representation of $SU(2)_L$ ($SU(2)_R$). We shall refer to unprimed spinors as taking values in $S_+$, and to primed spinors as taking values in $S_-$. The bilinear form on the space of both types of spinors is given by

  $\langle\lambda\eta\rangle = \lambda^T \epsilon\,\eta , \qquad \epsilon = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} .$

The row $\lambda^A := \lambda^T\epsilon$ can be referred to as the spinor with its index raised, so that the spinor contraction takes the form $\langle\lambda\eta\rangle = \lambda^A\eta_A$. Because of the property $\epsilon g^\dagger = g^T\epsilon$, valid for any $g\in SU(2)$, we see that the spinors with a raised index transform with the inverse group element. Thus, we see that the matrix $x$ transforms under (98) as an object $x^A{}_{A'}$, with a spinor index of one type up and of the other type down. There is also the Hermitian pairing $\lambda^\dagger\eta$; this pairing applied to a spinor with itself is non-vanishing. It also introduces an anti-linear map $\hat\lambda = \epsilon\lambda^*$, so that $\lambda^\dagger\eta = \langle\hat\lambda\,\eta\rangle$. Thus, the operation $\hat{\cdot}$ maps the spaces $S_\pm$ into themselves: $\hat{\cdot} : S_\pm \to S_\pm$. It is also easy to see that the hat operation squares to minus the identity: $\hat{\hat\lambda} = -\lambda$. Using this anti-linear map we can give another characterisation of the matrices (95): these can be characterised as those with the property $\epsilon x^*\epsilon^T = x$. However, this property is just the statement that the operation $\hat{\cdot}$ applied to both spinor indices of $x$ leaves this object unchanged. So, matrices of the type (95) are real objects in $S_+\times S_-$ in the sense of the hat operation. Note that there are no real objects in $S_\pm$ themselves. It is also convenient to use objects with both indices down, $x_{AA'}$ (or both up). Doing so, one translates every vector (or one-form) into an object with two spinor indices of different types. Let us rewrite the metric (97) in the spinor notations.
First, using $dx^\dagger = \epsilon^T\, dx^T\,\epsilon = -\,\epsilon\, dx^T\,\epsilon$, we see that the matrix $dx^\dagger$ is just minus the matrix $dx^A{}_{A'}$ with the index $A$ raised and $A'$ lowered. And so the flat metric (97) takes the spinor form

  $ds^2 = \det(dx) = \tfrac12\,\mathrm{tr}\big( dx\, dx^\dagger \big) = -\,\tfrac12\, dx^A{}_{A'}\, dx_A{}^{A'} .$
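The identities of this Appendix are easy to confirm numerically. The following sketch (NumPy; all parametrisations are our own, chosen only for illustration) checks the determinant formula (95)-(97), the $SU(2)$ transformation property, and the reality condition:

import numpy as np

rng = np.random.default_rng(2)
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# x = x^0 1 + i x^a sigma^a realises R^4 as 2x2 matrices with det = |x|^2
x = rng.normal(size=4)
X = x[0] * np.eye(2) + 1j * sum(x[a + 1] * sig[a] for a in range(3))
print(abs(np.linalg.det(X).real - np.dot(x, x)))     # ~0

# a random SU(2) element from a unit quaternion
a = rng.normal(size=4); a /= np.linalg.norm(a)
g = a[0] * np.eye(2) + 1j * sum(a[i + 1] * sig[i] for i in range(3))
print(np.max(np.abs(g @ g.conj().T - np.eye(2))))    # ~0: g is unitary
print(np.max(np.abs(eps @ g.conj().T - g.T @ eps)))  # ~0: eps g^dag = g^T eps

# reality of x in S+ x S-: eps x* eps^T = x
print(np.max(np.abs(eps @ X.conj() @ eps.T - X)))    # ~0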
Scattering amplitudes in N=2 Maxwell-Einstein and Yang-Mills/Einstein supergravity

We expose a double-copy structure in the scattering amplitudes of the generic Jordan family of N=2 Maxwell-Einstein and Yang-Mills/Einstein supergravity theories in four and five dimensions. The Maxwell-Einstein supergravity amplitudes are obtained through the color/kinematics duality as a product of two gauge-theory factors; one originating from pure N=2 super-Yang-Mills theory and the other from the dimensional reduction of a bosonic higher-dimensional pure Yang-Mills theory. We identify a specific symplectic frame in four dimensions for which the on-shell fields and amplitudes from the double-copy construction can be identified with the ones obtained from the supergravity Lagrangian and Feynman-rule computations. The Yang-Mills/Einstein supergravity theories are obtained by gauging a compact subgroup of the isometry group of their Maxwell-Einstein counterparts. For the generic Jordan family this process is identified with the introduction of cubic scalar couplings on the bosonic gauge-theory side, which through the double copy are responsible for the non-abelian vector interactions in the supergravity theory. As a demonstration of the power of this structure, we present explicit computations at tree level and one loop. The double-copy construction allows us to obtain compact expressions for the supergravity superamplitudes, which are naturally organized as polynomials in the gauge coupling constant.

Introduction

Perturbative S matrices of gravitational theories in flat spacetime enjoy a particularly simple structure, which remains highly obscure at the level of the Lagrangians and in covariant computations based on Feynman diagrams. Evidence for this structure was first brought to light through the Kawai-Lewellen-Tye (KLT) relations [1,2], which demonstrate that the information encoded in a gauge-theory tree-level S matrix is sufficient to construct a tree-level gravity S matrix. More recently, Bern, Carrasco and Johansson (BCJ) proposed a set of Lie-algebraic relations for the kinematic building blocks of gauge-theory loop-level amplitudes [3,4], which mirror analogous relations obeyed by the corresponding color building blocks. Their existence mandates that gravity amplitudes can be obtained from gauge-theory amplitudes through a double-copy construction. The double copy trivializes the construction of gravity loop-level integrands when the corresponding gauge amplitudes are available in a presentation that manifestly satisfies this duality between color and kinematics. The existence of amplitude presentations which obey color/kinematics duality has been conjectured for broad classes of Yang-Mills (YM) theories to all loop orders and multiplicities [4]. There is by now substantial evidence for the duality at tree level. Multiple versions of duality-satisfying tree amplitudes in pure super-Yang-Mills (sYM) theories, including an arbitrary number of legs and in D dimensions [5,6,7], have been constructed [8,9,10,11,12,13,14]. The duality enforces the so-called BCJ relations between color-ordered tree amplitudes, which have been proven from both string- [15,16] and field-theory [17,18,19] perspectives. Duality-satisfying structures are known to arise naturally from the field-theory limit of string theory using the pure-spinor formalism [7,20,21,22,23,24].
Attempts toward a Lagrangian understanding of the duality are ongoing [25,26], and the study of the kinematic Lie algebra underlying the duality has made significant advances [9,27,28,29,30]. Color/kinematics duality played a key role in the advent of the scattering equations [19,31,32,33,34,35]. That the duality is not limited to YM theories, but applies to certain Chern-Simons-matter theories, was observed in refs. [36,37,38]. Amplitude presentations with manifest color/kinematics duality at loop level were first constructed for the maximal sYM theory [39,40,41] and have played a critical role in enabling state-of-the-art multiloop computations in (ungauged) N ≥ 4 supergravities [42,43,44,45,46,47]. These calculations, as well as earlier ones, have exposed unexpected ultraviolet (UV) cancellations. The four-point amplitudes of N = 8 supergravity were shown to be manifestly UV-finite through four loops [48,49,50,41], and later work proved that the N = 8 theory cannot have a divergence before seven loops in four dimensions [51,52,53,54]. Interestingly, finiteness until this loop order agrees with earlier naïve power counting based on the assumed existence of an off-shell N = 8 superspace [55]. The potential existence of seven-loop divergences has been suggested from several perspectives, including an analysis of string theory dualities [56], a first-quantized world-line approach [57], and light-cone supergraphs [58]. However, it has also been argued that the theory may remain finite beyond seven loops [59]. Pure N = 4 supergravity has been shown to be UV-finite at three loops [44,45], and to diverge at four loops [47]. The authors of ref. [47] suggested that the appearance of the four-loop divergence should be related to the quantum anomaly of a U(1) subgroup of the SU(1, 1) global symmetry of the theory [60]. More recent work studied duality-satisfying presentations for one-loop gauge-theory amplitudes, with and without adjoint matter, for N ≤ 2 supersymmetric theories [61,62,63,64]. In ref. [61] it was first shown that there exist double-copy structures for particular classes of gravity theories with N < 4 that can be constructed as field-theory orbifolds of N = 8 supergravity. To access a broader spectrum of supergravities with reduced N < 4 supersymmetry it is necessary to generalize color/kinematics duality by including matter fields that do not transform in the adjoint representation; the bi-fundamental [69] and fundamental [70] representations are natural generalizations. In these cases, each line of a gauge-theory graph has a definite gauge-group representation through its color factor, and, generically, proper double copies are obtained by matching up pairs of numerator factors that belong to conjugate (or, alternatively, identical) color representations [69,70]. Further generalizations (deformations) of supergravity theories are achieved by gauging. In supergravity theories with abelian vector fields, one can gauge a subgroup of the global symmetry group, and/or the R-symmetry group, such that some of the vector fields become gauge fields of the gauged group. This (sub)set of vector fields must transform in the adjoint representation of the corresponding gauge group. The minimal couplings introduced by this gauging break supersymmetry; however, supersymmetry is restored by introducing additional terms into the Lagrangian.
For example, one can gauge the SO(8) subgroup of the full U-duality group E_{7(7)} of maximal supergravity in D = 4 [71], thereby turning all vector fields into non-abelian gauge fields of SO(8) [72]. The gauging-induced potential does not vanish for the maximally (N = 8) supersymmetric ground state; instead, it produces a stable anti-de Sitter (AdS) vacuum. This means that, in this maximally supersymmetric background, all the fields of SO(8)-gauged N = 8 supergravity form an irreducible supermultiplet of the AdS superalgebra OSp(8|4, R) with even subalgebra SO(8) ⊕ Sp(4, R). In five dimensions, the maximal Poincaré supergravity theory has 27 vector fields [73]; however, there is no simple group of dimension 27. The problem of gauging the maximal supergravity theory was unresolved until the massless supermultiplets of the N = 8 supersymmetry algebra SU(2,2|4) in AdS_5 were constructed in ref. [74]. The massless five-dimensional N = 8 AdS graviton supermultiplet turned out to have only 15 vector fields; the remaining 12 vector fields must instead be dualized to tensor fields. Hence it was pointed out in ref. [74] that one can at most gauge an SU(4) subgroup of the R-symmetry group USp(8) in five dimensions. Gauged maximal supergravity in D = 5 with simple gauge groups SO(6−p, p) was subsequently constructed in refs. [75,76].2 In these SO(6−p, p) gaugings of maximal supergravity all the supersymmetric vacua turned out to be AdS. Shortly thereafter an SU(3,1)-gauged version of maximal supergravity in five dimensions, which admits an N = 2 supersymmetric ground state with a Minkowski vacuum, was discovered [78]. In this ground state of the SU(3,1)-gauged maximal supergravity, one has unbroken SU(3) × U(1) gauge symmetry and an SU(2)_R local R-symmetry. The SO(8)-gauged N = 8 supergravity in D = 4 and the SO(6)-gauged N = 8 supergravity in D = 5 describe the low-energy effective theories of M-theory on AdS_4 × S^7 and of the type-IIB superstring on AdS_5 × S^5, respectively. In general, compactification of M-theory or superstring theories with fluxes leads to gauged supergravity theories. In recent years the embedding-tensor formalism for constructing gauged supergravities was developed, which is especially well suited for studying flux compactifications; for reviews and references on the subject we refer to refs. [96,97]. The double-copy construction of the amplitudes of ungauged maximal supergravity has not yet been consistently or comprehensively extended to gauged versions of the theory. One conceivable obstacle in this endeavor is the fact that fully-supersymmetric ground states of gauged maximal supergravity theories are AdS, while the methods for the double-copy construction to date require a flat background. However, the fact that the SU(3,1)-gauged maximal supergravity in D = 5 admits a stable N = 2 supersymmetric ground state with vanishing cosmological constant does suggest that flat-space techniques for the double-copy construction can be used for this theory. Indeed, one should expect that such double-copy constructions are allowed. Quite some time ago, Bern, De Freitas and Wong [79] constructed tree-level amplitudes in Einstein gravity coupled to YM theory, in four dimensions and without supersymmetry, through a clever use of the KLT relations. They realized that color-dressed YM amplitudes could be rewritten as KLT products between color-stripped YM amplitudes and amplitudes of a scalar φ³ theory. The scalar theory is invariant under two Lie groups, color and flavor (see refs.
[25,80,81,34] for more recent work on this theory). Remarkably, after applying the KLT formula to the color-stripped amplitudes, the global flavor group of the scalars was promoted to the gauge group of the gluons. They then minimally coupled the φ³ theory to YM theory which, through KLT, allowed them to compute single-trace tree-level amplitudes in a Yang-Mills/Einstein theory [79], i.e. the amplitudes with the highest power of the trilinear scalar coupling g′. Unfortunately, without the modern framework of color/kinematics duality, it was not clear how to generalize this construction to subleading powers of g′, to multiple-trace amplitudes and, more importantly, to loop-level amplitudes. This generalization will be achieved in the current work. Focusing on suitable theories with a flat-space vacuum, it is well known that four- and five-dimensional N ≤ 4 supergravities with additional matter multiplets admit a very rich family of gaugings that preserve supersymmetry. For example, N = 4 supergravity coupled to n vector supermultiplets has the global symmetry group SO(6,n) × SU(1,1) in D = 4 and SO(5,n) × SO(1,1) in D = 5. The family of four- and five-dimensional Maxwell-Einstein supergravity theories (MESGTs) corresponding to N = 2 supergravity coupled to n vector multiplets is even richer. A general construction of N = 2 MESGTs in D = 5 was given in ref. [82] and a construction of matter-coupled N = 2 supergravity theories in D = 4 was given in refs. [83,84]. An important feature of N = 2 MESGTs is the fact that all the bosonic fields are R-symmetry singlets. As a consequence, gauging a subgroup of their global symmetry groups in D = 5 does not introduce a potential, and hence one always has supersymmetric Minkowski vacua in the resulting Yang-Mills/Einstein supergravity theories (YMESGTs) in both D = 5 and D = 4. Gaugings of five-dimensional N = 2 MESGTs were thoroughly studied in refs. [85,86,87,88]. The gaugings of four-dimensional MESGTs were originally studied in refs. [89,90]. For a complete list of references, we refer the reader to the book by Freedman and van Proeyen [91]. Ungauged N = 2 MESGTs coupled to hypermultiplets in four and five dimensions arise as low-energy effective theories of type-II superstrings and of M-theory compactified on a Calabi-Yau threefold, respectively. They can also be obtained as low-energy effective theories of the heterotic string compactified on a K3 surface down to six dimensions, followed by toroidal compactifications to five and four dimensions. In general, six-dimensional N = (1,0) supergravity coupled to n_T self-dual tensor multiplets and n_V vector multiplets reduces to a D = 5 MESGT with (n_T + n_V + 1) vector multiplets. Such six-dimensional supergravity theories, coupled in general to hypermultiplets, can be obtained from M-theory compactified to six dimensions on K3 × S^1/Z_2 [92,93]. They can also be obtained from F-theory on an elliptically fibered Calabi-Yau threefold [94].3 If we restrict the low-energy effective theory to its vector sector, the resulting MESGT Lagrangian in five dimensions can be fully constructed by knowing a particular set of trilinear vector couplings. These are completely specified by a symmetric tensor C_{IJK}, where the indices I, J, K label all the vectors in the theory including the graviphoton [82].
For the MESGT sector of the N = 2 supergravity theory obtained by compactification on a Calabi-Yau threefold, the C tensor simply corresponds to the triple intersection numbers, which are topological invariants. Whenever the C tensor is invariant under some symmetry transformation, the D = 5 MESGT Lagrangian will possess a corresponding (global) symmetry. Indeed, this is a consequence of the fact that D = 5 MESGTs are uniquely defined by their C tensors [82].4 The special cases in which the five-dimensional N = 2 MESGT has a symmetric target space have long been known in the literature [82,98,99,100]. The MESGTs with symmetric scalar manifolds (target spaces) G/K such that their C tensors are G-invariant are in one-to-one correspondence with Euclidean Jordan algebras of degree three, whose norm forms are given by the C tensor. There exists an infinite family of reducible Jordan algebras R ⊕ Γ_n of degree three, which describes the coupling of N = 2 supergravity to an arbitrary number (n ≥ 1) of vector multiplets. This class of theories is named the generic Jordan family in the literature, and yields target spaces of the form
\[
\mathcal{M} = \frac{SO(n-1,1)}{SO(n-1)} \times SO(1,1)\,.
\]
Additionally, there exist four unified magical MESGTs constructed from the four simple Jordan algebras of degree three which can be realized as 3 × 3 hermitian matrices over the four division algebras R, C, H, O. These theories describe the coupling of five-dimensional N = 2 supergravity to 5, 8, 14 and 26 vector multiplets, respectively. As a first step in the systematic study of amplitudes in gauged supergravity theories, in this paper we initiate the study of amplitudes of N = 2 YMESGTs constructed as double copies using color/kinematics duality. In particular, we focus on those N = 2 YMESGTs whose gauging corresponds to a subgroup of the global symmetry group of the generic Jordan family of MESGTs in D = 5 and D = 4 Minkowski spacetime. Furthermore, we restrict our study to compact gauge groups, and leave to future work the study of more general gaugings in the generic Jordan family as well as in the magical supergravity theories. We note that the gauging of the generic Jordan family of supergravities is quite important from a string-theory perspective. The low-energy effective theory that arises from the heterotic string compactified on K3 is described by six-dimensional N = (1,0) supergravity coupled to one self-dual tensor multiplet together with some sYM multiplets and hypermultiplets. Under dimensional reduction this theory yields D = 5 YMESGTs belonging to the generic Jordan family with a compact gauge group coupled to a certain number of hypermultiplets. We will not discuss the interactions of hypermultiplets in the current work; since they always appear in pairs in the effective action, they can be consistently truncated away. Our main results are: (1) the construction of the ungauged amplitudes as double copies between elements from a pure N = 2 sYM theory and a family of N = 0 YM theories that can be viewed as the dimensional reductions of D = n + 4 pure YM theories; (2) the introduction of relevant cubic scalar couplings to the N = 0 gauge theory which, through the double copy, are responsible for the interactions of the non-abelian gauge fields of the YMESGT. As in ref. [79], the gauge symmetry in the supergravity theory originates from a global symmetry of the N = 0 YM theory employed in the double copy.
Our construction is expected to give the complete perturbative expansion of the S matrix in these theories, including the full power series in the gauge coupling and all multi-trace terms, at arbitrary loop orders.5 Although the current work is limited to N = 2 supergravity theories, we expect that our construction straightforwardly extends to N = 4 supergravity coupled to vector multiplets, as well as to some matter-coupled supergravities with N < 2. The former theory can be obtained by promoting the N = 2 sYM theory to N = 4 sYM, and the latter theories can be obtained by truncating the spectrum of the N = 2 sYM theory, while in both cases leaving the bosonic YM theory unaltered.

5 Modulo issues that are common to all formalisms: possible UV divergences and quantum anomalies.

In section 2, we provide a brief review of color/kinematics duality, identify the types of gauged supergravities which can most straightforwardly be made consistent with the double-copy construction, and show that, on dimensional grounds, gauge interactions in the supergravity theory require the introduction of cubic scalar couplings in one of the gauge factors. In section 3, we review the Lagrangians for general MESGTs and YMESGTs in five and four dimensions, giving particular attention to gauged and ungauged theories belonging to the generic Jordan family. We show how the full Lagrangian can be constructed from the C tensors controlling the F ∧ F ∧ A interactions in five dimensions. Moreover, we discuss the fundamental role played by duality transformations in four dimensions and the corresponding symplectic structure. We also show that, for theories belonging to the generic Jordan family, it is possible to find a symplectic frame for which (1) the linearized supersymmetry transformations act diagonally on the flavor indices of the fields when the Lagrangian is expanded around a base point, and (2) the three-point amplitudes have manifest SO(n) symmetry. Section 4 discusses in detail the gauge-theory factors entering the double-copy construction for theories in the generic Jordan family. We show that we can take as one of the factors the dimensional reduction to four dimensions of the pure YM theory in n + 4 dimensions. We also identify particular cubic couplings which are responsible for the non-abelian interactions in the YMESGT obtained with the double copy. These couplings have the mass dimension required by the general argument presented in section 2. In section 5, we compare the amplitudes from the double-copy construction with the ones from the Lagrangian discussed in section 3. Introducing particular constrained on-shell superfields, we obtain compact expressions for the superamplitudes of the theory. We show that, when we employ the Feynman rules obtained from the Lagrangian in the particular symplectic frame identified at the end of section 3, the two computations lead to the same three-point amplitudes and that the double-copy superfields are mapped trivially (i.e. by the identity map) into the Lagrangian on-shell superfields. We argue that, since the C tensors are fixed by the three-point interactions, the double-copy construction should continue to yield the correct amplitudes at higher points. We run some further checks on the amplitudes at four points and present compact expressions for the five-point amplitudes. In section 6 we present some amplitudes at one loop, while our concluding remarks are collected in section 7.
For the readers' convenience, in appendix A we list our conventions and provide a summary of the notation employed throughout the paper. Finally, in appendix B we present expansions for the various quantities entering the bosonic Lagrangian in the symplectic frame discussed in section 3. These expansions can be used to obtain the Feynman rules for the computation outlined in section 5.

2 Designer gauge and supergravity theories

2.1 Color/kinematics duality: a brief review

It has long been known [1,2] that, at tree level, the scattering amplitudes of gravity and supergravity theories related to string-theory compactifications on tori exhibit a double-copy structure, being expressible as sums of products of amplitudes of certain gauge theories. This structure was more recently clarified in refs. [3,4], where it was realized that there exist underlying kinematical Lie-algebraic relations that control the double-copy factorization. The integrands of gauge-theory amplitudes are best arranged in a cubic (trivalent) graph-based presentation that exhibits a particular duality between their color and kinematic numerator factors. Once such a presentation is obtained, the double-copy relation between the integrands of gauge-theory and gravity amplitudes extends smoothly to loop level. In this organization, the gauge-theory L-loop m-point amplitude for adjoint particles is given by
\[
\mathcal{A}^{L\text{-loop}}_{m} = i^{L}\, g^{m-2+2L} \sum_{i \in \Gamma} \int \prod_{l=1}^{L} \frac{d^{D} p_{l}}{(2\pi)^{D}}\, \frac{1}{S_{i}}\, \frac{n_{i}\, c_{i}}{D_{i}}\,. \tag{2.1}
\]
Here the sum runs over the complete set Γ of L-loop m-point graphs with only cubic vertices, including all permutations of external legs, the integration is over the L independent loop momenta p_l, and the denominator D_i is given by the product of all propagators of the corresponding graph. The coefficients c_i are the color factors obtained by assigning to each trivalent vertex in a graph a factor of the gauge-group structure constant $\hat f^{\hat a\hat b\hat c} = i\sqrt{2}\, f^{\hat a\hat b\hat c} = \mathrm{Tr}([T^{\hat a}, T^{\hat b}]\, T^{\hat c})$, while respecting the cyclic ordering of edges at the vertex. The hermitian generators $T^{\hat a}$ of the gauge group are normalized so that $\mathrm{Tr}(T^{\hat a} T^{\hat b}) = \delta^{\hat a\hat b}$. The coefficients n_i are kinematic numerator factors depending on momenta, polarization vectors and spinors. For supersymmetric amplitudes in an on-shell superspace they will also contain Grassmann parameters representing the odd superspace directions. These Grassmann parameters transform in the fundamental representation of the on-shell R-symmetry group of the theory. There is one such parameter for each external state; in four-dimensional theories there is a close relation between the number of Grassmann parameters and the helicity of the corresponding external state. The symmetry factors S_i of each graph remove any overcount introduced by the summation over all permutations of external legs (included in the definition of the set Γ), as well as any internal automorphisms of the graph (i.e. symmetries of the graph with fixed external legs). A gauge-theory amplitude organized as in equation (2.1) is said to manifestly exhibit color/kinematics duality [3,4] if the kinematic numerator factors n_i of the amplitude are antisymmetric at each vertex, and satisfy Jacobi relations around each propagator in one-to-one correspondence with the color factors. Schematically, the latter constraint is
\[
c_{i} + c_{j} + c_{k} = 0 \quad \Longrightarrow \quad n_{i} + n_{j} + n_{k} = 0\,. \tag{2.2}
\]
It was conjectured in refs. [3,4] that such a representation exists to all loop orders and multiplicities for wide classes of gauge theories.6
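To make the color side of the constraint (2.2) concrete, the following minimal Python sketch (our illustration, not part of the original text; it uses su(2), whose structure constants are the Levi-Civita symbol) builds the three four-point color factors associated with the s-, t- and u-channel cubic graphs and verifies that they sum to zero for every assignment of external adjoint indices:

    import numpy as np

    # su(2) structure constants: f^{abc} = epsilon^{abc}
    f = np.zeros((3, 3, 3))
    f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1.0
    f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1.0

    # four-point color factors: two cubic vertices joined by one internal line
    #   c_s = f^{a1 a2 e} f^{e a3 a4},  c_t = f^{a2 a3 e} f^{e a1 a4},
    #   c_u = f^{a3 a1 e} f^{e a2 a4}
    c_s = np.einsum('abe,ecd->abcd', f, f)
    c_t = np.einsum('bce,ead->abcd', f, f)
    c_u = np.einsum('cae,ebd->abcd', f, f)

    # Jacobi identity around the internal propagator: c_s + c_t + c_u = 0
    print(np.abs(c_s + c_t + c_u).max())   # 0.0

Color/kinematics duality requires the kinematic numerators n_s, n_t, n_u of the same graphs to obey exactly this relation.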
6 While amplitudes with manifest color/kinematics duality have been explicitly constructed both at tree level and at loop level in various theories [5,6,7,8,9,10,11,12,13,14,39,40,41,61,62,63,65,64,69,70], they are often somewhat difficult to find because of the non-uniqueness of the numerators.

At tree level it is possible to test whether such a representation exists by verifying relations between color-stripped partial amplitudes which follow from the duality [3]. An example of such BCJ relations for adjoint-representation color-ordered tree amplitudes is
\[
s_{24}\, A^{\rm tree}(1,3,2,4) = s_{12}\, A^{\rm tree}(1,2,3,4)\,, \tag{2.3}
\]
together with its relabelings. We refer the reader to ref. [3] for a detailed description of the all-multiplicity amplitude relations. In section 4.2 we shall use these relations to demonstrate that a certain Yang-Mills-scalar theory which appears in our construction exhibits color/kinematics duality. Starting from two copies of gauge theories with amplitudes obeying color/kinematics duality, and assuming that at least one of them does so manifestly, the amplitudes of the related supergravity theory are then trivially given in the same graph organization but with the color factors of one replaced by the numerator factors of the other:
\[
\mathcal{M}^{L\text{-loop}}_{m} = i^{L+1} \left(\frac{\kappa}{2}\right)^{m-2+2L} \sum_{i \in \Gamma} \int \prod_{l=1}^{L} \frac{d^{D} p_{l}}{(2\pi)^{D}}\, \frac{1}{S_{i}}\, \frac{n_{i}\, \tilde n_{i}}{D_{i}}\,. \tag{2.4}
\]
Here κ is the gravitational coupling. In writing this expression one formally identifies g → κ/2; additional parameters that may appear in the gauge-theory numerator factors are to be identified separately and on a case-by-case basis with supergravity parameters. The Grassmann parameters that may appear in n_i and/or ñ_i are inherited by the corresponding supergravity amplitudes; they imply a particular organization of the asymptotic states labeling supergravity amplitudes in multiplets of linearized supersymmetry. Since these linearized transformations, given by shifts of the Grassmann parameters, are inherited from the supersymmetry transformations of the gauge-theory factors, they need not be the same as the natural linearized supersymmetry transformations following from the supergravity Lagrangian, and a nontrivial transformation may be necessary to align the double-copy and Lagrangian asymptotic states.7

Minimal couplings and the double-copy construction

It is interesting to explore on general grounds what types of gauged supergravity theories are consistent with a double-copy structure with the two factors being local field theories. We discuss here the constraints related to the kinematical structure of amplitudes; further constraints related to the field content of the theory may be studied on a case-by-case basis. To this end let us consider a four-dimensional supergravity theory coupled to non-abelian gauge fields which has a supersymmetric Minkowski vacuum; while the matter interactions depend on the details of the theory, the gravitational interactions of matter fields and those related to them by linearized supersymmetry are universal, being determined by diffeomorphism invariance. Similarly, the terms linear in the non-abelian gauge fields are also universal, being determined by gauge invariance. For gauged supergravities that have a string-theory origin one may expect that a KLT-like construction [1,2] should correctly yield their scattering amplitudes. Similarly, it is natural to expect that a double-copy-like construction [3] would then extend this result to loop level [4]. In either construction, the three-point supergravity amplitudes are simply products of three-point amplitudes of the two gauge-theory factors.
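Relations like (2.3) are straightforward to verify numerically. The following self-contained numpy sketch (ours; it uses complex four-point kinematics and MHV Parke-Taylor amplitudes, with legs 1 and 2 of negative helicity) solves momentum conservation for the anti-holomorphic spinors and checks the relation to machine precision:

    import numpy as np

    rng = np.random.default_rng(0)

    # random holomorphic spinors lambda_i^alpha for four massless legs
    lam = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))

    # momentum conservation sum_i lam_i lamt_i = 0 is linear in the
    # anti-holomorphic spinors lamt: solve it via the null space of a 4x8 matrix
    M = np.zeros((4, 8), dtype=complex)
    for i in range(4):
        for a in range(2):       # undotted index
            for ad in range(2):  # dotted index
                M[2 * a + ad, 2 * i + ad] = lam[i, a]
    _, _, vh = np.linalg.svd(M)
    null = vh[4:].conj().T       # columns span the (4-dimensional) null space
    lamt = (null @ (rng.normal(size=4) + 1j * rng.normal(size=4))).reshape(4, 2)

    def ab(i, j):   # angle bracket <ij>
        return lam[i, 0] * lam[j, 1] - lam[i, 1] * lam[j, 0]

    def sb(i, j):   # square bracket [ij]
        return lamt[i, 0] * lamt[j, 1] - lamt[i, 1] * lamt[j, 0]

    def s(i, j):    # Mandelstam invariant s_ij = <ij>[ji]
        return ab(i, j) * sb(j, i)

    def A(order):   # Parke-Taylor amplitude; legs 1, 2 (indices 0, 1) negative helicity
        den = np.prod([ab(order[k], order[(k + 1) % 4]) for k in range(4)])
        return ab(0, 1)**4 / den

    lhs = s(1, 3) * A([0, 2, 1, 3])   # s_24 A(1,3,2,4)
    rhs = s(0, 1) * A([0, 1, 2, 3])   # s_12 A(1,2,3,4)
    print(abs(lhs - rhs) / abs(rhs))  # ~ 1e-15

Because the kinematics is complex, no reality condition relates lam and lamt; the relation holds as an algebraic identity on the momentum-conservation locus.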
Assuming that this property should hold generally, the dimensionality of these amplitudes imposes strong constraints on the structure of the two gauge-theory factors. Let us look first at the interactions of scalars and spin-1/2 fermions. Since they are minimally coupled to non-abelian vector fields, the corresponding three-point supergravity amplitudes have unit mass dimension, where we have stripped off the dimensionful gravitational coupling. In a classically scale-invariant gauge theory every three-point amplitude (with couplings stripped off) has unit mass dimension, so the double copy of two such theories yields only three-point amplitudes of mass dimension two. Thus, the double copy of classically scale-invariant gauge theories cannot lead to standard non-abelian gauge-field interactions. To obtain a supergravity amplitude of unit dimension, classical scale invariance must be broken in one of the two gauge-theory factors to allow for the existence of amplitudes of vanishing mass dimension. Using the fact that supergravity vectors, spin-1/2 fermions and scalars are written in terms of gauge-theory fields as double-copy products, it is easy to see that the constraints imposed by the dimensionality of amplitudes can be satisfied if one of the gauge-theory factors contains a cubic scalar coupling. The dimensionful parameter g′ governing such a coupling should then be related to the product of the gravitational coupling and the coupling of the supergravity gauge group, g′ ∼ κg. The same analysis implies that the supergravity fields that are identified with products of gauge-theory fields with nonzero helicity cannot couple directly to the non-abelian vector potential but only to its field strength. It is indeed not difficult to see that any three-point amplitude, in a conventional gauge theory, with at least one field with nonzero spin has unit dimension. Thus any supergravity amplitude obtained from such a double copy with at least one field of the type (2.9) has dimension 2, and therefore cannot be given by a minimal coupling term. It is easy to extend the analysis above to gravitino gauge interactions.9 Such interactions arise in theories in which part of the R-symmetry group is gauged. In this case the minimal coupling is the standard gravitino covariant-derivative coupling (2.10) and, consequently, the two-gravitini-vector amplitudes again have unit dimension. Supplementing this with the helicity structure following from the structure of the Lagrangian (2.10), it follows that the required field content is that of equation (2.11). Lorentz invariance then implies that the only way to realize this field content is by double-copying a three-vector amplitude and a two-fermion-one-scalar amplitude. The helicity structure, however, is incompatible with the latter amplitude originating from a standard Lorentz-invariant Yukawa interaction, which requires that the two fermions have the same helicity. Together with the fact that such a construction leads to an amplitude of dimension two and the fact that the three-point vector amplitudes are uniquely determined by gauge invariance, this implies that one of the two gauge-theory factors must have a non-conventional dimension-3 fermion-scalar interaction. We leave the identification of such possible interactions for the future and instead focus in the remainder of this paper on trilinear scalar deformations and the corresponding YMESGTs.

3 N = 2 supergravity in four and five dimensions

General five-dimensional Maxwell-Einstein Lagrangian

In the present section we summarize the relevant results on MESGTs in five dimensions following refs. [75,76,82,85,86,87,88,98] and adopt the conventions therein. We use a five-dimensional spacetime metric of "mostly plus" signature (− + + + +) and impose a symplectic-Majorana condition on all fermionic quantities, where the SU(2)_R indices take the values î, ĵ, … = 1, 2. C is the charge-conjugation matrix and the Dirac conjugate is defined as $\bar\chi^{\hat i} = \chi^{\dagger}_{\hat i}\, \Gamma^{0}$.
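The counting underlying this argument can be summarized as follows (our rendering of the argument; recall that in D = 4 an m-point tree amplitude built from dimensionless couplings has mass dimension 4 − m, hence dimension 1 at m = 3 once the couplings are stripped off):
\[
\underbrace{[\mathcal A_3^{\rm YM}]}_{=\,1} \times \underbrace{[\tilde{\mathcal A}_3^{\rm YM}]}_{=\,1} = 2
\quad \leftrightarrow \quad \text{two-derivative (gravitational) couplings}\,,
\]
\[
\underbrace{[\mathcal A_3^{\rm YM}]}_{=\,1} \times \underbrace{[\tilde{\mathcal A}_3^{\phi^3}]}_{=\,0} = 1
\quad \leftrightarrow \quad \text{one-derivative (minimal gauge) couplings}\,.
\]
The three-scalar amplitude of a φ³ theory is momentum-independent, so its kinematic part has vanishing mass dimension; the dimension is carried entirely by the coupling g′.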
The field content of the general five-dimensional MESGT with ñ vector multiplets is
\[
\{\, e^{m}_{\mu},\ \Psi^{\hat i}_{\mu},\ A^{I}_{\mu},\ \lambda^{a \hat i},\ \varphi^{x} \,\}\,,
\]
where the indices I, J = 0, 1, …, ñ label the vector fields. The scalar fields ϕ^x can be thought of as coordinates of an ñ-dimensional target-space manifold M; a, b = 1, …, ñ and x, y = 1, …, ñ are flat and curved indices in the target-space manifold, respectively. The R-symmetry SU(2)_R indices î, ĵ, … are raised and lowered by the symplectic metric ε_{îĵ} and its inverse ε^{îĵ}. Since our analysis focuses on amplitudes involving bosons, we write explicitly here only the bosonic part of the Lagrangian, which takes on the form
\[
e^{-1}\mathcal{L} = -\tfrac{1}{2} R - \tfrac{1}{4}\, \mathring{a}_{IJ}\, F^{I}_{\mu\nu} F^{J\,\mu\nu} - \tfrac{1}{2}\, g_{xy}\, \partial_{\mu}\varphi^{x} \partial^{\mu}\varphi^{y} + \frac{e^{-1}}{6\sqrt{6}}\, C_{IJK}\, \varepsilon^{\mu\nu\rho\sigma\lambda}\, F^{I}_{\mu\nu} F^{J}_{\rho\sigma} A^{K}_{\lambda}\,,
\]
where g_xy is the metric of the ñ-dimensional target-space manifold M. The theory is uniquely specified by the symmetric constant tensor C_{IJK} which appears in the F ∧ F ∧ A term. The only constraint on the C tensor is the physical requirement that the kinetic-energy terms of all fields be positive-definite. The connection between the C tensor and the "field-space metrics" of the kinetic-energy terms of the vector and scalar fields proceeds via the associated cubic prepotential,
\[
\mathcal{N}(\xi) = \Big(\tfrac{2}{3}\Big)^{3/2} C_{IJK}\, \xi^{I} \xi^{J} \xi^{K}\,.
\]
The scalar manifold M can always be interpreted as the codimension-one hypersurface with equation N(ξ) = 1 in the (ñ+1)-dimensional ambient space C with coordinates ξ^I, endowed with the metric
\[
a_{IJ}(\xi) = -\tfrac{1}{2}\, \partial_{I} \partial_{J} \ln \mathcal{N}(\xi)\,.
\]
The metric of the kinetic-energy term for the vector fields is the restriction of the ambient-space metric a_{IJ} to the hypersurface M, $\mathring{a}_{IJ} = a_{IJ}\big|_{\mathcal N = 1}$, while the target-space metric g_xy is the induced metric on M. The vielbeine (h^I, h^I_a) and their inverses (h_I, h_a^I) for the metric å_{IJ} obey the standard algebraic completeness and orthogonality relations; they may be expressed in terms of (derivatives of) the prepotential and of the embedding coordinates ξ^I. Last but not least, the kinetic-energy term of the scalar fields can also be expressed in terms of the C tensor. The supersymmetry transformation laws of MESGTs (to leading order in fermion fields, with f^x_a denoting the ñ-bein on the target space) allow one to identify the field strength of the "physical" (dressed) graviphoton as h_I F^I_{νρ}, and the linear combinations h^a_I F^I_{μν} of the vector field strengths as the superpartners of the dressed spin-1/2 fields λ^{aî}. The requirement that the C tensor lead to positive-definite kinetic-energy terms implies that at a certain point in the ambient space C the metric a_{IJ} can be brought to the diagonal form a_{IJ}|_c = δ_{IJ} by a coordinate transformation. This point is referred to as the base point c^I. The base point c^I lies in the domain of positivity with respect to the metric a_{IJ} of the ambient space C. Choosing the base point along the direction of the 0-th coordinate, one finds that the most general C tensor compatible with positivity can be brought to the form
\[
C_{000} = 1\,, \qquad C_{00a} = 0\,, \qquad C_{0ab} = -\tfrac{1}{2}\, \delta_{ab}\,,
\]
with the remaining components C_{abc} being completely arbitrary. The global symmetry group of the MESGT is simply given by the invariance group of the tensor C_{IJK}. These global symmetry transformations correspond to isometries of the target space M. The converse is however not true: there exist theories in which not all the isometries of the target manifold extend to global symmetries of the full N = 2 MESGT, e.g. the generic non-Jordan family to be discussed below. Since our main goal is to understand the double-copy structure of gauged or ungauged YMESGTs, we will consider only theories whose C tensors admit symmetries, in particular those theories whose scalar manifolds are symmetric spaces M = G/H such that G is a symmetry of the C tensor.10
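These definitions can be checked explicitly. The sympy sketch below (our illustration; it uses the canonical C tensor above with C_abc = 0 and ñ = 2) computes the ambient-space metric from the prepotential and verifies that the base point c^I = (√(3/2), 0, …, 0) lies on the hypersurface N = 1 and that a_IJ reduces to the identity there:

    import sympy as sp

    nt = 2  # number of vector multiplets (illustrative)
    xi = sp.symbols(f'xi0:{nt + 1}', positive=True)

    # prepotential for the canonical C tensor:
    # C_{IJK} xi^I xi^J xi^K = xi0^3 - (3/2) xi0 sum_a xi_a^2
    N = sp.Rational(2, 3)**sp.Rational(3, 2) * (
        xi[0]**3 - sp.Rational(3, 2) * xi[0] * sum(x**2 for x in xi[1:]))

    # ambient metric a_IJ = -(1/2) d_I d_J ln N
    a = sp.Matrix(nt + 1, nt + 1,
                  lambda I, J: -sp.Rational(1, 2) * sp.diff(sp.log(N), xi[I], xi[J]))

    base = {xi[0]: sp.sqrt(sp.Rational(3, 2)), **{x: 0 for x in xi[1:]}}
    print(sp.simplify(N.subs(base)))   # 1: the base point lies on the hypersurface
    print(sp.simplify(a.subs(base)))   # identity matrix: a_IJ|_c = delta_IJ

The normalization (2/3)^{3/2} in the prepotential is what places the base point at this radius.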
In the latter theories, the cubic form defined by the C tensor can be identified with the norm form of a Euclidean Jordan algebra of degree three.

10 In addition to the MESGTs defined by Euclidean Jordan algebras of degree three, there exists an infinite family of theories whose scalar manifolds are symmetric spaces of the form SO(1,ñ)/SO(ñ). This infinite set of MESGTs is called the generic non-Jordan family. The isometry group SO(1,ñ) of the scalar manifold of the generic non-Jordan family is broken by the supergravity couplings down to a semi-direct product subgroup.

There are four simple Jordan algebras of degree three, denoted J^A_3, which can be realized as hermitian 3 × 3 matrices over the four division algebras A = R, C, H, O. The scalar manifolds M_5(J^A_3) of the MESGTs defined by the simple Jordan algebras in five dimensions are
\[
\mathcal{M}_5(J_3^{\mathbb R}) = \frac{SL(3,\mathbb R)}{SO(3)}\,, \quad
\mathcal{M}_5(J_3^{\mathbb C}) = \frac{SL(3,\mathbb C)}{SU(3)}\,, \quad
\mathcal{M}_5(J_3^{\mathbb H}) = \frac{SU^{*}(6)}{USp(6)}\,, \quad
\mathcal{M}_5(J_3^{\mathbb O}) = \frac{E_{6(-26)}}{F_4}\,.
\]
They describe the coupling of 5, 8, 14 and 26 vector multiplets to N = 2 supergravity and are referred to as "magical supergravity theories". Their global symmetry groups G are simple and all the vector fields, including the graviphoton, transform in a single irreducible representation under G. Hence they are unified theories. The quaternionic magical theory with the global symmetry group SU*(6) can be obtained by a maximal truncation of the N = 8 supergravity in five dimensions. There also exists an infinite family of reducible Jordan algebras R ⊕ Γ_ñ of degree three whose cubic norms factorize as a product of a linear and a quadratic form. The MESGTs defined by them are referred to as the generic Jordan family; their scalar manifolds are of the form
\[
\mathcal{M}_5 = \frac{SO(\tilde n - 1, 1)}{SO(\tilde n - 1)} \times SO(1,1)
\]
and describe the coupling of ñ vector multiplets to N = 2 supergravity. The magical supergravity theories can be truncated to MESGTs belonging to the generic Jordan family. We should note that the C tensors of MESGTs defined by Euclidean Jordan algebras of degree three, the generic Jordan as well as the magical theories, satisfy the adjoint identity, in which the indices are raised with the inverse å^{IJ} of the vector-field-space metric å_{IJ}. Furthermore, the C tensor is an invariant tensor of their global symmetry groups G, so that it satisfies the corresponding invariance condition (given explicitly below).

[Table 1: List of possible gaugings of the generic Jordan family of N = 2 MESGTs without tensor fields; both the compact and the non-compact gauge groups are subgroups K ⊆ SU(2)_R × SO(ñ−1,1) × SO(1,1).]

In this paper we will be considering MESGTs belonging to the generic Jordan family. The natural basis for defining the C tensor for this family is given by identifying the prepotential with the cubic norm of the underlying Jordan algebra (3.21). The C tensor in the natural basis is then given by (3.22), and the corresponding base point is at (3.23).

YMESGTs in five dimensions

Let us now review the gaugings of an N = 2 MESGT in five dimensions whose full symmetry group is the product of the R-symmetry SU(2)_R with the global invariance group G of the C tensor. If one gauges only a subgroup K of the global symmetry G, the resulting theories are referred to as YMESGTs, which can be compact or non-compact. If one gauges only a subgroup of SU(2)_R, the resulting theories are called gauged MESGTs. If one gauges a subgroup of SU(2)_R as well as a subgroup of G simultaneously, then they are called gauged YMESGTs.
In gauging a subgroup of the global symmetry group G, those non-gauge vector fields that transform non-trivially under the non-abelian gauge group must be dualized to tensor fields that satisfy odd-dimensional self-duality conditions [88]. We list the possibilities for gauging the generic Jordan family of MESGTs without tensor fields in Table 1. In this paper we will focus on YMESGTs with compact gauge groups obtained by gauging a subgroup of the global symmetry group of the generic Jordan family. We leave the study of more general gaugings in the generic Jordan as well as in the magical supergravity theories in the framework provided by the double copy to future work. Consider a group G of symmetry transformations acting linearly on the coordinates of the ambient space,
\[
\delta \xi^{I} = \alpha^{r}\, (M_{r})^{I}{}_{J}\, \xi^{J}\,,
\]
where the M_r are constant (anti-hermitian) matrices which satisfy the commutation relations of G. If G is a symmetry of the Lagrangian of the MESGT, the matrices must satisfy the invariance condition
\[
(M_{r})^{L}{}_{(I}\, C_{JK)L} = 0\,.
\]
The bosonic fields of the theory transform accordingly, with the field-dependent quantity K^x_r denoting the Killing vector that generates the corresponding isometry of the scalar manifold M; it can be expressed in terms of the matrices M_r. The scalar-field-dependent quantities h^I(ϕ^x) transform just like the vector fields. Spin-1/2 fields transform under the maximal compact subgroup of the global symmetry group, with Ω^{ab}_x denoting the spin connection over the scalar manifold M. The remaining fields (gravitinos and graviton) are invariant under the action of the global symmetry group G. To gauge a subgroup K of the global symmetry group G of an N = 2 MESGT in five dimensions, we then split the vector fields into the gauge fields of K and the remaining vector fields. We shall refer to the latter as spectator fields and pick a set of matrices M_r under which the spectators are inert. Compact YMESGTs obtained with this construction from the generic Jordan family of MESGTs in five dimensions always have at least two spectator fields: the vector field in the D = 5 gravity multiplet and an additional one. For the sake of simplicity we will not consider theories with more spectators and assume this minimal spectator content from now on. The Lagrangian of the desired YMESGT is then obtained from (3.3) by replacing the abelian field strengths and derivatives with their gauge-covariant counterparts, with the caveat that the F ∧ F ∧ A term for the gauge fields must be covariantized as well. To this end we formally introduce structure constants f^I_{JK} which vanish when any one of the indices corresponds to a spectator field; the only non-vanishing components are those of the compact gauge group K, i.e. f^r_{st} with r, s, t = 2, 3, …, ñ. The bosonic sector of the resulting YMESGT then follows. Note that the five-dimensional YMESGT Lagrangian does not have a potential term and hence admits Minkowski ground states. However, to preserve supersymmetry under gauging one introduces a Yukawa-like term (3.38) involving the scalar fields and the spin-1/2 fields. One may understand the need for such a term by noticing that in the limit of vanishing gravitational constant YMESGTs should become flat-space non-abelian gauge theories, which necessarily exhibit Yukawa couplings.

Four-dimensional N = 2 MESGTs and YMESGTs via dimensional reduction

Since in later sections we will compute scattering amplitudes of YMESGTs (and MESGTs) in four dimensions, it is useful to review, following ref. [102], the construction of the bosonic sector of these theories by dimensional reduction from five dimensions. To distinguish the four- and five-dimensional fields, we shall denote the five-dimensional field indices with hats in this section. For the dimensional reduction we make the standard Ansatz for the fünfbein, which implies ê = e^{−σ} e, where e = det(e^m_μ).
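Schematically, the covariantization just described amounts to the replacements (our summary of the standard construction, with g the gauge coupling):
\[
F^{I}_{\mu\nu} \;\to\; \mathcal F^{I}_{\mu\nu} = F^{I}_{\mu\nu} + g\, f^{I}{}_{JK}\, A^{J}_{\mu} A^{K}_{\nu}\,, \qquad
\partial_{\mu}\varphi^{x} \;\to\; \mathcal D_{\mu}\varphi^{x} = \partial_{\mu}\varphi^{x} + g\, A^{r}_{\mu}\, K^{x}_{r}\,,
\]
while the F ∧ F ∧ A term acquires the additional Chern-Simons-like terms required for gauge invariance up to total derivatives. Since f^I_{JK} vanishes on spectator indices, the spectator field strengths remain abelian.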
We shall denote the field strength of the vector W_μ coming from the five-dimensional graviton as W_{μν} = ∂_μ W_ν − ∂_ν W_μ (3.40). The five-dimensional vector fields A^Î_μ̂ decompose into four-dimensional vector fields A^I_μ and four-dimensional scalars A^I. Note that the four-dimensional abelian field strengths F^I_{μν} = ∂_μ A^I_ν − ∂_ν A^I_μ are invariant with respect to the U(1) symmetry of the compactified circle. The bosonic sector of the dimensionally-reduced four-dimensional action of the MESGT is then as given in ref. [102]. The four-dimensional scalar-manifold geometry is defined by (ñ+1) complex coordinates z^I [82]. One can write the four-dimensional Lagrangian in terms of a period matrix N_{AB}, where A, B = −1, 0, 1, …, ñ and g_{IJ} = å_{IJ}. The vector field −2√2 W_μ is denoted as A^{−1}_μ and its field strength as F^{−1}_{μν}. The scalar manifolds of the magical supergravity theories defined by the simple Jordan algebras of degree three in four dimensions are the hermitian symmetric spaces
\[
\mathcal{M}_4(J_3^{\mathbb R}) = \frac{Sp(6,\mathbb R)}{U(3)}\,, \quad
\mathcal{M}_4(J_3^{\mathbb C}) = \frac{SU(3,3)}{S(U(3)\times U(3))}\,, \quad
\mathcal{M}_4(J_3^{\mathbb H}) = \frac{SO^{*}(12)}{U(6)}\,, \quad
\mathcal{M}_4(J_3^{\mathbb O}) = \frac{E_{7(-25)}}{E_6 \times U(1)}\,.
\]
The scalar manifolds of the generic Jordan family of MESGTs in D = 4 are
\[
\mathcal{M}_4 = \frac{SU(1,1)}{U(1)} \times \frac{SO(\tilde n, 2)}{SO(\tilde n) \times SO(2)}\,.
\]
Our focus in the paper will be mainly on gaugings of the generic Jordan family of MESGTs. Motivation for studying the generic Jordan family from a string-theory perspective derives from the fact that the vector-multiplet moduli spaces of heterotic string theories compactified on K3 × S^1 are precisely of the generic Jordan type. In the corresponding prepotential the singlet modulus s is simply the dilaton. The cubic prepotential is exact in five dimensions. The dilaton factor corresponding to the scale symmetry SO(1,1) of the five-dimensional U-duality group gets extended by an extra scalar, the axion, under dimensional reduction to four dimensions, and together they parametrize the SU(1,1)/U(1) factor in the four-dimensional moduli space SU(1,1)/U(1) × SO(ñ,2)/[SO(ñ) × SO(2)]. The four-dimensional supergravity moduli space of the generic Jordan family receives corrections due to target-space instantons in the string theory. There is a corresponding picture in the type-IIA string due to the duality between the type-IIA theory on a Calabi-Yau threefold and the heterotic string on K3 × T^2. We refer the reader to the review [95] for a detailed discussion of this duality and the references on the subject. We should note that non-abelian gauge interactions in lower-dimensional effective theories of heterotic string theory descend, in general, directly from the non-abelian gauge symmetries in ten dimensions. This is to be contrasted with compactifications of M-theory or type-II superstring theories on Calabi-Yau manifolds without any isometries. The latter theories can develop enhanced non-abelian symmetries at certain points in their moduli spaces, and the corresponding low-energy effective theories are described by YMESGTs coupled to hypermultiplets. Detailed examples of such symmetry enhancement both in five and four dimensions were studied in refs. [103,104]. The dimensional reduction of the five-dimensional YMESGTs without tensor fields leads to a four-dimensional Lagrangian with gauge-covariant derivatives and field strengths defined as in (3.52) and a four-dimensional scalar potential P_4. The appearance of a nontrivial potential may be understood by recalling that in the limit of vanishing gravitational coupling a four-dimensional YMESGT reduces to the dimensional reduction of a five-dimensional gauge theory and, as such, it has a quartic scalar coupling which is bilinear in the gauge-group structure constants.
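As a quick consistency check (ours), the real dimension of this coset matches the ñ+1 complex coordinates z^I of the dimensionally-reduced theory:
\[
\dim_{\mathbb R}\frac{SU(1,1)}{U(1)} + \dim_{\mathbb R}\frac{SO(\tilde n,2)}{SO(\tilde n)\times SO(2)}
= 2 + \Big[\tfrac{(\tilde n+2)(\tilde n+1)}{2} - \tfrac{\tilde n(\tilde n-1)}{2} - 1\Big]
= 2 + 2\tilde n = 2(\tilde n + 1)\,.
\]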
Generic Jordan family of 4D N = 2 YMESGTs

In this section we shall study in detail the symplectic formulation of the generic Jordan family of four-dimensional YMESGTs defined by the cubic form (3.22). Four-dimensional N = 2 supergravity theories coupled to vector and hypermultiplets were constructed in refs. [83,84,89], which showed that the prepotentials of D = 4 MESGTs must be homogeneous functions of degree two in the complex scalars. For those N = 2 MESGTs originating from five dimensions the prepotential is given by the C tensor [82,90]. Later on, a symplectic covariant formulation of D = 4 MESGTs was developed [105,106] (see also ref. [91] for a review and further references). Before we proceed, we recall some basic facts about the choice of symplectic section and the existence of a prepotential. In a symplectic formulation, the four-dimensional ungauged Lagrangian (3.44) can be obtained from the cubic prepotential (3.54), where Z^{−1} ≡ Z^{A=−1}. One considers a holomorphic symplectic vector (3.55), whose upper components Z^A are ñ+2 arbitrary holomorphic functions of the ñ+1 complex variables z^I which need to satisfy a non-degeneracy condition. Next, one introduces a Kähler potential K(z, z̄) defined by
\[
K(z,\bar z) = -\ln\Big[\, i\,\big(\bar Z^{A} F_{A}(Z) - Z^{A} \bar F_{A}(\bar Z)\big)\Big]\,, \qquad F_{A} \equiv \frac{\partial F}{\partial Z^{A}}\,.
\]
The metric for the scalar manifold is then readily obtained as
\[
g_{I\bar J} = \partial_{I} \partial_{\bar J}\, K(z,\bar z)\,.
\]
A little more work is necessary to obtain the period matrix appearing in the kinetic term for the vector fields. We first introduce a second symplectic vector (3.58) and the corresponding target-space covariant derivatives; the (ñ+2) × (ñ+2) period matrix N can then be expressed in terms of these quantities. We should note that for the generic Jordan family, with the symplectic vector that comes directly from the dimensional reduction from five dimensions, we have Z^{−1} ≡ 1, and only the compact subgroup SO(ñ−1) of the full U-duality group SU(1,1) × SO(ñ,2) is realized linearly. One can go to a symplectic section in which the full SO(ñ,2) symmetry is realized linearly; however, this symplectic section does not admit a prepotential [106]. While we will omit the fermionic part of the action as before, the supersymmetry transformations of the gravitinos and spin-1/2 fermions will be relevant (3.63). They involve the self-dual and anti-self-dual field strengths F^{A±}_{μν}; with this notation, F^{A+}_{μν} and F^{A−}_{μν} are complex conjugates of each other. Moreover, one also introduces the corresponding dual field strengths. One can introduce an Sp(2ñ+4, R) group of duality transformations acting on the symplectic vector as
\[
\begin{pmatrix} Z^{A} \\ F_{A} \end{pmatrix} \;\to\; \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} Z^{A} \\ F_{A} \end{pmatrix}\,,
\]
and in the same way on the field strengths and their duals. Under such transformations the target-space metric g_{IJ} is invariant and the period matrix N_{AB} transforms as
\[
\tilde N = (C + D\, N)(A + B\, N)^{-1}\,.
\]
The action is invariant under a duality transformation provided that B = 0; transformations with B ≠ 0 are non-perturbative (i.e. involve S-duality as seen from a higher-dimensional perspective) and are called symplectic reparameterizations. A duality transformation can also be enacted directly on the prepotential. Adopting this perspective, one needs to introduce a new prepotential F̃ in which the old coordinates X^A are expressed in terms of the new ones; the duality transformation then also acts correspondingly on the field strengths. We now consider the four-dimensional theory specified by the prepotential (3.54) obtained from dimensional reduction, and expand the Lagrangian for the generic Jordan family around the base point c^I of the five-dimensional parent theory while introducing special coordinates z^I adapted to it. With this choice, all scalar fields vanish at the base point; the standard choice of c^I for the generic Jordan family is given by equation (3.23).
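The special-geometry formulas above are easy to exercise on a toy example. The following sympy sketch (ours; it uses the one-modulus cubic prepotential F = (Z¹)³/Z⁰, which is homogeneous of degree two, and is not a model from the paper) computes the Kähler potential and metric in special coordinates Z^A = (1, z):

    import sympy as sp

    z, zb = sp.symbols('z zbar')
    Z0, Z1 = sp.symbols('Z0 Z1')

    F = Z1**3 / Z0                            # cubic prepotential, degree two
    FA = [sp.diff(F, Z0), sp.diff(F, Z1)]     # F_A = dF/dZ^A

    hol = {Z0: 1, Z1: z}        # holomorphic section Z^A = (1, z)
    antihol = {Z0: 1, Z1: zb}   # conjugate section, with zbar independent

    Zs, Zsb = [sp.Integer(1), z], [sp.Integer(1), zb]
    Fs = [fa.subs(hol) for fa in FA]
    Fsb = [fa.subs(antihol) for fa in FA]

    # e^{-K} = i (Zbar^A F_A - Z^A Fbar_A)
    pairing = sp.I * (sum(Zsb[A] * Fs[A] for A in range(2))
                      - sum(Zs[A] * Fsb[A] for A in range(2)))
    K = -sp.log(pairing)

    g = sp.simplify(sp.diff(K, z, zb))
    print(g)   # -3/(z - zbar)**2, i.e. 3/(4 (Im z)^2) > 0 away from the real axis

The constant phase inside the logarithm drops out of the metric, which is manifestly positive in the physical domain Im z ≠ 0.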
At the base point, we have a canonically normalized scalar metric g_{IJ} = δ_{IJ} and an explicit matrix N_{AB}. We encounter, however, difficulties in interpreting the supersymmetry transformations (3.63). Indeed, one may see that at the base point (3.23) the (ñ+2) × (ñ+2) matrix built from the symplectic section and its covariant derivatives, which appears in the supersymmetry transformations of the fermionic fields (3.63), is not diagonal and presents some imaginary entries. This implies that both field strengths and dual field strengths of spectator fields appear in the linearized supersymmetry variations of the fermionic fields.11 To make contact between scattering amplitudes computed from the supergravity Lagrangian and the ones obtained employing a double-copy construction (which we shall detail in later sections), it is desirable to go to a symplectic frame in which (1) supersymmetry acts diagonally at the base point without mixing fields with different matter indices I, J = 0, 1, …, ñ (so that scalars and vectors with the same matter index belong to the same supermultiplet) and (2) the cubic couplings of the theory are invariant under the maximal compact subgroup SO(ñ) of the U-duality group of the ungauged theory (and hence SO(ñ) is a manifest symmetry of the resulting scattering amplitudes). It turns out this can be achieved in three steps.

1. We first dualize the extra spectator coming from dimensional reduction, F^{−1}_{μν}. Using the language introduced at the beginning of this section, we use a duality transformation built from an orthogonal matrix O, which acts non-trivially only on the spectator fields, and two diagonal matrices J_1 and J_2.

2. To obtain diagonal supersymmetry transformations at the linearized level, a second Sp(2ñ+4, R) transformation is necessary. Note that this transformation, having B = C = 0, does not involve the dualization of any field and can be thought of as a mere field redefinition involving the three vector spectator fields.12 After this redefinition the supersymmetry transformations act diagonally with respect to the matter vector indices I, J = 0, 1, …, ñ. We obtain a simple expression for the period matrix N_{AB}, where the C tensor is the one corresponding to the generic Jordan family, given by (3.22).

3. We finally dualize the extra spectator vector F^1_{μν}, employing a transformation specified by the matrices in (3.78). In order to avoid an additional factor of i in the supersymmetry transformation of the scalar field z^1, we need to accompany this last duality transformation with a field redefinition of z^1. In the end, the period matrix assumes, up to linear terms in the scalar fields, a compact expression written in terms of a new tensor C̃_{IJK} whose non-zero entries are inherited from those of C_{IJK}; we note that C̃_{IJK} is manifestly invariant under the SO(ñ) symmetry.13

In appendix B we collect the expansions for the period matrix N_{AB}, the scalar metric g_{IJ} and the Kähler potential K in the symplectic frame specified above, up to quadratic terms in the scalar fields. The final action for a YMESGT with compact gauge group obtained from the generic Jordan family then takes on the form (3.82), where the gauge-covariant derivatives are standard, with g denoting the gauge coupling and f^{rst} the gauge-group structure constants.
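The distinction between B = 0 field redefinitions and genuine dualizations can be illustrated numerically. The numpy/scipy sketch below (ours) generates a random element of Sp(2n, R) by exponentiating an sp(2n) algebra element, applies the fractional-linear action Ñ = (C + DN)(A + BN)^{-1} to a random symmetric period matrix with negative-definite imaginary part, and checks that both properties are preserved, as required for Ñ to be a sensible period matrix:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    n = 3

    # sp(2n, R) element: S = [[P, Q], [R, -P^T]] with Q, R symmetric
    P = rng.normal(size=(n, n))
    Q = rng.normal(size=(n, n)); Q = Q + Q.T
    R = rng.normal(size=(n, n)); R = R + R.T
    M = expm(0.1 * np.block([[P, Q], [R, -P.T]]))   # element of Sp(2n, R)
    A, B = M[:n, :n], M[:n, n:]
    C, D = M[n:, :n], M[n:, n:]

    # random symmetric period matrix with Im N negative-definite
    X = rng.normal(size=(n, n)); X = X + X.T
    Y = rng.normal(size=(n, n)); Y = Y @ Y.T + n * np.eye(n)
    N = X - 1j * Y

    Nt = (C + D @ N) @ np.linalg.inv(A + B @ N)
    print(np.allclose(Nt, Nt.T))                    # True: symmetry preserved
    print(np.all(np.linalg.eigvalsh(Nt.imag) < 0))  # True: definiteness preserved

Setting B = 0 (and C = 0) in this parametrization reproduces the linear field redefinitions used in step 2 above.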
The four-dimensional potential term P_4 takes the expected form of the scalar potential: as mentioned previously, in the limit of vanishing gravitational constant the YMESGTs reduce to the dimensional reduction of a five-dimensional gauge theory. It should be noted that the duality transformation which we have employed does not touch fields charged under the gauge group. By employing the Lagrangian (3.82) and the expansions collected in appendix B, it is straightforward to derive the Feynman rules used to obtain the amplitudes presented in the following sections.

13 We stress that only SO(ñ−1) is linearly realized in the symplectic frame we have chosen and that only the cubic vector-vector-scalar couplings possess the extended SO(ñ) symmetry. To reach a symplectic frame in which the full SO(ñ) is linearly realized (such as the one in ref. [106]), a further nonlinear field redefinition is necessary. Since this redefinition becomes the identity map when nonlinearities are removed, it does not affect the S matrix, which is already invariant under SO(ñ) transformations.

4 Color/kinematics duality and the double copy of N = 2 YMESGTs

In section 2 we discussed the properties of gauge theories which can generate, through the double-copy construction, minimal couplings between non-abelian gauge fields, spin-0 and spin-1/2 matter fields. In this section we expand that discussion and identify the two gauge theories whose double copy can yield the generic Jordan family of YMESGTs in D = 4, 5 dimensions. One of them is the standard N = 2 sYM theory and the other is a particular scalar-vector theory; we will demonstrate that the amplitudes of the latter obey color/kinematics duality through at least six points. Thus, even though we will not construct its higher-point amplitudes in a form manifestly obeying the duality, they may be used in the double-copy construction. Our construction can be carried out in any dimension in which the half-maximal sYM theory exists. In four and five dimensions it yields the generic Jordan family of YMESGTs; we shall focus on the four-dimensional case because of the advantage provided by the spinor-helicity formalism. The six-dimensional supergravity generated by our construction contains a graviton multiplet, an N = (1,0) self-dual tensor multiplet and ñ − 2 Yang-Mills multiplets. The 6D supergravity theories that one obtains by compactifying the heterotic string on a K3 surface belong to this family of theories, coupled to hypermultiplets. Modulo the coupling to hypermultiplets, they reduce to the five- and four-dimensional generic Jordan family theories (as one can easily see at the level of their scattering amplitudes).14

4.1 The two gauge-theory factors

To identify the relevant gauge theories we begin by satisfying the constraints imposed, in the vanishing-gauge-coupling limit, by the corresponding MESGT, i.e. that the asymptotic spectrum is a sum of tensor products of vector and matter multiplets. Supergravities of this sort which may be embedded in N = 8 supergravity and have at least minimal supersymmetry, as well as general algorithms for their construction, have been discussed in refs. [107,61]. Extensions of these theories to include further matter (i.e. vector and chiral/hyper multiplets) have also been discussed. Moreover, theories whose spectra are truncations of sums of tensor products of matter multiplets have been discussed in refs. [69,70].
It is not difficult to see that the on-shell spectrum of the generic Jordan family of MESGTs may be written as the tensor product of an N = 2 vector multiplet with a multiplet consisting of a vector and ñ real scalar fields (4.1).14

14 We should, however, note that the generic Jordan family of 5D MESGTs can also be obtained from 6D N = (1,0) supergravity coupled to an arbitrary number ñ − 1 of N = (1,0) self-dual tensor multiplets. However, interacting non-abelian theories of tensor fields are not known; therefore it is not clear how one can extend our results to such interacting non-abelian tensor theories.

Unlike N > 2 supergravity theories, supergravities with N ≤ 2 are not uniquely specified by their field content. Since N = 2 MESGTs are specified by their trilinear couplings, to identify the correct double-copy construction it suffices to make sure that the trilinear interaction terms around the standard base point are correctly reproduced. Detailed calculations for MESGTs with various numbers of vector multiplets, as well as general constructions of such theories as orbifold truncations of N = 8 supergravity, imply that the relevant gauge theories are the N = 2 sYM theory and a Yang-Mills-scalar theory which is the dimensional reduction of D = 4 + ñ pure YM theory. Starting with such a pair of gauge theories for some number ñ of scalar fields, the next task is to modify one of them such that an S-matrix element originating from a minimal coupling of supergravity fields is reproduced by the double-copy construction. As discussed in section 3.2, from a Lagrangian perspective we may contemplate gauging a subgroup K of the compact part of the off-shell global symmetry group G of the theory; for four-dimensional theories in the generic Jordan family this is G = SU(1,1) × SO(ñ,2). The manifest global symmetry of the double-copy construction is the product of the global symmetry groups of the two gauge-theory factors, G_L ⊗ G_R. In general, this is only a subgroup of the global symmetry group G of the resulting supergravity theory. Since the non-manifest generators act simultaneously on the fields of the two gauge-theory factors, it is natural to expect that such a formulation allows only for a gauge group of the type K ⊆ G_L ⊗ G_R. Certain supergravity theories admit two (or perhaps several) different double-copy formulations; the manifest symmetry group of each of them may be different, and each of them may allow for a different gauge group. A simple example is N = 4 supergravity coupled to two vector multiplets. If realized as the double copy of two N = 2 sYM theories, it exhibits no manifest global symmetries (apart from R-symmetry). If realized as the product of N = 4 sYM and YM theory coupled to two scalars, it has a global U(1) symmetry rotating the two scalars into each other, which may in principle be gauged. The double-copy construction of MESGTs in the generic Jordan family described above has a manifest SO(ñ) symmetry rotating the ñ scalars into each other. This is part of the maximal compact subgroup of the U-duality group and is a symmetry of the Lagrangian (albeit not in a prepotential formulation).15 Following the discussion in section 2, to generate the minimal coupling of YMESGTs between scalars, spin-1/2 fermions and non-abelian gauge fields, it is necessary that one of the two gauge-theory factors contains a dimension-three operator (in D = 4 counting). Since the minimal coupling is proportional to the supergravity gauge-group structure constants, the desired gauge-theory operator should be proportional to them as well.
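A quick count of on-shell states (our check) confirms that this tensor product reproduces the MESGT spectrum:
\[
\underbrace{(4_B + 4_F)}_{N=2\ \text{vector multiplet}} \otimes \underbrace{(2 + \tilde n)_B}_{A_\mu\ \oplus\ \tilde n\ \text{scalars}}
= (8 + 4\tilde n)_B + (8 + 4\tilde n)_F\,,
\]
which matches the N = 2 graviton multiplet (4_B + 4_F states) together with ñ + 1 vector multiplets of 4_B + 4_F states each: 4 + 4(ñ + 1) = 4ñ + 8 bosonic, and as many fermionic, states.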
If only a subgroup of the manifest symmetry is gauged, then only a subset of gauge-theory scalars appear in this trilinear coupling; in such a situation the global symmetry of the theory is broken to the subgroup leaving the trilinear coupling invariant. The scalars transforming in its complement should lead to the supergravity spectator fields.16 We are therefore led to the following two Lagrangians (using a mostly-minus metric): the Lagrangian of pure N = 2 sYM theory and the Yang-Mills-scalar Lagrangian
\[
\mathcal{L} = -\frac{1}{4} F^{\hat a}_{\mu\nu} F^{\hat a\,\mu\nu} + \frac{1}{2}\, D_{\mu}\phi^{a\hat a} D^{\mu}\phi^{a\hat a} - \frac{g^{2}}{4}\, f^{\hat a\hat b\hat e} f^{\hat c\hat d\hat e}\, \phi^{a\hat a}\phi^{b\hat b}\phi^{a\hat c}\phi^{b\hat d} + \frac{g'}{3!}\, F^{abc} f^{\hat a\hat b\hat c}\, \phi^{a\hat a}\phi^{b\hat b}\phi^{c\hat c}\,, \tag{4.5}
\]
where the gauge-group generators are assumed to be hermitian, $[T^{\hat a}, T^{\hat b}] = i f^{\hat a\hat b\hat c}\, T^{\hat c}$, and the coefficient g′ is arbitrary and dimensionful.17 The indices a, b, … take values 1, …, ñ. The rank-three tensor F has entries F^{rst}, with r, s, t = 2, 3, …, ñ, given by the structure constants of a subgroup K of SO(ñ), and all other entries set to zero.18

16 This relation is somewhat reminiscent of the AdS/CFT correspondence, where supergravity gauge fields are dual to conserved currents for global symmetries.

17 The gauge-group structure constants f are related to the structure constants f̂ which naturally appear in color-dressed scattering amplitudes by $\hat f^{\hat a\hat b\hat c} = i\sqrt{2}\, f^{\hat a\hat b\hat c}$; this corresponds to a change of normalization of the generators.

18 A possible proportionality coefficient is absorbed in the coefficient g′.

To use the double-copy construction we need the scattering amplitudes of both the N = 2 sYM theory and the Yang-Mills-scalar theory (4.5) to obey color/kinematics duality, albeit only one of them is needed in a form that obeys it manifestly. Since the amplitudes of the former theory have this property (and their manifestly color/kinematics-satisfying form may be obtained by a Z₂ projection from the corresponding amplitudes of N = 4 sYM theory), we only need to make sure that the vector/scalar theory obeys the duality as well. We shall explore this question in the next subsection with a positive conclusion. Denoting the two CPT-conjugate on-shell vector multiplets of N = 2 sYM in the usual way, the on-shell N = 2 multiplets of the supergravity theory are listed in (4.8). As we shall see in the next section, multiplets carrying indices r, s, … will be identified with the gauge-field multiplets of supergravity; those carrying indices m, n, … will be related to the supergravity spectator multiplets with the same indices; and V and Ṽ will be related to the "universal" spectator vector fields of the generic Jordan family YMESGTs. The resulting theory will have an SO(ñ − dim(K)) global symmetry acting on the indices m, n, … .19 The fields labeling the amplitudes obtained through the double-copy construction need not a priori be the same as the natural asymptotic states around a Minkowski vacuum following from the Lagrangian (3.82), and a field redefinition may be required. Such redefinitions are to be constructed on a case-by-case basis, for the specific choice of Lagrangian asymptotic fields. As we shall see in section 5, for the choice of symplectic section in section 3.3 the map between the Lagrangian and the double-copy asymptotic states is trivial; additional nonlinear field redefinitions, such as those needed to restore the SO(ñ) symmetry of the Lagrangian, should not affect the S matrix.

4.2 Color/kinematics duality of Yang-Mills-scalar theories

To use the scattering amplitudes from the Lagrangian (4.5) to find supergravity amplitudes, either through the KLT or through the double-copy construction, it is necessary to check that, in principle, they can be put in a form obeying color/kinematics duality in D dimensions.20
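The construction of the flavor tensor F just described is simple to realize concretely. In the Python sketch below (ours; the choice K = SO(3) acting on three of the scalars is only an example), the structure constants of K occupy the r, s, t block, while all entries involving the remaining flavors, which become supergravity spectators, vanish; the Jacobi identity needed below for color/kinematics duality is verified numerically:

    import numpy as np

    nt = 4                       # total number of scalar flavors (illustrative)
    F = np.zeros((nt, nt, nt))   # flavor tensor F^{abc}, a, b, c = 1, ..., nt

    # structure constants of K = SO(3): the Levi-Civita symbol,
    # placed on the flavors r, s, t = 2, 3, 4 (0-indexed: 1, 2, 3)
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    F[1:, 1:, 1:] = eps

    # Jacobi identity: F^{abe}F^{ecd} + F^{bce}F^{ead} + F^{cae}F^{ebd} = 0
    jac = (np.einsum('abe,ecd->abcd', F, F)
           + np.einsum('bce,ead->abcd', F, F)
           + np.einsum('cae,ebd->abcd', F, F))
    print(np.allclose(jac, 0.0))   # True

    # flavor 1 never appears in the trilinear coupling: it is a spectator
    print(np.abs(F[0]).max(), np.abs(F[:, 0]).max(), np.abs(F[:, :, 0]).max())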
Since at g′ = 0 the Lagrangian (4.5) reduces to the dimensional reduction of a (4+ñ)-dimensional pure YM theory, which is known to obey the duality, we need to check only the g′-dependent terms. For the four-point amplitudes this can be done simply by inspection. The only g′-dependent amplitude involving four scalars is given in equation (4.9), following ref. [69]. We therefore see that color/kinematics duality is satisfied if the nonzero part of F^{abc} obeys the Jacobi identity and, therefore, is proportional to the structure constants of some group. This is consistent with the expectation that the trilinear scalar coupling is responsible for generating the minimal couplings of the supergravity gauge fields. One may similarly check that the g′-dependent terms in the four-point scalar amplitude with pairwise identical scalars have a similar property [69]. An amplitude that probes the scalar interactions both on their own as well as together with the scalar-vector interactions is $A^{\rm tree}_5(1^{\phi^{a_1}}\, 2^{\phi^{a_2}}\, 3^{\phi^{a_3}}\, 4^{\phi^{a_3}}\, 5^{\phi^{a_3}})$, involving only three distinct scalars. The O(g′³) part of this amplitude (4.10) is a sum over permutations σ of color-factor chains of the form $\hat f^{\hat a_1 \hat a_{\sigma(3)} \hat b}\, \hat f^{\hat b \hat a_{\sigma(4)} \hat c}\, \hat f^{\hat c \hat a_{\sigma(5)} \hat a_2}$, dressed with the corresponding flavor factors and propagators. By construction this part of the amplitude obeys color/kinematics duality, or rather "color/color duality". Moreover, it is easy to check that it satisfies the BCJ amplitude relations [81]. The O(g′) part receives contributions from both the cubic and quartic terms of the Lagrangian; using the shorthand $k_{ij\ldots} = k_i + k_j + \cdots$ for sums of external momenta, its explicit form is straightforward to write down. It is not difficult to check that the color-ordered amplitudes following from this expression obey the five-point analogues of the amplitude relation (2.3) and its images under permutations of external lines; it therefore follows that there exists a sequence of generalized gauge transformations [3] that casts this amplitude into a form manifestly obeying color/kinematics duality. Since N = 2 amplitudes manifestly obeying the duality are known [69], for the purpose of constructing YMESGT amplitudes it is not necessary to also have a manifest representation for the amplitudes of the Yang-Mills-scalar theory (although one might lead to more structured expressions if it were available). We have also checked that the tree-level six-point amplitudes following from the Lagrangian (4.5) obey the relevant amplitude relations [3] and therefore they should also have a presentation manifestly obeying the duality. Beyond six points, we conjecture that the tree amplitudes of (4.5) always satisfy the BCJ relations [3], and thus the theory should satisfy color/kinematics duality at tree level. From this one can expect that it may also satisfy the duality at loop level [4].

Tree-level amplitudes

Having established that the scattering amplitudes of the Yang-Mills-scalar theory (4.5) obey color/kinematics duality, we proceed to use it to evaluate explicitly the double-copy three- and four-point amplitudes and compare them with the analogous amplitudes computed from the Lagrangian (3.82). The color structure of supergravity amplitudes is the same as that of a gauge theory coupled to fields which are singlets under gauge transformations. In a structure-constant basis they are given by open (at tree level) and closed (at loop level) strings of structure constants and color-space Kronecker symbols. In the trace basis, this implies that the structure of tree amplitudes is similar to that of loop amplitudes in that, unlike in pure gauge theories, it is not restricted to having only single-trace terms; the color decomposition involves sums over the sets $S_{m_i}$ of non-cyclic permutations.
Different traces are "connected" by the exchange of color singlets. In theories with less-than-maximal supersymmetry, scattering superamplitudes are organized according to the number of on-shell multiplets to which the asymptotic states belong. In our case, using the fact that the N = 2 algebra is a Z_2 orbifold of the N = 4 algebra, we can use a slightly more compact organization. To this end we organize the supergravity multiplets (4.8) as in (5.2) and the N = 2 gauge multiplet as in (5.3), where η_3 and η_4 are auxiliary Grassmann variables (since they always appear as a product, one may also replace η_3 η_4 by a nilpotent Grassmann-even variable). One may think of H_± and V as constrained N = 4 supergravity and vector multiplets, respectively, which are invariant under the Z_2 projection; one may find the amplitudes of N = 4 supergravity coupled to n_s abelian and dim(K) non-abelian vector multiplets by simply forgetting the Z_2 projection. With the asymptotic states assembled in these superfields, superamplitudes are polynomials in the pairs η_3^i η_4^i, with i labeling the external legs. The monomial with n_+ such pairs represents the superamplitudes with n_+ supermultiplets of type +.

Three-point amplitudes and the field and parameter map

Three-point amplitudes verify the structure of the minimal couplings and of the other trilinear couplings demanded by supersymmetry and consistency of the YMESGT, such as the reduction to four dimensions of the fermion bilinear (3.38). They also determine the map between the double-copy and Lagrangian fields and parameters. The kinematic parts of the N = 0 amplitudes involving at least one gluon are the same as in N = 4 sYM; the three-scalar amplitude, the only three-point amplitude dependent on g′, is momentum-independent. Up to conjugation and relabeling of external legs, the non-vanishing amplitudes of the Yang-Mills-scalar theory are listed in (5.4)-(5.6).

The N = 2 superamplitudes, labeled in terms of the multiplets G, may be obtained from those of N = 4 sYM theory through the supersymmetric Z_2 orbifold projection acting on the η_3 and η_4 Grassmann variables. This effectively amounts to modifying the super-momentum conservation constraint as in (5.7); of course, for higher-multiplicity amplitudes other projected supersymmetry invariants appear as well. With this notation, the three-point superamplitudes are given in (5.9); the two superamplitudes are related by conjugation and a Grassmann-Fourier transform. From the perspective of the N = 4 theory, the combinations appearing there are the Z_2-invariant combinations of the η_3 and η_4 Grassmann variables. (To extract from equation (5.9) scattering amplitudes labeled by the N = 2 multiplets G_±, one simply extracts the coefficients of the various monomials in η_3 η_4.)

Using equations (5.9), (5.5), (5.6) and (5.4), it is easy to construct the double-copy three-point amplitudes; some of them vanish identically because of special properties of three-particle complex-momentum kinematics, which, e.g., imply that the product of holomorphic and anti-holomorphic spinor products vanishes identically (a short numerical illustration of such kinematics is given below). Up to conjugation, the non-vanishing superamplitudes are listed in (5.10). The superamplitudes labeled by N = 2 on-shell supermultiplets may be extracted as described above, and the component amplitudes extracted from these superamplitudes and their conjugates are very similar to the component amplitudes following from the supergravity Lagrangian (3.82).
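The special three-point kinematics just invoked can be made concrete with a short numerical sketch (illustrative conventions, not the paper's): taking all anti-holomorphic spinors proportional to a single reference spinor makes the total momentum vanish and forces every square bracket [ij] to zero, while the angle brackets remain generic.

```python
import numpy as np

rng = np.random.default_rng(0)
ang = lambda a, b: a[0] * b[1] - a[1] * b[0]   # epsilon contraction <ab>

# Generic holomorphic spinors for the three external legs
lam = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

# Anti-holomorphic spinors all proportional to one fixed spinor mu, with
# coefficients c_i solving c1*lam1 + c2*lam2 + c3*lam3 = 0, so that the
# total momentum sum_i lam_i (x) lamt_i vanishes.
mu = rng.standard_normal(2) + 1j * rng.standard_normal(2)
c12 = np.linalg.solve(lam[:2].T, -lam[2])      # fixes c1, c2 with c3 = 1
c = np.array([c12[0], c12[1], 1.0])
lamt = c[:, None] * mu[None, :]

P = sum(np.outer(lam[i], lamt[i]) for i in range(3))
print(np.abs(P).max() < 1e-12)                 # momentum conservation
print(all(abs(ang(lamt[i], lamt[j])) < 1e-12   # all [ij] vanish
          for i in range(3) for j in range(3)))
print(abs(ang(lam[0], lam[1])) > 1e-6)         # <12> generic, nonzero
```

Any amplitude whose little-group weights require at least one factor of a square bracket therefore vanishes identically on this kinematic branch.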
Indeed, the kinematic factors are fixed by little-group scaling and gauge invariance, and the numerical coefficients can be mapped into each other by identifying the structure constants of the global symmetry group of the Yang-Mills-scalar theory with the structure constants of the supergravity gauge group, as in (5.11). (To compare the amplitudes from the Lagrangian with those from the double copy we also need to employ analytic continuation, as the former are obtained with a mostly-plus metric and the latter with a mostly-minus metric. One may also formally separate the identification of the gauge coupling from that of the structure constants.) All the double-copy and Lagrangian multiplets are then trivially mapped into each other as in (5.12), where the field map is presented in terms of on-shell superfields and we have added the superscript "L" to the superfields from the supergravity Lagrangian (3.82). More generally, one can introduce a parameter θ in the identification of the multiplets; this parameter is free, and its presence is a reflection of the classical U(1) electric/magnetic duality of the theory. The choice θ = π/2 is a consequence of the symplectic section chosen in section 3.3. The above identity map also shows that the linearized Lagrangian and double-copy supersymmetry generators are essentially the same; the Lagrangian matter multiplets are assembled into analogous on-shell superfields.

We note here that the second amplitude in (5.10) and its CPT conjugate contain the S-matrix element originating from the four-dimensional analog of the Yukawa interaction (3.38), which is required by consistency of gauge interactions and supersymmetry. The fact that this amplitude correctly reproduces the interaction is a strong indication that the double-copy construction proposed here correctly captures all features of the generic Jordan family of four-dimensional YMESGTs. We also note that the map above establishes an off-shell double-copy structure for the minimal couplings of YMESGTs. Indeed, the off-shell double copy of the three-scalar vertices of the Yang-Mills-scalar theory and the vertices of the N = 2 sYM theory simply replaces the color structure constants of the latter with the structure constants of the YMESGT gauge group. This is consistent with the YMESGT minimal couplings being those of a standard N = 2 gauge theory, as can be seen trivially in the κ → 0 limit.

Four-point amplitudes

To reinforce the validity of the construction for YMESGTs, we proceed to compare the four-point amplitudes obtained from the Lagrangian with those obtained through the double copy. Since the g′-independent terms are the same as in the ungauged theory, we focus here on the g′-dependent amplitudes, inspecting separately the terms quadratic and linear in g′. From a double-copy point of view, the former must contain a four-scalar amplitude in the N = 0 factor. The amplitude with four independently-labeled scalars can be found in equation (4.9) in a color/kinematics-satisfying form; from it we may construct the amplitudes with one or two pairs of identical scalars. The N = 2 four-point amplitude is labeled in terms of the constrained superfield (5.2). It is convenient to choose the kinematic coefficient of one of the color structures to vanish; we will choose n̂_t = 0. The kinematic Jacobi identity obeyed by the numerator factors n̂ then implies that n̂_s = −n̂_u. The four-point superamplitude proportional to (g′)^2 is then given in (5.17).
The four-point amplitudes of the Yang-Mills-scalar theory which are linear in g′ have three scalars and one gluon on their external legs. It is not hard to see from the Lagrangian (4.5) that, up to permutations of external legs and conjugation, they are given by the expression obtained by picking the reference vector in the gluon polarization to be k_2. The resulting supergravity superamplitudes which are linear in the gauge coupling are given in (5.20), together with the CPT-conjugate amplitude. It is straightforward (albeit quite tedious) to derive the g′-dependent terms of the four-point amplitudes using standard Feynman diagrammatics and to see that the maps (5.11) and (5.12) relate them to the double-copy amplitudes listed in this section.

We note here that supergravity scattering amplitudes obey color/kinematics duality on all legs for which a Jacobi relation can be constructed (and do so manifestly if the N = 2 sYM amplitudes obey the duality). It is not hard to check this assertion, which may be understood as a consequence of the color/kinematics duality of the gauge-theory factors, on equations (5.17) and (5.20). Indeed, the internal legs on which Jacobi identities can be constructed are color non-singlets and therefore, from the perspective of the double-copy construction, start and end at a trilinear scalar vertex of the Yang-Mills-scalar theory. The part of the numerator factors due to these vertices is momentum-independent and depends only on the structure constants F^{a_1 a_2 a_3}. Thus, whenever the gauge-group color factors obey the Jacobi identity, the global symmetry group factors obey it as well. In the scattering amplitudes of the corresponding double-copy YMESGT, the global symmetry group factors of the Yang-Mills-scalar theory become color factors and are multiplied by the numerator factors of the N = 2 theory, which are assumed to obey the kinematic Jacobi relations. It therefore follows that whenever the YMESGT color factors of an amplitude obey Jacobi relations (on a leg on which such a relation may be defined), so do the kinematic numerator factors, i.e. the amplitude exhibits manifest color/kinematics duality.

Five-point amplitudes

Having gained confidence that the construction proposed here describes the generic Jordan family of YMESGTs, we can proceed to compute higher-point amplitudes. The double-copy construction of the five-point superamplitudes of YMESGTs is slightly more involved, due to the more complicated structure of the color/kinematics-satisfying representations of the N = 2 superamplitudes. Such a representation may be obtained as a Z_2 projection of the corresponding N = 4 five-point superamplitude (5.21), where the color factors (5.22) are given explicitly in ref. [3]. In the N = 4 theory the numerator factors n_i take many different forms, see e.g. ref. [10], and for each of them the orbifold projection yields an N = 2 superamplitude with the desired properties. For five-point amplitudes this projection amounts to replacing the supermomentum-conserving delta function as in equation (5.7). An explicit example of such numerator factors carries an overall coefficient of 1/10; the order of its arguments is given by the order of the free indices of the color factor.
The N = 0 five-scalar amplitude has the same form as (5.21), except that the numerator factors are (quadratic) polynomials in g′. In the O(g′^3) part of the corresponding five-vector superamplitude, the numerator factors are simply those of the N = 2 sYM theory, so they obey the Jacobi relations simultaneously with the F color factors. The O(g′) part of the five-vector superamplitude with three different gauge indices follows similarly. By undoing the projection (5.7) one recovers the five-point amplitude of N = 4 supergravity coupled to abelian and non-abelian vector multiplets. Using similar higher-point color/kinematics-satisfying representations of tree-level N = 2 amplitudes, perhaps constructed in terms of color-ordered amplitudes or by some other method, together with Feynman-graph-generated amplitudes of the Yang-Mills-scalar theory, it is easy to construct tree-level amplitudes of any multiplicity for YMESGTs in the generic Jordan family.

One-loop four-point amplitudes

Similarly to tree amplitudes, loop amplitudes in YMESGTs can be organized according to their dependence on the gauge coupling; each component with a different power of the gauge coupling is separately gauge invariant. For the YMESGTs considered here it is not difficult to argue, both from a Lagrangian and from a double-copy point of view, that to any loop order and multiplicity the terms with the highest power of the gauge coupling in the n-vector amplitudes are given by the amplitudes of a pure N = 2 sYM theory with the same gauge group as that of the supergravity theory. From a double-copy perspective these terms are given by the amplitudes of the Yang-Mills-scalar theory with only scalar vertices; since those amplitudes have constant numerator factors, when double-copied with the amplitudes of N = 2 or N = 4 sYM theory (or any other theory, for that matter), they simply replace the color factors of the latter with the color factors of the supergravity gauge group. From a Lagrangian perspective, the terms with the highest power of the gauge coupling in the vector amplitudes are given by the κ → 0 limit of the full amplitude and are thus given by a pure gauge-theory computation.

To illustrate the construction of loop amplitudes in the generic Jordan family of YMESGTs we shall compute the simplest one-loop amplitude that is sensitive to the supergravity gauge coupling: the four-vector amplitude. To this end we will first find the bosonic Yang-Mills-scalar amplitude with external scalar matter in a form that manifestly obeys color/kinematics duality. Then, through the double copy, this amplitude will be promoted to a four-vector amplitude in the N = 4 and N = 2 YMESGTs.

The four-scalar gauge-theory amplitude

The three classes of Feynman graphs contributing to the O(g′^4), O(g′^2) and O(g′^0) terms in the four-scalar amplitude of the Yang-Mills-scalar theory (4.6) are shown schematically in fig. 1. The O(g′^4) part is the simplest, as it corresponds exactly to the four-scalar amplitude of φ^3 theory. The numerator of the box diagram, shown in fig. 1(a), is entirely expressed in terms of the structure constants of the global group, after stripping off the color factor of the gauge group, c^(a)_box = f̂^{b̂â_1ĉ} f̂^{ĉâ_2d̂} f̂^{d̂â_3ê} f̂^{êâ_4b̂}. Next we consider the O(g′^2) contributions, which correspond to mixed interactions in the Yang-Mills-scalar theory; for the box diagram these contributions are given by fig. 1(b) and its cyclic permutations.
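The box color factor just quoted is mechanical to evaluate; the following sketch (illustrative, with su(2) structure constants as a stand-in for the hatted gauge-group ones) contracts its index pattern with einsum.

```python
import numpy as np

# Evaluate the one-loop box color factor
#   c_box = f^{b a1 c} f^{c a2 d} f^{d a3 e} f^{e a4 b}
# for the stand-in choice f = eps (su(2) / Levi-Civita).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Sum over the internal indices b, c, d, e; free indices are a1..a4.
c_box = np.einsum('bxc,cyd,dze,ewb->xyzw', eps, eps, eps, eps)

print(c_box.shape)        # (3, 3, 3, 3), indexed by a1..a4
print(c_box[0, 1, 1, 0])  # a sample entry
```

Triangle and bubble color factors then follow from Jacobi relations among boxes, mirroring how the triangle and bubble kinematic numerators are defined from the box numerators below.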
While a good approximation to the duality-satisfying box numerator can be obtained using the Feynman rules that follow from the Lagrangian (4.6), we construct the full numerator from an Ansatz constrained to reproduce the correct unitarity cuts. In the labeling conventions of fig. 1, this gives the numerator (6.2), where ℓ_j = ℓ − (k_1 + ... + k_j) and we use the shorthand notation Tr_ij = F^{b a_i c} F^{c a_j b}. The parameter N_V = δ^{ab} δ^{ab} is the number of scalars in the four-dimensional theory (equivalently, the number of vectors after double-copying with another YM numerator). In D dimensions one should replace N_V → N_V + D − 4 for consistent state counting. Note that the box numerator is designed to satisfy automorphism identities reflecting the symmetries of the diagram.

The O(g′^0) contributions, fig. 1(c) and its cyclic permutations, correspond to YM interactions at all vertices. Using the same procedure, an Ansatz constrained by the unitarity cuts of the theory (4.6), the resulting duality-satisfying box numerator is given by (6.3), where ℓ_i, N_V and the automorphism identities are the same as for the O(g′^2) numerator. Certain terms in (6.2) and (6.3) can be directly identified as contributions from the Feynman rules for the box diagram; most terms, however, have a different origin: they are moved into the box numerator from bubble and triangle graphs by a generalized gauge transformation [3,4]. This explains the presence of global-group invariants naively not associated with the box diagram.

The box numerators (6.1), (6.2) and (6.3) are constructed so that the amplitude manifestly obeys color/kinematics duality. In particular, the numerator factors for the remaining contributing diagrams, the triangles and bubbles, are given by the color-kinematic Lie-algebra relations. We have verified that these numerators are correct on all unitarity cuts in D dimensions; in particular, non-planar single-particle cuts built from the tree amplitude A^(0)_6(1, 2, ℓ, 3, 4, −ℓ) were used in this check.

The four-vector Yang-Mills-gravity amplitude

The double-copy recipe provides a straightforward way to construct amplitudes in the N = 2 generic Jordan family as well as in N = 4 YMESGTs. For example, fig. 2 illustrates how the different types of contributions, O(g^4), O(g^2) and O(g^0), arise as double copies between sYM numerators and the numerators computed in the previous section. As for other supergravity theories, one may verify that the unitarity cuts of these amplitudes match the direct evaluation of the cuts in terms of tree amplitudes. The complete amplitude is given by the double-copy formula (2.4), with the numerators of fig. 1 and a sYM numerator that obeys color/kinematics duality; all distinct cyclic permutations of these diagrams should be included. (In the figures, dashed lines denote scalar fields, curly lines denote vector fields or vector multiplets, as the case may be, and wavy lines denote the graviton multiplet.)

Standard on-shell supersymmetry arguments imply that the N = 2 one-loop numerator factors may be written as the difference (6.10) between the N = 4 numerator factors and the numerator factors for one adjoint N = 2 hypermultiplet running in the loop. The N = 4 sYM box numerator is given by (6.11), and the triangle and bubble numerators vanish. Plugging this into (6.8) gives the four-vector amplitude in the N = 4 YMESGT. Color/kinematics-satisfying one-loop numerator factors due to one adjoint hypermultiplet running in the loop may be found in refs. [61,63,64,70]. A manifestly N = 2-supersymmetric box numerator was given in ref.
[70], where ℓ_s = 2ℓ·(k_1 + k_2), ℓ_t = 2ℓ·(k_2 + k_3) and ℓ_u = 2ℓ·(k_1 + k_3). The numerator factors of the other box integrals are obtained by relabeling. The parameter μ is the component of the loop momentum that is orthogonal to four-dimensional spacetime. The external-multiplet dependence is captured by the variables κ_ij. As before, the triangle and bubble numerators are given by the kinematic Jacobi relations. Plugging these numerators, together with the Yang-Mills-scalar numerators (6.7), into equation (6.8) gives the four-vector amplitude in the N = 2 YMESGT. The resulting expression exhibits some of the properties outlined at the beginning of section 6 and at the end of section 5.2. We notice, in particular, that the O(g′^4) terms are given entirely by the N = 2 numerator factors, while the scalar amplitudes of the Yang-Mills-scalar theory provide only the color factors of the supergravity gauge group. As such, these terms are precisely an N = 2 sYM amplitude and, for our choice of numerator factors, manifestly obey color/kinematics duality. Although we will not give the details here, we note that one can easily generalize this calculation further to less supersymmetric theories. In particular, the duality-satisfying four-point one-loop numerators of N = 0 YM and N = 1 sYM, given in refs. [61,62,63,64,70], can be inserted into (6.8), after which one obtains vector amplitudes in certain N = 0 and N = 1 truncations of the generic Jordan family of YMESGTs.

Conclusions and outlook

It is no surprise that MESGTs obtained by truncation from N = 8 supergravity, such as N = 2 supergravity with 1, 3, 5 or 7 vector multiplets, have a double-copy structure inherited from that of the parent theory. It is, however, less straightforward that this double-copy structure can be extended to theories with a richer matter content; a large class of examples is provided by the theories of the generic Jordan family of N = 2 MESGTs, which have particular symmetric target spaces. In this paper we studied the formalism that follows from the requirement that supergravity theories coupled to non-abelian gauge fields have a double-copy structure. We found that gauging global symmetries of N = 2 MESGTs may be accomplished in this framework by adding a certain relevant trilinear scalar operator to one of the two gauge-theory double-copy factors. The appropriate undeformed theory is a bosonic Yang-Mills-scalar theory containing quartic scalar interactions consistent with a higher-dimensional pure-YM interpretation. The undeformed gauge theory naturally obeys color/kinematics duality, and we have shown that the deformed gauge theory continues to obey the duality provided that the tensors controlling the trilinear couplings obey Jacobi relations and hence can be identified as the structure constants of a gauge group. The fact that the gauge theories on both sides of the double copy satisfy the BCJ amplitude relations gives confidence that the construction should give gravitational amplitudes belonging to some well-defined class of supergravity theories. We discussed in detail the theories of the generic Jordan family of N = 2 MESGTs and YMESGTs, and constructed some simple examples of tree-level and one-loop scattering amplitudes. By comparing the tree-level results of the double-copy construction with those of a Feynman-graph calculation, we identified a linearized transformation relating the Lagrangian and double-copy asymptotic states.
In particular, for the specific Lagrangian chosen in section 3.4, the two sets of states are related by the identity map. Thus we have shown that the specific double-copy construction discussed here gives the generic Jordan family of N = 2 MESGTs and YMESGTs in D = 4 and D = 5.

Quite generally, there are many possible choices of fields which are classically equivalent. As discussed in section 3.4, different choices are related by a change of symplectic section, i.e. by transformations that include electric/magnetic duality transformations. The double-copy realization of electric/magnetic duality transformations in supergravity was discussed in ref. [60], where the charges of supergravity fields were identified as the difference of helicities in the two gauge-theory factors, whenever the corresponding U(1) transformation is not part of the on-shell R-symmetry. It was also shown in ref. [60] that, while electric/magnetic duality is a tree-level symmetry (tree-level scattering amplitudes carrying nonzero charge vanish), for N ≤ 4 theories it acquires a quantum anomaly, and certain one-loop scattering amplitudes carrying non-vanishing charge are nonzero. In the current work we implicitly assume that dimensional regularization is used at loop level; in that case the anomaly appears because electric/magnetic duality does not lift smoothly between spacetime dimensions. Some amplitudes breaking this duality are the same in the MESGT and in the corresponding YMESGT; an example is the amplitude with two positive-helicity gravitons and two scalars in the V_− multiplet [60], whose dimensional-regularization value involves the combination S_{+−} ≡ S_{−+}, with ñ the number of vector multiplets. In the presence of such an anomaly, MESGTs and YMESGTs whose symplectic sections are related classically by duality transformations are no longer equivalent quantum mechanically, and their effective actions differ by finite local terms. The field identification found in section 5.1 shows that the double-copy construction discussed in this paper realizes a specific symplectic section (and, through dimensional regularization, specific quantum corrections). It would be interesting to see whether it is possible to give double-copy constructions for different symplectic sections of the same theory.

Four-dimensional YMESGTs of the generic Jordan family coupled to hypermultiplets appear as low-energy effective theories of the heterotic string compactified on K3 × T^2. The string-theory construction suggests that it should be possible to extend our construction to include hypermultiplets and their interactions; tree-level KLT-like relations should exist at least for the specific numbers of vector and hypermultiplets that can be accommodated in string theory (or even beyond that, using the formalism of ref. [70]). In particular, it would be desirable to understand how the introduction of hypermultiplets modifies the two gauge-theory factors and what restrictions on the gauge-group representations are imposed by coupling N = 2 Yang-Mills-matter theories to N = 2 supergravity. An extension of the double-copy construction to gauge-theory factors with fields in fundamental and bifundamental representations was discussed in refs. [69,70]. Generically this yields supergravity theories with different matter content than a double-copy construction with fields in the adjoint representation.
A natural direction for further research would be to include in the two gauge-theory factors fields transforming in arbitrary representations of the gauge group, and to systematically study the gauged supergravity theories obtained through the double copy. In particular, a construction of this sort may be necessary to obtain some of the magical supergravity theories and to study their gaugings.

In addition to the more formal discussion, in this paper we have also obtained simple expressions for the three-point supergravity superamplitudes. The structure of these amplitudes should extend to more general N = 2 MESGTs and YMESGTs which do not belong to the generic Jordan family. The on-shell three-point interactions are universal except for the C tensor, which specifies the theory. Using this structure, it should be possible to construct amplitudes in more general theories from simple building blocks even when a double-copy construction is not yet available.

Understanding whether it is possible to satisfy the locality and dimensionality constraints required for a double-copy construction of R-symmetry gaugings, as discussed in section 2.2, remains an interesting open problem. If such a structure exists, it would be interesting to explore whether it is restricted to scattering amplitudes around Minkowski vacua or whether it extends, with an appropriate choice of boundary conditions, to the more general case of AdS vacua. Such a double-copy structure may translate, through the AdS/CFT correspondence, into a double-copy structure for the correlation functions of certain gauge-invariant operators of the dual gauge theory. A direct investigation of the double-copy properties of correlation functions of gauge theories, perhaps along the lines of refs. [113,114,115], may also provide an alternative approach to answering this question.

As mentioned at the beginning of section 4, directly applying our construction in six dimensions yields a theory which, apart from the graviton multiplet, contains one self-dual tensor multiplet and ñ − 2 vector multiplets. The spectrum of our six-dimensional double-copy construction coincides with that of the D = 6 YMESGT formulated in refs. [108,109]. It would be interesting to explore whether the scattering amplitudes following from the Lagrangian of this D = 6 YMESGT are the same as those generated by our construction. Such YMESGTs coupled to hypermultiplets arise from compactification of the heterotic string on a K3 surface.

As remarked earlier, the D = 5 generic Jordan family of MESGTs can also be obtained by dimensional reduction of six-dimensional N = (1, 0) supergravity coupled to ñ − 1 self-dual N = (1, 0) tensor multiplets. Since an interacting non-abelian theory of N = (1, 0) tensor multiplets is not known, our construction cannot be applied directly to calculating its amplitudes, assuming such a theory exists. For example, since this interacting tensor theory has no vector multiplets, it cannot be constructed in terms of scalar-coupled gauge theories. Rather, one may realize chiral tensor fields as bispinors, so the relevant gauge theories should contain additional fermions [70].

The fact that two very different-looking theories in D = 6 reduce to the same five-dimensional MESGT, belonging to the generic Jordan family, has a counterpart in theories with 16 supercharges: the N = 4 sYM supermultiplet can be obtained from the N = (1, 1) sYM multiplet as well as from the N = (2, 0) tensor multiplet in six dimensions.
MESGTs describing the coupling of n N = (1, 1) vector multiplets to N = (1, 1) supergravity in six dimensions have U-duality group SO(n, 4) × SO(1, 1), and we expect the method presented in this paper to extend in a straightforward manner to the construction of the amplitudes of the corresponding N = (1, 1) YMESGTs. The formulation of a consistent interacting non-abelian theory of N = (2, 0) multiplets coupled to N = (2, 0) supergravity remains a fascinating open problem.

A Notation

In this appendix we present a summary of the various indices used throughout the paper. In the YMESGT Lagrangians we use the quantities f^{rst} and g to denote the structure constants and coupling constant of the supergravity gauge group. These should not be confused with f̂_âb̂ĉ and the gauge-theory coupling g, which denote the structure constants and coupling constant of the two gauge-theory factors employed in the double-copy construction. Additionally, F^{rst} are the structure constants of the global symmetry group of the cubic scalar couplings introduced in the N = 0 gauge-theory factor, and g′ is the proportionality constant appearing in those couplings.
Permanence, Periodicity and Extinction of a Delayed Biological System with Stage-Structured Preference for Predator

This study considers a delayed biological system of predator-prey interactions in which the predator has stage-structured preference. It is assumed that the prey population has two stages, immature and mature, and that the predator has different preferences for the two prey stages. This type of behavior has been reported in Asecodes hispinarum and Microplitis mediator. Using several lemmas and methods from delay differential equations, conditions for the permanence, the existence of a positive periodic solution, and the extinction of the system are obtained. Numerical simulations are presented that illustrate the analytical results and demonstrate certain biological phenomena. In particular, overcrowding of the predator does not affect the persistence of the system, but our numerical simulations suggest that overcrowding reduces the density of the predator. Under the assumption that immature prey is easier to capture, our simulations suggest that the predator's preference for immature prey increases the predator density.

Introduction

In recent years, much attention has been paid to biological systems with stage structure [1]-[23]. One important reason is that there are many species whose individual members have a life history taking them through two stages, immature and mature; considering stage structure in a population therefore corresponds to this natural phenomenon. Another reason is that stage-structured ecological models are much simpler than models governed by partial differential equations, yet they can exhibit similar phenomena and allow many important physiological parameters to be incorporated [24]. A further reason is that biological dynamics has long been, and will continue to be, one of the dominant themes in both ecology and mathematical ecology, due to its universal existence and importance [25].

In [1]-[5], the authors studied the stability of a class of stage-structured predator-prey systems. The authors of [6] [7] carried out Hopf bifurcation analyses of delayed predator-prey systems with stage structure. As is well known, environmental and biological parameters (such as seasonal effects of weather, food supplies, and mating habits) fluctuate naturally over time; the effects of periodically varying environments are therefore considered to be important selective forces in systems with fluctuating environments [19] [20]. Incorporating periodicity into models of stage-structured biological systems is thus more realistic in a changing environment, and many researchers have studied classes of periodic nonautonomous biological systems with stage structure [8]-[16] [21]. Recently, Cui and Song [21] considered a predator-prey system with stage-structured prey, in which x_1(t), x_2(t) and y(t) denote the densities of the immature prey, mature prey, and predator species, respectively. They obtained a set of sufficient and necessary conditions that guarantee the permanence of the system.
In the natural world, many predators switch to alternative prey when their favored food is in short supply [22]-[24]. For example, the lynx switches to the red squirrel when the snowshoe hare is scarce [25]. Even when there is only one prey type, the degree of predation or the quality (including palatability) of prey is likely to vary with its stage structure, which in turn affects the predator's preference among stage-structured prey. This type of behavior has been reported in Asecodes hispinarum [26], which parasitizes all five instars of Brontispa longissima but prefers the 2nd and 3rd instars when exposed to all larval instars, and in Microplitis mediator [27], which prefers to parasitize the 2nd and 3rd instars of Mythimna separata.

However, previous studies on prey age preference have only been carried out as laboratory tests. Few researchers have investigated the phenomenon with mathematical models or carried out theoretical analysis together with numerical simulation. To extend research in this area, and building on the recent study by Cui and Song [21], we consider a periodic predator-prey system with time delay and a predator with stage-structured preference.

Formulation of the Model

Let x_1(t), x_2(t) and y(t) represent the densities of the immature prey, mature prey and predator species, respectively. Our periodic predator-prey system (2.1) with time delay and stage-structured preference of the predator is formulated accordingly. The coefficients in system (2.1) are all continuous positive T-periodic functions. The parameter ω(t) is the predator's preference for immature prey, which takes values between 0 and 1, and the complementary quantity is the predator's preference for mature prey [28] [29]; c(t) is the conversion rate of nutrients into the reproduction of the predator. The parameter τ_2 is the delay due to gestation; that is, only the mature adult predator can contribute to the production of predator biomass. The functional response of the predator to the mature prey takes the Holling type-III form, and h(t) denotes the corresponding conversion rate of nutrients into the reproduction of the predator.

The initial conditions for system (2.1) take the form (2.3) and (2.4). For convenience, we write B(t) for the combination of coefficients arising in the transformed system; B(t) is a T-periodic and strictly positive function, and system (2.1) then becomes system (2.5). In this paper we consider system (2.5) with initial conditions (2.3) and (2.4). Throughout, for a continuous T-periodic function g(t) we use the standard notation for its maximum and minimum values over a period.

The rest of the paper is arranged as follows. In the following section, we introduce some lemmas and then explore the permanence and periodicity of system (2.5). In Section 4, we investigate the extinction of the predator population in system (2.5). In Section 5, numerical simulations are presented to illustrate the feasibility of our main results, and the simulated results are interpreted from a biological perspective. In Section 6, a brief discussion concludes this work.
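Before turning to the analysis, the following sketch indicates how a delayed system of this general shape can be integrated numerically with a fixed-step Euler scheme and a history buffer for the gestation delay. All functional forms and parameter values here are illustrative assumptions, not the coefficients of system (2.5); only the handling of the delay and of the overcrowding term q(t) = 0.07 is the point.

```python
import numpy as np

dt, T, tau = 0.001, 100.0, 1.0
lag = int(round(tau / dt))              # gestation delay in steps
n = int(round(T / dt))

t = np.arange(-lag, n + 1) * dt         # time grid including history
x1 = np.full(n + lag + 1, 0.4)          # immature prey (constant history)
x2 = np.full(n + lag + 1, 0.4)          # mature prey
y  = np.full(n + lag + 1, 0.2)          # predator

b = lambda s: 2.0 + 0.5 * np.sin(s)     # assumed 2*pi-periodic birth rate

for k in range(lag, n + lag):
    s = t[k]
    resp  = x2[k]**2 / (1.0 + x2[k]**2)              # Holling type III
    respd = x2[k - lag]**2 / (1.0 + x2[k - lag]**2)  # delayed response
    dx1 = b(s) * x2[k] - 1.3 * x1[k] - 0.6 * x1[k] * y[k]
    dx2 = 0.5 * x1[k] - 0.3 * x2[k] - 0.4 * x2[k]**2 - resp * y[k]
    dy  = 0.7 * respd * y[k - lag] - 0.5 * y[k] - 0.07 * y[k]**2
    x1[k + 1] = x1[k] + dt * dx1
    x2[k + 1] = x2[k] + dt * dx2
    y[k + 1]  = y[k] + dt * dy

print(x1[-1], x2[-1], y[-1])   # remains positive in this run
```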
Permanence and Periodicity

In this section we analyze the permanence and periodicity of system (2.5) with initial conditions (2.3) and (2.4). We first introduce a definition and several lemmas that are used to obtain our results.

Definition 3.1. The system is said to be permanent if there exist positive constants such that every positive solution of the system eventually enters and remains in a fixed compact region of the positive cone.

Lemma 3.2. The scalar periodic logistic-type equation has a unique positive T-periodic solution which is globally asymptotically stable.

Lemma 3.3 (see [31]). System (3.2) has a unique positive T-periodic solution which is globally asymptotically stable with respect to all positive solutions.

Lemma 3.4. Let (x_1(t), x_2(t)) be any solution of system (3.2). By Lemma 3.3, system (3.2) has a unique globally attractive positive T-periodic solution; hence, for any positive constant ε (0 < ε < 1), there exists a T_1 > 0 such that the solution is ε-close to this periodic solution for all t ≥ T_1. Applying this bound yields an upper bound x_M for the solutions of system (3.2), defined by equation (3.5).

Theorem 3.5. System (2.5) is permanent and has at least one positive T-periodic solution provided condition (3.6) holds, where the positive periodic solution of system (3.2) is given by Lemma 3.3 and x_M is the upper bound of system (3.2) given by Lemma 3.4 and defined by equation (3.5).

We need the following propositions to prove Theorem 3.5.

Proposition 3.6. All solutions of system (2.5) with initial conditions (2.3) and (2.4) are eventually bounded above by x_M, the upper bound of system (3.2) given by Lemma 3.4 and defined by equation (3.5). Furthermore, there exists a positive constant y_M bounding the predator component.

Proof. Given any solution (x_1(t), x_2(t), y(t)) of system (2.5) with initial conditions (2.3) and (2.4), consider the auxiliary system (3.7). By Lemma 3.3, system (3.7) has a unique globally attractive positive T-periodic solution; let (u_1(t), u_2(t)) be the solution of system (3.7) with the same initial data. By the vector comparison theorem [32], the prey components of (2.5) are dominated by the solution of (3.7); applying (3.8) and Lemma 3.4 then gives the stated bound. In addition, from the third equation of (2.5) we obtain a differential inequality for y(t); comparing with the auxiliary equation (3.9) and using condition (3.6), it follows by (3.10) and Lemma 3.2 that system (3.9) has a unique positive T-periodic solution which is globally asymptotically stable, and hence y(t) admits the stated bound y_M. This completes the proof of Proposition 3.6.

Proposition 3.7. There exists a positive constant η_x < x_M such that the prey components of every solution of system (2.5) are eventually bounded below by η_x.

Proof. By Proposition 3.6, there exists T_2 > 0 beyond which the solutions obey the above upper bounds. Hence, from the first and second equations of system (2.5) we obtain, for t ≥ T_2, a differential inequality whose comparison system (3.13) has, by Lemma 3.3, a unique globally attractive positive T-periodic solution. Letting (u_1(t), u_2(t)) be the solution of system (3.13) with the same initial data, the vector comparison theorem [32] yields the required lower bound. This completes the proof of Proposition 3.7.

Propositions 3.8 and 3.9 establish the corresponding lower bound for the predator. The argument considers the auxiliary system with a parameter δ, eq. (3.18); for a given ε > 0 there exists a sufficiently large T_4 > T_3 such that, by continuity of the solution in the parameter, the solutions approach the unperturbed periodic solution as δ → 0. Combining (3.20), (3.21) and (3.22), one obtains constants P > 0 and a positive lower bound for y(t) on a sequence of time intervals; supposing the conclusion false and using Propositions 3.6 and 3.7 then leads, for sufficiently large time, to a contradiction. This completes the proof of Proposition 3.9.

Proof of Theorem 3.5. By Propositions 3.6-3.9, system (2.5) is permanent. Using the result given by Teng and Chen in [33], system (2.5) has at least one positive T-periodic solution. This completes the proof of Theorem 3.5.
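The role Lemma 3.3 plays in these comparison arguments can be illustrated numerically. The sketch below uses an assumed logistic form u' = u(a(t) − b(t)u) with 2π-periodic coefficients (not the paper's exact auxiliary system): solutions started from different positive initial values collapse onto a single positive periodic orbit.

```python
import numpy as np

# Three positive initial conditions for the periodic logistic equation
# u' = u*(a(t) - b(t)*u); global attractivity of the unique positive
# periodic solution makes them coincide after a transient.
a = lambda s: 1.0 + 0.3 * np.sin(s)
b = lambda s: 0.5 + 0.2 * np.cos(s)

dt = 1e-3
n = int(60 * np.pi / dt)                # roughly 30 periods
u = np.array([0.1, 1.0, 5.0])
for k in range(n):
    s = k * dt
    u = u + dt * u * (a(s) - b(s) * u)  # forward Euler step

print(u)                   # the three trajectories nearly coincide
print(np.ptp(u) < 1e-6)    # spread after transients is negligible
```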
Extinction

In this section we investigate the extinction of the predator population in system (2.5) with initial conditions (2.3) and (2.4) under a suitable condition.

Theorem 4.1. Suppose that condition (4.1) holds. Then the predator population goes extinct, i.e. y(t) → 0 as t → +∞.

Proof. According to (4.1), for every given positive constant ε_1 a corresponding bound holds. From the first and second equations of system (2.5) we obtain upper estimates for the prey components; hence, for the above ε_1, there exists a time beyond which (4.3) holds. It then follows from (4.2) and (4.3) that the growth rate of y(t) is eventually negative, so y(t) decreases below any prescribed level, establishing (4.5); supposing otherwise leads to a contradiction with the uniqueness of the positive T-periodic solution of the comparison equation. By the arbitrariness of ε, it follows that y(t) → 0 as t → +∞. This completes the proof of Theorem 4.1.

Numerical Simulations

According to Theorem 3.5, system (2.5) with the chosen coefficients is permanent and admits at least one positive 2π-periodic solution for any nonnegative 2π-periodic function q(t). Figure 1 shows the dynamic behavior of system (2.5) with these coefficients and q(t) = 0.07; the periodic solutions shown there were found by numerical integration of system (2.5) with positive initial conditions. Figure 2 shows the dynamic behavior of the system under a second choice of coefficients.
The Dynamic Analysis of a Novel Reconfigurable Cubic Chaotic Map and Its Application in Finite Field

Dynamic degradation occurs when chaotic systems are implemented on digital devices, which seriously threatens the security of chaos-based pseudorandom sequence generators. The degradation manifests as complex periodic behavior, which is often ignored by designers and seldom analyzed in theory. Not knowing the exact period of the output sequence is the key problem limiting the application of chaos-based pseudorandom sequence generators. In this paper, two cubic chaotic maps are combined; the combination has symmetry and a reconfigurable form in digital circuits. The dynamic behavior of the cubic chaotic map and of the corresponding digital cubic chaotic map are analyzed, and the reasons for the complex period and weak randomness of the output sequences are studied. On this basis, the digital cubic chaotic map is optimized and its complex periodic behavior is improved. In addition, a reconfigurable pseudorandom sequence generator based on the digital cubic chaotic map is constructed with a view to saving logical resources. Theoretical and numerical analysis shows that the pseudorandom sequence generator overcomes the complex period and weak randomness of the digitized cubic chaotic map, giving the output sequence better performance at lower resource consumption, which lays the foundation for applying it in the field of secure communication.

Introduction

Chaos is a well-known dynamic behavior in physics. The most famous characteristic of a chaotic system is that its evolution is sensitive to the initial conditions, also known as the butterfly effect [1]. Chaotic systems are nonlinear, sensitive to initial values, aperiodic, ergodic, and noise-like. These special behaviors are consistent with "confusion" and "diffusion" in Shannon's information theory, which provides a theoretical basis for generating pseudorandom sequences via chaotic systems [2][3][4][5]. Because the future evolution of a chaotic system is aperiodic and unpredictable, chaos is also widely used in the field of secure communication [6][7][8][9][10][11][12].

With the development of numerical analysis in finite fields, it was found that the finite-precision effect causes dynamic degradation of chaotic systems defined over the real numbers, which gives cryptographic algorithms based on chaotic systems weak cryptographic security [13]. Investigating this phenomenon, Li systematically studied how a chaotic system changes in passing from the real number field to a finite field, established a mathematical model of chaotic systems in finite fields, and presented a method for analyzing chaotic performance in the digital domain [14]. Since then, the degradation characteristics of digital chaotic systems have been studied, and many methods to resist chaotic degradation have been proposed. Yang studied the mathematical model of the logistic chaotic map over Z(2^m) and deduced the maximal dynamic orbit [15]. Souza constructed the mathematical model of Arnold's cat map over Z(3^m) and built a new pseudorandom sequence generator [16]. Miyazaki found that the properties of sequences generated by digital chaotic systems largely depend on the truncation method inherent to finite fields [17].
The non-smooth probability distribution function of robust logistic map (RLM) trajectories gives an uneven binary distribution in randomness tests; to overcome this disadvantage of the RLM, control of chaos (CoC) has been proposed to smooth the probability distribution function of the RLM [18]. Li proposed the state mapping network (SMN) model of chaotic systems in finite fields, studied the dynamic properties of the logistic and tent chaotic maps in finite fields, and proved the scale-free property of the logistic chaotic map in the digital domain [19].

With the rapid development of large-scale integrated circuits, the speed and volume of information collection, analysis, and transmission have greatly increased, and the throughput of pseudorandom sequences can reach 10^11 bit/s. If a pseudorandom sequence is not to repeat within 10 years, its period should not be less than 2^65, and the period of a pseudorandom sequence with encryption capability is generally required to be greater than 2^80. It was once thought that the digital chaotic sequence generated by a digital chaotic system retained the long-period behavior of the original chaotic map [20]. However, Alvarez found that the orbits of digital chaotic systems degenerate to some extent, leading to the emergence of short-period behavior [21]. To overcome the complex periodic behavior of degenerate chaotic systems in the digital domain, Deng proposed a feedback method, mixing digital and analog chaotic systems, which efficiently increases the period of the digital chaotic sequence [22]. Zheng introduced a disturbance source to perturb the digital chaotic system and resist the complex periodic behavior of the output sequence [23]. Chen proposed a dynamical perturbation-feedback mixed control (DPFMC) method for a novel pseudorandom sequence, combining feedback and perturbation [24]. Building on the feedback term, Wang proposed a general method to enhance the period of digital chaotic sequences using control theory, effectively increasing the period in theory [25]. In addition, Lin proposed a construction method for nondegenerate high-dimensional chaotic systems, resisting the short-period behavior of digital chaotic systems by increasing the dimension [26]. Guyeux innovatively proposed constructing chaotic systems directly in finite fields, calling such systems chaotic iterative systems, which overcomes the influence of the truncation-error effect on the period [27]. Wang proposed a high-dimensional chaotic iterative system, further enhancing the ability to resist complex periodic behavior of digital chaotic sequences [28]. For digital chaotic systems, Yang transformed the digital field into other special fields and used the properties of different fields to increase the period of the digital chaotic sequence [15]. In digital circuit design, the state space of a finite state machine is always finite; under a finite state space, the orbit of a digital chaotic system gradually deviates from the theoretical true orbit of the corresponding chaotic system. The method of cascaded chaotic systems was proposed systematically by Zhou, which can effectively improve the period length of the digital chaotic sequence, together with a specific application example [29].
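The disturbance-source idea of [23] can be sketched concretely (this is an illustration, not the cited scheme): a small LFSR perturbs the low bits of a digitized map at every step, which typically lengthens the short cycles produced by finite precision.

```python
# N-bit digital logistic map, with and without an LFSR disturbance.
N = 16
MASK = (1 << N) - 1

def logistic_fp(x):
    # x_{n+1} = 4*x*(1-x) in N-bit unsigned fixed point
    return (4 * x * (MASK - x) >> N) & MASK

def lfsr_step(s):
    # maximal-length 8-bit Fibonacci LFSR (taps 8,6,5,4), period 255
    bit = ((s >> 7) ^ (s >> 5) ^ (s >> 4) ^ (s >> 3)) & 1
    return ((s << 1) | bit) & 0xFF

def cycle_len(step, state, limit=1 << 22):
    seen = {}
    for i in range(limit):
        if state in seen:
            return i - seen[state]   # length of the eventual cycle
        seen[state] = i
        state = step(state)
    return None                      # no cycle found within limit

def perturbed(st):
    x, s = st
    s = lfsr_step(s)
    return (logistic_fp(x) ^ (s & 0x0F), s)  # disturb the 4 low bits

print(cycle_len(logistic_fp, 12345))        # short period, typically
print(cycle_len(perturbed, (12345, 0xE1)))  # usually much longer
```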
Shakiba proposed a cascaded modulation-coupled hyperchaotic system to further enhance the periodicity of digital chaotic sequences [30]. The above research advances the enhancement of the period of digital chaotic sequences and lays a theoretical foundation for methods that counteract the complex periodic behavior of digital chaotic sequences.

In the field of digital information encryption, including pseudorandom sequence generators, chaotic stream ciphers, and chaotic image encryption, an accurately known period is an important feature of the security of chaos-based cryptographic modules. In contrast to current methods for enhancing the period of digital chaotic sequences, this paper proposes a novel special cubic chaotic map and accurately controls the period of the output sequence of the digital cubic chaotic map, comprehensively overcoming the weak pseudorandomness caused by the finite-precision effect. On this foundation, a reconfigurable pseudorandom sequence generator based on the digital cubic chaotic system is proposed. Since the pseudorandom sequence generator is an important part of chaotic stream ciphers, chaotic image encryption, and other chaotic ciphers, the period of the output sequence plays an important role in improving the security of cryptographic modules based on digital chaotic systems.

This paper is organized as follows. Section 2 introduces a special cubic chaotic map over the real numbers and analyzes its chaotic behavior, especially its chaotic attractors. Section 3 discusses the degradation of the cubic chaotic map and analyzes its dynamic behavior, especially its attractors in finite fields. Section 4 presents a novel reconfigurable pseudorandom sequence generator based on the digital cubic chaotic system and analyzes its period and randomness. The last section concludes our work.

New Cubic Chaotic Map

For chaos-based cryptosystems, one-dimensional chaotic maps are widely used for their simple algebraic structure, especially linear or quadratic chaotic maps, but the security of linear and quadratic chaotic maps is relatively low. High-dimensional chaotic maps offer high security, but the hardware resource consumption of a cryptosystem based on a high-dimensional chaotic map is large. In contrast, the dynamic behavior of cubic chaotic systems has been studied relatively little, although the cubic map is also a one-dimensional map with a simple algebraic structure; moreover, the one-dimensional cubic chaotic map has higher order, stronger nonlinearity, and small hardware resource consumption. The cubic map has been studied in depth, especially for positive parameter values [31,32]. Considering the tradeoff between resources and security performance, a special form of the cubic chaotic map, Map (2), is proposed in this paper based on the classical cubic map [32], with parameters a ∈ {−1, 1}, b ∈ (1, 10) and t ∈ (1, 3).

Fixed Point Analysis

The stability of fixed points of the discrete iterative system x_{n+1} = f(x_n), with iteration index n, reflects certain special evolution behaviors of the dynamic system. According to dynamical systems theory, the stability of a fixed point is determined by the derivative, at the fixed point, of the function f corresponding to the discrete iterative map. Setting x_{n+1} = x_n, Map (2) becomes the fixed-point equation (3). For this cubic polynomial equation, the roots are easy to obtain because there is no constant term.
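This can be confirmed symbolically. The sketch below assumes that Map (2) has the cubic form x_{n+1} = a(b x_n^3 − t x_n); this form is a hedged reconstruction, chosen because it reproduces the fixed points and derivative values quoted in the text.

```python
import sympy as sp

# Fixed points of x_{n+1} = f(x_n) with the assumed form
# f(x) = a*(b*x**3 - t*x); the equation f(x) = x has no constant term.
x, a, b, t = sp.symbols('x a b t', real=True)
f = a * (b * x**3 - t * x)

roots = sp.solve(sp.Eq(f, x), x)
print(roots)   # [0, -sqrt((a*t + 1)/(a*b)), sqrt((a*t + 1)/(a*b))]

fp = sp.diff(f, x)
print(sp.simplify(fp.subs(x, 0)))                    # -a*t, so |f'(0)| = t > 1
print(sp.simplify(fp.subs(x**2, (a*t + 1)/(a*b))))   # 2*a*t + 3
```

For a = 1 and t ∈ (1, 3) the derivative 2at + 3 exceeds 1 in absolute value, while for a = −1 its magnitude is indeed hard to determine, consistent with the discussion that follows.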
The roots are x_n = 0 and x_n^2 = (ta + 1)/(ab). The function corresponding to the discrete iterative Map (2) is given in (5), and its derivative function in (6). Substituting the fixed points into Equation (6), we obtain the derivative values at the fixed points. When a ∈ {−1, 1}, b ∈ (1, 10) and t ∈ (1, 3), the absolute value of the derivative at the fixed points x_n^2 = (ta + 1)/(ab) is hard to determine in general; however, the absolute value of the derivative at the fixed point x_n = 0 is greater than 1. Therefore, the cubic Map (2) has at least one unstable fixed point, which can make it a chaotic map for suitable parameters a, b and t.

The stability of the fixed points affects the whole dynamic behavior of the discrete iterative map. When a = 1, Figure 1 shows the influence of the parameters b and t on the sequence generated by Map (2). From Figure 1a we see that, as the parameter b changes, the dynamic behavior of Map (2) does not change, but the value range of the iterative variable x_n does: when a = 1 and b = 1, x_n ∈ [−2, 2]; when a = 1 and b = 4, x_n ∈ [−1, 1]; when a = 1 and b = 10, x_n ∈ [−0.6, 0.6]. As the parameter b increases, the range of the iteration variable decreases, so b controls the amplitude of the iteration variable x_n. Through extensive statistical experiments, we find that b = 4 is the most conducive to realization in digital circuits. In contrast to the effect of b, the dynamic behavior of Map (2) changes with the parameter t: when a = 1, as t changes, the iterative variable shows period-doubling bifurcation, a classical route to chaos. When a = 1 and t = 3, the iterative variable of this special cubic map has the largest value range.

When a = −1, Figure 2a shows the influence of parameter b, and Figure 2b the influence of parameter t, on the sequence generated by Map (2). We again find that b affects the amplitude of the iteration variable x_n, while t affects its dynamic behavior. When a = −1, as t changes, the iterative variable again shows period-doubling bifurcation, but the specific evolution differs from the a = 1 case. Different maps can thus be realized by changing only one sign in the algebraic structure of Map (2), which shows that Map (2) is highly reconfigurable and has a special symmetry. Having systematically analyzed the influence of the parameters on the fixed points, and since b = 4 gives Map (2) a value range of the iteration variable x_n that is convenient for digital-circuit realization, Map (2) can be rewritten, by fixing b = 4, as Map (8), with parameter a ∈ {−1, 1} and x_n ∈ [−1, 1].

Lyapunov Exponent Spectrum

The Lyapunov exponent, also known as the Lyapunov characteristic exponent, quantifies the average exponential divergence rate of adjacent trajectories in phase space. It is an important numerical indicator for identifying dynamic behaviors, especially chaotic behavior. When the Lyapunov exponent of an iterative map is positive, the map shows chaotic behavior and can be called a chaotic map. The precise expression of the Lyapunov exponent is given in Definition 1.
Definition 1. The Lyapunov exponent of a discrete dynamic system x_{n+1} = F(x_n) with iteration index n is defined as the limiting logarithmic rate of separation of two orbits whose initial distance ∆ is a small positive number; equivalently, λ = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} ln |F′(x_k)|.

Calculating the Lyapunov exponent directly from its definition is relatively complicated; at present, the Jacobi method is commonly used for discrete dynamic systems. Since the parameter b does not affect the dynamic behavior of Map (2), we mainly analyze the variation of the Lyapunov exponent with the parameter t, shown in Figure 3. By numerical comparison we find that the Lyapunov exponent spectrum of Map (2) is independent of the parameter a and depends only on t. In theory, the absolute value of the derivative |f^(1)(x_n)| in equation (10), the key quantity in the Jacobi method, does not involve the parameter a. However, a participates in the whole iteration of Map (2). Therefore, the parameter t determines whether Map (2) behaves chaotically, while the parameter a determines the specific evolution of the chaotic behavior. By systematically analyzing the influence of the parameters on the dynamic behavior, we find that for t = 3, Map (2) has the largest Lyapunov exponent, λ = 1.1, and exhibits well-developed chaotic behavior. Fixing t = 3, Map (2) can be rewritten as Map (11), with parameter a ∈ {−1, 1} and x_n ∈ [−1, 1].

Symmetry Analysis

The symmetry of a discrete map depends mainly on its corresponding function, which constrains the dynamic behavior of the iteration. Map (11) is a discrete chaotic iterative map based on a polynomial function whose highest order is 3; the function corresponding to chaotic Map (11) is the cubic polynomial (12), with parameter a ∈ {−1, 1}. The graph of Function (12), shown in Figure 4, has obvious symmetry about the origin, which can also be seen from its algebraic form: (12) is an odd function. In addition, the dynamic behavior of the chaotic Map (11) itself has symmetry: as Figures 1 and 2 show, the values of the iteration variable x_n are symmetric with respect to the x-axis. For a = 1 and a = −1 the algebraic forms of the chaotic Map (11) are also related by symmetry in the operations involved, which provides the basis for the reconfigurable design of Map (11).

Chaotic Attractors and Sequence Characteristics

The Lyapunov exponent is a one-dimensional numerical index of chaotic behavior; chaotic behavior can also be shown by an attractor, a higher-dimensional description. For a one-dimensional chaotic map, the attractor can be displayed as a two-dimensional image. Time series and iterative graphs also reflect the chaotic characteristics of a map well; in particular, iterative graphs show the specific behavior of the attractor of a one-dimensional chaotic map. When a = 1, the chaotic Map (11) can be written as Map (13), with iterative variable x_n ∈ [−1, 1]. Through numerical simulation, the time series and iterative graph of Map (13) are shown in Figure 5. Since Map (13) has a positive Lyapunov exponent, it is a chaotic map. When x_0 = 1, the output sequence of Map (13) is shown in Figure 5a; it is bounded and random-like. A numerical estimate of the corresponding Lyapunov exponent is sketched below.
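The sketch assumes Map (11) has the Chebyshev-like form x_{n+1} = a(4 x_n^3 − 3 x_n), consistent with b = 4 and t = 3 and with the quoted value λ = 1.1; the Jacobi-method estimate should approach ln 3 ≈ 1.0986.

```python
import numpy as np

# Jacobi-method Lyapunov exponent: long-run average of log|f'(x_n)|
# along an orbit of the assumed map f(x) = a*(4*x**3 - 3*x).
a = 1.0
f  = lambda x: a * (4 * x**3 - 3 * x)
df = lambda x: a * (12 * x**2 - 3)

x, acc, n = 0.123456, 0.0, 200_000
for _ in range(n):
    acc += np.log(abs(df(x)))
    x = min(1.0, max(-1.0, f(x)))   # guard against rounding leaving [-1, 1]

print(acc / n)   # ~1.0986 = ln 3; the sign choice a does not change it
```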
When a = -1, Map (11) can be expressed as

x_{n+1} = -(4 x_n^3 - 3 x_n) = 3 x_n - 4 x_n^3, (14)

where x_n ∈ [-1, 1]. Maps (13) and (14) have a high degree of similarity: the arithmetic in the whole iteration is identical, the only difference being one sign. Maps (13) and (14) are therefore highly reconfigurable and symmetric, and realizing the two systems simultaneously in digital circuits allows multiple identical units to be reused, greatly reducing hardware resource consumption. Like Map (13), Map (14) also has good chaotic characteristics; its time series and iterative graph are shown in Figure 6. When x_0 = 0.1, the output sequence of Map (14) is shown in Figure 6a; it is likewise bounded and random-like. However, for the same initial value the evolutions of the output sequences of Maps (13) and (14) differ, which is also reflected in the iterative graphs. As Figure 6b shows, the chaotic attractor of Map (14) is divided into two parts, similar to the Lorenz attractor, whereas the attractor of Map (13) is mainly concentrated in one part, similar to the attractor of the Logistic map. Thus, although the forms of Maps (13) and (14) are highly similar, the evolutions of their output sequences differ. Because of these different sequence characteristics, realizing the two maps in one digital circuit yields two sequences with different behavior while repeatedly using the same modules.

The Model of the Digital Cubic Chaotic Map in Finite Field

Although Map (11) has good chaotic characteristics, the digital circuit affects the realization of the chaotic system: when Map (11) is implemented digitally, the original model changes and the influence of the digital circuit must be included. Compared with floating-point operation, fixed-point operation is faster and consumes less logic, so in this paper the chaotic Map (11) is realized in unsigned fixed-point form. Let the precision of the digital circuit be N; for the decimal value x_n, its unsigned fixed-point representation x̄_n is

x̄_n = ⌊2^N x_n⌋,

where ⌊2^N x_n⌋ denotes the integer part of 2^N x_n. Multiplying both sides of Equation (11) by 2^{3N}, we obtain

2^{3N} x_{n+1} = a(4 (2^N x_n)^3 - 3 · 2^{2N} (2^N x_n)).

Merging like terms and converting the decimal x_n to the unsigned fixed-point number x̄_n yields

2^{2N} (2^N x_{n+1}) = a(4 x̄_n^3 - 3 · 2^{2N} x̄_n). (18)

Dividing both sides of Equation (18) by 2^{2N} and replacing 2^N x_{n+1} by x̄_{n+1}, the digitized chaotic Map (11) is obtained:

x̄_{n+1} = a(⌊4 x̄_n^3 / 2^{2N}⌋ - 3 x̄_n) mod 2^N, (20)

where the iterative variable x̄_n ∈ [0, 2^N - 1] and a ∈ {-1, 1}.
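In integer arithmetic, Map (20) in the form reconstructed above can be transcribed directly; Python's floor division plays the role of the hardware truncation, and the initial value x̄_0 = 1 follows the choice used in Figure 7 below. The sketch reproduces the degradation reported there: at N = 4 the orbit collapses to the fixed point 0, while at N = 12 the paper reports the 4-cycle {940, 1474, 437, 2804}.

    def digital_cubic(x, N, a=1):
        # Map (20): unsigned fixed-point version of x_{n+1} = a*(4x^3 - 3x).
        return (a * ((4 * x**3) // 2**(2 * N) - 3 * x)) % 2**N

    for N in (4, 12):
        x = 1
        for _ in range(200):            # let the transient die out
            x = digital_cubic(x, N)
        tail = []
        for _ in range(6):              # sample the eventual behavior
            x = digital_cubic(x, N)
            tail.append(x)
        print(f"N = {N}: tail of orbit -> {tail}")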
The Lyapunov Exponent in Finite Field

Owing to the digital circuit, the digital cubic chaotic map is no longer an iterative map over the real number field, and its output sequence is no longer an aperiodic sequence over an infinite set. In its mathematical meaning, the Lyapunov exponent quantifies the average exponential divergence rate of adjacent trajectories in phase space. By the density of the real numbers, between any two distinct real numbers a third one can always be found; in a finite field, however, this property does not hold. For the digital cubic chaotic Map (20), no third distinct nonnegative integer can be found between two adjacent integers, for example between 1 and 2. Therefore, while the Lyapunov exponent describes the chaotic characteristics of a dynamic iterative map over the real number field, it cannot effectively analyze a digital chaotic system, and it does not seem sensible to use the Jacobi method to calculate the Lyapunov exponent of Map (20).

The Dynamic Behaviors of the Digital Cubic Chaotic Map

In addition to the Lyapunov exponent, time series and iterative graphs reflect the chaotic characteristics of a map well; iterative graphs in particular reveal the specific behavior of the attractor of a one-dimensional chaotic map. However, for the proposed digital cubic chaotic map, numerical simulation cannot clearly display the characteristics of all sequences, especially at high precision. When a = 1, Map (20) can be expressed as

x̄_{n+1} = (⌊4 x̄_n^3 / 2^{2N}⌋ - 3 x̄_n) mod 2^N. (21)

To analyze the dynamic characteristics of Map (21) intuitively, its time series and iterative diagrams for x̄_0 = 1 and precisions N = 4, N = 12, and N = 20 are shown in Figure 7. As Figure 7a shows, when N = 4 the output sequence of the digital chaotic Map (21) evolves into a constant sequence as the number of iterations grows, with final value 0; this is also reflected in the iterative graph, Figure 7b, where the iterative path finally stays at the origin. When N = 12, the output sequence eventually becomes a periodic sequence of period 4, namely {940, 1474, 437, 2804}; correspondingly, in Figure 7b the iterative path forms a closed loop and the sequence points jump among the four values 940, 1474, 437, and 2804. When N = 20, the output sequence shows random-like behavior over the iterations plotted, but it is still ultimately periodic: because of the small number of iterations its period cannot be seen directly, yet over longer iterations it too becomes periodic, and the same holds for the corresponding iterative diagram in Figure 7b. It can be seen from Figure 7 that, compared with the original chaotic Map (11), the chaotic behavior of the digital chaotic Map (21) is degraded: Map (21) has only a periodic attractor and is no longer a chaotic map. When a = -1, Map (20) can be expressed as

x̄_{n+1} = (3 x̄_n - ⌊4 x̄_n^3 / 2^{2N}⌋) mod 2^N, (22)

where x̄_n ∈ [0, 2^N - 1]. For x̄_0 = 1 and precisions N = 4, N = 10, and N = 20, its time series and iterative diagrams are shown in Figure 8. Owing to the inherent limitations of digital circuits, Map (22) exhibits the same degradation as Map (21). When N = 10, the period of the output sequence of Map (22) is small but hard to spot quickly in the time series, so the repeated periodic segment is marked with a red box.
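Since the digitized map is eventually periodic, its exact eventual period can be computed rather than estimated. A short sketch using Brent's cycle-finding algorithm on Map (21) (the a = 1 case), under the same reconstruction assumptions as above:

    def cycle_length(f, x0):
        # Brent's algorithm: length of the eventual cycle of x_{n+1} = f(x_n).
        power = lam = 1
        tortoise, hare = x0, f(x0)
        while tortoise != hare:
            if power == lam:                 # start a new power of two
                tortoise, power, lam = hare, 2 * power, 0
            hare = f(hare)
            lam += 1
        return lam

    for N in (4, 8, 12, 16):
        f = lambda x, N=N: ((4 * x**3) // 2**(2 * N) - 3 * x) % 2**N  # Map (21)
        print(f"N = {N}: eventual period = {cycle_length(f, 1)}")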
Comparing Figures 7 and 8, there are obvious differences in the graphs of the corresponding functions. In both cases a = 1 and a = -1, the functions corresponding to Map (11) are cubic polynomials; the digitized Map (20), however, no longer corresponds to a typical cubic function, and its corresponding functions differ between the two cases a = 1 and a = -1. The deformation of Map (20) is large, especially for a = -1. Comparing Figure 8 with Figure 6 and Figure 7 with Figure 5, respectively, the chaotic behavior of the chaotic map realized by the digital circuit is degraded. Furthermore, analyzing the sequence behavior at different circuit precisions shows that, as the precision increases, the behavior of Map (20) approaches that of the original chaotic Map (11). But for digital circuits the hardware resources are limited and the precision cannot be infinite. Therefore, with limited resources, the dynamic behavior of Map (20) must be optimized to suit digital applications such as a pseudorandom sequence generator. Nevertheless, many designs of chaos-based pseudorandom sequence generators ignore the degradation of chaotic dynamics caused by digital implementation, especially the complex periodic phenomena. In this paper, we analyze the influence of digital circuits on the complex periodic behavior of the digital cubic chaotic Map (20) and present an effective method to resist it.

The Analysis of Determining Periodic Sequence Source

The output sequence of the digital cubic chaotic Map (20) exhibits complex periodic behavior; when the precision is high, the period of the output pseudorandom sequence cannot be found effectively. For pseudorandom sequences, however, period and randomness are two key indexes that must be analyzed carefully. As seen from Figures 7 and 8, the output sequence of Map (20) shows short-period behavior, which is unsuitable for an ideal pseudorandom sequence, so its period needs to be optimized. Let the general iterative map in the digital field be x_{n+1} = F(x_n) mod 2^N. If different inputs map to the same output, that is F(x_i) = F(x_j) = k mod 2^N with i ≠ j, then the sequence points x_i and x_j cannot lie in one cycle. The more such collisions there are, the smaller the output period of the iterative map x_{n+1} = F(x_n) mod 2^N becomes, and multiple sequences with different periods are generated. Therefore, to effectively increase the period of the output sequence, Map (20) must avoid distinct inputs mapping to the same output; the ideal is to make Map (20) a one-to-one mapping. However, the rounding applied to the term 4 x̄_n^3 / 2^{2N} in Map (20) seriously disturbs the functional properties of its corresponding function. In this paper, a parameter is introduced into the linear term to compensate for the truncation effect caused by eliminating the rounding function. In addition, Map (20) satisfies x̄_n = 0 for all n ≥ 1 whenever x̄_0 = 0, so it always has the fixed point 0. To avoid this short-period behavior, it is necessary to add a constant offset term to Map (20). Analyzing the specific form of Map (20), the iterative map is optimized to

x̄_{n+1} = a(4 x̄_n^3 - 3c x̄_n + d) mod 2^N, (23)

where a ∈ {-1, 1} and d ≠ 0.
The function corresponding to Map (23) is

f(x) = a(4x^3 - 3c x + d), (24)

and the admissible range of the parameter c in Function (24) can be determined from the precondition for a one-to-one mapping.

Lemma 1. When 3c is odd, Function (24) is a one-to-one mapping.

Proof of Lemma 1. The lemma is proved by contradiction. Suppose f(x) is not a one-to-one mapping over F_{2^N}; then there are two different numbers t_1 and t_2 such that f(t_1) = f(t_2). Substituting t_1 and t_2 into f, we obtain

a(4 t_1^3 - 3c t_1 + d) ≡ a(4 t_2^3 - 3c t_2 + d) (mod 2^N). (25)

Merging like terms in Equation (25),

(t_1 - t_2)(4(t_1^2 + t_1 t_2 + t_2^2) - 3c) ≡ 0 (mod 2^N). (26)

For the two different numbers t_1, t_2 ∈ F_{2^N}, (t_1 - t_2) ∈ F_{2^N}. Since a ∈ {-1, 1}, the parameter a does not affect the parity of Equation (26). The term 4(t_1^2 + t_1 t_2 + t_2^2) is even, so when 3c is odd the factor (4(t_1^2 + t_1 t_2 + t_2^2) - 3c) is odd and contributes no factor of 2. Hence 2^N must divide (t_1 - t_2), which forces t_1 = t_2, contradicting the assumption that t_1 and t_2 are different. Therefore, Function (24) is a one-to-one mapping.

Lemma 2 ([33]). For x_{n+1} = F(x_n) mod 2^N, if the k-th bit of the output F(x_n) depends only on the first k bits of x_n, then each cycle of length l at precision N - 1 lifts, at precision N, to either two cycles of length l or one cycle of length 2l. Consequently, if F has any fixed point, then for any i > 0 there are at least 2^{i+1} points belonging to cycles of length at most 2^i.

First, the Boolean structure of the iterative Map (23) is analyzed at the binary level. Map (23) contains only multiplication, addition, and subtraction. For any number g, by the properties of mod 2^N we have 2^N - g mod 2^N = ~g + 1 mod 2^N, where the symbol "~" denotes bitwise negation. A concrete example: let N = 8 and g = 93; the binary representation of g is g_(2) = (0, 1, 0, 1, 1, 1, 0, 1), so ~g_(2) = (1, 0, 1, 0, 0, 0, 1, 0). Converting back to decimal, ~g = 162, and indeed 2^8 - 93 mod 2^8 = 163 mod 2^8 = 162 + 1 mod 2^8. Addition and subtraction can therefore be converted into each other under mod 2^N, leaving only multiplication and addition in Map (23). In the addition of two multibit numbers, the carry of each single-bit addition propagates only toward the next more significant bit, and multiplication has a similar structure. Multiplication and addition both satisfy the condition of Lemma 2.
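Lemma 1 can be confirmed by brute force at small precision: enumerate all inputs and check that Function (24), in the factored form reconstructed above, produces 2^N distinct outputs exactly when 3c is odd. A minimal sketch:

    def is_bijective(N, a, c, d):
        # Brute-force check that f(x) = a*(4x^3 - 3c*x + d) mod 2^N is a permutation.
        M = 2**N
        image = {(a * (4 * x**3 - 3 * c * x + d)) % M for x in range(M)}
        return len(image) == M

    print(is_bijective(10, a=1,  c=1,  d=1))   # True:  3c odd
    print(is_bijective(10, a=-1, c=-1, d=5))   # True:  3c odd
    print(is_bijective(10, a=1,  c=2,  d=1))   # False: 3c even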
Theorem 1. When ac = 1 and the parameter d is odd, the output sequence of Map (23) has a definite period, and the period is 2^N.

Proof of Theorem 1. When N = 1, Map (23) is

x̄_{n+1} = a(4 x̄_n^3 - 3c x̄_n + d) mod 2, (27)

where x̄_n ∈ {0, 1}. Let the initial value be x̄_0 = 0; substituting into Equation (27) gives x̄_1 = ad mod 2, and since d is odd and a = ±1 does not affect parity, x̄_1 = 1. Substituting x̄_1 = 1 into Equation (27) gives x̄_2 = a(4 - 3c + d) mod 2. The number 4 is even; by Lemma 1, 3c is odd; and by the condition of the theorem, d is odd. The sum (-3c + d) of two odd numbers is therefore even, adding the even number 4 keeps it even, and a does not affect parity, so x̄_2 = 0. When N = 1, Map (23) thus produces a sequence of period 2. According to Lemma 2, when the system precision N > 1 the period of the output sequence of Map (23) is a power of 2; applying Lemma 2 repeatedly, the period equals 2^N provided that x̄_{2^{N-1}} ≠ x̄_0 at precision N. Expanding the iterate x̄_{2^{N-1}} for the initial value x̄_0 = 0, the contribution containing the factor 4a is even; since ac = 1, the accompanying bracketed term is even as well, and its product with the odd parameter d remains even. Collecting terms, every iterate x̄_n with 1 ≤ n ≤ 2^{N-1} is a polynomial in powers of (-3ac), so in particular x̄_{2^{N-1}} has the common factor (-3ac). Since (-3ac) is odd, x̄_{2^{N-1}} cannot be a multiple of 2^N, and therefore, by the properties of mod 2^N, x̄_{2^{N-1}} ≠ 0 = x̄_0. In conclusion, the period of the sequence generated by the iterative Map (23) is 2^N.

Depending on the choice of the parameters a and c with ac = 1, Map (23) specializes to two similar maps. When a = 1 and c = 1, Map (23) becomes

x̄_{n+1} = (4 x̄_n^3 - 3 x̄_n + d) mod 2^N, (33)

and when a = -1 and c = -1, Map (23) becomes

x̄_{n+1} = -(4 x̄_n^3 + 3 x̄_n + d) mod 2^N. (34)

Simplifying Map (34), we obtain

x̄_{n+1} = (-4 x̄_n^3 - 3 x̄_n - d) mod 2^N. (35)

Although the output sequences of Maps (33) and (35) evolve differently with the iteration, the periods of the sequences generated by both are 2^N. Combining the forms of Maps (33) and (35), a comprehensive map is obtained:

x̄_{n+1} = a(4 x̄_n^3 - 3a x̄_n + d) mod 2^N, (36)

where a ∈ {-1, 1}.

Chengqing Li, Bingbing Feng, Shujun Li, Jürgen Kurths, and Guanrong Chen observed that the effectiveness of an improvement method for a digital chaotic system depends mainly on the period distribution at low precision [19]. Many proposed improvement methods, however, omit the low-precision analysis. In contrast, the map proposed in this paper also has a good period distribution at low precision. When the precision N is low, all periodic behaviors of a map in the digital field can be found by traversal search; Figure 9 shows, for N = 4, all periodic behaviors of Maps (20) and (37). As can be seen from Figure 9, when a = 1 Map (20) shows complex periodic behavior: regardless of the initial value, the sequence eventually becomes periodic with period 1, reaching only the period point 0 after several iterations. Such short-period behavior is unacceptable in the design of a pseudorandom sequence generator, and the sequence cannot serve as a pseudorandom sequence in a cryptosystem. When a = 1, by contrast, Map (37) shows excellent periodic behavior: its output sequence forms a perfect closed loop, the period reaches the ideal value 2^4, and the sequence is ergodic. When a = -1, Map (20) shows even more complex periodic behavior than for a = 1: the five sequence points 4, 11, 13, 5, and 14 form a small closed loop, representing a periodic sequence {4, 11, 13, 5, 14} of period 5, while from any of the remaining initial values the sequence eventually becomes periodic with period 1, again reaching only the period point 0. Such short-period and multi-period behavior is likewise unacceptable for a pseudorandom sequence generator, and the sequence can hardly be used as a pseudorandom sequence in a cryptosystem. As in the case a = 1, when a = -1 the output sequence of Map (37) forms a perfect closed loop, and the period reaches the ideal value 2^4.
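Theorem 1 can likewise be checked numerically at small precision. Since Map (36) with odd d is a permutation, every orbit is a pure cycle, so iterating from x̄_0 = 0 until the state repeats yields the exact period; d = 1 below is an arbitrary odd offset chosen for illustration (the specific parameter values fixed in the paper's Map (37) are not recoverable from the extracted text).

    def orbit_period(f, x0=0):
        # Exact cycle length through x0 for a bijective map f.
        x, p = f(x0), 1
        while x != x0:
            x, p = f(x), p + 1
        return p

    for N in (4, 8, 12):
        for a in (1, -1):
            # Map (36) with d = 1: x_{n+1} = a*(4x^3 - 3a*x + 1) mod 2^N.
            f = lambda x, N=N, a=a: (a * (4 * x**3 - 3 * a * x + 1)) % 2**N
            assert orbit_period(f) == 2**N
            print(f"N = {N}, a = {a:+d}: period = {2**N}")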
Since the values of the output sequences of Map (37) evolve differently in the two cases a = 1 and a = -1, they constitute two different periodic sequences with the largest period. At higher precision N, the difference between the periodic behaviors of Maps (20) and (37) is even more pronounced. Owing to the limits of computer precision and storage devices, the periodic behavior of Maps (20) and (37) cannot be described directly when N is high; therefore, for precision N = 17, the periods of their output sequences are analyzed and distinguished. When the system precision is high, the period of a sequence can be estimated by autocorrelation detection, which reflects the dependence of a signal on itself at two different moments and is an important method for evaluating a sequence's period. The autocorrelation function is

R_z(m) = (1/K) Σ_{n=1}^{K-m} z_n z_{n+m},

where R_z(m) and K denote the autocorrelation function and the length of the detection sequence, respectively. The autocorrelation detection results for the output sequences of Maps (20) and (37) are shown in Figure 10. For Map (20), with a = 1 or a = -1, a sequence of length 1000 is selected from the output for autocorrelation detection; for Map (37), a sequence of length 2^17 is selected. In Figure 10, the distance between two peaks approximates the period of the sequence. For Map (20), many peaks appear in Figures 10a and 10b: through autocorrelation detection, whether a = 1 or a = -1, the period of the output sequence of Map (20) is much less than 1000, and measuring the distance between adjacent peaks shows that it is less than 200. In contrast, for Map (37) only one peak is visible in each of Figures 10a and 10b. Since the length of the test sequence is 2^17, the period of the output sequence of Map (37) is at least 2^17, whether a = 1 or a = -1. By the theory of digital circuits and cryptography, at system precision N the maximum period of a sequence is 2^N, so at N = 17 the period of the output sequence of Map (37) is at most 2^17. Therefore, at system precision N = 17 and for either a = 1 or a = -1, the period of the output sequence of Map (37) equals 2^17 = 131072, at least 655 times larger than that of the sequence generated by Map (20). The map proposed in this paper thus greatly improves the period of the sequence generated by the digital cubic chaotic map in finite field.
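The autocorrelation detection used above is straightforward to reproduce. A sketch using the biased estimator written above, applied to the short-period digitized Map (20) at N = 12, where the dominant peaks should appear at multiples of the eventual period (4 in this case); the mean removal is a common convention assumed here:

    import numpy as np

    def autocorr(z, max_lag):
        # Biased sample autocorrelation R_z(m) = (1/K) * sum_n z_n * z_{n+m},
        # computed on the mean-removed sequence.
        z = np.asarray(z, dtype=float) - np.mean(z)
        K = len(z)
        return np.array([np.dot(z[:K - m], z[m:]) / K for m in range(max_lag)])

    N = 12
    f = lambda x: ((4 * x**3) // 2**(2 * N) - 3 * x) % 2**N   # Map (20), a = 1
    z, x = [], 1
    for _ in range(1000):
        x = f(x)
        z.append(x)
    R = autocorr(z, 50)
    print([m for m in range(1, 50) if R[m] > 0.9 * R[0]])     # expect multiples of 4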
The Design of a Reconfigurable Pseudorandom Sequence Generator

Beyond the period, the weakest statistical property a sequence must satisfy to be considered pseudorandom in any reasonable sense is a uniform distribution: each term of the sequence must occur with the same frequency. From Theorem 1, the period of the output sequence of Map (36) is 2^N, and so is that of its special form, Map (37). For Maps (33) and (35), each value occurs exactly once within a period, so its frequency of occurrence is 1/2^N, which satisfies the uniform distribution required in pseudorandom sequence theory. Since the output sequence of Map (36) has a large period, good randomness, a clean algebraic form, and high symmetry, a reconfigurable pseudorandom sequence generator is designed based on Map (36); the principle block diagram for N = 32 is shown in Figure 11. As shown there, the proposed reconfigurable pseudorandom sequence generator implements both Maps (33) and (35), where the module d - 3x_n and the module x_n^3 are shared reconfigurable modules; reusing these two modules saves considerable hardware resources. Compared with implementing only one map, running two maps simultaneously further increases the number of output sequences and enhances their randomness. The output sequences of Maps (33) and (35) are then post-processed by the modules P(x), B(x), and H(x) shown in Figure 11. The module P(x) is a linear diffusion operation that combines rotated copies of x by XOR, where x <<< i denotes an i-bit left rotation of the value x and ⊕ denotes the bitwise XOR operation; it associates each bit of the binary vector of x with the other bits. The function of the module B(x) is bit extraction:

B(x) = (x[31], x[30], x[16], x[1], x[0]);

that is, for an integer x, the module B(x) extracts the five specific bits x[31], x[30], x[16], x[1], and x[0] from its binary vector. The module H(x) in Figure 11 is a special nonlinear function with five inputs and one output: for a five-bit input h[4], h[3], h[2], h[1], h[0], it produces a single output bit. Since the output sequences of Maps (33) and (35) undergo the same processing, namely module P(x), module B(x), and module H(x), these three modules are also reconfigurable modules of the proposed generator. By calculating the Walsh spectrum of the function H(x), we find that it is balanced, i.e., the numbers of 0s and 1s in its output are equal. Module P(x), being a linear and reversible diffusion operation, is balanced; module B(x) merely extracts specific bits from the binary vector of x and does not affect the characteristics of the sequence; and module H(x) is itself a balanced Boolean function. Therefore, the output sequence of the proposed reconfigurable pseudorandom sequence generator is balanced, and its period is 2^N; for N = 32, the period of the output sequence is 2^32.
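To illustrate how the post-processing chain of Figure 11 fits together, the sketch below strings Map (36) through P, B, and H for N = 32. Only the bit positions in B are taken from the text; the rotation amounts in P and the Boolean function H are stand-ins (the paper's exact choices are not recoverable from the extracted text), and d = 1 is an arbitrary odd offset.

    N = 32
    M = 1 << N

    def rotl(x, i):
        return ((x << i) | (x >> (N - i))) & (M - 1)

    def P(x):
        # Linear diffusion: XOR of rotated copies of x (rotation amounts assumed).
        return x ^ rotl(x, 7) ^ rotl(x, 19)

    def B(x):
        # Bit extraction specified in the paper: bits 31, 30, 16, 1, 0.
        return [(x >> i) & 1 for i in (31, 30, 16, 1, 0)]

    def H(bits):
        # Stand-in for the paper's balanced 5-in/1-out Boolean function: parity.
        out = 0
        for b in bits:
            out ^= b
        return out

    def step(x, a=1, d=1):
        # One iteration of Map (36) followed by the P -> B -> H post-processing.
        x = (a * (4 * x**3 - 3 * a * x + d)) % M
        return x, H(B(P(x)))

    x, bits = 1, []
    for _ in range(16):
        x, b = step(x)
        bits.append(b)
    print(bits)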
The Implementation and Performance Analysis

After designing the reconfigurable pseudorandom sequence generator, we implemented it in hardware on an FPGA. When b = 1, the hardware implementation diagram of a specific reconfigurable pseudorandom sequence generator is shown in Figure 12, the time-series waveform in Figure 13, and the consumption of hardware resources in Table 1. As Figure 13 shows, the output of the proposed generator exhibits noise-like behavior. In addition to analyzing randomness via the probability distribution, randomness detection is another important method. At present, the main randomness test suite in use is NIST-sp800, published by the National Institute of Standards and Technology of the United States and widely used as a reference standard for measuring randomness [34]. The NIST-sp800 test suite covers a variety of models for sequence randomness detection, including many approximately independent statistical tests such as linear complexity and approximate entropy.

The version of the NIST-sp800 test suite used in this paper is 2.1.2, and 100 groups of sequences, each of length 1,000,000, are tested; the detection results are shown in Table 2. In Table 2, only the lowest U-value among the multiple results of the five relevant subtests is shown. The p-value is the statistical test value for the randomness of a group of measured sequences in the NIST-sp800 test suite; it is a positive number less than or equal to 1, and the larger the p-value, the stronger the randomness of the measured sequence. For a group of test sequences, the p-value determines whether the sequence passes each of the 15 subtests of the suite: a p-value of 1 means the sequence appears perfectly random, while a p-value below 0.01 indicates that the sequence appears non-random. The U-value in Table 2 describes the distribution of the p-values over the 100 groups of sequences, and the U-value of each subtest can be read from the table. As Table 2 shows, the output sequence passes the NIST-sp800 test suite and exhibits good randomness. With both the period and the randomness established, the proposed reconfigurable pseudorandom sequence generator combines low resource consumption with good performance.

A chaos-based pseudorandom sequence generator has two main characteristics, period and randomness, and these are its core indexes. Randomness has received wide attention, but periodicity has not. In addition, with the development of chaos-based cryptosystems, growing attention is being paid to hardware resource consumption, and reconfigurability has become an important design principle for pseudorandom sequence generators, stream ciphers, and block ciphers, especially stream ciphers, for which the pseudorandom sequence generator is a required module whose reconfigurability has been studied extensively. Reconfigurability is therefore regarded here as the third important performance index for a pseudorandom sequence generator based on a chaotic map. At present, pseudorandom sequence generators based on chaotic maps are an important part of chaos-based cryptosystems, and exploiting short-period behavior has gradually become a common attack for breaking such cryptosystems. Compared with randomness, the period is often ignored by designers: most proposed chaos-based cryptosystems do not analyze the period of the output sequence; some provide special algorithms that search for the period by computer at low precision; and only a few give a mathematical proof of a definite period at arbitrary precision. This paper selects several representative, well-regarded chaos-based pseudorandom sequence generators and compares them with the method proposed here. The comparison results between the proposed and other existing methods are shown in Table 3.

Table 3. Comparison between the proposed and existing methods.

Method           Period                              Randomness test   Reconfigurable
Method in [15]   approximate value (search, K = 20)  Success           No
Method in [16]   2·3^(M-1)                           Success           No
Method in [17]   656 (N = 19)                        Success           No
Method in [18]   not analyzed                        Success           No
The proposed     2^N                                 Success           Yes

In Table 3, although all five methods pass the randomness test, the periods of their output sequences differ completely. For the methods in [15] and [16], converting the precision between the different systems gives the relationships N = K log2 3 and N = 2M log2 3, respectively.
For the method proposed in [15], a special period-search algorithm is given, and only an approximate value of the period is obtained at precision K = 20. By comparison, the period of the output sequence generated by the method proposed in this paper is 2^{K log2 3} = 3^K ≈ 3.5 × 10^9 for K = 20. For the method proposed in [16], the period of the output sequence is 2·3^{M-1}; since N = 2M log2 3, the period of the output sequence generated by the method proposed in this paper is far larger. For the method proposed in [17], an exhaustive-search algorithm is given, and the period of the output sequence is 656 at precision N = 19; at the same precision, the period of the output sequence generated by the method proposed in this paper is 2^19 = 524288. For the method proposed in [18], the period of the output sequence is not analyzed and only the randomness is tested. Beyond the period comparison, all five methods pass the randomness test, but only the method proposed in this paper is reconfigurable. As Table 3 shows, the analysis of period, randomness, and reconfigurability demonstrates that the proposed method surpasses the related work in security and performance.

In addition to chaos-based pseudorandom sequence generators, there are many standard pseudorandom sequence generators [35], such as the linear congruential generator (LCG), the linear feedback shift register generator (LFSR), the well-known trinomial-based generalized feedback shift register generator (GFSR), the Mersenne twister (MT19937), the subtract-with-borrow generator (SWB), and stream cipher algorithms. Their random performance differs, mainly in whether they have encryption capability; AES in stream cipher modes (OFB, CTR, KTR) is a pseudorandom sequence generator with encryption capability. Besides NIST-sp800, TestU01 covers the majority of types of randomness tests [35]; offering a rich variety of empirical tests is the purpose of the TestU01 library. The TestU01 user's guide (compact version) [35] recommends starting the evaluation of a pseudorandom sequence generator with the quick battery SmallCrush; if everything is fine, one can try Crush, and finally the more stringent BigCrush. TestU01 testing has been added to the paper, with the results shown in Table 4. In Table 4, the AES-KTR mode uses as key a counter that is increased by 1 at each encryption iteration, and the speed is the time (in seconds) required to generate 10^8 random numbers. The randomness of the pseudorandom sequence proposed in this paper is higher than that of the ordinary standard pseudorandom sequence generators, but its speed is slower. Compared with the cryptographic algorithms, the randomness of the cryptographic algorithms is higher than that of the proposed sequence and their speed is relatively slow, but they can encrypt information, whereas the method proposed in this paper cannot encrypt information and has no encryption capability. The implementations of the pseudorandom sequence generators in Table 4 are not necessarily the fastest possible; according to the cipher documentation, optimized AES can run at above 10 GB/s, which is a high speed. In view of the above analysis, we conclude that the pseudorandom sequence generator based on the method proposed in this paper has good randomness.
Conclusions

The complex periodic behavior of digital chaotic systems is a latent obstacle to their application in pseudorandom sequence generators. This paper studies a combined cubic chaotic map and analyzes the dynamic characteristics of this special cubic chaotic map after digitization. Compared with the chaotic attractor in the real number field, the attractor of the proposed cubic chaotic map in the digital field changes and finally collapses to a periodic attractor. For the proposed cubic chaotic map with a = -1, the chaotic attractor splits into two parts, unlike the case a = 1; after digitization in finite field, however, the attractor for a = -1 deforms strongly and becomes similar to the case a = 1. By offsetting the influence of truncation, we prove mathematically that, for any precision N, the period of the output sequence of the digital cubic map is 2^N, which is the theoretical maximum. Based on this guaranteed period behavior, a novel pseudorandom sequence generator is proposed that has high symmetry and greatly reduces resource consumption. Compared with existing methods, the proposed pseudorandom sequence generator has advantages in period and reconfigurability. Theoretical analysis and randomness detection show that the sequence output by the proposed generator also has good randomness, so it can be used to generate pseudorandom sequences with guaranteed periods and can be combined with other chaotic functions to form further chaotic pseudorandom sequences as required.

Conflicts of Interest: The authors declare no conflict of interest.
Two-color x-ray free-electron laser by photocathode laser emittance spoiler

Multispectral x-ray pump-probe experiments call for synchronized two-color free-electron lasers (FEL). This mode often implies a laborious setup or an inefficient use of the undulator. We report on a simple and noninvasive approach, tested at SwissFEL, for a two-color x-ray FEL delivering almost 60% of the pulse energy obtained in single-color operation. In this new method, a ps UV pulse is overlapped with the photocathode drive laser, increasing the beam emittance, which locally inhibits the FEL process. This scheme permits high stability in energy and spectrum and control of the two-color duration and intensity ratio. It enables shot-to-shot switching between one- and two-color FEL and, since it is not associated with beam losses, it is compatible with high-repetition-rate and high-average-power FELs.

Several x-ray free-electron lasers (FELs) are currently in operation around the world [1][2][3][4][5][6]. At these facilities, fs or even sub-fs pulse durations, wavelengths tunable down to 1 Å, and mJ pulse energies can be achieved. The corresponding brilliance surpasses by several orders of magnitude that obtained at synchrotrons. Free-electron lasers offer x-rays with unprecedented peak power, temporal resolution, and spatial coherence, enabling advanced studies in biology, femtochemistry, materials in extreme conditions, and condensed-matter physics, including the production of exotic states of matter by nonlinear x-ray interactions [7]. For time-resolved studies, pump-probe experiments are generally implemented by combining a conventional laser pump with an x-ray FEL probe. This technique enables monitoring the temporal evolution of a variety of photoinduced processes on an ultrafast timescale [8]. In this context, the reduction of the temporal jitter between the pump and probe pulses down to the fs level is a persistent challenge [9]. X-ray pump, x-ray probe experiments employing a two-color FEL are expected to drastically reduce the temporal jitter, while the two wavelengths can be tuned to excite and explore different resonances [10][11][12]. The time-delayed two-color output of hard x-ray FELs is well suited for multidimensional Auger and transient-grating spectroscopy and for multicolor diffraction imaging [10,13]. A two-color FEL output can be obtained by the uneven tuning of the undulator resonance or by the manipulation of the electron bunch properties. In the first approach, the undulator strength parameter K of two sections is tuned to radiate at two wavelengths, with the potential of independent control of the pointing and the polarization. The temporal separation of the two colors can be controlled with a dedicated magnetic chicane between the undulator modules [14,15]. A major drawback of this configuration is that the same electrons generate both colors, resulting in FEL operation far from saturation, with a minimum nonzero delay between the pulses due to slippage effects. Two-color hard x-ray pulses having a maximum delay of 200 fs, a total pulse energy up to 30% of the standard SASE value, and a relative wavelength separation of 30% are reported in [15]. Alternatively, a fresh slice of the bunch can be set to lase in each undulator section, such that both emission regions can reach saturation [16]. The fresh-slicing can be implemented with a time-dependent transverse kick [17], by transverse dispersion in the beam transport [18,19], or by transverse mismatch [20,21].
The same effect can be obtained with a two-wavelength laser seed, which is amplified in two undulators [22]. For two-color seeded FELs, pulses of tens of μJ, with a 20% relative FEL photon energy offset and a maximum delay of 900 fs, were demonstrated at wavelengths of a few tens of nm [22]. In general, fresh-slice schemes allow large tunability and FEL saturation, with up to 800 μJ around 700 eV and a maximum delay between the two colors of ≈1 ps [16]. However, fresh-slice schemes still require a short gain length, which is challenging to reach in the hard x-ray regime, and full saturation of the first color must be achieved upstream of the chicane. Two-color FELs by electron beam manipulation are achieved, with a dedicated and complex machine setup, by twin bunches accelerated to different energies and thus lasing at two wavelengths [23,24]. The two bunches can be accelerated in the same radio-frequency bucket (twin bunches) or in different ones; in the latter case, it is also possible to tune the two FEL pulses to the same photon energy, but their temporal delay is a multiple of the radio-frequency period (typically a few hundred picoseconds). Energies exceeding 1 mJ, corresponding to ≈30% of the normal self-amplified spontaneous emission (SASE) FEL, are reported for tens-of-fs two-color twin pulses [24]. The transport of the electron beam sets the maximum relative color separation to ≈2% and the temporal delay below 200 fs [24]. To obtain a two-color FEL with a single electron bunch, it is possible to suppress the lasing from the central part of the bunch, using two current spikes created by wakefields in a dechirper [25] or a double-slotted foil acting as an emittance spoiler [26]. The beam-manipulation methods have limited temporal and wavelength tunability, and they typically require a time-consuming setup. Passive structures have been used to generate multipulses at 7.5 keV with ≈40 μJ energy (≈30% of the standard SASE), a temporal separation of 200 fs, and a 2% relative wavelength offset [25]. With a foil having two slots placed in the beam, a few-fs two-color FEL around 1 Å was demonstrated: the peak power exceeds 10 GW, while the temporal and wavelength separations between the two-color pulses are ≈150 fs and ≈2%, respectively [26]. Solid targets that spoil the electron bunch emittance present, however, the downside of significant radiation losses, preventing their applicability to high-repetition-rate and high-average-power x-ray FELs. A sextupole, in combination with standard orbit control, was used to suppress the radiation from the bunch center while keeping the head and the tail lasing at two photon energies; with this method, a two-color FEL around 7.2 keV, with energy <100 μJ and temporal and photon energy separations of up to 160 fs and 2%, respectively, was demonstrated [19]. Laser heater shaping [27] was demonstrated for single ten-fs FEL pulses by spoiling the longitudinal emittance and was proposed for two-color FELs. This technique is compatible with high-repetition-rate accelerators but introduces other issues related to the stability of the overlap between the laser and the electron beam. In general, all the above techniques require either precious beam time to implement a complicated electron beam shaping or an inefficient use of the undulator line, which eventually prevents the application of split-undulator schemes at harder x-rays.
This article reports on a new, noninvasive, and straightforward method for the generation of a two-color FEL by electron beam emittance spoiling, based on the use of two lasers at the photocathode. Different from other beam-based techniques, the presented method delivers two-color x-rays using the optimal single-color FEL settings, without dedicated accelerator and undulator retuning, thus saving beam time. Moreover, it enables a lock-in mode (with shot-to-shot switching between two-color and single-wavelength operation), and it is directly applicable to high-average-power x-ray FELs because it does not contribute to beam losses. The concept is schematically depicted in Fig. 1. A short laser pulse (the emittance spoiler) is overlapped with the nominal photocathode (PC) laser. The excess of charge leads, at low energy, to a localized increase of the emittance, which reduces and eventually prevents the FEL amplification. The part of the bunch not interacting with the laser spoiler still produces FEL emission, which is centred at two wavelengths due to the linear energy chirp accumulated along the linear accelerator. This mechanism is particularly efficient for an optimized FEL where most of the electron bunch contributes to the x-ray emission. The effect on the FEL process can be understood from the condition on the transverse emittance ϵ that the beam should fulfil to ensure that all the electrons radiate into the fundamental mode of the FEL [28]: ϵ/γ ≤ λ/(4π), where γ is the relativistic Lorentz factor and λ the FEL wavelength. At shorter λ, the condition on the emittance becomes more stringent and the FEL spectrum more sensitive to the laser emittance spoiler. The time and wavelength separations between the two colors (Δt and Δλ) are key parameters for time-resolved experiments. These quantities may be tuned by changing the compression and the energy chirp of the electron beam. Using the low-energy compressor allows the control of Δt with minor changes in Δλ. The minimum Δt corresponds to the overlap of the two colors, while a maximum delay of a few hundred fs still guarantees sufficient peak current and FEL amplification. Δλ can be varied independently of Δt by changing the phase of the accelerator downstream of the compressors; the maximum Δλ depends on the energy gain and the longitudinal wakefields. At SwissFEL, a maximum relative wavelength separation of up to 2% was measured. Another parameter, the relative intensity of the two colors, can be easily varied by unbalancing the beam current distribution through the amplitude of the X-band cavity linearizer. The laser emittance spoiler presented here adds a free knob for the control of the duration, the spectral width, and the intensity of the two-color pulses. Moreover, by increasing the intensity of the laser emittance spoiler, the temporal and wavelength separations between the two-color pulses can easily be expanded without modifying the compression settings.
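As a rough numerical illustration of the single-mode condition ϵ/γ ≤ λ/(4π) quoted above, the short sketch below evaluates the implied bound on the normalized emittance for SwissFEL-like numbers (the 5.8 GeV beam energy and 1 Å wavelength are taken from this article). Note that the inequality is an idealized sufficient condition, so practical machines lase with somewhat larger emittance, as the 430 nm SwissFEL figure quoted below shows; the snippet is illustrative only.

    import math

    E_beam_eV = 5.8e9       # Aramis beam energy (from the text)
    m_e_eV    = 0.511e6     # electron rest energy
    lam       = 1e-10       # 1 Angstrom FEL wavelength

    gamma = E_beam_eV / m_e_eV
    eps_n_limit = gamma * lam / (4 * math.pi)   # bound on the normalized emittance
    print(f"gamma ~ {gamma:.0f}; ideal single-mode limit eps_n <= {eps_n_limit*1e9:.0f} nm")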
The main characteristics of the two-color FEL methods demonstrated in the literature are summarized in Table I, with the methods based on electron beam manipulation listed in the top rows and the undulator-based two-color schemes in the lower part of the table. We make a qualitative comparison between the laser emittance spoiler and the other two-color methods, including their specific advantages and limitations, with emphasis on the wavelength and temporal separations Δλ and Δt; a quantitative comparison between the different methods is not always possible due to the different machine layouts and accessible photon energies. We experimentally demonstrated the two-color FEL by laser emittance spoiler at the hard x-ray branch of SwissFEL (Aramis) [6]. At SwissFEL, 300 MeV electron bunches at 100 Hz, with a charge of 200 pC and peak currents of 20 A, are produced in an rf photoinjector by illuminating a Cs2Te photocathode. The linear accelerator boosts the beam energy up to 3.15 GeV for the soft x-ray FEL (Athos) and to 5.8 GeV for the Aramis FEL. Two bunch compressors at 0.3 and 2.1 GeV reduce the bunch length to a few tens of fs. Transverse deflector structures are routinely used to measure the electron bunch longitudinal properties such as the current, the horizontal slice emittance, and the beam tilt. The Aramis undulator consists of 13 planar variable-gap modules of 4 m length each with a magnetic period of 15 mm; this line produces radiation over the wavelength range of 1-7 Å. The FEL pulse energies and spectra are measured by a gas detector and a single-shot spectrometer, respectively [29]. Table II reports the main parameters of the PC and emittance spoiler lasers at the photocathode. The two systems each consist of an ultralow-noise Yb-based oscillator and a diode-pumped Yb:CaF2 amplifier running at 100 Hz [30]. Each laser produces pulses with 3 mJ energy and 500 fs FWHM duration. The amplified pulse is frequency-quadrupled to 260 nm, reaching energies up to 300 μJ. Both laser oscillators are synchronized with respect to the SwissFEL reference with a jitter of 11 fs rms integrated from 2 mHz to 100 Hz; the relative time jitter between the two lasers at the photocathode is below 20 fs rms. The two UV pulses are independently Fourier filtered in a glass capillary and stretched in time with a grating pair; the stretcher is bypassed for the spoiler laser to keep its pulse duration short. To produce a top-hat distribution on the photocathode, the beams are apertured and passed through an imaging transport. The spoiler pulse could also be derived from the same laser before the UV stretcher, but for the present study we used two independent systems. The laser spoiler can be activated on a shot-to-shot basis; moreover, it can be delayed and controlled in pulse energy and beam size. Space charge and electron beam emittance can be regulated via the intensity and the beam size of the emittance spoiler. The temporal profiles of the emittance spoiler and photocathode lasers, measured with an optical cross-correlator, are reported in Figs. 1(b) and 1(c), respectively. As previously mentioned, the laser spoiler intensity can be tuned so as to locally exceed the emittance condition for FEL amplification. At SwissFEL, a nominal emittance of 430 nm is required for the bunch to radiate at the shortest wavelength (1 Å) [6]. With a charge density peak, the emittance at the center of the bunch can be increased well beyond this limit, preventing FEL emission there. In this way, only the outer parts of the bunch, with low emittance, contribute to the FEL and produce a two-color spectrum. In order to understand the underlying physics and define the operating parameters, we measured the electron beam slice emittance at the injector and upstream of the undulator line as a function of the laser emittance spoiler intensity.
We found experimentally that when the spoiler generates a charge of 19 pC, the emittance increases enough to completely prevent the lasing of the corresponding part of the bunch, resulting in two distinct FEL emission peaks. At lower spoiler intensity, the FEL emission at the overlap of the two lasers is reduced but not completely suppressed; at higher spoiler charge, the emittance peak grows and a larger part of the beam exceeds the emittance limit, yielding a two-color FEL with narrower spectral width and larger wavelength separation. Figure 2 shows the measured slice emittance as a function of the charge generated by the spoiler, which is reported in the legend of each plot. The dashed lines at −150 and 150 fs visually delimit the window where the overlap between the two lasers occurs at the photocathode. If the emittance spoiler laser is off, the slice emittance stays constant in the central part of the bunch at a value of 0.32 μm, well below the emittance limit for FEL emission mentioned before. Already when the spoiler produces 11.5 pC, a peak in the emittance appears and the FEL amplification in the corresponding slice is reduced but not completely suppressed. At a generated spoiler charge larger than 15 pC, corresponding to a slice emittance of 0.75 μm, the FEL emission from these slices is completely inhibited; increasing the spoiler charge further enlarges the emittance peak, as described above.

FIG. 2. Beam slice emittance measured at the injector for different laser spoiler pulse energies. In the legend of each graph, the charge generated by the laser spoiler is reported. The slice emittance stays constant along the bunch when the laser emittance spoiler is switched off (lower panel). As the laser emittance spoiler is activated and increased in intensity, the emittance grows at the temporal overlap at time zero, delimited by the two dashed lines at −150 fs and 150 fs.

The emittance peak induced by the laser emittance spoiler is preserved along the accelerator. Figure 3 shows the slice emittance upstream of the undulator line with the laser emittance spoiler off (black line) and on (red line). For this measurement, the nominal photocathode laser is adjusted to the typical intensity for the generation of the 200 pC bunch, and the laser emittance spoiler was set up to produce a charge of 19 pC. The dashed lines visually delimit the temporal window where the overlap between the two lasers occurs at the photocathode. When the emittance spoiler laser is off, the slice emittance stays constant in the central part of the bunch, below the emittance limit, and FEL emission occurs along the whole bunch. When the laser emittance spoiler is on, the FEL emission occurs at the two emittance minima and, due to the energy chirp in the electron beam, at two colors. In the following, we present the experimental demonstration and characterization of the two-color FEL radiation. As aforementioned, the switch between the standard single-color and the two-color FEL can be done on a shot-to-shot basis by enabling/disabling the emittance spoiler laser. The temporal structure of the two-color FEL can be estimated from the slice emittance of Fig. 3: the FEL emission occurs in two pulses of a few tens of femtoseconds, separated by about 40 fs.
One remarkable feature of the two-color FEL generated by the laser emittance spoiler is its excellent spectral shape stability. Figure 4 shows a set of 6000 consecutive SwissFEL spectra recorded at a photon energy of 12 keV, with the statistical scattering of the spectral intensity at each photon energy represented by the light purple area. The results clearly demonstrate that a stable two-color FEL with well-separated photon peaks is obtained for all shots: both the photon energy offset between the two colors and the peak wavelength remain stable. The stability of the spectral shape is an appealing feature for pump-probe experiments using two x-ray wavelengths. Additionally, the present scheme allows for an easy reshaping of the FEL spectra: by changing the relative delay τ between the two laser pulses, it is possible to control the relative intensity, the spectral width, and the duration of the two individual colors. Figure 5(a) shows the FEL spectra averaged over 6000 pulses obtained by changing the relative delay between the two laser pulses; on top of each graph, the spoiler pulse delay is reported. When the emittance spoiler pulse overlaps with the centroid of the photocathode laser pulse, corresponding to τ = 0, the two FEL colors are balanced in intensity due to the uniform lasing along the bunch. The durations and the relative amplitude of the two pulses at τ = 0 fs are consistent with the slice emittance profile reported in Fig. 3. For τ = ±0.66 ps the spectrum can be unbalanced toward the high- or low-energy color, and for larger delays one of the colors is strongly reduced. With spoiler pulse delays larger than 1.33 ps, the single-spectral-line FEL is recovered. The duration of each individual color is also controlled by its relative spectral content, which is a function of the emittance spoiler pulse delay and intensity. The calculated FWHM pulse durations of the longer- and shorter-wavelength colors as a function of the spoiler delay τ are reported in Fig. 5(b). These values are retrieved by fitting the second-order spectral correlation function with the model derived in [31]; the spectra are numerically filtered taking into account the most relevant machine parameters, which are recorded synchronously. As shown in the figure, by adjusting the spoiler delay the durations of the two FEL colors can be strongly unbalanced and reduced, with minimum values of 6 and 9 fs FWHM for the long- and short-wavelength pulses, respectively. Figure 5(c) displays the averaged FEL pulse energy and the energy stability as a function of the delay of the laser spoiler. It is worth noting that no special tuning of the machine is required to obtain a pulse energy of 190 μJ in the two symmetric pulses, corresponding to 56% of the energy measured for the standard single x-ray color. In this condition, the standard deviation of the energy is 17 μJ (±9%), similar to the standard FEL energy stability, confirming the excellent robustness of the presented scheme. The FEL energy increases for positive delays of the laser spoiler, reaching 320 ± 34 μJ, comparable with the pulse energy for normal SASE FEL operation. Delays of the spoiler toward lower values produce a reduction of the energy, indicating that the electron bunch radiates more in the tail. This suggests a potential application of the laser emittance spoiler as a diagnostic to reveal which part of the electron beam contributes to the FEL process.
In conclusion, we present a simple and robust method to generate a two-color hard x-ray FEL. This novel approach relies on a laser emittance spoiler overlapped with the nominal photocathode laser. It enables the generation of two-color FEL pulses with high energy and spectral stability (56% of the nominal FEL energy and comparable stability) with negligible interference with the machine setup. The accessible range of wavelengths, wavelength separation, and temporal delay of the two pulses is set by the electron bunch energy, chirp, and duration of the single-color FEL. The presented scheme allows shot-to-shot switching between single- and two-color modes of operation, which can be used for sophisticated lock-in detection schemes in pump-probe experiments. It does not contribute additional charge losses along the machine and is therefore suitable for operation with the next generation of high-repetition-rate, high-average-power x-ray lasers. The reliability of the method and the ease of the setup respond to the demand of the FEL scientific community for lower-risk two-color configurations to be used in advanced x-ray pump, x-ray probe experiments. We are indebted to A. Cavalieri, G. Knopp and C. Beard for the valuable discussions. We thank all the SwissFEL technical groups for their support.
Performance of Successive Reference Pose Tracking vs Smith Predictor Approach for Direct Vehicle Teleoperation Under Variable Network Delays

Vehicle teleoperation holds potential applications as a fallback solution for autonomous vehicles, remote delivery services, hazardous operations, etc. However, network delays and limited situational awareness can compromise teleoperation performance and increase the cognitive workload of human operators. To address these issues, we previously introduced the novel successive reference pose tracking (SRPT) approach, which transmits successive reference poses to the vehicle instead of steering commands. This article compares the stability and performance of SRPT with the Smith predictor-based approach for direct vehicle teleoperation in challenging scenarios. The Smith predictor approach is further categorized, one with a Lookahead driver and the second with a Stanley driver. Simulations are conducted in a Simulink environment, considering variable network delays (250-350 ms) and different vehicle speeds (14-26 km/h), and include maneuvers such as tight corners, slalom, low-adhesion roads, and strong crosswinds. The results show that the SRPT approach significantly improves stability and reference tracking performance, with a negligible effect of network delays on path tracking. Our findings demonstrate the effectiveness of SRPT in eliminating the detrimental effect of network delays in vehicle teleoperation.

I. INTRODUCTION

Automated vehicles (AVs) have garnered increasing attention as a potential solution for future mobility. However, the deployment of AVs is still hindered by various difficulties and edge cases that have yet to be fully resolved. Teleoperation has emerged as a backup plan for AVs, offering a way to remotely support an AV when it reaches the limits of its operational design domain (ODD). Teleoperation is the remote control of a device or a vehicle from a distance; this can be done using either wired or wireless communication.
Here the vehicle is a mobile robot that can be controlled remotely, typically wirelessly. Teleoperation technology offers a secure and effective method to overcome these restrictions whenever an AD function hits the limits of its ODD. The AV can resume its voyage in full automation after it has been returned to its nominal ODD [1]. Vehicle teleoperation also has the potential to revolutionize various industries, such as autonomous taxi services, industrial equipment teleoperation, disaster response, and military operations. Despite having great potential, vehicle teleoperation currently faces various challenges, such as problems with human-machine interaction, limited situational awareness, network latency, and control loop instability. Although the challenges are significant, we primarily focus on reducing the detrimental impact of network latency. By doing so, we aim to improve the stability of the control loop, which in turn will enhance the safety and effectiveness of teleoperation systems, and ultimately help to achieve more reliable and efficient teleoperation of vehicles. Daniel Bogdoll et al. [2] proposed a taxonomy for Remote Human Input Systems (RHIS) for vehicle teleoperation based on the intervention levels of human operators. It broadly categorizes RHIS approaches into remote driving, remote assistance, and remote monitoring. This classification aligns with the classification of remote operations of vehicles by Oscar Amador et al. [3]. Further refining the remote driving category, Domagoj Majstorovic et al. [1] distinguished between direct control, shared control, and trajectory guidance. They also classified remote assistance techniques into waypoint guidance, interactive path planning, and perception modification. Fig. 2 shows the vehicle teleoperation concepts in which the human operator is actively involved [1], [2], [3]. The time delays in vehicle teleoperation tasks reduce the accuracy and speed at which human operators can perform a remote task [4], [5]. Significant delays can cause overcorrection by the operator, resulting in oscillations that impair teleoperation performance and may even destabilize the control loop [6], [7]. The direct control approach [8], [9], [10], [11], [12], [13], [14], [15] to vehicle teleoperation involves the operator viewing sensor data and sending control signals like steering and throttle, but it suffers from reduced situational awareness and transmission latency. Shared control [16], [17], [18], [19], [20], [21] has a shared controller inside the vehicle that assesses operator commands to avoid collisions, improving safety but still suffering from latency. Trajectory guidance [22], [23], [24], [25], [26] involves the vehicle following a path and speed profile generated by the operator without being affected by network latency, although real-time profile generation is unfeasible. Interactive path planning [27], [28] uses the vehicle's perception module to calculate optimal paths, which the operator confirms to follow, bypassing network latency but requiring a functional AD perception module. Perception modification [29] involves the operator identifying false-positive obstacles to support the AD perception module, and thus largely depends on the availability of the AD perception module.
Our work on SRPT vehicle teleoperation stands out as it strengthens the direct control concept. The direct control concept does not rely on the automation and perception modules of autonomous vehicles. Unlike other teleoperation concepts that depend on the perception module, direct control offers independence, acting as a fallback option for autonomous vehicles. The perception module, while essential for autonomous driving, has drawbacks such as limited performance in adverse weather, vulnerability to sensor interference, processing time, computational load, and other challenges. In scenarios where the perception module fails, other teleoperation methods become infeasible. By focusing on enhancing direct control, our SRPT approach aims to overcome these limitations and provide a more dependable vehicle teleoperation solution. A. Related Work Direct control - In response to the rising interest in remote operation systems, Hofbauer et al. [30] developed a system that enables direct control interaction with a vehicle during teleoperation in the CARLA driving simulator. To address the lack of publicly available software for remote driving functionalities, Schimpe et al. [31] contributed by releasing an open-source software implementation. This software is designed for quick and flexible deployment across various automotive vehicles and has been successfully used in projects like UNICARagil [32] and 5GCroCo [33]. Chucholowski et al. [11] evaluated the "Frame Prediction" method for teleoperating road vehicles, using a single-track vehicle dynamics model to predict vehicle positions [34]. Tang et al. [12] introduced the Free Corridor for ensuring a safe end state in case of connection failure. Graf et al. [13] combined these ideas to create the "Predictive Corridor" approach. Predictive displays have shown effectiveness in compensating for delays and enhancing vehicle mobility in human-in-the-loop experiments [35], [36], [37], [38], [39], [40], [41]. Predictive models can be model-based [38], model-free [40], or a combination of both [41], each having specific strengths and limitations. Combining both approaches improves operation, though not significantly. Well-performing model-based prediction strategies use the Smith predictor control strategy, which was introduced by O.J. Smith in 1957 [42] for delays in chemical processes. We employ the Smith predictor strategy in this article to compare it with SRPT vehicle teleoperation. In summary, predictive displays enable real-time vehicle control for human-in-the-loop teleoperation, but their effectiveness may decrease in the presence of strong disturbances such as low-adhesion roads or crosswinds, leading to asynchrony issues. Shared control - Equipped with obstacle avoidance capabilities, shared control aims to enhance the safety of the ego vehicle and other road participants in real time. Schimpe and Diermeyer [18] proposed an MPC-based shared steering control for obstacle avoidance, modeling obstacles as repulsive potential fields. Qiao et al. [19] developed a human-machine interaction model using Nash equilibrium-based non-cooperative games. Schitz et al. [20] introduced an MPC-based assistance approach in cruise control mode. Storms et al. [21] presented an MPC-based shared control system for static obstacle avoidance, while Saparia et al.
[16] used predictive displays to mitigate latency and MPC-based shared control for obstacle avoidance. Shared control faces challenges similar to direct control, such as prediction inaccuracy under disturbances. With a functional perception module, however, it effectively assists the operator in collision avoidance and enhances safety. A downside is the strict requirement of a functional perception module. B. Previous Work In our prior research [43], we introduced SRPT, a pose-based control strategy for vehicle teleoperation. The driver model at the control station considers the delayed vehicle pose from the remote vehicle and the known mission plan. The control station discretely transmits the intended vehicle pose (reference pose) at 30 Hz. On the remote vehicle side, the controller receives the reference pose and optimizes for steer and speed commands. It accounts for actuator constraints and environmental disturbances, utilizing IMU sensors in the vehicle to sense environmental changes. In other work [44], we evaluated SRPT where the human operator creates waypoints (reference poses) by steering the augmented lookahead vehicle (blue) outline using joystick steering (Fig. 1). C. Contribution of Paper This article assesses the performance improvement of the SRPT approach for vehicle teleoperation by comparing it with the Smith prediction strategy for a range of vehicle speeds (14-26 km/h). In the Smith prediction strategy, after predicting vehicle states, two types of driver models are assessed: the Lookahead driver model and the Stanley driver model. The test track consists of maneuvers with progressively increasing difficulty. The experiments are performed in a Simulink simulation environment, where variable network delays (250-350 ms) and a 14-dof vehicle model for the main vehicle are considered. Overall, our experiments show that the SRPT approach outperforms the Smith prediction strategy in terms of accuracy and stability, especially for challenging maneuvers. These findings demonstrate the potential of the SRPT approach to improve the safety and efficiency of vehicle teleoperation in real-world applications. D. Outline of Paper The rest of the article is organized as follows. Section II-A presents the characteristics of network delay. Section II-B presents the Smith predictor with two driver models. Section II-C explains the SRPT mode. Section III provides an overview of the simulation platform. Section IV discusses the experimental structure. Section V presents and discusses the results. Section VI concludes with the work summary, key findings, and future work. A. Network Delays The time delay involved in vehicle teleoperation can be divided into two parts from the perspective of the control station. The first part is termed the downlink delay (τ2), which pertains to the time taken for streamed images to reach the control station. The second part is referred to as the uplink delay (τ1), encompassing the interval between generating driving commands at the control station and their execution in the vehicle. The downlink delay amalgamates several factors, including camera exposure delay, image encoding time, network transmission delay, and image decoding time, with network delay being the primary variable component. In contrast, the uplink delay comprises the network transmission delay of driving commands to the vehicle and the subsequent vehicle actuation delay. In scenarios involving wireless communication via 4G, variability impacts both downlink and uplink delays.
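The timestamp arithmetic behind these delay measurements (described alongside Fig. 3 below) is simple enough to sketch. The following Python fragment is an illustration, not the authors' code, and it assumes the vehicle and control-station clocks are synchronized (e.g., via NTP or GPS); unsynchronized clocks would bias both estimates.

```python
import time

def downlink_delay(frame_timestamp: float) -> float:
    """tau_2, measured at the control station: age of a received camera
    frame, i.e. (current time - frame capture/send timestamp)."""
    return time.time() - frame_timestamp

def uplink_delay(command_timestamp: float) -> float:
    """tau_1, measured in the vehicle: (current time - timestamp at which
    the driving command was generated at the control station)."""
    return time.time() - command_timestamp
```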
Fig. 3 displays the corresponding delays for the utilized bandwidth. This illustration considers 5000 picture frames and driving commands in a typical urban environment, with the vehicle connected to 4G mobile connectivity and the control station linked to wired internet. Measurement of τ1 occurs at the vehicle by subtracting the timestamp of the driving commands from the current timestamp, while τ2 is determined at the control station by subtracting the timestamp of a received image from the current timestamp. B. Smith Predictor With Two Types of Driver Model The Smith predictor approach [42] is a popular predictive control method used in bilateral teleoperation. It was first introduced by O.J. Smith in 1957 and is a model-based prediction approach. Fig. 4 shows a schematic of the Smith predictor in the control loop of vehicle teleoperation systems with variable time delays. The steering input is passed through the Smith predictor block, which outputs a correction term that is added to the (received) delayed pose to predict the current pose of the vehicle. The Smith predictor block is further elaborated in our previous work [43], where its transfer function is presented. It provides the human operator with the sense of controlling the vehicle in real time by predicting the current position of the vehicle, bypassing the network delay. Thereupon, the human operator can steer based on the vehicle's current pose and the mission plan. In this article, two types of driver models are considered instead of human volunteers, for the sake of reproducibility of results and as a preliminary comparison of the SRPT approach with the Smith predictor approach. Christoph Popp et al. [45] suggest that a geometry-based lateral controller for a vehicle works well in low lateral acceleration scenarios. Considering low-to-medium speed vehicle teleoperation, the two driver models below are adopted. 1) Lookahead Driver, H1: This driver model represents the general control tendency while driving at low-to-medium lateral accelerations, in which the human operator steers the vehicle to try to align a look-ahead point with the desired trajectory (Fig. 5(a)). The look-ahead driver model based on the cross-track error at the look-ahead point (motivated by [46]) is given by δ = k1 ΔyL, where δ is the steer angle, k1 is a gain term (a constant for a given vehicle longitudinal speed), ΔyL is the cross-track error of the look-ahead point from the reference trajectory, and k2 = 0.90 s is the look-ahead time. k1 is tuned for a range of vehicle speeds to give minimum deviation of the vehicle (CG) from the reference trajectory while driving across region-A of the trajectory shown in Fig. 9. Observations are presented in Fig. 5(b). k1 is tuned for a constant k2 without considering network delays in the control loop. Although in the presence of delays a human operator can adapt his actions, keeping [k1; k2] unchanged ensures no adaptability and highlights the performance deterioration due to delays. 2) Stanley Lateral Controller Driver, H2: The kinematic Stanley controller [47] with the reference point at the center of the front axle is given by δ = Δψ + arctan(k ΔyF / Vx), where Δψ represents the vehicle's heading relative to the nearest segment of the trajectory, ΔyF represents the cross-track error at the front axle center, and Vx represents the vehicle speed. k is also tuned for the same range of vehicle speeds to give minimum deviation of the vehicle (CG) from the reference trajectory while driving across region-A of the same trajectory. Observations are presented in Fig. 6.
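To make the two steering laws concrete, the sketch below implements them as written above. This is an illustrative reconstruction rather than the authors' code; the low-speed softening constant eps in the Stanley law is a common practical addition and an assumption here, not something stated in the article.

```python
import numpy as np

def lookahead_steer(k1: float, dy_L: float) -> float:
    """Lookahead driver H1: steer proportional to the cross-track error
    measured at the look-ahead point, delta = k1 * dy_L."""
    return k1 * dy_L

def stanley_steer(k: float, d_psi: float, dy_F: float, v_x: float,
                  eps: float = 1e-3) -> float:
    """Stanley driver H2: heading error plus an arctan term on the
    front-axle cross-track error, delta = d_psi + atan(k * dy_F / v_x).
    eps avoids division issues near standstill (an assumed detail)."""
    return d_psi + float(np.arctan2(k * dy_F, v_x + eps))
```

A real deployment would additionally saturate these commands at the steering actuator limits; that detail is omitted here for brevity.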
Parameters of both driver models are tuned only in region-A of the trajectory. Region-A is not of constant radius but carries variable curvature along its length, as shown in Fig. 9. This means the driver models are tuned for variable curvatures. C. SRPT Teleoperation Approach With Reference-Pose Decider Driver Model In predictive display vehicle teleoperation, the model-based prediction approach discussed above is effective on normal roads and under normal conditions. Disturbances like strong winds, low-adherence roads, and bumps, however, can alter the vehicle dynamics. Parameter estimation techniques presented in [48], [49] can be useful for changes in dynamics that last from medium to long duration. These techniques use sliding-window batch estimation, and the estimates themselves are delayed (due to convergence time). Momentary disturbances can have a significant impact on the vehicle output before the new plant dynamics are estimated at the control station and corrective action is taken by the human operator. The SRPT approach to vehicle teleoperation differs from traditional methods: in SRPT, the human operator transmits reference poses instead of steer-throttle commands to the remote vehicle. These reference poses are generated with a look-ahead time of (1 + τ1 + τ2) s, which results in the vehicle receiving reference poses approximately 1 s ahead of its current position. This horizon of 1 s is chosen arbitrarily, based on the fact that a driver typically steers a vehicle based on the upcoming vehicle position. The same time horizon of ΔtHorizon = 1 s is also used (inside the vehicle) by the NMPC block to optimize the vehicle steer-speed commands. While the SRPT approach is effective, it represents a departure from conventional vehicle teleoperation, where the human operator transmits steer-throttle commands to the remote vehicle. 1) Reference-Pose Decider Driver Model: The task of the human model block is to transmit information that informs the vehicle about its aiming direction. Referring to Fig. 8, the human model block receives the delayed vehicle states, X(t)e^(−τ2 s), which include the vehicle pose P^C_O, the delayed vehicle pose in the global reference frame O. Being aware of the whole trajectory, the human model block first finds the closest point C on the reference trajectory. It then finds the point D, which is a distance Lind ahead of point C. Lind is the look-ahead distance, governed by Lind = max(lF, Vx(1 + τ1 + τ2)): it is lower bounded by lF, the front axle distance from the CG, and grows linearly with the round-trip delay (τ = τ1 + τ2). For this article, we adopt the second approach, which does not require the vehicle to explicitly estimate how much it has travelled during the round-trip delay. Modeling this driver model for simulation is straightforward, as the entire trajectory is pre-known. However, in human-in-the-loop experiments, an equivalent driver model can be obtained in which the correction, P^D_C, is decided by the human operator. In our previous work [44], this correction term is generated online using a steering joystick, briefly represented by the relation P^D_C = ΔP_Joystick, where ΔP_Joystick is the correction term generated by steering the augmented lookahead vehicle (blue) outline on the visual interface (Fig. 1) with the joystick.
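A minimal sketch of the reference-pose decider on a pre-known trajectory is given below. It is an illustrative reconstruction using the relation above, Lind = max(lF, Vx(1 + τ1 + τ2)); the function name, the default value of l_f, and the reduction of a full pose to a 2-D position are assumptions for illustration, not the article's implementation.

```python
import numpy as np

def reference_pose(traj_xy, vehicle_xy, v_x, tau_rt, l_f=1.3):
    """Reference-pose decider: find the closest trajectory point C to the
    (delayed) vehicle position, then return the point D that lies a
    look-ahead distance L_ind = max(l_f, v_x * (1 + tau_rt)) further
    along the path, measured in arc length."""
    traj_xy = np.asarray(traj_xy, dtype=float)
    d = np.linalg.norm(traj_xy - np.asarray(vehicle_xy, dtype=float), axis=1)
    c = int(np.argmin(d))                            # index of point C
    seg = np.linalg.norm(np.diff(traj_xy, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))      # arc-length profile
    L_ind = max(l_f, v_x * (1.0 + tau_rt))           # look-ahead distance
    j = int(np.searchsorted(s, s[c] + L_ind))
    return traj_xy[min(j, len(traj_xy) - 1)]         # point D (reference)
```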
D. NMPC Block The NMPC block on the vehicle side takes into account the received reference poses, the current states of the vehicle, and the actuator constraints to generate optimized steer and speed commands. The prediction model of the NMPC is presented in our previous work [43], [44]. The objective is to synchronize the target reference pose with the trajectory of the vehicle while minimizing the inputs (steer-rate and vehicle acceleration) and maintaining a speed close to the reference speed (VRef) asked for by the human operator. It also respects input constraints. One input constraint is the maximum steer-rate of 360°/s, due to the actuator constraint of the steering actuation motor. Another input constraint is the vehicle acceleration and deceleration limits. A further description of the NMPC block is presented in the previous works mentioned earlier. A prediction horizon (ΔtHorizon) of 1 s is used, divided into 50 intervals through discrete multiple shooting, and solved by sequential quadratic programming with the real-time NMPC solver ACADOS [50], [51]. III. SIMULATION PLATFORM A faster-than-real-time simulation test platform for vehicle teleoperation with network delay is developed using Simulink + Unity3D, shown in Figs. 4 and 7. Unity3D is used only to provide visuals of the vehicle maneuvers. Table I provides a brief description of the vehicle type used in the 14-dof Simulink vehicle model, which represents a typical FWD passenger vehicle. Table II provides additional descriptions of each block, including their working rates. The e^(−τ2 s), Human model, and e^(−τ1 s) blocks work synchronously with each other at 30 Hz to simulate the usual discrete nature of video streaming to the control station. The downlink delay (τ2) is treated as a variable delay to simulate usual network delays, while the uplink delay (τ1) is treated as a constant of 0.060 s due to its lower magnitude and variability. To simulate the downlink delay, a generalized extreme value distribution, GEV(ξ = 0.29, μGEV = 0.200, σ = 0.009), is used [38], [41]. A positive ξ means that the distribution has a lower bound of (μGEV − σ/ξ) ≈ 0.169 s (> 0) and a continuous right tail based on extreme value theory, keeping the variable downlink delay in the range 0.169-0.300 s. IV. EXPERIMENTAL SETUP Fig. 9 shows a 438 m test track consisting of eight regions labeled A to H. These regions simulate increasingly challenging maneuvers and severe environmental conditions. Region A involves cornering with a radius of 15 m (R15); B involves cornering (R8) on a surface with a road adherence coefficient of μ = 0.7; C is a double lane change; D involves cornering with μ = 0.5; E-F include a strong lateral wind with a Chinese-hat profile [52], [53]; G involves a U-turn with μ = 0.33; and H involves a slalom. All the curves have gradually changing curvature, as shown for region-A. The objective is to follow the track centerline as closely as possible, with a maximum vehicle speed limit of VRef, specified by the human block. It is anticipated that during difficult manoeuvres the NMPC block regulates the vehicle speed (Vopt) to minimize the cross-track error, which is a desirable behavior. To compare SRPT performance against Smith-predictor performance, a total of eight modes are considered.
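For readers who want to reproduce the delay model, the snippet below draws downlink delays from the stated GEV distribution using SciPy. Note that SciPy parameterizes the GEV with shape c = −ξ; the hard cap at 0.300 s is our assumption about how the quoted upper end of the range is enforced.

```python
import numpy as np
from scipy.stats import genextreme

xi, mu, sigma = 0.29, 0.200, 0.009       # GEV parameters from the article
# SciPy's shape convention: c = -xi, so xi = 0.29 -> c = -0.29.
tau2 = genextreme.rvs(c=-xi, loc=mu, scale=sigma, size=5000)
tau2 = np.clip(tau2, None, 0.300)        # assumed cap at 0.300 s
# Support is bounded below by mu - sigma/xi ~= 0.169 s, matching the text.
print(f"min {tau2.min():.3f} s, max {tau2.max():.3f} s")
```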
V. RESULTS AND DISCUSSION The extent of performance degradation due to delays is expected to vary with the vehicle speed and the path difficulty (tight corners). At lower speeds, latency effects are less pronounced, as the human operator has more time to correct the maneuver and the vehicle has more time to respond to commands. As speed increases, the available response time to perform a maneuver decreases, and latency can significantly impact the accuracy and safety of the maneuver. The chosen test track has an increasing level of difficulty along its length, and it is traversed at various speeds, one at a time, for this study. To understand the approximate steer-rate requirement of the track at the corresponding vehicle speeds, the kinematic Ackermann steering relation can be used: the steer angle follows the path curvature κ as δ ≈ arctan(l·κ), with l the wheelbase, so at vehicle speed V the required steer-rate is approximately dδ/dt ≈ l·V·(dκ/ds). In Fig. 10, the steer-rate requirement for different vehicle speeds along the track is shown. It can be observed that in regions C and H the steer-rate requirement exceeds the steer-rate capability of the steering motor for reference speeds ≥ 22 km/h. This indicates that, at elevated speeds, the vehicle might struggle to execute the essential steering maneuvers, potentially resulting in increased cross-track error and diminished performance. Given that the track encompasses factors beyond steer-rate constraints that can adversely affect performance, we have chosen cross-track error as the performance metric, aiming for its minimization. Cross-track error is defined as the minimum Euclidean distance between the vehicle CG and the reference path at any given time. The RMS of the cross-track error at the vehicle's CG is computed for each track region to facilitate a performance comparison among the respective vehicle teleoperation modes. Fig. 11 presents a quantitative analysis of the cross-track errors observed in regions A-H for the different vehicle speeds and teleoperation modes. In regions A-D, the Smith predictor ameliorates the negative effect of delays and brings the cross-track error close to that of its respective undelayed mode (the green bars are shorter than the red bars). However, the SRPT mode, even with delay (purple bars), resulted in significantly smaller cross-track errors. In regions E-F with strong crosswinds, the Smith predictor approach results in larger cross-track errors because it is unaware of the wind disturbances. In contrast, the NMPC controller in the SRPT mode takes the vehicle states as input, leading to a significant improvement in teleoperation. In region-G (μ = 0.33), for the high-speed lap, all teleoperation modes except the SRPT mode resulted in high lateral slip and therefore high cross-track errors. In region-H (the slalom), the cumulative impact of both the steer-rate constraint and the delay in the control loop deteriorates the performance. Even in this region, the SRPT mode demonstrated a significant reduction in cross-track error. The analysis of the results revealed that the primary reason for the superior tracking performance of the SRPT mode is its ability to moderate the vehicle speed appropriately in areas where this is necessary to minimize the cross-track error. Consequently, this leads to a slight increase in the completion time, as shown in Fig. 12.
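The cross-track error metric defined above can be computed directly from logged positions. The sketch below is an illustrative implementation, not the article's code: it uses a brute-force nearest-point search over sampled path points, whereas a careful implementation would interpolate between samples.

```python
import numpy as np

def rms_cross_track_error(path_xy: np.ndarray, cg_xy: np.ndarray) -> float:
    """Per time step, cross-track error = minimum Euclidean distance from
    the vehicle CG to the reference path; the region metric is its RMS.
    path_xy: (N, 2) sampled reference path; cg_xy: (T, 2) logged CG."""
    d = np.linalg.norm(cg_xy[:, None, :] - path_xy[None, :, :], axis=2)
    e = d.min(axis=1)            # nearest-path distance at each time step
    return float(np.sqrt(np.mean(e ** 2)))
```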
An example of the trade-off between completion time and safety can be seen in region-H (slalom), where the SRPT mode resulted in a 25% increase in completion time at VRef = 26 km/h. Despite the longer completion time, this mode ensures higher safety and minimizes cross-track error, which is particularly important for this tight slalom at this vehicle speed. Fig. 13 presents the trajectory traversed with all the teleoperation modes for VRef = 26 km/h. It qualitatively shows the better performance of the SRPT approach, even in the presence of all the disturbances and variable delays. The red trajectory of the look-ahead driver model resulted in large oscillations due to the network delay and steer-rate saturation. If any mode deviates significantly from the track, to the extent that it may compromise the results of the subsequent region, the mode is reset before entering the new region, while maintaining the vehicle's initial speed as the reference speed. These large deviations and oscillations are not present in the SRPT approach because the NMPC block accounts for the steer-rate limitation and subsequently decelerates the vehicle to allow more time to steer. Interestingly, the performance of the SRPT mode with and without network delay is similar (blue bars and purple bars are of similar height in Fig. 11). This can be attributed to the difference in the SRPT mode's operating principle, wherein the vehicle receives reference poses instead of steer commands from the control station. A. Discussion on Implications of Network Routing and Network Discontinuity In a real-world scenario, the vehicle is mobile and can be connected through 4G/5G networks. The control station, on the other hand, being a static entity, can be linked to a high-speed wired internet connection, as lower latency enhances safety. The delays experienced in network communication might be influenced not only by the inherent network characteristics but also by routing choices made between different network operators' systems (inter-domain) and within a single organization's network (intra-domain). In our latency measurement experiments, we used two distinct network operators. To maintain the broad applicability of our work, we considered lumped forms of the downlink and uplink delays. Specifically, the lumped downlink delay is essentially the location-shifted distribution of the variable network delay part. The variable network delay distribution is well fitted by the generalized extreme value (GEV) distribution, with parameters including location, scale, and shape. The challenge of unreliable connectivity in 4G/5G networks is a tangible concern. For the scope of this work, we have chosen not to merge in the issue of extreme network discontinuity, as unreliable connectivity can lead to substantially greater delays. This concern demands a distinct approach, possibly involving measures such as emergency stop mechanisms, the introduction of full autonomy for safe parking, or the implementation of redundant internet connections. Addressing this challenge is crucial to ensuring the robustness and reliability of vehicle teleoperation, especially in scenarios where network connectivity might be compromised. VI.
CONCLUSION In this article, we evaluated the SRPT approach for vehicle teleoperation, which involves transmitting reference poses to the remote vehicle instead of steering commands. A simulation framework was established in a Simulink environment to assess the approach under variable network delays (250-350 ms). We compared the performance of SRPT with the Smith predictor approach, incorporating two driver models: Lookahead and Stanley. Our simulation experiments encompassed diverse maneuvers and vehicle speeds (VRef = 14-26 km/h), with the performance index being the RMS of the cross-track error across the various sections of the test track. The findings demonstrated the effectiveness of the SRPT approach across all maneuvers and environmental disturbances, for the full range of vehicle speeds. It consistently exhibited lower cross-track error compared with the other teleoperation modes. Notably, SRPT excelled in path tracking performance, particularly in challenging scenarios such as the low-adhesion road and slalom regions, showing significant improvement compared with the other modes. The inherent mechanism of the SRPT approach allowed adaptive vehicle speed moderation during critical moments, granting additional time for steering during maneuvers. Although this led to a slight increase in completion time for complex maneuvers, the SRPT approach remained robust despite network delays. For real-world deployment, integrating a state estimator within the vehicle becomes essential. Looking ahead, investigating the effect of network routing on delays and exploring the impact of state-estimation inaccuracies on SRPT performance are vital directions for future research. Ultimately, this framework is poised for implementation in actual vehicle teleoperation experiments, bridging the gap between simulation and practical application. Fig. 1. Pictorial representation of the SRPT approach for direct vehicle teleoperation. The remote vehicle receives successive reference poses as it moves forward. Fig. 4. Smith predictor schematic for vehicle teleoperation simulation. H1 and H2 are the types of driver models considered. Unity has no role in the simulation; it is used only to display the manoeuvres. Fig. 5. (a) Look-ahead driver model control. (b) Tuning of k1 for the lookahead driver model by optimizing for minimum cross-track error in region-A, keeping k2 = 0.9 s constant. Fig. 7. SRPT schematic for vehicle teleoperation simulation. Unity has no role in the simulation; it is used only to display the manoeuvres. Fig. 8. Working principle of the reference-pose decider block. Its task is to choose the future reference pose based on the received vehicle pose and look-ahead distance. Fig. 9. The track contains various sections A-H of difficult manoeuvres and worst-case environmental conditions. Fig. 10. Approximate steer-rate requirement of the track at various vehicle speeds. (This figure provides a preliminary indication of the anticipated challenges during evaluations at higher vehicle speeds.)
Fig. 11. Vehicle teleoperation simulation results on the metric of cross-track error (ΔyCG) for the various modes at vehicle speeds of 14-26 km/h. SRPT vehicle teleoperation is found to trace the track accurately, even in the presence of variable delays. Fig. 14. Evolution of the speed and steer profiles for the Lookahead-Smith and SRPT modes under variable delays. Automatic speed reduction is evident in SRPT mode; this allows more time to steer in tight cornering regions. Fig. 14 shows the vehicle speed profile along the track length for the Lookahead-Smith mode and the SRPT mode, both in the presence of network delays. The SRPT mode implemented automatic speed modulations, which were noticeable in all cornering regions, particularly in the slalom region. These modulations helped the vehicle steer in advance, as shown in the zoomed rectangle inside the figure. Performance of Successive Reference Pose Tracking vs Smith Predictor Approach for Direct Vehicle Teleoperation Under Variable Network Delays. Jai Prakash, Michele Vignati, Member, IEEE, and Edoardo Sabbioni. TABLE I: 14-dof model: brief vehicle characteristics. TABLE II: Description of the blocks used in the simulation platform.
7,149.8
2024-04-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Aluminum-Based Water Treatment Residue Reuse for Phosphorus Removal Aluminum-based water treatment residue (Al-WTR) generated during the drinking water treatment process is a readily available recycled material with high phosphorus (P) adsorption capacity. The P adsorption capacity of Al-WTR generated from Singapore's water treatment plant was evaluated with reference to particle size range, adsorption pH and temperature. Column tests, with WTR amendments in sand with and without compost, were used to simulate bioretention systems. The adsorption rate decreased with increasing WTR size. The highest P adsorption capacity, 15.57 mg PO43−-P/g WTR, was achieved using fine WTR particles (>50% of particles less than 0.30 mm). At pH 4, the contact time required to reduce the effluent P concentration to below the detectable range was half that at pH 7 and 9. The adsorption rate observed at 40 ± 2 °C was 21% higher compared with that at 30 ± 2 °C. Soil mixes amended with 10% WTR and compost were able to maintain consistently high (90%) total phosphorus (TP) removal efficiency at a TP load of up to 6.45 g/m3. In contrast, TP removal efficiencies associated with columns without WTR amendment decreased to less than 45% as the TP load increased beyond 4.5 g/m3. The results showed that WTR application is beneficial for enhanced TP removal in bioretention systems. Introduction Water treatment residue (WTR) generated from the addition of alum (Al2(SO4)3·14H2O) or ferric chloride (FeCl3) during the coagulation process in drinking water treatment offers a promising recycled material with high phosphorus (P) adsorption capacity. The use of WTR to control excess P in runoff resulting from fertilizer, manure or biosolids application on agricultural land has been receiving increasing attention [1,2]. More recent applications focus on the use of WTR in the soil media of bioretention systems, more commonly known as rain gardens, to remove phosphate from urban runoff [3]. In bioretention systems, plants contribute less than 20% of P retention, and therefore WTR amendment could aid long-term P removal [4]. Sandy media and loamy sand media, which are common mixes used in bioretention systems, would be exhausted by less than five years' and a decade's worth of stormwater loads, respectively [4,5]. In addition, the use of compost can result in significant performance deterioration due to the export of P from the compost [6]. However, compost constitutes an important organic source in the soil mix to sustain healthy plant growth and the soil microbial community. These are important elements in bioretention systems.
In Singapore, alum is mainly used in the water treatment process for the removal of suspended matter. Aluminum-based WTR (Al-WTR) has been reported to have P adsorption capacities ranging from 6.6 to 18 g P/kg of WTR [1,7], while a higher P adsorption of up to 23.9 g P/kg of WTR was attained at a lower reaction pH (pH 4) [8]. The P adsorption capacity of WTR is notably higher than that of other materials evaluated for P removal. These include red mud and Krasnozem soil, which are native to Queensland, Australia and the U.S., respectively [3]. Likewise, the P adsorption capacities of perlite, zeolite and granular activated carbon (GAC) [9] are also lower than that of Al-WTR. Table 1 provides a summary of the P adsorption capacities of the mentioned materials. Variations in P removal by WTR are a result of varying contents of aluminum oxide and other physicochemical parameters of the WTR, such as particle size, pH, retention time and the presence of inhibiting or competitive substances in the mixed liquor. Higher P adsorption values in WTR were also noted to correlate with higher aluminum oxide contents in the WTR. The general influences of the physicochemical parameters are summarized in Table 2. The surface runoff characteristics in Singapore have been reported to be in the pH range of 6.3-8.1 with total phosphorus (TP) concentrations of 0.5-3.2 mg TP (as PO4 3−-P)/L [10]; secondary treated effluents from conventional domestic wastewater treatment (using the activated sludge process) were in the pH range of 6.5-7.5, with a mean TP concentration of 4.1 mg TP/L [11,12]. In addition, runoff temperature can vary significantly depending on the ambient temperature and the temperature of the surface with which the runoff comes in contact [13]. These variations in pH, TP concentration and runoff temperature could potentially influence Al-WTR performance for P adsorption. If Al-WTR were to be applied for P removal from runoff and/or secondary treated effluent, the P adsorption characteristics under these operating conditions would need to be determined. This paper therefore aims to evaluate the effects of WTR particle size, reaction media pH and temperature on the P adsorption capacity and adsorption rate of Al-WTR obtained from a local drinking water treatment plant. Column tests were further used to determine the P adsorption characteristics in mixes containing WTR and sand, with or without compost. Water Treatment Residue WTR from a local water treatment works in Singapore, which employs coagulation using alum (aluminum sulfate), was used in this study. The physicochemical characteristics of the filter-pressed WTR are given in Table 3. The filter-pressed WTR was dried at 50 °C over a 5-day period. The dried WTR was further crushed into smaller sizes and sieved for selection of particle size range prior to the experiments. Experimental Phases The experiment was carried out in three phases. Phase 1 (P1) was carried out in batch tests using shake flasks with synthetic water comprised of di-potassium hydrogen phosphate (K2HPO4) and sodium chloride (NaCl) to simulate PO4 3−-P and ionic strength, respectively. WTR of four different particle size ranges was used to evaluate the PO4 3−-P adsorption rates. In Phase 2 (P2), the effects of synthetic media pH and temperature on the PO4 3−-P adsorption rate were studied using the optimum WTR size determined from Phase 1.
Phase 3 (P3) was carried out to determine the performance of WTR in column tests, using the optimum WTR size (determined in Phase 1) blended at a 10% composition into different soil mixes. The pre-dried WTR particles (at 50 °C) had average moisture and volatile solids contents of 29.0% and 11.6%, respectively. After drying, the crushed particles were sieved through American Society for Testing and Materials (ASTM) sieves to obtain particle size ranges of more than 4.00 mm, 2.36-4.00 mm and 1.18-2.36 mm, while fine particles were obtained by further crushing the pre-dried particles of more than 4.00 mm using a pulverizer. The particle size distribution of the fine particles is summarized in Table 4. More than 50% of the fine particles were less than 0.30 mm. A synthetic feed comprised of K2HPO4 (150 mg PO4 3−-P/L) in 10.5 mg/L NaCl at pH ~7.2-7.3 was used to evaluate the effects of the different WTR particle size ranges on P adsorption. P1, Test 1: Adsorption Isotherms of Different WTR Particle Size Ranges An amount of 150 mL of synthetic feed was added into each 250-mL conical flask containing pre-weighed WTR of the respective particle size range and mixed using an orbital shaker (at 30 ± 2 °C, 200 rpm). The initial PO4 3−-P concentration and the concentration after 48 h were measured for each condition. Samples were filtered through 0.45-µm pore size filter paper (GN-6 Grid 47 mm, Gelman Science, Ann Arbor, MI, USA) prior to determination of PO4 3−-P concentrations. The Freundlich isotherm model has been commonly used to model the adsorption of PO4 3−-P onto solid adsorbents, such as aluminum oxide [17] and activated alumina [18]. Zhao et al. [19] further demonstrated this isotherm model to be the best model for fitting the equilibrium data of Al-WTR. The Freundlich isotherm is expressed in Equation (1): qe = K Ce^(1/n), where qe = mass of PO4 3−-P adsorbed per unit mass of WTR (mg PO4 3−-P/g WTR), Ce = equilibrium PO4 3−-P concentration in solution (mg/L), K = Freundlich capacity factor and 1/n = Freundlich intensity parameter. The Freundlich isotherm was used to model PO4 3−-P adsorption on WTR in this study. Equation (2) is derived from Equation (1): log qe = log K + (1/n) log Ce. The Freundlich capacity factor and intensity parameter can be determined by plotting log qe against log Ce for the different particle size ranges. P1, Test 2: Adsorption Rates of WTR with Different Particle Size Ranges An amount of 150 mL of synthetic feed was added into each 250-mL conical flask containing 2.50 ± 0.05 g of WTR of the different particle size ranges. Samples were collected and filtered through 0.45-µm pore size filter paper (GN-6 Grid 47 mm, Gelman Science, Ann Arbor, MI, USA). The filtrates were collected from each flask on an hourly basis for analysis of PO4 3−-P concentrations. The adsorption rate was then determined from the highest gradient of the plot of PO4 3−-P concentration against contact time. Phase 2: Effects of Reaction Media pH and Temperature on PO4 3−-P Adsorption Batch tests were carried out using 2.50 ± 0.05 g of fine WTR particles in 250-mL conical flasks containing 150 mL of synthetic feed. P2-Test 1: pH Effect The reaction media were adjusted and maintained at pH 4.0 ± 0.5, 7.0 ± 0.5 and 9.0 ± 0.5 using 0.1 N hydrochloric acid (HCl). The adsorption tests were carried out in an incubator shaker (30 ± 2 °C, 200 rpm). Duplicate tests were performed for each condition, and samples were collected hourly. P2-Test 2: Temperature Effect The reaction media were buffered at pH 7.0 ± 0.5. The effect of temperature was evaluated at 30 ± 2 °C and 40 ± 2 °C in an incubator shaker set at 200 rpm. Duplicate tests were carried out for each condition, and samples were collected hourly.
Samples collected in this phase were filtered through 0.45-µm pore size filter paper (GN-6 Grid 47 mm, Gelman Science, Ann Arbor, MI, USA), and the filtrates were analyzed for PO4 3−-P concentration. The maximum adsorption rate was determined as the highest rate of change in PO4 3−-P concentration per unit time, and the specific adsorption rate was determined as the adsorption rate per unit mass of WTR used in the tests. Phase 3: Phosphorus Removal Using 10% WTR Blended in Different Soil Mixes in Column Tests Influent water characteristics: Secondary treated effluent from a local domestic wastewater treatment plant was used as the influent to the columns to simulate polluted runoff. The domestic wastewater treatment plant employed an anoxic-aerobic sequence that provided enhanced nutrient removal. Hence, the total nitrogen (TN), TP, nitrate and phosphate (PO4 3−) concentrations of the secondary treated effluent used in this study were significantly lower compared with conventional treatment plants that employ the activated sludge process [11,12]. The quality of the secondary treated domestic sewage effluent is summarized in Table 5. The water quality of the secondary treated effluent used in this study was found to be comparable to urban runoff. The TP content was in the upper range of Singapore's urban runoff, as reported previously by Chui [10]. The column tests were carried out using various soil mixes packed in clear acrylic columns of dimensions 30 cm × 3.4 cm (height × internal diameter). The base of each column was packed with supporting gravel (particle size range of 10.0-22.0 mm) up to a height of 2 cm. This was followed by a 2-cm height of small gravel (particle size range of 1.5-3.0 cm) and a 20-cm height of soil mix as the filter media. This arrangement provided an effective empty bed volume of approximately 182 cm3. The compositions of the four soil mixes used in this study are summarized in Table 6. A peristaltic pump was used to pump secondary treated effluent into each column from the top at a rate of 3 mL/min, equivalent to a hydraulic load of 0.2 m/h. The runoff was simulated in a batch sequence mode to represent each rainfall event. Each batch sequence was carried out by feeding 0.5 L of secondary treated effluent and allowing it to percolate completely through the column before the effluent from each column was collected for water quality analysis. Feed and effluent samples were collected and analyzed for pH, TP and PO4 3−-P concentrations. A schematic diagram of the column test set-up is shown in Figure 1. Treatment performance was evaluated against the bed volume of packing media and the TP load (per unit packing media volume). The TP load to the column is calculated based on Equation (3): TP load = (CTP × Vfeed)/Vcolumn, where TP load = total phosphorus load for every batch sequence (g/m3); CTP = TP concentration in the feed for the particular batch sequence (g/m3); Vfeed = feed volume of each batch sequence (m3); Vcolumn = volume of the packing column (m3). The cumulative TP load for the n-th run is given by Equation (4): ΣTP load,n = TP load,1 + TP load,2 + … + TP load,n, where ΣTP load,n = cumulative TP load for the n-th run (g/m3), TP load,1 = TP load for batch sequence 1 (g/m3) and TP load,n = TP load for batch sequence n (g/m3).
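Equations (3) and (4) translate directly into code. The sketch below is illustrative only: the feed TP concentrations and the number of batch sequences are made-up numbers, not values from the study; only the 0.5 L feed per sequence and the 182 cm3 media volume are taken from the text above for scale.

```python
def tp_load(c_feed_g_per_m3: float, v_feed_m3: float, v_column_m3: float) -> float:
    """Equation (3): TP load of one batch sequence per unit media volume."""
    return c_feed_g_per_m3 * v_feed_m3 / v_column_m3

# Equation (4): the cumulative TP load is the running sum over sequences.
V_FEED, V_COLUMN = 0.5e-3, 182e-6        # 0.5 L feed, 182 cm3 media volume
feed_tp = [0.5, 0.6, 0.5, 0.7]           # hypothetical feed TP values, g/m3
cumulative = sum(tp_load(c, V_FEED, V_COLUMN) for c in feed_tp)
print(f"cumulative TP load = {cumulative:.2f} g/m3")
```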
2.2.4. Water Quality Analyses pH was measured using a Horiba pH meter F-54 BW (Horiba Ltd., Kyoto, Japan), while TP was analyzed using PhosVer® 3 with the acid persulfate digestion method (Hach method 8190) [20]. PO4 3− was analyzed using a Dionex LC 20 chromatograph (Dionex Corporation, Sunnyvale, CA, USA) with a Dionex AS9-HC anion-exchange column after filtration using 0.45-µm pore size filter paper (GN-6 Grid 47 mm, Gelman Science, Ann Arbor, MI, USA). All water quality analyses were carried out in accordance with the Standard Methods for the Examination of Water and Wastewater [21]. Phase 1: P Adsorption Characteristics of WTR Table 7 provides the K and n values determined from the experiments using Equation (2) after a 48-h contact time. The results demonstrated that the highest K value was obtained with the fine WTR particles. The maximum PO4 3−-P adsorption capacity (15.57 mg PO4 3−-P/g WTR) using fine WTR particles was comparable to the maximum adsorption capacities of Al-WTR reported by Dayton and Basta [7], which ranged from 6.6 to 16.5 g/kg after 17 h of equilibration. The K value obtained using WTR in this experiment was at least 4-6 times higher than that reported using aluminum oxide of a similar particle size range [17]. This could be due to the relatively higher Al content (157.9 g/kg) of the local WTR, which was about two times more than that reported for other WTR materials [22]. Higher Al oxide content has been demonstrated to achieve higher PO4 3−-P adsorption. The intrapore specific surface area was also noted to be 24 times the average particle size [23]. Hence, smaller particles can significantly increase the intrapore specific surface area and thus the effective area for PO4 3−-P adsorption. This explains the significant increase in maximum PO4 3−-P adsorption when the particle size was reduced from more than 1.18 mm to that of the fine particles (with more than 50% of the particles being less than 0.30 mm). Effects of Particle Size on P Adsorption The adsorption of PO4 3−-P on Al-WTR was governed by the affinity of PO4 3−-P for active surface sites, such as through electrostatic interactions and ligand exchange reactions [24]. The adsorbed PO4 3−-P could be bound directly on the oxide surface in accordance with the processes dictated in Equations (5) and (6) [9], in which phosphate ions displace surface hydroxyl groups by ligand exchange, releasing OH−. Evidence showed that P adsorption onto Al2O3 is a mixture of complex mechanisms involving outer- and inner-sphere complexes, with displacement of surface hydroxyl groups and water molecules by phosphate ions, and surface precipitation [9,25].
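As an illustration of how the K and n values in Table 7 are obtained from Equation (2), the sketch below performs the log-log linear regression; the equilibrium data in it are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def fit_freundlich(c_e, q_e):
    """Fit log10(q_e) = log10(K) + (1/n) * log10(c_e)  (Equation (2))
    by least squares; returns the capacity factor K and intensity 1/n."""
    slope, intercept = np.polyfit(np.log10(c_e), np.log10(q_e), 1)
    return 10.0 ** intercept, slope      # K, 1/n

# Synthetic equilibrium data (mg/L, mg PO4-P/g WTR), for illustration only.
c_e = np.array([2.0, 10.0, 40.0, 90.0])
q_e = np.array([4.1, 7.6, 11.8, 14.9])
K, inv_n = fit_freundlich(c_e, q_e)
print(f"K = {K:.2f}, 1/n = {inv_n:.2f}")
```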
Figure 2 illustrates the normalized residual PO4 3−-P concentration in the reaction media at different contact times for the WTR particles of various size ranges. Within approximately 7 h, 2.5 g of fine WTR particles was able to reduce the initial PO4 3−-P concentration to a level below the detection limit. In contrast, the bigger WTR particles (1.18-4.00 mm) required approximately 24 h, and those that were >4.00 mm required more than 24 h, to achieve PO4 3−-P levels below the detection limit. Thus, this study demonstrated that the rate of PO4 3−-P adsorption onto fine Al-WTR was rapid and that the adsorption rate was strongly influenced by particle size (Figure 3). Fine particles also had the highest specific P adsorption rate compared with the bigger particles tested. The highest specific P adsorption rate of the fine particles was observed to be 0.174 mg PO4 3−-P/g WTR/min. This value was approximately two and five times the specific rates obtained with the larger particle size ranges of 1.18-2.36 mm and >4.00 mm, respectively. The results indicated that the adsorption was governed by intraparticle diffusion, which is highly significant in finer particles [18]. The diffusion of adsorbed PO4 3−-P into the adsorbent resulted in precipitation of crystalline Al-phosphate and, eventually, irreversible binding of PO4 3−-P onto the WTR particles [9,26]. It is important that rapid PO4 3−-P adsorption is achieved during a rainfall event, when stormwater infiltrates through a bioretention system. This is because the hydraulic flow varies considerably during each storm event depending on the rainfall intensity. In Singapore, the rainfall intensity can range from less than 10 mm/h to more than 50 mm/h [27]. Hence, this generates a high variation in the contact time as stormwater runoff flows through the filter media. The high adsorption capacity coupled with the high adsorption rate of fine WTR particles could provide the characteristics required for PO4 3−-P removal from stormwater runoff in bioretention systems. Effect of pH on P Adsorption Figure 4 shows that the maximum specific PO4 3−-P adsorption rate onto fine WTR particles was strongly dependent on the pH of the synthetic feed. The specific PO4 3−-P adsorption rate was observed to increase with a reduction in pH. Chemical sorption onto aluminum oxide media results in an exchange with the hydroxyl group, leading to a subsequent increase in pH (Equation (6)). Hence, the reaction favors slightly acidic reaction conditions [17,28]. The P adsorption onto WTR observed in this study concurs with that reported for other aluminum oxide materials. A higher PO4 3−-P adsorption rate onto fine WTR was obtained at a lower reaction media pH (pH 4) compared with neutral conditions (pH 7). Conversely, the lowest PO4 3−-P adsorption rate was obtained at pH 9. Yang et al.
[15] demonstrated the change in zeta potential, which correlates with the WTR surface charge, from positive to negative as the reaction solution pH changed from acidic to alkaline conditions. The increase in hydroxyl ions on the surface of WTR under alkaline conditions would lead to a reduction in phosphate adsorption affinity. This reduced the adsorption rate observed at pH 9 compared with that at a lower pH. Maximum sorption at slightly acidic conditions (pH 5) has also been reported for aluminum oxide media [15,17]. The P adsorption mechanisms onto WTR would be similar to those on alumina surfaces, which are related to ion exchange and complexation reactions [9,15,18]. Generally, the release of hydroxyl ions leading to an increase in pH is expected when PO4 3−-P is adsorbed onto WTR (as shown in Equation (6)). The maximum specific adsorption rate for WTR was noted at pH 4. This rate was at least 13% higher than that obtained at neutral pH. The subsequent increase to pH 9 reduced the adsorption rate further, to 0.136 mg PO4 3−-P/g WTR/min. The PO4 3−-P removal efficiency at varying adsorption pH is shown in Figure 5. After 3 h of contact time, PO4 3−-P removal of more than 90% was achieved at pH 4, while only 87% and 83% PO4 3−-P removal was achieved at pH 7 and 9, respectively. Overall, more than 48 h was required to reduce the initial P concentrations to levels below the detectable limit at pH 7 and 9, while at the lower pH (pH 4) the contact time required for complete P adsorption was halved. However, an extremely low pH may not be a favorable condition, as this would increase the solubility and, hence, leaching of aluminum from the WTR [18]. The pH dependency is related to the amphoteric properties of the WTR surface, which is similar to that reported for alumina, and the polyprotic nature of phosphate [17]. Effect of Temperature on P Adsorption The results of PO4 3−-P adsorption onto fine WTR particulates at different temperatures within the first 5 h of contact time are illustrated in Figure 6. This figure shows that PO4 3−-P adsorption onto fine WTR particulates occurred at a high rate, with the maximum specific adsorption rate occurring within the first hour. More than 60% PO4 3−-P removal was achieved at both 30 ± 2 °C and 40 ± 2 °C within 1 h. Table 8 summarizes the maximum specific adsorption rates obtained at the two temperatures. A higher PO4 3−-P adsorption rate (about 21% more) was achieved at 40 ± 2 °C compared with 30 ± 2 °C during the first hour. Zhang et al. [29] demonstrated that the P adsorption capacity of aluminum oxide is generally higher at a higher temperature. The K value in Zhang et al.'s [29] study was approximately 48% higher at 35 °C compared with 30 °C. At a contact time of 5 h, the PO4 3−-P removal efficiencies were 94% and 96% at 30 ± 2 °C and 40 ± 2 °C, respectively. After 24 h of contact time, close to 99% PO4 3−-P removal efficiency was achieved at both temperatures. This observation demonstrated that a higher temperature promoted a more rapid P adsorption rate onto fine WTR particles. However, following prolonged contact time, temperature did not influence the final PO4 3−-P removal efficiency. As the available adsorption sites and PO4 3−-P concentrations were similar for both conditions, similar P removal efficiencies were achieved after more than 5 h of contact time.
The results from this study are important when applied to actual site conditions. The runoff temperature in tropical regions can vary in the range of 30-40 °C, depending on the surface temperature. On hot sunny days, heat may be transferred from hot impervious surfaces to the runoff, and hence runoff that infiltrates into the soil mix will be at a higher temperature. Under such circumstances, PO4 3−-P adsorption onto WTR particles could occur at a higher adsorption rate than when the runoff is at a lower temperature. P Removal Using Soil Mixes in Column Tests The potential of WTR for long-term P removal from a polluted water source was evaluated by mixing approximately 10% (by weight) of WTR into soil mixes that are commonly used in bioretention systems, namely sand with or without compost [30]. Column 1, containing 100% sand, was used as the control. TP in the simulated runoff includes particulate P, organic P and inorganic P (PO4 3−-P). As noted in Figure 7, the PO4 3−-P removal efficiencies of the different soil mixes were mostly higher than the TP removal efficiencies. This phenomenon could be attributed to the fact that WTR only adsorbed PO4 3−-P. Particulate P and organic P, on the other hand, were not adsorbed by WTR. Particulate P could be trapped by the soil mix as the water filters through the columns, while soluble organic P could remain in the treated runoff and flow out into receiving waterbodies. Similar results were observed in Lucas and Greenway's [3] study, where 97% PO4 3−-P removal was reported, compared with only 93% removal of total dissolved P. The overall P removal from the simulated runoff using the different soil mixes was evaluated based on TP concentration. Figure 8 illustrates the cumulative TP load removal with respect to the TP load into the columns packed with the different soil mixes used in this study. It is noted from Figures 7 and 8 that the initial TP adsorption of all the different soil mixes in the columns was insignificantly different up to a bed volume of 5.0 (equivalent to 2.7 g TP/m3) (p > 0.05, based on a two-sample t-test). Subsequently, as the TP load increased, the TP removal efficiencies of the columns without WTR (Columns 1 and 4) decreased significantly, from above 85% to below 45%, when a TP load of more than 4.5 g TP/m3 was applied (at more than a 30.0 bed volume). However, the columns containing 10% WTR (Columns 2 and 3) were able to maintain TP removal consistently above 90% throughout the study period. Table 9 summarizes the TP loads and removal efficiencies of the columns containing 10% WTR. Column 1, containing 100% sand, demonstrated limited capacity to retain TP. TP removal deteriorated sharply beyond a TP load of 3.01 g/m3. This corresponds to bioretention studies documenting that sandy media (at 85% sand by weight) could be exhausted after only five years' worth of TP loads from typical urban runoff [5]. The addition of 5% compost to the sand media (Column 4) reduced the TP load removal by 14% compared with the column containing only sand (Column 1). Column 4 was subjected to an overall higher amount of P load due to the presence of compost, which has been known to leach P.
Hence, it exhausted the sand's P adsorption capacity at a higher rate compared with the columns without compost (such as Column 1, which contained 100% sand). The addition of organic matter (10% compost and 10% mulch) to the soil media resulted in a net production of PO4 3−-P in the column test reported by Bratieres et al. [6]. Similarly, in this study, a decrease in cumulative TP removal was observed in Column 4, which contained 5% compost, after a cumulative TP load of more than 2.57 g/m3 was fed to the column (corresponding to a cumulative bed volume beyond about 7.6). The results from this study demonstrated that columns containing 10% WTR were able to provide long-term TP removal. Even with 5% compost in the soil mix, the presence of 10% WTR maintained a high TP removal, and the cumulative TP load removal was insignificantly affected by P leaching from the compost (p > 0.05, based on a two-sample t-test). The cumulative TP loads removed in Columns 2 and 3 were 4.27 and 4.19 g/m3, respectively, when a TP load of up to 6.45 g/m3 was applied to each column. The TP loads removed in these columns (Columns 2 and 3) were more than 2.0 times those of the soil mixes without WTR. The column tests clearly demonstrated the ability of WTR to buffer the TP removal capacity, even in the presence of materials that release P. Hence, WTR could provide a consistently low TP concentration in the effluent (0.07-0.08 mg/L) to ensure high-quality treated water. Further, P adsorbed by Al-WTR has been reported to be irreversibly bound and to remain stable for at least 7.5 years [1]. Although P adsorption onto aluminum oxide media has been reported to increase the treated water pH [17,18], such an observation was not evident in this study. The pH of the treated runoff was maintained at pH 6-8, which is within the acceptable pH range for discharge to receiving water bodies. Conclusions The high P adsorption capacity of recycled Al-WTR makes it highly attractive for application as a soil amendment in bioretention systems. In this study, the PO4 3−-P adsorption capacities and adsorption rates were determined for locally obtained Al-WTR of various particle sizes. A high P adsorption rate and P adsorption capacity of 0.174 mg PO4 3−-P/g WTR/min and 15.57 mg PO4 3−-P/g WTR, respectively, were obtained for fine Al-WTR particles. This study also showed that the maximum specific PO4 3−-P adsorption rate was highly dependent on the adsorption pH, with the highest adsorption rate observed at pH 4 compared with pH 7 and pH 9. A temperature of 40 ± 2 °C provided a higher initial PO4 3−-P adsorption rate (a 21% increase) compared with 30 ± 2 °C, but both conditions resulted in close to 99% PO4 3−-P removal efficiency following prolonged contact time (24 h). In the column tests, soil mixes amended with 10% WTR were able to consistently sustain TP removal efficiencies of more than 95%, even in the presence of 5% compost, while TP removal efficiencies in soil mixes without WTR amendment deteriorated to about 45% when the TP load increased beyond 4.5 g TP/m3. Hence, the use of Al-WTR amendment could sustain and lengthen the life-span of bioretention systems for TP removal. Figure 1. Schematic diagram of the column test set-up using different soil mixes. Figure 2. Normalized PO4 3−-P concentration in the reaction media at different contact times. Figure 3. Effects of different particle size ranges on the specific PO4 3−-P adsorption rates of WTR.
Figure 1. Schematic diagram of the column test set-up using different soil mixes.
Figure 2. Normalized PO4³⁻-P concentration in the reaction media at different contact times.
Figure 3. Effects of different particle size ranges on specific PO4³⁻-P adsorption rates of WTR.
Figure 6. PO4³⁻-P removal using fine WTR particulates at varying temperature within the first 5 h of incubation.
Figure 8. Cumulative TP load removed by different soil mixes.
Table 1. Adsorbent materials for P removal.
Table 2. Influence of physicochemical characteristics on P adsorption by WTR.
Table 3. Physicochemical characteristics of Al-WTR from a local water treatment works.
Table 4. Particle size distribution of fine particles.
Table 5. Secondary treated domestic sewage effluent used in the study.
Table 6. Composition of soil mix in the column tests. Note: the compost used had moisture and organic matter contents of 19.5% and 51.5%, respectively.
Table 7. Freundlich isotherm K and n values for P adsorption by WTR with different particle size ranges.
Table 8. Maximum specific adsorption rate at different temperatures (within the 1st hour of contact time) (n = 2).
Table 9. TP concentrations and removal efficiencies of columns containing 10% WTR.
6,741
2015-04-01T00:00:00.000
[ "Environmental Science", "Materials Science" ]
Fast algorithms for approximate circular string matching

Background
Circular string matching is a problem which naturally arises in many biological contexts. It consists in finding all occurrences of the rotations of a pattern of length m in a text of length n. There exist optimal average-case algorithms for exact circular string matching. Approximate circular string matching is a rather undeveloped area.

Results
In this article, we present a suboptimal average-case algorithm for exact circular string matching requiring time O(n). Based on our solution for the exact case, we present two fast average-case algorithms for approximate circular string matching with k-mismatches, under the Hamming distance model, requiring time O(n) for moderate values of k, that is k = O(m/log_σ m). We show how the same results can be easily obtained under the edit distance model. The presented algorithms are also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach.

Conclusions
We present two fast average-case algorithms for approximate circular string matching with k-mismatches, and show that they also perform very well in practice. The importance of our contribution is underlined by the fact that the provided functions may be seamlessly integrated into any biological pipeline. The source code of the library is freely available at http://www.inf.kcl.ac.uk/research/projects/asmf/.

Background
Circular sequences appear in a number of biological contexts. This type of structure occurs in the DNA of viruses [1,2], bacteria [3], eukaryotic cells [4], and archaea [5]. In [6], it was noted that, due to this, algorithms on circular strings may be important in the analysis of organisms with such structure. Circular strings have previously been studied in the context of sequence alignment. In [7], basic algorithms for pairwise and multiple circular sequence alignment were presented. These results were later improved in [8], where an additional preprocessing stage was added to speed up the execution time of the algorithm. In [9], the authors also presented efficient algorithms for finding the optimal alignment and consensus sequence of circular sequences under the Hamming distance metric.

In order to provide an overview of our results and algorithms, we begin with a few definitions, generally following [10]. We think of a string x of length n as an array x[0 . . n − 1], where every x[i], 0 ≤ i < n, is a letter drawn from some fixed alphabet Σ of size σ = |Σ|. The empty string of length 0 is denoted by ε. A string x is a factor of a string y if there exist two strings u and v such that y = uxv. Let the strings x, y, u, and v be such that y = uxv. If u = ε, then x is a prefix of y. If v = ε, then x is a suffix of y. Let x be a non-empty string of length n and y be a string. We say that there exists an occurrence of x in y, or, more simply, that x occurs in y, when x is a factor of y. Every occurrence of x can be characterised by a position in y. Thus we say that x occurs at the starting position i in y when y[i . . i + n − 1] = x. The Hamming distance between strings x and y, both of length n, is the number of positions i, 0 ≤ i < n, such that x[i] ≠ y[i].
Given a non-negative integer k, we write x ≡_k y if the Hamming distance between x and y is at most k. A circular string of length n can be viewed as a traditional linear string which has the left- and right-most symbols wrapped around and stuck together in some way. Under this notion, the same circular string can be seen as n different linear strings, which would all be considered equivalent. Given a string x of length n, we denote by x_i = x[i . . n − 1] x[0 . . i − 1], 0 < i < n, the i-th rotation of x, and x_0 = x. Consider, for instance, the string x = x_0 = abababbc; this string has the following rotations: x_1 = bababbca, x_2 = ababbcab, x_3 = babbcaba, x_4 = abbcabab, x_5 = bbcababa, x_6 = bcababab, x_7 = cabababb.

Here we consider the problem of finding occurrences of a pattern string x of length m with circular structure in a text string t of length n with linear structure. For instance, the DNA sequence of many viruses has circular structure, so if a biologist wishes to find occurrences of a particular virus in a carrier's DNA sequence, which may not be circular, they must consider how to locate all positions in t at which at least one rotation of x occurs. This is the problem of circular string matching.

The problem of exact circular string matching has been considered in [11], where an O(n)-time algorithm was presented. A naïve solution with quadratic complexity consists in applying a classical algorithm for searching a finite set of strings after having built the trie of rotations of x. The approach presented in [11] consists in preprocessing x by constructing a suffix automaton of the string xx, by noting that every rotation of x is a factor of xx. Then, by feeding t into the automaton, the lengths of the longest factors of xx occurring in t can be found by the links followed in the automaton in time O(n). In [12], the authors presented an optimal average-case algorithm for exact circular string matching, by also showing that the average-case lower bound for single string matching of O(n log_σ m/m) also holds for circular string matching. Very recently, in [13], the authors presented two fast average-case algorithms based on word-level parallelism. The first algorithm requires average-case time O(n log_σ m/w), where w is the number of bits in the computer word. The second one is based on a mixture of word-level parallelism and q-grams. The authors showed that with the addition of q-grams, and by setting q = O(log_σ m), an optimal average-case time of O(n log_σ m/m) is achieved. Indexing circular patterns [14] and variations of approximate circular string matching under the edit distance model [15], both based on the construction of a suffix tree, have also been considered.

In this article, we consider the following problems.

Problem 1 (Exact Circular String Matching). Given a pattern x of length m and a text t of length n > m, find all factors u of t such that u = x_i, 0 ≤ i < m.

Problem 2 (Approximate Circular String Matching with k-Mismatches). Given a pattern x of length m, a text t of length n > m, and an integer threshold k < m, find all factors u of t such that u ≡_k x_i, 0 ≤ i < m.

The aforementioned algorithms for the exact case exhibit the following disadvantages: first, they cannot be applied in a biological context since both single nucleotide polymorphisms as well as errors introduced by wet-lab sequencing platforms might have occurred in the sequences; second, it is not clear whether they could easily be adapted to deal with the approximate case.
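Problem 2 admits the following quadratic-time naïve baseline, which is essentially the reference point for the experimental comparison later; this sketch is our illustration, not the library's implementation.

```python
def hamming(u: str, v: str) -> int:
    """Number of mismatching positions between equal-length strings."""
    return sum(a != b for a, b in zip(u, v))

def naive_circular_kmismatch(x: str, t: str, k: int) -> list[int]:
    """Report every starting position in t where some rotation of x
    occurs with at most k mismatches (Problem 2, brute force)."""
    m = len(x)
    rotations = [x[i:] + x[:i] for i in range(m)]
    return [j for j in range(len(t) - m + 1)
            if any(hamming(r, t[j:j + m]) <= k for r in rotations)]

print(naive_circular_kmismatch("abababbc", "xxbababbcayy", 0))  # [2]
```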
Similar to the exact case [12], it can be shown that the average-case lower bound for single approximate string matching of O(n(k + log_σ m)/m) [16] also holds for approximate circular string matching with k-mismatches under the Hamming distance model. To the best of our knowledge, no optimal average-case algorithm exists for this problem. Therefore, to achieve optimality, one could use the optimal average-case algorithm for multiple approximate string matching, presented in [17], for matching the r = m rotations of x, requiring, on average, time that grows with the number r of patterns. In this article, based on our solution for the exact case, we present two fast average-case algorithms for approximate circular string matching with k-mismatches under the Hamming distance model, requiring time O(n) for moderate values of k, that is k = O(m/log_σ m). We show how the same results can be easily obtained under the edit distance model. The presented algorithms are also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach. The source code of the library is freely available at http://www.inf.kcl.ac.uk/research/projects/asmf/.

Properties of the partitioning technique
In this section, we give a brief outline of the partitioning technique in general, and then show some properties of the version of the technique we use for our algorithms. The partitioning technique, introduced in [18], and in some sense earlier in [19], is an algorithm based on filtering out candidate positions that could never give a solution in order to speed up string-matching algorithms. An important point to note about this technique is that it reduces the search space but does not, by design, verify potential occurrences. To create a string-matching algorithm, filtering must be combined with some verification technique. The idea behind the partitioning technique was initially proposed for approximate string matching, but here we show that it can also be used for exact circular string matching.

The idea behind the partitioning technique is to partition the given pattern in such a way that at least one of the fragments must occur exactly in any valid approximate occurrence of the pattern. It is then possible to search for these fragments exactly to give a set of candidate occurrences of the pattern. It is then left to the verification portion of the algorithm to check if these are valid approximate occurrences of the pattern. It has been experimentally shown that this approach yields very good practical performance on large-scale datasets [20], even if it is not theoretically optimal. For exact circular string matching, for an efficient solution, we cannot simply apply well-known exact string-matching algorithms, as we must also take into account the rotations of the pattern. We can, however, make use of the partitioning technique and, by choosing an appropriate number of fragments, ensure that at least one fragment must occur in any valid exact occurrence of a rotation. Lemma 1 together with the following fact provide this number; the proof of Lemma 1 is immediate from the pigeonhole principle (if n items are put into m < n pigeonholes, then at least one pigeonhole must contain more than one item).

Fact 1. Any rotation of x is a factor of x′; and any factor of length m of x′ is a rotation of x.

Based on Lemma 2, we take a similar approach to the one described by Lemma 1, to obtain the sufficient number of fragments in the case of approximate circular string matching with k-mismatches.

Proof. Let f denote the length of the fragment. If we partition x′ in 2k + 4 fragments of length ⌊(2m − 1)/(2k + 4)⌋ and ⌈(2m − 1)/(2k + 4)⌉, then every factor of length m of x′ spans at least k + 1 whole fragments. Therefore any factor of length m of x′, and, by Fact 1, any rotation of x, must contain at least k + 1 of the fragments. For a graphical illustration of this proof, inspect Figure 2.
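A minimal sketch of this partitioning step (our illustration, not the library code): x′ = x x[0 . . m − 2] is split into 2k + 4 fragments whose lengths differ by at most one.

```python
def partition_fragments(x: str, k: int) -> list[tuple[int, str]]:
    """Split x' = x + x[:m-1] (length 2m - 1) into 2k + 4 fragments of
    length floor((2m-1)/(2k+4)) or ceil((2m-1)/(2k+4)), returning
    (start position in x', fragment) pairs."""
    xp = x + x[:-1]                 # x' contains every rotation of x
    f = 2 * k + 4                   # number of fragments (cf. Lemma 2)
    base, extra = divmod(len(xp), f)
    out, pos = [], 0
    for i in range(f):
        length = base + (1 if i < extra else 0)
        out.append((pos, xp[pos:pos + length]))
        pos += length
    return out

# Example: m = 8, k = 1 gives 2k + 4 = 6 fragments of length 3 or 2.
print(partition_fragments("abababbc", 1))
```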
Exact circular string matching via filtering
In this section, we present ECSMF, a new suboptimal average-case algorithm for exact circular string matching via filtering. It is based on the partitioning technique and a series of practical and well-established data structures such as the suffix array (for more details see [21]).

Longest common extension
First, we describe how to compute the longest common extension, denoted by lce, of two suffixes of a string in constant time (for more details see [22]). lce queries are an important part of the algorithms presented later on. Let SA denote the array of positions of the sorted suffixes of string x of length n, i.e. for all 1 ≤ r < n, we have x[SA[r − 1] . . n − 1] < x[SA[r] . . n − 1], and let iSA denote its inverse, i.e. iSA[SA[r]] = r. The preprocessing consists of the following steps:
• Compute arrays SA and iSA of x [21].
• Compute the array LCP of longest common prefixes of adjacent suffixes in SA [22].
• Preprocess array LCP for constant-time range minimum queries (RMQ_LCP).

Algorithm ECSMF
Given a pattern x of length m and a text t of length n > m, an outline of algorithm ECSMF for solving Problem 1 is as follows.
1. Construct the string x′ = x x[0 . . m − 2] of length 2m − 1. By Fact 1, any rotation of x is a factor of x′.
2. The pattern x′ is partitioned in 4 fragments of length ⌊(2m − 1)/4⌋ and ⌈(2m − 1)/4⌉. By Lemma 1, at least one of the 4 fragments is a factor of any rotation of x.
3. Match the 4 fragments against the text t using an Aho-Corasick automaton [25]. Let L be a list of size Occ of tuples, where ⟨p_x, ℓ, p_t⟩ ∈ L is a 3-tuple such that 0 ≤ p_x < 2m − 1 is the position where the fragment occurs in x′, ℓ is the length of the corresponding fragment, and 0 ≤ p_t < n is the position where the fragment occurs in t.
4. Compute SA, iSA, LCP, and RMQ_LCP of T = x′ t. Compute SA, iSA, LCP, and RMQ_LCP of T_r = rev(t x′), that is the reverse string of t x′.
5. For each tuple ⟨p_x, ℓ, p_t⟩ ∈ L, we try to extend to the right via an lce query; in other words, we compute the length E_r of the longest common prefix of x′[p_x + ℓ . . 2m − 2] and t[p_t + ℓ . . n − 1], both being suffixes of T. Similarly, we try to extend to the left via computing E_l using lce queries on the suffixes of T_r.
6. For each E_l, E_r computed for tuple ⟨p_x, ℓ, p_t⟩ ∈ L, we report all the valid starting positions in t by first checking if the total length E_l + ℓ + E_r ≥ m; that is, the length of the full extension of the fragment is greater than or equal to m, matching at least one rotation of x. If that is the case, then we report the starting positions max{p_t − E_l, p_t + ℓ − m}, . . . , min{p_t + ℓ − m + E_r, p_t}.

Approximate circular string matching with k-mismatches via filtering
In this section, based on the ideas presented in algorithm ECSMF, we present algorithms ACSMF and ACSMF-Simple, two new fast average-case algorithms for approximate circular string matching with k-mismatches via filtering.

Algorithm ACSMF
The first four steps of algorithm ACSMF are essentially the same as in algorithm ECSMF. A small difference exists in Step 2, where the sufficient number of fragments in the case of approximate circular string matching with k-mismatches is used. The main difference is in Step 5, where algorithm ACSMF tries to extend k + 1 times to the right and k + 1 times to the left. Given a pattern x of length m, a text t of length n > m, and an integer threshold k < m, an outline of algorithm ACSMF for solving Problem 2 is as follows.
1. Construct the string x′ of length 2m − 1, as in algorithm ECSMF.
2. Partition x′ in 2k + 4 fragments of length ⌊(2m − 1)/(2k + 4)⌋ and ⌈(2m − 1)/(2k + 4)⌉ (cf. Lemma 2).
3. Match the 2k + 4 fragments against the text t using an Aho-Corasick automaton [25]. Let L be a list of size Occ of tuples, where ⟨p_x, ℓ, p_t⟩ ∈ L is a 3-tuple such that 0 ≤ p_x < 2m − 1 is the position where the fragment occurs in x′, ℓ is the length of the corresponding fragment, and 0 ≤ p_t < n is the position where the fragment occurs in t.
4. Compute SA, iSA, LCP, and RMQ_LCP of T = x′ t. Compute SA, iSA, LCP, and RMQ_LCP of T_r = rev(t x′), that is the reverse string of t x′.
5. For each tuple ⟨p_x, ℓ, p_t⟩ ∈ L, we try to extend k + 1 times to the right via lce queries; in other words, we compute the length E^k_r of the longest common prefix of x′[p_x + ℓ . . 2m − 2] and t[p_t + ℓ . . n − 1], both being suffixes of T, with k mismatches. Similarly, we try to extend to the left k + 1 times via computing E^k_l using lce queries on the suffixes of T_r.
6. For each tuple ⟨p_x, ℓ, p_t⟩ ∈ L we try to extend, we also maintain an array M of size 2m − 1, initialised with zeros, where we mark the position of the i-th left and right mismatch, 1 ≤ i ≤ k, by setting the corresponding position of M to 1.
7. For each E^k_l, E^k_r, M computed for tuple ⟨p_x, ℓ, p_t⟩ ∈ L, we report all the valid starting positions in t by first checking if the total length E^k_l + ℓ + E^k_r ≥ m; that is, the length of the full extension of the fragment is greater than or equal to m. If that is the case, then we count the total number of mismatches of the occurrences at starting positions max{p_t − E^k_l, p_t + ℓ − m}, . . . , min{p_t + ℓ − m + E^k_r, p_t}, by first summing up the mismatches μ_j for the leftmost starting position j. For each subsequent position j + 1, we subtract the value of the leftmost element of M computed for μ_j and add the value of the next element to compute μ_{j+1}. In case μ_j ≤ k, we report position j.

Since k < m, we can (pessimistically) replace k by m − 1. Then we have 2m(m + 1)n/σ^r ≤ cn. Solving for r, and using k ≤ (2m − 1)/(2r) − 2, gives the maximum value of k, that is k = O(m/log_σ m).

Algorithm ACSMF-Simple
Algorithm ACSMF-Simple is very similar to algorithm ACSMF. The only differences are:
• Algorithm ACSMF-Simple does not perform Step 4 of algorithm ACSMF;
• For each tuple ⟨p_x, ℓ, p_t⟩ ∈ L, Step 5 of algorithm ACSMF is performed without the use of the precomputed indexes. In other words, we compute E^k_r and E^k_l by simply performing letter comparisons and counting the number of mismatches that occurred. The extension stops right before the (k + 1)-th mismatch.

Fact 2. The expected number of letter comparisons required for each extension in algorithm ACSMF-Simple is less than 3.

Proof. Recall that on an alphabet of size σ, the probability that two random strings of length ℓ are equal is (1/σ)^ℓ. Thus, given two long strings, and setting r = 1/σ, there is probability r that the initial letters are equal, r² that the prefixes of length two are equal, and so on. Thus the expected number of positions to be matched before inequality occurs is S = r + 2r² + · · · + (n − 1)r^(n−1), for some n ≥ 2. Hall & Knight [26, p. 44] tell us that S = r(1 − n r^(n−1) + (n − 1)r^n)/(1 − r)², which as n → ∞ approaches r/(1 − r)² < 2 for all r. Thus S, the expected number of matching positions, is less than 2, and hence the expected number of letter comparisons required for each extension in algorithm ACSMF-Simple is less than 3.

In practical cases, algorithm ACSMF-Simple should be preferred over algorithm ACSMF as (i) it has lower memory requirements (see Theorem 3); and (ii) it avoids the construction of a series of data structures (see Section 3 in this regard).
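Fact 2 is easy to check empirically. The following sketch (ours, for illustration only) estimates the expected number of letter comparisons per extension for random DNA-like strings; for σ = 4, the expectation is roughly 1/(1 − 1/σ) ≈ 1.33 comparisons, well below the stated bound of 3.

```python
import random

def comparisons(u: str, v: str) -> int:
    """Letter comparisons performed until the first mismatch (inclusive)."""
    c = 0
    for a, b in zip(u, v):
        c += 1
        if a != b:
            break
    return c

ALPHABET, n, trials = "ACGT", 50, 100_000
total = 0
for _ in range(trials):
    u = "".join(random.choice(ALPHABET) for _ in range(n))
    v = "".join(random.choice(ALPHABET) for _ in range(n))
    total += comparisons(u, v)
print(total / trials)  # approximately 1.33 for sigma = 4
```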
Edit distance model
Algorithm ACSMF-Simple could be easily extended for approximate circular string matching under the edit distance model (for a definition, see [10]). Since each single-letter edit operation can change at most one of the 2k + 4 fragments of x′, any set of at most k edit operations leaves at least one of the fragments untouched. In other words, Lemma 2 holds under the edit distance model as well [27]. An area of length O(m) surrounding each potential occurrence found in the filtration phase (Steps 1-3 of algorithm ACSMF) is then searched using the standard dynamic-programming algorithm in time O(m²) [28] and space O(m) [29]. Since the expected number Occ of occurrences of the 2k + 4 fragments is O(kn/σ^ℓ), where ℓ denotes the fragment length, this verification does not dominate the average-case running time for moderate values of k.

Experimental results
We implemented algorithms ACSMF and ACSMF-Simple as library functions to perform approximate circular string matching with k-mismatches. The functions were implemented in the C programming language and developed under the GNU/Linux operating system. They take as input arguments the pattern x of length m, the text t of length n, and the integer threshold k < m; and then return the list of starting positions of the occurrences of the rotations of x in t with k-mismatches as output. The library implementation is distributed under the GNU General Public License (GPL), and it is available at http://www.inf.kcl.ac.uk/research/projects/asmf/, which is set up for maintaining the source code and the man-page documentation. The experiments were conducted on a Desktop PC using one core of an Intel i7 2600 CPU at 3.4 GHz under GNU/Linux.

Approximate circular string matching is a rather undeveloped area. To the best of our knowledge, there does not exist an optimal (average- or worst-case) algorithm for approximate circular string matching with k-mismatches. Therefore, keeping in mind that we wish to evaluate the efficiency of our algorithms in practical terms, we compared their performance to the respective performance of the C implementation (see endnote a) of the optimal average-case algorithm for multiple approximate string matching, presented in [17], for matching the r = m rotations of x. We denote this algorithm by FredNava. Tables 1, 2 and 3 illustrate elapsed-time and speed-up comparisons for various pattern sizes and moderate values of k, using a corpus of DNA data taken from the Pizza & Chili website [30]. As demonstrated by the experimental results, algorithm ACSMF-Simple is in all cases the fastest, with a speed-up improvement of more than three orders of magnitude over FredNava. ACSMF is always the second fastest, while ACSMF-Simple still retains a speed-up improvement of more than one order of magnitude over ACSMF. Another important observation, also suggested by Corollaries 1 and 2, is that the ACSMF-based algorithms are essentially independent of m for moderate values of k.

Conclusions
In this article, we presented new average-case algorithms for exact and approximate circular string matching. Algorithm ECSMF for exact circular string matching requires average-case time O(n); and algorithms ACSMF and ACSMF-Simple for approximate circular string matching with k-mismatches require time O(n) for moderate values of k, that is k = O(m/log_σ m). We showed how the same results can be easily obtained under the edit distance model. The presented algorithms were also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach.
For future work, we will explore the possibility of optimising our algorithms and the corresponding library implementation for the approximate case by using lossless filters for eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate occurrence, such as [31] for the Hamming distance model or [32] for the edit distance model. In addition, we will try to improve our algorithms for the approximate case in order to achieve average-case optimality.

Endnote a: Personal communication with author.
5,397
2014-01-01T00:00:00.000
[ "Computer Science" ]
LoRaWAN Base Station Improvement for Better Coverage and Capacity

Low Power Wide Area Network (LPWAN) technologies provide long range and low power consumption for the many battery-powered devices used in the Internet of Things (IoT). One of the most utilized LPWAN technologies is LoRaWAN (Long Range WAN), with over 700 million connections expected by the year 2023. LoRaWAN base stations need to ensure stable and energy-efficient communication without unnecessary repetitions, with sufficient range coverage and good capacity. To meet these requirements, a simple and efficient upgrade in the design of the LoRaWAN base station is proposed, based on using two or more concentrators. The development steps are outlined in this paper, and the evaluation of the enhanced base station is done with a series of measurements conducted in Zagreb, Croatia. Through these measurements we compared received messages and communication parameters on the novel and standard base stations. The results showed a significant increase in the probability of successful reception of messages on the novel base station, which corresponds to an increase of base station capacity and can be very beneficial for the energy consumption of most LoRaWAN end devices.

Introduction
In order to develop the Internet of Things (IoT), a network of devices that communicate with each other and other systems over the Internet, it is necessary to have communication technology that provides long range and low power consumption. Technologies in which high data rate is traded for wide coverage and energy efficiency are known as Low Power Wide Area Networks (LPWAN). According to [1], over 90% of all connected devices use one of the four most utilized LPWAN technologies: NB-IoT (Narrowband IoT), LoRaWAN (Long Range WAN), LTE-M (Long Term Evolution for Machines), and Sigfox. It is considered that LoRaWAN will, along with NB-IoT, connect 86% of all LPWAN devices by the year 2023 [2]. The key difference between the two is linked to the frequency spectrum they use. LoRaWAN operates in unlicensed spectrum and can provide lower prices in terms of devices and services. On the other hand, since the spectrum it uses is shared with other technologies, interference problems with other systems are possible. To reduce interference, LoRaWAN uses Chirp Spread Spectrum (CSS) [3] modulation, in which data is encoded with a frequency-modulated signal (chirp), which enhances long-range communication and robustness to interference. Another advantage of LoRaWAN over other LPWAN technologies is its big community, which can be seen through the number of published research works and experimental evaluations. For instance, in [4], results of field experiments in Paris with performance evaluation and network coverage are given. In [5], LoRaWAN testbeds were built for the comparison of indoor and outdoor measurements with the results of the LoRaSim simulation implemented and described in [6]. Further evaluations focused on different environments: authors in [7-13] conducted analyses and experiments for urban and suburban areas, while in [14,15] maritime and mountain areas were analyzed, respectively. In particular, in [7,9] urban scenarios were thoroughly tested, showing reasonable coverage with simple base stations and highlighting the height of the base stations and the topology as major factors affecting the deployment strategy. This conclusion was also confirmed in [10], whose authors included indoor coverage in their measurements.
In addition, in [13], urban, suburban and rural, stationary and mobile scenarios were considered, indicating that propagation conditions in the deployment scenario need to be carefully evaluated before actual implementation. As observed in these related works, network coverage and capacity in different terrains and for different configurations are generally acceptable; however, the network implementation should be done carefully to maximize the LoRaWAN potential. Additionally, as pointed out in [16,17], in which different real-life tests of the LoRaWAN network were conducted, on average there was around 30% packet loss, indicating that there is significant room for improvement. There have also been various research activities regarding coverage enhancement, but their focus was either on end node improvement [18,19] or on constructing multi-hop networks [20,21], while base station enhancements were generally not considered. The problem of packet loss and efficiency is not critical in situations where there are just a few sensors; however, it becomes an issue in more complex applications. For example, in applications where there are many sensors in a small area, there is a large probability of message collisions and consequently packet loss, which will result in the need for unnecessary message retransmissions. Also, it can be an issue in remote areas where battery efficiency is critical and frequent retransmissions could significantly reduce the lifetime of the node. In these scenarios we need to improve the reception of the base station, and we propose a simple and cost-effective upgrade of the LoRaWAN base station using additional concentrators. This enables more flexibility for the user and improved capacity, and consequently has a significant indirect impact on energy efficiency, since low power end nodes will potentially require fewer repetitions to send their messages, thus saving power. Also, by using a dual or multiple concentrator setup, a wide range of customizations for specific applications is possible. For example, it can be beneficial to create sectors with directional antennas and dedicated concentrators to separate and improve reception in certain directions. This simple enhancement of the base station requires few resources, but it can be a very valuable tool in extending and improving the range and capacity of the network alongside end node improvements, deployment scenario considerations and similar.

Within this paper, we present the idea and the development of a LoRaWAN system with an upgraded dual-concentrator base station. To evaluate the system and confirm the improvements, a series of measurements was taken in the city of Zagreb, Croatia, using a reference single-concentrator base station and the upgraded one. Both base stations were positioned in the same location, and the results confirmed the enhancement in the properties of the overall system. The paper is organized in the following way: in Section 2 a short overview of LoRaWAN is given, providing the basic terms needed to understand the problem this paper deals with. In Section 3 the novel design of the base station is presented, and in Section 4 the results of the measurements and validation are shown.

Overview of LoRaWAN
An overview of LoRaWAN technology is given in this section for completeness. LoRaWAN is the most utilized LPWAN technology that operates in unlicensed spectrum, with 151 network operators in 167 countries in 2021 according to [22].
The modulation technology used in the physical layer of LoRaWAN is patented [23] by Semtech Corporation, but to reach a wider community, it is licensed to other commercial companies which produce radio modules. To keep the network complexity and power consumption low, LoRaWAN uses a star topology where end devices communicate directly with base stations using the LoRaWAN protocol on different channels and with variable data rates. End devices do not connect to a specific base station but send their data to all nearby base stations, which forward it via the TCP/IP protocol to a network server. The network server is the main part of a LoRaWAN network, as it manages join requests and device addressing, data delivery to and from a user application, duplicate frame filtering, data rate adaptation and security functions. The last component of the architecture is an application server that communicates with the network server over TCP/IP, decodes the received data and initiates messages towards end devices. Before sending any data, the end device has to be activated using either the Over-The-Air Activation (OTAA) or the Activation By Personalization (ABP) method. The main difference between the two options is in generating the device identifier (DevAddr) and the session keys in the network: in ABP activation they are hardcoded in the device, while in OTAA they are assigned at the start of the connection (in the Join Request/Join Response phase). Session keys are used in the Advanced Encryption Standard (AES) [24] to provide secure end-to-end communication. More detailed research regarding security in LoRaWAN can be found in [25,26]. In order to retain low power consumption while providing bi-directional communication, LoRaWAN is based on a simple ALOHA protocol with three defined end device classes: A, B and C. Class A is the basic one and is designed for ultra low power end devices, enabling only two short receive windows for downlink transmission (from base station to end device) after a packet is sent. In class B, on top of class A, one more receive window is opened in a scheduled interval, while class C devices are able to receive all the time when they are not transmitting, providing the lowest downlink latency [27]. On the other hand, as shown in [28], class A devices require the lowest, while class C devices require the highest power consumption. Compared to LPWAN technologies that operate in licensed spectrums, LoRaWAN has a lower data rate and payload size. Specifically, LoRaWAN can send a maximum of 256 B of payload with a data rate of 27 kbps [27], while in NB-IoT the maximum payload size is 1600 B, which can be transmitted with a data rate of 250 kbps [29]. However, this is balanced by the fact that LoRaWAN devices are more energy efficient [30] and affordable [31]. In LoRaWAN, the bandwidth (BW) can be 125, 250 or 500 kHz, where a higher bandwidth corresponds to a higher data rate, but also to a lower sensitivity. Generally, all mentioned LPWAN technologies provide a 2-5 km range in urban and over 10 km in rural areas and, with transmission power limited to around 25 mW, low energy consumption with a device battery life of 10 years [32].

Spreading Factor
As mentioned, LoRaWAN uses CSS technology to cope with interference and to provide long range. CSS is a modulation method where frequency-modulated signals, also known as chirps, are used to encode the data. A chirp with increasing frequency is called an upchirp, and one with decreasing frequency a downchirp.
On the physical layer, one LoRa packet consists of:
• a preamble of upchirps,
• 2.25 downchirps for synchronization symbols, and
• "choppy" upchirps for the physical payload.

Each of the "choppy" upchirps starts at a frequency between f_min and f_max (BW = f_max − f_min), the minimum and maximum frequencies of CSS, which defines the input information. The number of bits in the input information and the chirp's length depend on the spreading factor (SF), which represents the number of bits per symbol (one symbol has SF bits) and the number of chips per symbol (there are 2^SF chips in one symbol) and can be any number between 7 and 12. The correlation between spreading factor and symbol duration is given in Figure 1. Since the chip duration is equal to 1/BW, the symbol duration with 2^SF chips is 2^SF/BW. Therefore, the symbol rate is BW/2^SF, which leads to the data rate

DR = SF · (BW/2^SF) · CR,

where CR is the code rate.

Figure 1. Correlation of LoRa spreading factor and symbol duration (T_symbol = 2^SF/BW).

Using a higher spreading factor means a longer symbol duration, which provides a more robust transmission but decreases the data rate; a lower spreading factor increases the data rate but decreases the range, and vice versa. For example, in conditions with constant bandwidth, when the distance between the end device and the base station is challenging, the use of spreading factor 12 is suggested to enable communication. In the opposite situation, when the distance between them is short, a higher data rate is possible using spreading factor 7. Adapting the spreading factor and bandwidth to the conditions in the communication channel is one of LoRaWAN's advantages. The network server adapts the parameters based on the value of the SNR (Signal to Noise Ratio) to provide the best data rate in the given situation. This feature is called Adaptive Data Rate (ADR). The spreading factor also has an impact on Time on Air (ToA), defined as the elapsed time for a LoRaWAN packet to travel between the end device and the base station. It is given by

ToA = (n_preamble + n_data) · T_symbol,

where n_preamble is the number of symbols in the preamble, n_data the number of symbols in the data and T_symbol the aforementioned symbol duration (2^SF/BW). Since LoRaWAN operates in unlicensed spectrum, its devices have a duty cycle limitation, which means that there is a maximum percentage of time during which a device can occupy a channel. The exact time a device has to wait after sending a message can be calculated for each subband using ToA:

T_wait = ToA/d − ToA,

where d is the subband's duty cycle. Duty cycles are usually regulated by governments and are dependent on the subband [27].
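These two relations are enough to budget transmission timing. The following sketch is our illustration of the equations above; the 12.25-symbol preamble and the payload symbol count n_data = 30 are assumed example values, not measurements from this study.

```python
# Sketch of the ToA and duty-cycle relations above; n_preamble = 12.25
# and n_data = 30 are assumed example values.
def time_on_air(sf: int, bw_hz: float, n_preamble: float, n_data: float) -> float:
    t_symbol = (2 ** sf) / bw_hz          # symbol duration, seconds
    return (n_preamble + n_data) * t_symbol

def wait_time(toa: float, duty_cycle: float = 0.01) -> float:
    # T_wait = ToA / d - ToA for a subband with duty cycle d (1% here).
    return toa / duty_cycle - toa

toa = time_on_air(sf=12, bw_hz=125_000.0, n_preamble=12.25, n_data=30)
print(f"ToA = {toa:.3f} s, wait = {wait_time(toa):.1f} s")  # ~1.384 s, ~137.1 s
```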
Standard LoRaWAN Base Station
The implementation of the LoRaWAN base station is based on [33], where a Raspberry Pi is used as a microcomputer and the certified LoRaWAN board iC880A [34] as a concentrator. The Raspberry Pi 4B is a high-performance microcomputer with a large support community, while the iC880A concentrator is able to receive messages sent with different spreading factors on up to 8 channels in parallel. In the model, an omnidirectional antenna [35] with a frequency range between 824 and 896 MHz (for the EU868 region) is connected to the concentrator, and the concentrator to the microcomputer using the Serial Peripheral Interface (SPI). The developed base station is registered to The Things Network (TTN) server [36]. The radiation pattern of the omnidirectional antenna used in the standard single-antenna base station is given in Figure 2a.

Proposed Improvement for LoRaWAN Base Station
To improve reception and capacity and to provide added flexibility in the spatial coverage, an upgrade of the existing base station design is proposed. This is achieved using an additional (second) concentrator, and a prototype is built, also using a Raspberry Pi microcomputer. Both concentrators are connected to the Raspberry Pi via the predefined SPI connections SPI0 and SPI1; the Raspberry Pi and the concentrators used in the measurements are shown in Figure 3. The pins on the concentrators and the Raspberry Pi used in the SPI connections are defined by the manufacturer. Along with the SPI connections, each concentrator has its own Reset signal connection, which is used to start the initialization process of that concentrator. The Reset pin on the Raspberry Pi can be any GPIO (General-Purpose Input/Output) pin. The detailed pinout of the system is given in Table 1. The implemented program initializes each concentrator, connects them as two gateways to TTN and runs a packet forwarder for each of them. Defining two gateways on TTN is important so that communication on each concentrator can be tracked and analyzed separately. As mentioned, this paper focuses on the base station model with two concentrators since it is the simplest case. However, the same principle can be used for base stations with more than two concentrators to provide more flexibility. Apart from improving the capacity, the two concentrators and primarily their antennas can be positioned in a way that best suits a certain application. The example shown in Figure 4 positions the antennas next to a metallic edge (emulating a large pillar or the edge of a building) to demonstrate one possibility. This is beneficial in dense sensor environments (the Smart City case), where messages can often be lost due to congestion. Two SPI connections enable two sets of concentrator and antenna to work in parallel without interrupting each other. The challenge of such a setup is that messages received by both concentrators are sent towards the Network Server, and that causes duplicates. This can be solved by adding a filtering function before forwarding packets to the network, as in the sketch below. However, since the Network Server already filters duplicates for each application, there is no need for that.
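A minimal sketch of such a gateway-side duplicate filter, had it been needed; the packet field names are hypothetical, and, as noted above, TTN's Network Server already performs this deduplication per application.

```python
# Hypothetical gateway-side duplicate filter keyed on device address and
# uplink frame counter; TTN's Network Server makes this unnecessary.
seen: set[tuple[str, int]] = set()

def is_duplicate(packet: dict) -> bool:
    key = (packet["dev_addr"], packet["f_cnt"])  # assumed field names
    if key in seen:
        return True
    seen.add(key)
    return False

# Example: the same uplink arriving via both concentrators.
uplink = {"dev_addr": "26011F2A", "f_cnt": 42}
print(is_duplicate(uplink), is_duplicate(uplink))  # False True
```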
The second concentrator/antenna set enhances network capacity and network coverage in the wanted direction when combined with a metal surface that provides a sector-shaped radiation pattern. The metal surface behind each antenna was 30 cm high and 50 cm wide. The antenna was set 8.6 cm away from the metal surface, which is approximately λ/4 in the LoRaWAN frequency range (868 MHz), to ensure that the antenna remains well matched. The obtained radiation pattern for this setup for one of the antennas is shown in Figure 2b.

Measurement Setup
The aim of the measurements was to evaluate the proposed dual setup in a real-world LoRaWAN communication scenario. Two LoRaWAN base stations were implemented: one using the existing code [33] based on a Raspberry Pi (case A in Figure 5) with one concentrator and antenna, and the other using customized code running on a second Raspberry Pi with two sets of concentrators and antennas connected via two predefined SPI connections according to Figure 4. The base stations were set on top of the 60 m high building of the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia. There were two rounds of measurements. The first was conducted without the metal surface behind the antennas of the novel base station (case C), and for the second one the metal surface was added (case D). Since the base stations are registered to TTN, all messages sent from devices in the vicinity of the building are received. In both cases the base stations worked for five hours.

In order to test the impact of the metal surface on the received signal strength indicator (RSSI) for each spreading factor, we developed a dedicated LoRaWAN end device. A Pycom board [37] was programmed to send LoRaWAN messages with different spreading factors using the LoRa radio module on the LoPy board [38]. The antenna connected to the board was the same as those used on the base stations [35]. The end device sent messages from six locations in the dense urban area of Zagreb, Croatia, approximately 1-2 km away from the building with the base stations.

Prototype Validation
The analysis of the messages obtained from the measurements showed that even without the metal surface behind the antennas, the novel base station with two concentrators received more messages than the single-concentrator base station. Specifically, in the same period of time (5 h), the base station with two antennas without the metal surface (case C) received 98.7% of all messages that the base stations received combined. On the other hand, the base station with one antenna received 76% of those messages, which means that the second set of concentrator and antenna increased the probability of successful transmission. This is a very valuable improvement, since it is important for the energy efficiency of the entire network, as LPWAN networks in most cases work with battery-powered devices. When the metal surface was added (case D), the number of messages received on both base stations increased by 25% (from 2235 to 2797 messages). As expected, this increase shows that adding the metal surface improved base station directivity and thus also the range. The percentage of messages received on the novel base station remained the same as in the first case, but that percentage decreased for the standard single-antenna base station (67%), which indicates that the novel base station received more messages with the metal surface set behind the antennas. Table 2 shows the percentage of received messages on each base station. For the second part of the novel base station model validation, we used the aforementioned LoRaWAN end node to test the improvement when the metal surface is set behind the antenna. For comparison, we set the metal surface behind the antenna of one base station (case B) while the other was the standard one (case A). All components used in the development of both base stations were the same. The end node was programmed to send six messages with different spreading factors. The bandwidth was set to 125 kHz and the payload size to 3 B, so that the communication parameters depend only on the spreading factor and the distance from the base stations. The end device sent 12 messages (two series of six messages, for SF7 to SF12) from each of 6 locations set approximately 1-2 km away from the building with the base stations in the dense urban area of Zagreb, Croatia. Since the focus was to evaluate the impact of the metal surface for sector radiation purposes, all end device locations were set in the same sector, as shown in Figure 6. The results showed that the base stations received 86% of the messages (31 of 36) sent by the end device. For 80% of them (25 of 31), the base station with the metal surface behind the antenna received them with a higher RSSI than the standard one. In some cases, the difference between the RSSIs was 9 dB, which could be significant for extended range. Regarding the spreading factor, as shown in Figure 7, the mean RSSI is higher for each spreading factor on the base station with the directed antenna (with the metal surface behind the antenna).
The biggest difference can be seen for SF8, where the increase of the mean RSSI is around 3 dB, while the smallest one is for messages sent with SF7. In order to evaluate the RSSI for different spreading factors while keeping the distance between the base stations and the end device relatively short (urban environment), ADR had to be disabled (otherwise the ADR algorithm would always use SF7 or SF8) and the spreading factor set manually for each LoRaWAN message. According to Figure 8, the number of received messages by spreading factor is similar on both base stations. The only difference is that the novel base station received one more message with SF8 and the standard one received one more with SF10. All 12 messages sent with SF12 were successfully transmitted, while messages with a lower spreading factor, for instance SF7, were received in 8 of 12 cases by both of them. Since the number of messages sent from each location with the test end device is relatively low, this kind of behavior, where the novel base station receives a similar number of messages by spreading factor as the standard one, is expected. It is important to emphasize that prior to any measurements all the components used were tested to determine any differences between them or potential problems. In particular, all passive components, antennas, connectors and cables, were tested, since we had to choose the ones with the same losses in order not to influence the RSSI measurements. Also, the active components, concentrators and Raspberry Pis, were switched between configurations in multiple iterations of the measurements to eliminate the possibility of technical problems and any differences in signal reception. Since the measurements were done in the urban environment, it was also important to perform the repeated measurements from exactly the same locations and positions, since even small deviations in position could result in significant reception differences due to propagation effects.

Conclusions
LoRaWAN networks have great potential for the development of new applications and systems. To improve the capacity of standard base stations, add flexibility, and potentially extend the lifetime of end nodes in the network, we designed and demonstrated an upgraded low-cost base station. The upgrade is based on adding a second set of concentrator and antenna, which is practically demonstrated using a low-cost base station design based on the microcomputer Raspberry Pi, iC880A concentrators and additional software modifications. This upgrade also allows more flexibility in the positioning of the antennas, leading to a potentially extended range in certain directions or similar application scenarios. Increased capacity and potentially improved coverage or range can both be very beneficial for the operation of the sensor nodes, since these improvements lead to more successful reception and fewer message repetitions. To validate the improvements, two base stations were developed: a standard one based on the aforementioned design, and a novel dual-concentrator one. In the first evaluation, a comparison of the number of messages received on them was performed for the real-world communication case and showed that the dual-concentrator setup increased the probability of successful reception on the novel base station. This improvement can be very beneficial for the energy consumption of most LoRaWAN end devices or sensor nodes, since they are usually battery-powered.
Additionally, when the metal surface was added behind the antennas of the novel base station, the number of received messages increased by 25%, which indicated that the network coverage can also be easily extended. The comparison between the communication parameters of the received messages also confirmed the improvement of the proposed base station when used in combination with a metal reflector. Communication between the test end node and the base stations in the dense urban area showed that messages were received with a higher signal strength on the novel base station than on the standard one. The analysis showed that the mean RSSI for messages sent from different locations in the 1-2 km range is higher in all spreading factor cases on the novel base station. The presented idea of a low-cost base station upgrade and its validation demonstrated its feasibility, and it is a proof of concept for the improvement of base station coverage and capacity in LoRaWAN. The setup will be further extended to multiple concentrators, potentially leading to novel applications, and additional evaluations with multiple end nodes will be conducted to reveal the actual savings in terms of battery power.
5,661.6
2021-12-30T00:00:00.000
[ "Computer Science" ]
The effect of cooperation between universities and stakeholders: Evidence from Ukraine

This paper presents a scheme for implementing the protocol for diagnosing stakeholders of the educational space, which aims to meet the needs and interests of both contractors and the university, and provides an assessment of efficiency in achieving the goals of the university. A ranking of the social status of the stakeholders in the association is obtained, which demonstrates the importance of each stakeholder in the collaboration. The relationships between the university and the stakeholders, the degree of communication of the stakeholders with each other, the attitude of each stakeholder towards the group of stakeholders as a whole and the total activity of group interaction are determined based on the calculation of sociometric indicators. Finally, the effectiveness of the level of involvement of stakeholders in the activities of the university is also demonstrated.

Introduction
Collaboration is a key factor in universities' interactions with various counterparties. The level of effectiveness of both the university and the stakeholder depends on their cooperation. That is why it is necessary to diagnose the effectiveness of collaboration between the university and the stakeholder and between the stakeholders themselves. Therefore, exploring the priority of collaboration with a particular stakeholder is an urgent scientific task, the accomplishment of which will allow universities to operate more efficiently and purposefully.

Literature review
The process of university activity is accompanied by cooperation with various counterparties. The level of effectiveness of both university and stakeholder activities depends on their cooperation. That is why it is necessary to diagnose the environment and its representatives in order to counteract the negative impact and enhance the positive effect of the interaction. Studying the priority of cooperation with a particular stakeholder is an urgent scientific task, the solution of which will allow all participants of the educational process to work more qualitatively and effectively (Kovalska, 2011). Schneider-Störmann et al. (2020) presented two studies of the industrial impact on student education in Germany and France as a benchmark for a new approach to academic cooperation between universities in Finland. They proved that stakeholders need knowledge and commercial competencies to do their jobs well. Szücs (2018) paid particular attention to partnerships between industries and universities and found that the benefits of working with universities were enhanced by their academic quality. Ankrah and AL-Tabbaa (2015) argue that collaboration between universities and industry is increasingly seen as a means of enhancing innovation through knowledge sharing. Acworth (2008) suggested that universities cooperate with their stakeholders, but that feedback should always be present during this interaction. Plewa et al. (2013) argued that communication, understanding, trust, and people are universal drivers, but managers need to consider changes in the nature of these factors in order to ensure successful communication. Galan-Muros and Davey (2019) discussed the bilateral relationship between universities and business and argued that this collaboration is currently fragmentary. Leonchuk and Gray (2019) and Ievdokymov et al.
(2020) explored the understanding of outcomes and results of science, technology and innovation through human capital, using a rare quasi-experimental method in two ways: traditional and non-center learning. Belyaeva and Belyaev (2019) conducted a study of the behavior of stakeholders at universities in Russia and China and identified a number of factors that influence the interaction between Russian and Chinese student structures: the geopolitical situation, the cost and quality of education, the adaptation of courses for international students, the comfort of the university's learning environment, the complexity of learning the language of the host country, and the prestige of education. When it comes to Malaysian universities (Tajuddin et al., 2013), much emphasis is placed on academic advising and the formation of the student's personality. Asian universities also attach great importance to academic counseling, a valuable academic service and an important component of higher education. This service is very important for student satisfaction, retention, recruitment and success (Van Nguyen et al., 2013). Immerstein et al. (2019) argued that in order to bring higher education into the future, it is important to develop models and methods for preparing students for working life. Zhao et al. (2019) discuss contemporary problems of international education in China. In the next decade, China is expected to become the most significant area of education for foreigners. As information technology is a leading industry in China, IT-related studies are sure to attract a large number of international students. Wen et al. (2019) examined the influx of international students into China in recent years and the relevant internationalization strategies in the higher education sector, and found that the major problems for international students were poor command of English, poor student-teacher interaction on campus, and difficulties in social and cultural adaptation. Brown et al. (2018) compared the academic virtue of national and international occupational therapy students and identified possible predictors of student involvement in dishonest academic behavior. Aldrich and Grajo (2017) argued that to achieve effective collaboration between students and universities, there must be a high level of critical student awareness. Kinash et al. (2019) proposed to establish university-stakeholder collaboration through an education and production cluster of public-private partnership. Andrusiv et al. (2020) argued that it is possible to improve the current situation by creating favorable conditions for the generation and implementation of business ideas through the integration of science and production. With regard to Ukrainian universities, work has only now begun on establishing cooperation between the universities and the stakeholders (students) of the educational process. The aim of the article is to investigate the interaction between stakeholders of the educational space and universities of Ukraine.

Results
Implementation of the educational space Stakeholder Diagnostics Protocol, which is aimed at meeting the needs and interests of both external contractors and the university itself, will ensure effectiveness in achieving the goals of the university. However, the needs of external contractors and the university have the potential to change, so proper monitoring and forecasting of the effectiveness of stakeholder collaboration with universities is an integral component.
Express evaluation, its analysis and the implementation of cooperation measures should be systemic. On the basis of the conducted research it is proposed to establish a survey frequency of 6 months (this period is empirically confirmed in practice as optimal), unless circumstances require conducting it earlier. Thus, the protocol for diagnosing stakeholders in the educational space will function continuously and cyclically (Fig. 1). Each successive cycle is an activity that replicates the previous one, but has new content to identify new needs of external contractors and the university and to develop new measures to meet them. It should also be noted that such a system of diagnosing stakeholders in the educational space should be accompanied by the achievement of the goals of the university, that is, it should be purposeful. Achieving the goals and the cooperation of stakeholders with universities are two essential components of the protocol for diagnosing educational stakeholders. Thus, if we continue to repeat (every six months) the diagnosis of external contractors and the university, we will be able to establish their effective cooperation related to the satisfaction of their changing interests, which stimulates the continuous interest of managers in achieving high results. It is advisable to begin the diagnostics of the educational space with an assessment of the level of involvement of stakeholders in the educational space; for this purpose we propose a methodology for the self-diagnosis of stakeholders and the university, which is presented in Table 1. Its items include, for example:
11. Are the stakeholders informed about the implementation of the university's social responsibility?
12. Is honest information always available at the stakeholder's request?
13. Is there a mechanism for applying to the university to express disapproval of a project?
14. Are the deadlines for reporting to stakeholders settled?
15. Is the involved stakeholder monitored and evaluated?
(* adapted by the authors)

The above method is easy to use: when evaluating each university stakeholder we put a "+" if the answer is positive and a "-" if the answer is negative, which allows us to quickly recognize which components of cooperation are performed at a high level, and where it is necessary to make adjustments. Stakeholder involvement in the educational space is graded as follows: 16-11 points: a stakeholder who considers him/herself involved in educational space activities to a great extent; 10-5 points: a stakeholder who considers him/herself involved in the educational space at a medium level; 4-0 points: a stakeholder who considers him/herself least involved in the educational space. The self-diagnosis methodology of the level of stakeholder involvement in the activities of the university also allows us to identify weaknesses in cooperation with stakeholders, as any cooperation should be effective for both parties. In order to assess the effectiveness of the stakeholders, it is proposed to set out a set of conditions. Such a set of conditions is proposed in the form of an express assessment (Table 2).

Table 2. Express assessment of the effectiveness of stakeholder collaboration with the university (each question is answered "+" or "-" for every stakeholder 1, 2, ..., n).
Obligatory conditions:
- Is the expected collaboration efficiency greater than zero?
- Is the cost per unit of university collaboration lower than the planned cost per unit of stakeholder collaboration?
- Does the collaborating stakeholder provide sufficient demand for services to shape the effectiveness of the university?
- At what stage of development is the cooperation, and is there enough time to establish cooperation before its termination?
Desirable conditions:
- The expected performance from an activity with the stakeholder is greater than the incremental transaction costs of setting up the collaboration.
- The rate of return on investment of the university is higher than the rate of return of the stakeholder.
- The cost of interaction with institutional agents is lower than in the domestic education market.
(* adapted by the authors based on Kovalska, 2011)

There are two sets of conditions in the express assessment of a university's stakeholder performance: compulsory and desirable. Each condition in these two groups is evaluated linearly on a dichotomous scale: "+" or "-". The express assessment reflects the fundamental profitability of the local market for the university in the event of its entry into it, or the absence of such profitability. It is advisable to use an expert team to evaluate the effectiveness of stakeholder collaboration with the university, which will allow for the most objective express assessment. Let us outline the characteristics that an expert must possess to perform his or her duties effectively. We use the method of building an expert profile; the necessary characteristics are given in Table 3 (Popadynets, 2018).
There are two sets of conditions in the express assessment of a university's stakeholder performance: compulsory and desirable. Each condition in these two groups is evaluated on a dichotomous scale ("+" or "-"). The express assessment reflects the fundamental profitability of the local market for the university in the event of its entry into it, or the absence of such profitability. It is advisable to use an expert team to evaluate the effectiveness of the stakeholder collaboration with the university, which will allow for the most objective express evaluation. Let us outline the characteristics that an expert must possess to perform his or her duties effectively. We use the method of building an expert profile; the necessary characteristics are given in Table 3 (Popadynets, 2018). We detail the components of the expert's potential:
- Orientation vectors: attitude towards other people as members of the team, attitude towards work and its results, attitude towards oneself. Accordingly, it is possible to single out the following foci: the focus on communication and interaction (C), work orientation and its effectiveness (D), and the focus on recognition of one's achievements (I);
- The level of mental development: the level of creative, analytical and cognitive capabilities of the manager, embodied in the ability to freely use knowledge, skills and experience and to acquire new knowledge and skills. To characterize it, foreign and domestic practice uses the intelligence quotient;
- Creativity: the use of the creative capabilities of the manager in making unusual and unexpected decisions, commitment to new ideas, and breaking established stereotypes, which point to the ability to quickly find a way out and make decisions in unusual situations;
- The level of aggressiveness: identified as increased activity caused not by external factors but by the desire to dominate and to resolve disputes by coercive methods;
- Level of subjective control (locus of control): a way of concentrating attention and mobilizing the experience of action in various situations of life, which is suitable for use in management activities. There are external and internal types of localization of event control;
- Communicability: the tendency and ability to communicate with other people; kindness;
- Assertiveness (confidence in oneself): shows the ability to formulate and express desires, tasks, goals and requirements, and to organize the process of their fulfilment;
- Self-esteem: the manager's assessment of his or her own abilities, actions, and role in the team and society;
- Extroversion and introversion: characterize the type of temperament shaped by the innate features of the individual's nervous system, which influences the content of managers' incentives and motivators. Extroverts are characterized by sociability, active participation in collective activities, democratic views, riskiness and impulsiveness in decision-making, and a tendency to act on first impulse.
- Introverts do not actively succumb to stimulation, because in decisions and actions they rely on their own vision and experience, so they are balanced and calm, restrained, closed, and focused on their own problems;
- Anxiety and self-confidence: characterized by a level of anxiety that reflects a manager's tendency to respond emotionally to threats and crisis manifestations of a different nature, and the possibility and need for psychocorrection to relieve tension or for relaxation;
- Psychological status: determined by the place of the manager in the team, which determines his functions, rights, privileges and responsibilities;
- Style of communication and management: characterized by a relatively stable system of tools, methods and techniques of communication between the manager and the team, designed to perform management functions under certain conditions of operation of the enterprise or its structural unit. The main leadership styles include authoritarian, democratic and liberal;
- Organizational skills: shown in the ability to influence people, quickly find the optimal way out of difficult situations, initiative, and responsibility for decisions and actions.
Next, we conduct sociometric studies, because sociometry enables us to see the structure of relations between stakeholders, to make assumptions about the stakeholder-leader, and to gauge the degree of self-discipline of stakeholders. We group the obtained data from socio-matrices and use the results to calculate the following indicators: the individual sociometric index, the individual expansiveness index, the group expansiveness index, and the stakeholder group reciprocity index (Popadynets, 2019). Individual index of the stakeholder's sociometric status:
C_i = (R_i⁺ + R_i⁻) / (N − 1),
where R_i⁺ and R_i⁻ are the numbers of positive and negative ratings (votes) received, respectively, and N is the number of stakeholders. Individual expansiveness index of stakeholder E_j:
E_j = (R_j⁺ + R_j⁻) / (N − 1),
where R_j⁺ and R_j⁻ are the numbers of positive and negative evaluations made, respectively. The index E_j characterizes the degree of stakeholder communication and reflects the attitude of each stakeholder to cooperation as a whole. In addition to individual sociometric indices, group indices are also calculated. Stakeholder group expansiveness index E:
E = (ΣR⁺ + ΣR⁻) / N,
where ΣR⁺ and ΣR⁻ are the total numbers of positive and negative evaluations, respectively. The index E characterizes the overall activity of stakeholders and expresses its dynamics. The closer it gets to unity, the more intense the social activity of the stakeholders. Index of group reciprocity of stakeholders C:
C = A⁺ / B⁺,
where A⁺ is the number of mutual positive choices and B⁺ is the total number of positive choices made. The index C expresses the relationship of stakeholders, their cohesion, and close communication. Stakeholder group integration index I:
I = 1 / N₀,
where N₀ is the number of stakeholders who have not received or made any assessment. The next step is to identify the level of involvement of stakeholders in the educational space. Twelve stakeholders were selected for the study. To simplify the processing of socio-matrix data and for privacy reasons the stakeholders are coded: a certain letter corresponds to each of them (Table 4). Table 4. The level of involvement of stakeholders in the educational space. The assessment of the level of involvement of stakeholders in the educational space shows the following results: B, C, E, G, K and L are stakeholders who consider themselves involved in educational activities at a high level; A, D and J are stakeholders who consider themselves involved at a medium level; F, H and I are stakeholders who consider themselves involved at a low level. Next, we form expert groups.
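For illustration, the sociometric indices above can be computed from a single evaluation matrix. This is a minimal sketch under our own assumptions: the +1/-1/0 matrix encoding and the function name are hypothetical, and the index formulas follow the reconstruction above rather than an implementation by the authors.

```python
import numpy as np

def sociometric_indices(M):
    """Compute the sociometric indices described in the text.

    M is an N x N matrix of evaluations: M[i, j] = +1 if stakeholder i
    rated stakeholder j positively, -1 if negatively, 0 if no rating
    (the diagonal is 0, since self-selection is excluded).
    """
    M = np.asarray(M)
    N = M.shape[0]
    pos_received = (M > 0).sum(axis=0)   # R_i^+ per stakeholder
    neg_received = (M < 0).sum(axis=0)   # R_i^-
    pos_made = (M > 0).sum(axis=1)       # evaluations given out
    neg_made = (M < 0).sum(axis=1)

    C_i = (pos_received + neg_received) / (N - 1)   # sociometric status
    E_j = (pos_made + neg_made) / (N - 1)           # individual expansiveness
    E = (pos_made.sum() + neg_made.sum()) / N       # group expansiveness

    mutual = ((M > 0) & (M.T > 0)).sum() // 2       # mutual positive pairs
    C = mutual / max((M > 0).sum(), 1)              # group reciprocity
    isolated = ((pos_received + neg_received == 0)
                & (pos_made + neg_made == 0)).sum()
    I = 1 / isolated if isolated else float("inf")  # group integration
    return C_i, E_j, E, C, I
```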
It is advisable to use an expert team to evaluate the effectiveness of the stakeholder collaboration with the university, which will allow for the most objective express assessment. The experts were the leading specialists of the administrative staff of the university, namely:
- expert 1: leading specialist in scientific work;
- expert 2: leading specialist in research;
- expert 3: leading specialist in scientific and pedagogical work;
- expert 4: leading trade union specialist;
- expert 5: leading specialist in socio-economic development.
For informational support and construction of the expert profile we recommend using the data grouped in Table 5. We visualize the results in Fig. 2. The main indicators of a high level of the expert's potential include organizational and communicative abilities, a largely non-conflictual disposition, an average level of intelligence, leadership, and vector orientation, together with a low level of creativity and aggressiveness. Let us conduct an express assessment of the effectiveness of stakeholder collaboration with the university (Table 6). Stakeholders C, E, G, J, K and L consider collaboration in the educational space effective, while the rest of the stakeholders hold that cooperation in the educational space is of moderate effectiveness to them. The positive fact is that no stakeholder has diagnosed the collaboration as ineffective. Next, we conduct a sociometric study of the interaction of stakeholders in the educational space. The stakeholders were asked questions to study their interaction in collaboration: (1) Choose three stakeholders you would like to work with. (2) Choose three stakeholders you would not like to work with. The socio-matrix is based on the first criterion: choose three stakeholders you would like to work with. Table 7 shows the data. Let us analyze Table 7: an "X" indicates that the stakeholder has not been selected; on the diagonal it denotes self-selection, that is, the stakeholder cannot cooperate with him/herself. Moreover, a horizontal row of "X" means that the stakeholder refused to participate in the poll. If a stakeholder is ranked first on the list of another stakeholder, he/she receives +3 points, second +2 points, and third +1 point. If two stakeholders have chosen each other to cooperate, a positive mutual choice is recorded and circled. Next, we sum the number of choices horizontally and vertically, and vertically we also determine the point equivalent. That is, the received points are summed, but with reversed values: +3 counts as 1 point, +2 as 2 points, and +1 as 3 points. The last step is to determine the positive social status (social status +): we sum the 'number of choices' row and the sum of points (e.g., 1 + 2, 3 + 5, 7 + 14). The following socio-matrix is made by the second criterion: choose three stakeholders you would not like to work with. Table 8 shows the data. We compile the table in the same way as Table 7, only with negative selection, that is, we identify the stakeholders who are not desirable. Let us combine the results obtained in the positive and negative selection socio-matrices into Table 9. As the index C_i takes into account the attitude of stakeholders to each other and characterizes the value of their prestige in different situations, the best candidates for cooperation are three stakeholders: C and L (C_i = 0.72) and G (C_i = 0.55), because their indices approach 1. They are highly respected by other stakeholders.
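The scoring of the positive-selection socio-matrix (choices worth +3/+2/+1, reversed point values, and the social status as choices received plus points) can be sketched as follows. The data structure and names are hypothetical; only the scoring rules are taken from the text.

```python
def social_status(rankings, stakeholders):
    """Score a positive-selection socio-matrix as described in the text.

    `rankings` maps each voter to an ordered list of up to three chosen
    stakeholders, e.g. {"A": ["L", "C", "G"]}. First choice earns +3,
    second +2, third +1; for the point equivalent the values are
    reversed (+3 -> 1 point, +2 -> 2, +1 -> 3), and the social status
    is the number of choices received plus that point sum.
    """
    CHOICE_VALUES = [3, 2, 1]          # ranks as recorded in the matrix
    REVERSED_POINTS = {3: 1, 2: 2, 1: 3}
    received = {s: [] for s in stakeholders}
    for voter, chosen in rankings.items():
        for rank, target in enumerate(chosen[:3]):
            if target != voter:        # no self-selection (diagonal "X")
                received[target].append(CHOICE_VALUES[rank])
    return {s: len(v) + sum(REVERSED_POINTS[x] for x in v)
            for s, v in received.items()}

# Example with four stakeholders; "B" abstains from ranking anyone.
print(social_status({"A": ["L", "C"], "C": ["L"]}, ["A", "B", "C", "L"]))
# -> {'A': 0, 'B': 0, 'C': 3, 'L': 4}
```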
The index E_j characterizes the degree of communication of stakeholders with each other and reflects the relationship of each stakeholder to the group of stakeholders as a whole. The closer it is to 1 (or equal to 1), the better the attitude of the stakeholder to the group of stakeholders as a whole. In this case, there is only one stakeholder whose individual expansiveness index is closest to 1: stakeholder L (E_j = 0.55). In addition to individual sociometric indices, group indices are also calculated. The closer the group expansiveness index gets to 1, the more intense the group's activity. The calculations show that E = 0.34; therefore, the interaction activity in the group is average. Group reciprocity index C: the calculation shows that the stakeholders selected for the study are not in close communication with each other and are not cohesive, as the index C is significantly less than 1. Group integration index I: this stakeholder association integrates its members into an integral whole, although the stakeholder cohesion is not great. Also, based on the known individual sociometric status indices, the status of each stakeholder in the cooperative association is evaluated and analyzed. According to the results of these indices, the "leader-stars" are the stakeholders denoted by the letters L and C. Let us determine how many microgroups exist in this association of stakeholders. Microgroups were identified by positive mutual selection. We list the positive horizontal and vertical choices in a column and, with the help of crosses, indicate who selected whom horizontally and vertically. For simplicity of construction and for anonymity, we use the codes from the socio-matrices in Tables 7 and 8, where the figure is the order number of the stakeholder and the letter stands for the stakeholder's code. There are two microgroups in the stakeholder association, as can be seen in Fig. 3 (Microgroups of the analyzed stakeholders; formed by the authors based on their own research). The first microgroup is a dyad consisting of two stakeholders, B and E; the second microgroup consists of five stakeholders: B, G, J, L and K. However, in the stakeholder association there is also such a negative factor as the "ignored": stakeholders who have received only negative reviews. These are stakeholders F, I and H. F and I received negative ratings because they refused to participate in the sociological survey; they always refuse to participate in the public life of the stakeholder association. Most stakeholders would prefer not to cooperate with them, but they have a high level of social investment and are desirable from a professional point of view. H received only one negative rating and no positive ones, and can therefore be considered isolated. This stakeholder is undesirable for the whole stakeholder association: he tries to please every stakeholder and wearies them with his tedium, so they try to avoid him. But he also does his job well, and is therefore a desirable collaborator at the professional level. The stakeholder association is determined to put up with the discomfort rather than lose a qualified stakeholder. Table 9 summarizes the social status ranking of the stakeholder association: L; C; G; E; K; B; D; J; A; H; I; F. Let us analyze the social status of the stakeholder association. The highest score was received by the stakeholder under the letter L (21 points). As this stakeholder is not the leader of the stakeholder association, he is informal leader № 1.
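Microgroup identification by positive mutual selection, as described above, amounts to finding the connected components of the mutual-choice graph. The sketch below assumes a simple list-of-pairs encoding and uses union-find; note that the microgroups reported in the text overlap in stakeholder B, so a strict connected-components view would merge them, and the example therefore uses generic labels.

```python
def microgroups(mutual_pairs):
    """Group stakeholders into microgroups via mutual positive choices.

    `mutual_pairs` is a list of (a, b) tuples: pairs of stakeholders
    who chose each other in the positive-selection socio-matrix.
    Microgroups are returned as the connected components of the
    resulting undirected graph, found with a simple union-find.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in mutual_pairs:
        parent[find(a)] = find(b)          # union the two components

    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

# A dyad plus a three-member microgroup (generic labels):
print(microgroups([("A", "B"), ("C", "D"), ("D", "E")]))
# -> [{'A', 'B'}, {'C', 'D', 'E'}]
```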
Second in the ranking is the stakeholder under the letter C (19 points). This stakeholder is not the leader of the stakeholder association, and since informal leader № 1 already exists, he is informal leader № 2. Third in the ranking is the stakeholder under the letter G (12 points). This stakeholder is the acting leader of the stakeholder association. Conclusions Thus, the system of diagnosing stakeholders in the educational space must be accompanied by the achievement of the goals of the university, that is, it must be purposeful. Achieving the goals and the cooperation of stakeholders with universities are two essential components of the protocol for diagnosing educational stakeholders. The relationships of stakeholders with each other, the degree of communication of stakeholders with each other, the attitude of each stakeholder to the group of stakeholders as a whole, and the total activity of group interaction were established on the basis of the calculated sociometric indicators. The scheme of implementation of the protocol for diagnosing stakeholders of the educational space, which is aimed at meeting the needs and interests of both external contractors and the university and which will ensure efficiency in achieving the goals of the university, was proposed. The techniques for assessing the level of involvement of stakeholders in the activities of the university and for the express assessment of the effectiveness of the cooperation of stakeholders with the university were improved. Stakeholder social rankings in the association were obtained, demonstrating the importance of each stakeholder in collaboration.
5,565
2020-01-01T00:00:00.000
[ "Education", "Economics" ]
Unveiling a Hidden Bar-like Structure in NGC 1087: Kinematic and Photometric Evidence Using MUSE/VLT, ALMA, and JWST We report a faint nonaxisymmetric structure in NGC 1087 through the use of the James Webb Space Telescope Near Infrared Camera, with an associated kinematic counterpart observed as an oval distortion in the stellar velocity map and the Hα and CO J = 2 → 1 velocity fields. This structure is not evident in the MUSE optical continuum images but is only revealed in the near-IR with the F200W and F300M band filters at 2 μm and 3 μm, respectively. Due to its elongation, this structure resembles a stellar bar, although with remarkable differences with respect to conventional stellar bars. Most of the near-IR emission is concentrated within 6″ (∼500 pc) with a maximum extension up to 1.2 kpc. The spatial extension of the large-scale noncircular motions is coincident with the bar, which undoubtedly confirms the presence of a nonaxisymmetric perturbation in the potential of NGC 1087. The oval distortion is enhanced in CO, due to its dynamically cold nature, rather than in Hα. We found that the kinematics in all phases, including stellar, ionized, and molecular, can be described simultaneously by a model containing a bisymmetric perturbation; however, we find that an inflow model of gas along the bar major axis is also likely. Furthermore, the associated molecular mass inflow rate can explain the observed star-formation rate in the bar. This reinforces the idea that bars are mechanisms for transporting gas and triggering star formation. This work contributes to our understanding of nonaxisymmetry in galaxies using the most sophisticated data so far. INTRODUCTION Stellar bars or "bars" are one of the most visual signs of non-axisymmetry in galaxies (de Vaucouleurs et al. 1991). Like other morphological structures in galaxies, bars are not well defined, although they are relatively easy to identify in composite images because of their bar-shaped structure (e.g., Masters et al. 2011; Cheung et al. 2013) and their prominent bar dust lanes (Athanassoula 1992). NGC 1087 is a clear example where the combination of high spatial resolution data allowed us to reveal a hidden bar that is only observed in the near-infrared (NIR). The James Webb Space Telescope (JWST) is allowing us to reveal structures not seen before due to the lack of resolution and sensitivity. Its infrared bands are well suited to detect the emission from old stellar structures like bars (∼10 Gyr; Sánchez-Blázquez et al. 2011). In the optical, dust obscuration can prevent their detection, affecting the global statistics of galaxies hosting bars in the nearby Universe (e.g., Sellwood & Wilkinson 1993). Apart from their optical and NIR characteristics, bars leave imprints of non-axisymmetry in the kinematics of gas and stars (e.g., Wong et al. 2004; Fathi et al. 2005; López-Cobá et al. 2022); thus studying the dynamical effects of bars at local scales is crucial for understanding the role they play in galaxy evolution. A comprehensive study of bars could be addressed if kinematics and wide-range photometric observations were accessible for a considerably large sample of galaxies. Yet, spatial resolution plays an important role in separating the different structural components of galaxies. For example, the large integral field spectroscopic surveys like CALIFA (Sánchez et al. 2012) or MaNGA (Bundy et al. 2015), although providing a large statistical sample of galaxies, have a nominal resolution of
2.″5 (∼1 kpc), which inhibits a detailed study of individual galaxies, while biasing the sample towards bar lengths larger than the FWHM resolution. In a similar manner, NIR photometric and optical catalogs provide a resolution of a few arcseconds, which again limits the detection of small bars (Sheth et al. 2010). In this work, we make use of the most sophisticated data to unravel the detection of a bar in NGC 1087. This paper is structured as follows: in Section 2 we describe the data and data analysis; in Section 3 we address the detection of a faint stellar bar in the infrared, while its ionized and molecular counterparts are addressed in Section 4; Section 5 describes the oval distortion; and in Section 6 we present the discussion and conclusions. DATA AND DATA ANALYSIS NGC 1087 is an intermediate Sc galaxy located at 14 Mpc (e.g., Kourkchi & Tully 2017); at this distance the physical spatial resolution is 68 pc arcsec⁻¹. This work is based on public data from the Multi-Unit Spectroscopic Explorer (MUSE, Bacon et al. 2010); data from the JWST Near Infrared Camera (NIRCam); and ALMA CO J = 2 → 1 observations. In particular, we used the data products from the PHANGS-ALMA survey (e.g., Leroy et al. 2021a,b), namely moments 0 and 1. The MUSE-VLT data were obtained as part of the recent release of the MUSE-PHANGS datacubes (e.g., Schinnerer 2021; Emsellem et al. 2022). MUSE is an integral field spectrograph which provides spatially resolved spectra over a 1′ × 1′ field of view (FoV), covering the optical spectrum with a spectral resolution at full width at half maximum (FWHM) of ∼2.6 Å. The estimated spatial resolution of the 2′ × 3′ MUSE mosaic covering NGC 1087 is not worse than 0.″9 (FWHM) (e.g., Emsellem et al. 2022). We used fully calibrated data products from JWST from the JWST Science Calibration Pipeline version 1.10.1 (e.g., Bushouse et al. 2022). Specifically, we made use of the PHANGS-JWST first result products (e.g., Rosolowsky, Erik 2022), namely the NIRCam F200W filter (FWHM ∼0.″05 and 0.″031/pixel sampling), which has its nominal wavelength at 1.99 µm, in addition to the F360M and F335M band filters to trace the stellar continuum. Finally, the spatial resolution of the ALMA data is 1.″60 (FWHM), following Leroy et al. (2021a). The CO moment 0 map was transformed to surface density (Σ_mol), assuming a standard Milky Way α_CO(1−0) = 4.35 M⊙ pc⁻² (K km s⁻¹)⁻¹ conversion factor and Σ_mol [M⊙ pc⁻²] = 6.7 I_CO(2−1) [K km s⁻¹] cos i (e.g., Eq. 11 from Leroy et al. 2021a), with i being the disk inclination angle, estimated at 44.5°. The MUSE data analysis was made with the pypipe3d tool (e.g., Lacerda et al. 2022). In short, pypipe3d performs a decomposition of the observed stellar spectra into multiple simple stellar populations (SSPs), each with different ages and metallicities. For consistency with Emsellem et al. (2022), we use the same stellar libraries based on the extended MILES (E-MILES, Vazdekis et al. 2016), FWHM = 2.5 Å. We convolved the spectral resolution of the SSPs to that of the MUSE line-spread function (e.g., Bacon et al. 2017).
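The quoted CO(2-1)-to-Σ_mol conversion is a one-line operation; below is a minimal sketch, in which the function name and array handling are our own:

```python
import numpy as np

def co_to_surface_density(I_co21, incl_deg=44.5):
    """Convert a CO(2-1) moment-0 map to molecular surface density.

    Implements the relation quoted in the text (Eq. 11 of Leroy et al.
    2021a): Sigma_mol [Msun/pc^2] = 6.7 * I_CO(2-1) [K km/s] * cos(i),
    which folds in the Milky Way alpha_CO and a CO(2-1)/(1-0) line
    ratio. `I_co21` is the integrated intensity map in K km/s.
    """
    return 6.7 * np.asarray(I_co21) * np.cos(np.radians(incl_deg))

# Example: a 30 K km/s line of sight at i = 44.5 deg -> ~143 Msun/pc^2.
print(co_to_surface_density(30.0))
```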
To increase the signal-to-noise (SN) of the stellar continuum, we performed a Voronoi binning-segmentation on a 2D slice centered around the V-band, ensuring bin sizes of the order of the MUSE-FWHM resolution and SN ∼ 50. This binned map serves to compute different properties of the underlying stellar continuum, while the ionized gas properties are analyzed in a spaxel-wise sense. The result of pypipe3d is a set of maps comprising information about the stellar populations (stellar velocity, stellar mass, among other products) and the ionized gas (emission-line fluxes including Hα, Hβ, [S ii]λ6717,6731, [O i]λ6300; emission-line velocities; equivalent widths, etc.); see Lacerda et al. (2022) for a thorough description of the analysis and data products. From this analysis we estimate the stellar mass of this object at log M⋆/M⊙ = 9.9. FAINT STELLAR BAR The optical continuum image of NGC 1087 exhibits a bright, fuzzy and featureless nucleus with several dust lanes, as revealed by the gri MUSE image in Figure 1; because of that, it is unclear whether this object shows a stellar bar in the optical. Two bright spots are observed in the central region where the near-IR (NIR) nucleus is found. Unlike the majority of galaxies hosting stellar bars, NGC 1087 does not show a well-defined bar, nucleus, or star-forming ring (e.g., de Vaucouleurs et al. 1991). Conversely, the high spatial resolution of the F200W imaging filter allows us to resolve a clear elongated and clumpy structure of presumably stellar clusters aligned along a preferential position angle (P.A.), which differs from the disk orientation ϕ′_disk = 359° (from now on, primed variables refer to values measured on the sky plane; otherwise they refer to the disk plane), as observed in Figure 1. At 2 µm this structure resembles a faint stellar bar, and it is obscured in the optical due to dust absorption. F200W traces mostly stellar continuum; hence, it is expected that most of the continuum emission along this structure comes from old stars (e.g., Leitherer et al. 1999). Elliptical isophotes with variable position angle and ellipticity (ε′) show a preferential alignment of the 2 µm emission, as observed in the top-right panel of Figure 1. Bar light modeling The method adopted here is similar to the non-parametric method from Reese et al. (2007), which is the same algorithm used by XS (e.g., Lopez-Coba et al. 2021) for extracting the different velocities in the kinematic models, as we explain in the following sections. Our non-parametric model minimizes the function χ² = Σ (I_obs − Σ_k I_k w_k(r))², where I_obs is the observed intensity map, I_k represents a set of intensities that will be estimated at different annuli, and w_k is a set of weighting factors that serve to perform a linear interpolation between the estimated I_k intensities. The implementation on the F200W and F300M band filters is shown in Figure 2. The NIR light from the bar structure can be successfully modeled by an elliptical light distribution with a constant orientation in the sky. The P.A. of such an ellipse is ϕ′_bar = 305° and ε′_bar = 0.5, as shown in Figure 2 (sky angles relative to the disk major axis can be translated to angles measured in the galaxy plane and vice versa through tan ϕ_bar = tan Δϕ / cos i, with Δϕ = ϕ′_bar − ϕ′_disk). From this analysis, the estimated semi-major axis length of the NIR bar is a′_bar = 14″, or 18″ (∼1.2 kpc) in the galaxy plane, a structure that encompasses the two bright spots observed in the MUSE optical continuum images.
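As a rough illustration of the non-parametric annuli method, the linear model Σ_k I_k w_k(r) with hat-function interpolation weights can be solved in closed form by least squares. Everything below (the names, the hat-weight construction, and the least-squares solver) is our own sketch, not the Reese et al. (2007) or XS implementation:

```python
import numpy as np

def fit_ring_profile(image, x0, y0, pa_deg, eps, radii):
    """Non-parametric fit of an elliptical light distribution.

    Each pixel's elliptical radius is computed for a fixed position
    angle and ellipticity, intensities I_k are solved for at the node
    radii `radii`, and linear-interpolation weights w_k connect nodes
    to pixels, so chi^2 = sum (I_obs - sum_k I_k w_k)^2 is minimized
    by ordinary least squares.
    """
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    pa = np.radians(pa_deg)
    # Rotate into the ellipse frame and stretch the minor axis.
    xr = (x - x0) * np.cos(pa) + (y - y0) * np.sin(pa)
    yr = -(x - x0) * np.sin(pa) + (y - y0) * np.cos(pa)
    r = np.hypot(xr, yr / (1.0 - eps)).ravel()

    # Hat-function weights: w_k(r) is 1 at node k and 0 at its
    # neighbours; pixels outside the node range get zero weight.
    W = np.zeros((r.size, len(radii)))
    for k in range(len(radii) - 1):
        inside = (r >= radii[k]) & (r < radii[k + 1])
        t = (r[inside] - radii[k]) / (radii[k + 1] - radii[k])
        W[inside, k] = 1.0 - t
        W[inside, k + 1] = t
    I_k, *_ = np.linalg.lstsq(W, image.ravel(), rcond=None)
    model = (W @ I_k).reshape(image.shape)
    return I_k, model
```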
IONIZED AND MOLECULAR GAS The central panel of Figure 1 shows the ionized gas distribution traced by the Hα, [S ii] and [O iii] emission-line fluxes. In general, galaxies hosting stellar bars do not frequently present cold or warm gas along the bar. This is often explained as a loss of angular momentum after the gas encounters the offset ridges, resulting in an infall of gas towards a central ring (e.g., Athanassoula 1992). NGC 1087 shows plenty of ionized gas throughout the disk and within the bar region. Furthermore, as we will see later, multiple enhanced CO blobs are spatially coincident with the F200W emission. To investigate the dominant ionizing source along the bar-like structure we made use of line ratios sensitive to the ionization, in particular [O i]6300/Hα and [O iii]/Hβ. The [O i]6300/Hα ratio has the advantage of being a good indicator of the presence of shocks. The top panel of Figure 3 shows these line ratios color-coded with the 2 µm emission. The observed ionized gas is compatible with being produced by star formation (SF) according to the Kewley et al. (2006) demarcation line. However, dust lanes in bars are often associated with shocks as a result of the complex gas dynamics (Athanassoula 1992). The bar region in NGC 1087 shows several dust lanes; thus we adopt shock models to investigate whether shock ionization can reproduce the observed line ratios. For this purpose we used the photoionization grids from MAPPINGS V (e.g., Sutherland et al. 2018) computed by Alarie & Morisset (2019); the observed line ratios do not follow the shock grids (Fig. 3). Thus, if shocks are happening in the bar region, then their optical emission is not expected to be the dominant ionizing source, although there is still a possibility that a combination of SF plus shocks could contribute to the observed line ratios (e.g., Davies et al. 2016). Additionally, Figure 3 shows the equivalent width of Hα (EW(Hα)) for the ionized gas in the bar region. Hot, old, and low-mass evolved stars are characterized by producing EW(Hα) ≲ 3 Å (e.g., Stasińska et al. 2008; Cid Fernandes et al. 2010; Lacerda et al. 2018). The large values of EW(Hα) ≳ 100 Å found in the bar cannot be explained by the ionizing continuum emitted by this population of stars. Therefore, if most of the 2 µm emission observed along the bar is due to old stars, then their ionizing continuum is not sufficient to explain the observed line ratios. The star formation rate (SFR) was estimated from the Hα luminosity (Kennicutt 1998), after correcting the Hα flux for dust extinction adopting the Cardelli et al. (1989) extinction law with R_V = 3.1, and assuming an intrinsic flux ratio of Hα/Hβ = 2.86 and an ionized gas temperature of T ∼ 10⁴ K corresponding to case B recombination (e.g., Osterbrock 1989). The integrated SFR within the NIR bar (i.e., Figure 2) is SFR(Hα)_bar = 0.08 M⊙/yr, while the total is SFR(Hα)_total = 0.4 M⊙/yr. The specific star formation rate (sSFR) was obtained by dividing the SFR surface density (= SFR/pixel area) by the stellar mass surface density, as shown in Figure 5. We note a clear enhancement in SF along the bar main axis; furthermore, the molecular surface density shows an enhancement in the same region. Overall, our analysis suggests that the ionized gas in the bar structure is mostly associated with SF processes. In the following we investigate whether the CO and SF concentration is induced by the bar potential.
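The dust-corrected Hα SFR recipe just described (Balmer decrement against the case B value of 2.86, Cardelli et al. 1989 reddening with R_V = 3.1, and the Kennicutt 1998 calibration SFR = 7.9 × 10⁻⁴² L(Hα)) can be sketched as follows. The extinction-curve evaluations k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 are standard values for this law, and all names are illustrative:

```python
import numpy as np

def sfr_from_halpha(f_ha, f_hb, dist_mpc=14.0):
    """Dust-corrected Halpha SFR following the recipe in the text.

    The Balmer decrement is compared with the case-B value of 2.86 to
    get E(B-V), the Halpha flux is de-reddened with a Cardelli et al.
    (1989) R_V = 3.1 law, and SFR = 7.9e-42 L(Halpha) (Kennicutt
    1998). Fluxes are in erg/s/cm^2; the result is in Msun/yr.
    """
    k_ha, k_hb = 2.53, 3.61                 # Cardelli curve at Ha, Hb
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / 2.86)
    ebv = np.maximum(ebv, 0.0)              # no negative extinction
    f_corr = f_ha * 10 ** (0.4 * k_ha * ebv)
    d_cm = dist_mpc * 3.086e24              # Mpc -> cm
    L_ha = 4.0 * np.pi * d_cm**2 * f_corr   # luminosity in erg/s
    return 7.9e-42 * L_ha

# Example: an integrated Ha flux with a Balmer decrement of 4.0.
print(sfr_from_halpha(1.0e-13, 2.5e-14))
```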
INNER OVAL DISTORTION If a perturbation in the gravitational potential, such as that induced by a bar, is causing the 2 µm light distribution to elongate along a preferred direction, then the particles in such a structure are expected to follow quasi-elliptical orbits (e.g., Athanassoula 1992). Hence the kinematics of gas and stars should reflect this perturbation (Pence & Blackman 1984). Figure 6 shows the CO, ionized and stellar kinematics around the bar region. The gas, being collisional, is more sensitive to nonaxisymmetric perturbations. This is clearly reflected in the CO moment 1 map, where a strong distortion of the semi-minor axis is observed. The former misalignment is a signature of an oval distortion produced by a bar (e.g., Pence et al. 1988; López-Cobá et al. 2022), here observed at high spatial resolution. The distortion is also observed in the Hα velocity map; however, since the ionized gas traced by Hα is hotter (T ∼ 10⁴ K), and therefore presents a larger intrinsic velocity dispersion than the molecular gas traced with CO (T ∼ 10 K), the distortion is less pronounced in the ionized gas and is probably affected by local SF. The stellar kinematics, although affected by the pixel co-adding during the SSP analysis, still reveal a clear distortion in the iso-velocities near the bar, as noted in Figure 6. So far, it is clear that the stars, the molecular gas traced with CO, and the ionized gas traced with Hα respond in a similar way to the presence of this bar-like structure detected in NGC 1087. Kinematic interpretation of the oval distortion Distortions in the velocity field such as the ones observed before could be caused by an elongated potential. However, only a few kinematic models in the literature attempt to describe the flow caused by an oval distortion (e.g., Spekkens & Sellwood 2007; Maciejewski et al. 2012). A bisymmetric distortion induced by a second-order perturbation to the potential, such as that induced by an elongated potential, has been shown to successfully reproduce the velocity field in bars (e.g., Spekkens & Sellwood 2007). Since the F200W image evidences the presence of a faint bar-like structure oriented at 305° in the sky, we performed bisymmetric models over the stellar, Hα and CO velocity maps, with a fixed orientation of the oval distortion. The bisymmetric model from Spekkens & Sellwood (2007) is described by the following expression:
V_los = V_sys + sin i [ V_t cos θ − V_2t cos(2θ_bar) cos θ − V_2r sin(2θ_bar) sin θ ],    (1)
where θ_bar = θ − ϕ_bar, with θ being the azimuthal angle measured on the disk plane from the line of nodes and ϕ_bar the position angle of the bar-like structure on the disk plane. V_t is the tangential or circular rotation, and V_2r and V_2t represent the radial and tangential velocities that result from a bisymmetric distortion to the gravitational potential. Before this analysis we performed a circular rotation model to obtain the disk projection angles and the rotation curves. In this case and in the subsequent ones, we used the XS code for generating the kinematic models (e.g., Lopez-Coba et al. 2021). This code derives an interpolated model over a set of concentric rings evenly spaced at r_k, minimizing the function χ² = Σ (V_obs − Σ_k V_k w_k(r))² / σ², where V_obs is the observed velocity map, V_k are the set of velocities inferred at r_k, w_k is a set of weights that depend on the specific kinematic model adopted and serve to create a 2D model, and σ is the error velocity map. We refer to Lopez-Coba et al. (2021) for a thorough description of this analysis.
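A direct evaluation of the bisymmetric expression in Equation 1 is straightforward. The sketch below is our own (the example velocity amplitudes are arbitrary, not fitted values), not the XS implementation:

```python
import numpy as np

def bisymmetric_vlos(theta, v_sys, v_t, v_2r, v_2t, incl_deg, phi_bar_deg):
    """Evaluate the bisymmetric (m = 2) velocity model of Equation 1.

    `theta` is the azimuthal angle in the disk plane measured from the
    line of nodes (radians); `phi_bar_deg` is the bar position angle
    in the disk plane. Velocities are in km/s.
    """
    sini = np.sin(np.radians(incl_deg))
    theta_bar = theta - np.radians(phi_bar_deg)
    return v_sys + sini * (v_t * np.cos(theta)
                           - v_2t * np.cos(2 * theta_bar) * np.cos(theta)
                           - v_2r * np.sin(2 * theta_bar) * np.sin(theta))

# Along the bar major axis (theta = phi_bar) with illustrative values:
theta = np.radians(297.0)
print(bisymmetric_vlos(theta, 1526.0, 150.0, 30.0, 20.0, 44.5, 297.0))
```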
It is important to mention that the modeling does not assume any parametric function for the velocity profiles. A pure circular rotation, without non-circular motions, is described by the first velocity term on the right of Equation 1. The disk orientation of NGC 1087 was estimated from modeling the Hα and stellar velocity maps with circular rotation only, obtaining ϕ′_disk = 358.9°, i = 44.5° and V_sys = 1526 km s⁻¹. The residual velocity maps from this model, i.e., V_obs − V_circular, are shown in the third column of Figure 6. The circular rotation models leave large-scale residual velocities of the order of 30 km s⁻¹ with a mirror symmetry about the nucleus; this corresponds to de-projected amplitudes of ∼50 km s⁻¹ for the non-circular velocities on the disk plane. These residual patterns have been observed on larger scales in galaxies with long bars, in Hα (e.g., Lang et al. 2020; López-Cobá et al. 2022) and in molecular gas (e.g., Pence et al. 1988; Mazzalay et al. 2014). The bisymmetric model with a fixed oval orientation at ϕ′_bar is shown in the fourth column of Figure 6. This model reproduces simultaneously the twisted iso-velocities in the three velocity maps. The root mean square (rms) of the models decreases compared with the circular rotation one; therefore, in terms of the residuals, the LoS velocities observed around the bar region can be reproduced by a kinematic model that considers a bar-like perturbation as the main source of non-circular motions. However, at such high spatial resolution (∼70 pc), there are still residual velocities that the bisymmetric model cannot account for; this is reflected in the 15 km s⁻¹ rms of the models, which is similar to or larger than the level of turbulence of the ISM (e.g., Moiseev et al. 2015). An alternative interpretation of the observed non-circular motions in the Hα and CO velocity maps is the presence of gas inflow induced by the NIR bar (Mundell & Shone 1999); in fact, hydrodynamical models simulating bars predict inflow of gas along the offset ridges, or bar dust lanes (e.g., Athanassoula 1992). As observed in Figure 1, the MUSE image of NGC 1087 shows several dust lanes in the central region, making the identification of those associated with the bar difficult. Assuming the spiral arms are trailing (see bottom-right panel of Figure 1), NGC 1087 rotates counterclockwise in the sky. Thus, positive (negative) residuals on the near (far) side represent inflow; hence, a radial flow (V_rad) along the bar major axis is inflowing if V_rad < 0. We implemented a non-axisymmetric model, with a flow streaming along ϕ′_bar. The non-axisymmetric inflow model is described by the following expression:
V_los = V_sys + sin i [ V_t cos θ + V_rad sin ϕ_bar ],    (2)
where V_rad is the radial velocity of the flow, and ϕ_bar is the misalignment between the disk and the bar position angles on the disk plane. This model assumes the gas flows along ϕ_bar. This expression is similar to those of Hirota et al. (2009) and Wu et al. (2021), assuming that the gas flows parallel to the bar major axis. In order to consider pixels likely affected by this motion, we consider the elliptical region that best describes the bar-like light distribution, defined in Figure 2. The 2D representation of this model is shown in Figure 6, while the kinematic radial profiles of all models are shown in Figure 7. As observed, the radial velocity V_rad is negative in the model, with maximum inflow velocities of the order of 50 km s⁻¹.
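The sky-to-disk relation quoted earlier, tan ϕ_bar = tan Δϕ / cos i, links the two bar angles used throughout this paper. The following sketch (our own naming) reproduces the ∼297° disk-plane orientation from ϕ′_bar = 305°, ϕ′_disk = 359°, and i = 44.5°:

```python
import numpy as np

def sky_to_disk_angle(phi_sky_deg, phi_disk_deg, incl_deg):
    """De-project a sky position angle into the galaxy (disk) plane.

    Uses tan(phi_bar) = tan(dphi) / cos(i) with
    dphi = phi'_bar - phi'_disk; the result is measured from the disk
    major axis, wrapped into [0, 360) degrees.
    """
    dphi = np.radians(phi_sky_deg - phi_disk_deg)
    ang = np.degrees(np.arctan2(np.tan(dphi), np.cos(np.radians(incl_deg))))
    return ang % 360.0

# NGC 1087: phi'_bar = 305 deg, phi'_disk = 359 deg, i = 44.5 deg.
print(sky_to_disk_angle(305.0, 359.0, 44.5))  # ~297.4 deg, matching 297 deg
```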
The associated molecular mass flow rate (Ṁ_mol) can be computed following the expression (e.g., Di Teodoro & Peek 2021):
Ṁ_mol(r) = 2π r Σ_mol(r) V_inflow(r),    (3)
with Σ_mol being the de-projected molecular mass surface density, V_inflow the CO inflow velocity, and r the galactocentric distance. Figure 8 shows the spatially resolved Ṁ_mol and its average radial profile. We find an average Ṁ_mol ∼ −20 M⊙/yr. For comparison, spiral arms induce radial flows and radial velocities of the order of 1 M⊙/yr and 10 km s⁻¹, respectively (e.g., Di Teodoro & Peek 2021). DISCUSSION AND CONCLUSIONS The combination of the spatially resolved spectra provided by MUSE-VLT with the high spatial resolution of JWST and the ALMA CO J = 2 → 1 data allowed us to reveal a central kinematic oval distortion, as well as a small-scale elongated structure in the NIR continuum of NGC 1087. This elongated structure is not evident in optical images, but only revealed with the NIRCam F200W and F360M filters at 2 µm and 3 µm respectively, thanks to their exquisite resolution. The optical counterpart of this bar is likely to be affected by dust absorption and by the intense radiation from young stars. Its preferential elongation suggests the presence of a non-axisymmetric perturbation in the gravitational potential, in particular a bar-like type. At first order, the 2 µm light distribution of this structure can be described with an exponential disk profile oriented at ϕ′_bar = 305°, with an ellipticity of the elongated structure of ε′ = 0.5 (see Figure 2) and a half-major axis length of a′_bar ∼ 14″, or ∼1.2 kpc in the galaxy plane. Overall, the shape of this structure resembles a faint stellar bar, although with remarkable differences with respect to conventional stellar bars in terms of bar length, the lack of a star-forming nuclear ring, and the absence of clearly associated dust lanes along the bar axis. The same argument, however, applies to an elongated bulge. The main reason this is not considered is the large reservoirs of molecular gas observed, since bulges tend to be quiescent structures with null or low SFRs (Shapiro et al. 2010). Furthermore, the [O i]/Hα and [O iii]/Hβ line ratios and the large EW(Hα) from the bar (see Figure 3) indicate that the ionized gas is the product of a recent or ongoing SF event, given its spatial coincidence with the CO emission. The ionized and molecular gas kinematics and the stellar velocity simultaneously reveal the presence of an oval distortion in the velocity maps, in concordance with the extension of the faint bar. This undoubtedly evidences the imprints of a non-axisymmetric perturbation in the potential, here captured in different phases and enhanced in the dynamically cold gas.
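Equation 3 with consistent units gives inflow rates of the right magnitude. In this sketch (unit handling is our own), a representative ring at 500 pc with Σ_mol = 120 M⊙/pc² and a −50 km/s flow yields ∼ −19 M⊙/yr, close to the average reported above; the input values are illustrative, not measurements:

```python
import numpy as np

def mass_inflow_rate(r_pc, sigma_mol, v_inflow_kms):
    """Radial molecular mass flow rate, Mdot = 2*pi*r*Sigma_mol*V_inflow.

    Follows the Di Teodoro & Peek (2021) expression quoted in the
    text: `r_pc` is the galactocentric radius in pc, `sigma_mol` the
    de-projected surface density in Msun/pc^2, and `v_inflow_kms` the
    inflow velocity in km/s (negative for inflow). The km/s -> pc/yr
    conversion is 1 km/s ~ 1.023e-6 pc/yr.
    """
    v_pc_yr = np.asarray(v_inflow_kms) * 1.023e-6
    return 2.0 * np.pi * np.asarray(r_pc) * np.asarray(sigma_mol) * v_pc_yr

# Example ring: r = 500 pc, Sigma_mol = 120 Msun/pc^2, V = -50 km/s.
print(mass_inflow_rate(500.0, 120.0, -50.0))  # ~ -19 Msun/yr
```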
Kinematic models for a second-order potential perturbation successfully described the LoS velocities in the three velocity maps. We showed that this is achieved just by fixing the orientation of the stream flow to the bar major axis in the models, and the radial profiles of the noncircular velocities are consistent with the residuals from circular rotation, with possible differences in amplitude due to asymmetric drift and other random motions not considered in the model, as we showed in Figure 7. Apart from the elliptical motions considered in the bisymmetric models, radial inflows are expected in bars, and these are observed to occur along the bar dust lanes (Wu et al. 2021; Sormani et al. 2023). Our implementation of an inflow model is therefore physically motivated. An inflow of cold gas caused by a loss of angular momentum could trigger the SF observed along the bar. Although a fraction of the ionized gas could arise from shocks with the dust lanes, the contribution of shocks is expected to be minor. Moreover, the sign of the radial flow suggests the gas is inflowing to the center at a maximum speed of 50 km s⁻¹. This kinematic model yields the lowest rms despite being the model with the lowest number of free variables. Statistically speaking, the inflow scenario is favored over the bisymmetric model, although given the complex behavior of gas in bars, the contribution of gas on oval orbits cannot be ruled out. In fact, both types of motions are expected to happen simultaneously in bars (Athanassoula 1992; Regan et al. 1999). To date, there is no 2D kinematic model that includes such bar dynamics. The gas inflow model is in concordance with the observed enhancement in Hα flux along the bar. The inferred inflow velocities translate into molecular mass inflow rates of Ṁ_mol = 20 M⊙/yr. This is larger than the measured SFR(Hα) along the bar; thus we argue that a major inflow of gas might be feeding the SF in the bar. In fact, SF prevents the so-called continuity problem (e.g., Simon et al. 2003; Maciejewski et al. 2012) often encountered when considering radial flow models. This large inflow rate is partially due to the large inflow velocities; although this amplitude of noncircular motions is observed in bars, the global inflow rate expected by hydrodynamical models is lower (Athanassoula 1992; Mundell & Shone 1999). To summarize, in this work we have taken advantage of multiple archival data to reveal a non-axisymmetric structure hidden in the optical, thanks to the high angular resolution of the data. We were able to confirm the presence of an oval distortion and successfully model it with a bar-like flow model. We also showed that the SFR in the bar could be explained by an inflow of gas along the bar major axis. Our results contribute to understanding the overall picture of non-axisymmetry in galaxies with the most sophisticated data so far. Additionally, this work highlights the importance of using the IR instead of optical bands to detect stellar bars, which could increase the fraction of barred galaxies detected in the local Universe.
Figure 1. Central panel: False color image showing the distribution of the ionized gas and NIR continuum in NGC 1087, with fluxes taken from MUSE-VLT and NIRCam imaging respectively (red: [S ii], yellow: Hα, blue: [O iii], brown: F360M, orange: F335M, white: F200W). Right panels: A zoom-in of the 30″ × 20″ innermost region is shown to highlight the bar structure. The upper panel shows a true color image from the MUSE cube (R: i-band, G: r, B: g). The black cross in the middle represents the NIR nucleus. The middle panel shows the JWST-F200W image together with a set of isophotes describing the bar NIR light distribution; the isophotes show a constant orientation in the sky at ϕ′_bar = 304° ± 3°. Two isophotes located at 6″ and 14″ are highlighted with thicker lines for reference. In each case the FWHM resolution is shown with a yellow circle. Bottom inset: Orientation of the object in the sky. Figure 3. Diagnostic diagrams for the ionized gas in the bar structure. Top panel: [O i]6300/Hα vs. [O iii]/Hβ diagram. The black dashed line represents the star-forming demarcation curve from Kewley et al. (2006). Shock grid models from the updated MAPPINGS-V (Sutherland et al. 2018) computed by Alarie & Morisset (2019) are shown for n_e = 10 cm⁻³, Z_ISM = 0.008 (equivalent to 12 + log O/H = 8.55), and a wide range of magnetic fields (blue lines) and pre-shock velocities (red lines). The colors of the points map the F200W light distribution shown in the inset figure. Bottom panel: EW(Hα) in absolute value vs. [N ii]/Hα line ratio. Old and evolved stars are expected to produce EW(Hα) < 3 Å. Figure 4. Oxygen abundance in the bar region computed with the Pilyugin & Grebel (2016) calibrator. As in Figure 1, the white ellipses describe the NIR bar. The innermost ellipse shows an average metallicity of 12 + log O/H = 8.5. Figure 5. Top: Specific star formation rate computed through the Hα-based SFR and the stellar mass density maps, sSFR = SFR/M⋆. Bottom: Molecular surface density around the bar structure. The Hα flux is overlaid with reddish colors for comparison. The yellow circles represent the FWHM spatial resolution in each case. The white ellipses delineate the NIR bar extension. Figure 6. First column: Line-of-sight velocities of CO, Hα and the stars in NGC 1087. Second column: Zoom-in around the bar structure, with ±25 km s⁻¹ spaced iso-velocities shown on top; the black straight line shows the P.A. of the faint bar at ϕ′_bar = 305°. Third column: residual velocities after subtracting the circular rotation in each velocity map. Fourth column: Bisymmetric model after fixing the oval orientation at ϕ′_bar. Fifth column: inflow model assuming the gas streams along ϕ′_bar; the individual radial profiles and expressions are shown in Figure 7. The root mean square of the models is shown at the bottom right of each panel. All maps share the same color bar except for the residual maps, which cover ±50 km s⁻¹ following the same color scheme.
Figure 7. Radial distribution of the different velocity components for the considered kinematic models. V_t is the disk pure circular rotation, or rotation curve; V_2r and V_2t are the bisymmetric velocities; V_rad is the radial flow from the non-axisymmetric flow model. In all non-circular models the position angle of the oval distortion was fixed to ϕ′_bar = 305° (namely, 297° in the galaxy plane). Lines in red colors represent results of models computed on the Hα velocity map, blue lines on the CO moment 1, and black lines on the stellar velocity map. Shaded regions represent 1σ errors. The non-circular velocities were estimated up to 18″, which covers the de-projected length of the bar. A 2″ sampling step was adopted, i.e., larger than the FWHM spatial resolution of the data. Figure 8. Molecular mass inflow induced by the bar-like structure. Top panel: 2D distribution of Ṁ_mol. Bottom panel: Average radial distribution of Ṁ_mol in black and the radial profile of the inflow velocity in red. Shaded black regions represent the standard deviation of Ṁ_mol at each radial bin. A SN > 3 cut was applied to the moment 0 map to exclude spurious CO detections.
7,191.8
2023-12-07T00:00:00.000
[ "Physics" ]
Chlamydia trachomatis fails to protect its growth niche against pro-apoptotic insults Chlamydia trachomatis is an obligate intracellular bacterial agent responsible for ocular infections and sexually transmitted diseases. It has been postulated that Chlamydia inhibits apoptosis in host cells to maintain an intact replicative niche until sufficient infectious progeny can be generated. Here we report that, while cells infected with C. trachomatis are protected from apoptosis at early and mid-stages of infection, they remain susceptible to the induction of other cell death modalities. By monitoring the fate of infected cells by time-lapse video microscopy and by analyzing host plasma membrane integrity and the activity of caspases, we determined that C. trachomatis-infected cells exposed to pro-apoptotic stimuli predominately died by a mechanism resembling necrosis. This necrotic death of infected cells occurred with kinetics similar to the induction of apoptosis in uninfected cells, indicating that C. trachomatis fails to considerably prolong the lifespan of its host cell when exposed to pro-apoptotic insults. Inhibitors of bacterial protein synthesis partially blocked necrotic death of infected cells, suggesting that the switch from apoptosis to necrosis relies on an active contribution of the bacteria. Tumor necrosis factor alpha (TNF-α)-mediated induction of necrosis in cells infected with C. trachomatis was not dependent on canonical regulators of necroptosis, such as RIPK1, RIPK3, or MLKL, yet was blocked by inhibition or depletion of CASP8. These results suggest that alternative signaling pathways regulate necrotic death in the context of C. trachomatis infections. Finally, consistent with the inability of C. trachomatis to preserve host cell viability, necrosis resulting from pro-apoptotic conditions significantly impaired production of infectious progeny. Taken together, our findings suggest that Chlamydia's anti-apoptotic activities are not sufficient to protect the pathogen's replicative niche. Introduction C. trachomatis is the causative agent of blinding trachoma, an ocular disease that is endemic in many developing countries [1]. Moreover, C. trachomatis is the most frequent agent of bacterial sexually transmitted disease worldwide [2]. Acute Chlamydia urogenital tract infections are often asymptomatic, but repeated and recurrent infections increase the risk for complications, such as pelvic inflammatory disease, ectopic pregnancy, and infertility [3]. C. trachomatis' replication is restricted to the intracellular environment of epithelial cells [4]. Within the host cell, C. trachomatis undergoes a developmental cycle, alternating between the reticulate body (RB), which replicates within an intracellular membrane-bound compartment termed the inclusion, and the elementary body (EB), which is eventually released from the host cell to infect neighboring cells [5]. Bacterial egress occurs via extrusion, a process that is non-destructive for the host cell, or via induction of a caspase-independent mode of host cell death that can be accompanied by necrotic and/or apoptotic morphological features [6][7][8]. At early and mid-stages of infection, cells infected with Chlamydia spp.
are protected from the induction of apoptosis upon exposure to potent inducers [9], including for instance UV irradiation, cytotoxic chemicals (e.g., staurosporine (STS)), and immune mediators (e.g., tumor necrosis factor alpha (TNF-α) and ligation of CD95) [10,11]. It has been proposed that the apoptotic machinery in C. trachomatis-infected cells is blocked upstream of the permeabilization of the mitochondrial outer membrane. Indeed, activation of BAX/BAK and the release of mitochondrial cytochrome c do not occur in infected cells upon exposure to pro-apoptotic stimuli [10][11][12][13][14]. Infection with C. trachomatis also blocks the activation of apoptotic caspases, PARP cleavage, and pyknosis [10][11][12][13]. Accordingly, multiple anti-apoptotic activities have been attributed to C. trachomatis. These include, for instance, the stabilization of MCL-1 [15], downregulation and degradation of TP53 [16,17], and enhanced recruitment of hexokinase-II to mitochondria [18]. Together these anti-apoptotic activities are predicted to protect the pathogen's replicative niche from cytotoxic insults, such as infection-induced stress and death signals emanating from immune cells. Here we demonstrate that under pro-apoptotic conditions the death of Chlamydia-infected cells was not abolished, but rather shifted from apoptosis to an atypical form of necrosis. We further provide evidence that this necrotic death partially relies on an active contribution of the bacteria and that C. trachomatis fails to generate infectious progeny under pro-apoptotic conditions. Results Treatment with STS fails to activate apoptotic effector caspases in C. trachomatis-infected cells but still leads to host cell lysis While elaborating a strategy to identify Chlamydia factors that contribute to the inhibition of apoptosis, we monitored DEVD cleavage as a simple read-out for apoptotic effector caspase (CASP3/CASP7) activity [19]. Consistent with C. trachomatis' reported anti-apoptotic activity [10,[12][13][14]20], we observed that treatment of C. trachomatis-infected HeLa cells with STS failed to induce DEVD cleavage activity (Fig. 1a). Moreover, the decrease in DEVD cleavage correlated with the percentage of infected cells (Fig. 1b). Unexpectedly, microscopic inspection of STS-treated cultures indicated widespread induction of necrotic death in infected cells (Fig. 1c). Time-lapse microscopic analysis of infected cells confirms the engagement of necrotic death upon exposure to pro-apoptotic stimuli To assess the extent and mode of death induced by pro-apoptotic stimuli in Chlamydia-infected cells, we monitored live infected cultures by time-lapse video microscopy. We based our quantitative assessments on characteristic morphological hallmarks of apoptosis and necrosis (Fig. S1A). We first analyzed the effect of classical inducers of apoptosis, including TNF-α (50 ng/ml; added together with 2.5 µg/ml cycloheximide (CHX)), actinomycin D (ActD, 1 µM), and STS (1.3 µM), on uninfected HeLa cells. Most mock- (DMSO-) treated cells displayed normal adherent morphology, only sporadically interrupted by mitosis, throughout the period monitored (until 17 h post treatment (hpt)) (Fig. 2a, b, movie S1). In contrast, a large proportion of cells treated with TNF/CHX or ActD developed a typical apoptotic morphology, including cellular shrinkage, membrane blebbing, and detachment, followed eventually by secondary necrosis (Fig. 2a, b, movies S2-3).
Cells treated with STS displayed drastic changes in morphology, such as pronounced cell shrinkage [21], within minutes after addition of the drug (Fig. 2a, movie S4). These early effects hindered a quantitative analysis of apoptotic traits in STS-treated cells by time-lapse microscopy. We next analyzed C. trachomatis-infected cells that were treated with apoptosis inducers at 24 h post infection (hpi), a time point at which Chlamydia-mediated inhibition of apoptosis is robust [10,13] and inclusions were large enough to distinguish inclusion-bearing from inclusion-free cells by microscopy (Fig. S1B). The majority of infected DMSO-treated cells maintained a normal morphology, and bacterial replication and inclusion expansion were readily detectable (Fig. 2a, b, movie S1). In the presence of TNF/CHX, inclusion-bearing cells predominately died by necrosis, displaying a sudden rupture of the host cell membrane in the absence of morphological signs of apoptosis, while neighboring inclusion-free cells died by apoptosis (Fig. 2a, b, movie S2). The death of infected cells was not considerably delayed compared to the death of uninfected cells (Fig. 2c). Consistent with our initial observations, STS induced necrotic death in >90% of infected cells during the period monitored (Fig. 2a, b, movie S4). Interestingly, infected cells appeared to be partially protected from cell death mediated by ActD, although this drug also induced necrotic death in a significant proportion of infected cells (Fig. 2a, b, movie S3). Weak pro-apoptotic stimulation is sufficient to induce necrotic death in infected cells We next studied how infected cells would respond to weaker stimulation. Live cell monitoring of uninfected HeLa cells indicated that the percentage of cells displaying apoptotic features in response to TNF/CHX decreased as we lowered the concentration of TNF-α (Fig. 2d). Similarly, the percentage of infected cells undergoing necrosis diminished with decreasing concentrations of TNF-α, yet the overall percentage of cells that succumbed to TNF/CHX treatment was similar between infected and uninfected cultures at each concentration tested (Fig. 2d). A similar analysis could not be made for STS, because the onset of morphological features of apoptosis in STS-treated cells could not be assessed unequivocally. However, necrotic death of infected cells was readily observed when the STS concentration was reduced to one-tenth or one-hundredth of the initially tested standard dose (Fig. 2d). The induction of necrosis in response to pro-apoptotic stimuli is not cell line specific Although apoptotic cells will eventually lyse in cell culture, the integrity of their plasma membrane is not compromised during the initial stages of apoptosis [22]. Indeed, 7 h of exposure to TNF/CHX or STS did not cause significant release of the host enzyme lactate dehydrogenase (LDH) from uninfected HeLa cells, while LDH activity was readily detected in the supernatant of treated infected cultures (Fig. 3a). By conducting parallel measurements of DEVD cleavage activity in cell lysates, we further confirmed that TNF/CHX and STS induced apoptotic effector caspase activity within 7 h in uninfected cells, and that this activation was significantly reduced in infected cultures (Fig. 3b). Similar results were obtained when caspase activity was assessed at 4 hpt (Fig. 3c), a time point preceding necrotic death of infected cells (Fig. 3d).
To distinguish necrotic from apoptotic death we also used the Annexin V/propidium iodide (PI) assay [23], in which staining of intact cells with fluorescently labeled Annexin V indicates externalization of phosphatidylserine, a hallmark of apoptosis, whereas staining with the membrane-impermeable DNA dye PI indicates loss of plasma membrane integrity, a sign of necrosis. While most dying cells in uninfected TNF/CHX-treated cultures stained positive for Annexin V but not PI, most dying cells in infected cultures stained double positive (Fig. S2), indicating that uninfected cells died by apoptosis, while infected cells died by necrosis. We next determined whether the shift from apoptosis to necrosis also occurs in other cell lines. We tested three additional human epithelial cell lines, including A2EN (endocervix), U2OS (bone osteosarcoma), and HT29 (colorectal adenocarcinoma) cells. TNF/CHX and STS added to infected cells induced necrotic death in all tested cell lines (Fig. 3a). In contrast, in uninfected cells, treatment resulted in most instances in the induction of DEVD cleavage activity (Fig. 3b). TNF/CHX- and STS-induced necrosis of Chlamydia-infected cells partially depends on bacterial activity We considered the possibility that large inclusions could favor necrosis of infected cells under pro-apoptotic conditions due to mechanical stress. We thus tested how infected cells would respond to pro-apoptotic stimuli at 14 hpi (i.e., 10 h earlier than in previous experiments), a time point at which Chlamydia inclusions were still relatively small (Fig. 3e). While we observed that the block in the induction of DEVD cleavage activity was slightly weaker when pro-apoptotic drugs were added this early (Fig. 3f), STS and TNF/CHX induced similar extents of necrotic death regardless of the time point of treatment (Fig. 3g). Our earlier observation that ActD, compared to other pro-apoptotic drugs tested, induced less necrosis in infected cells (Fig. 2b, d) suggested that this inhibitor of transcription may block a host or bacterial activity required for the execution of necrotic cell death. Because necrosis readily occurred in the presence of CHX, an inhibitor of host protein synthesis, we hypothesized that an activity exerted by the bacteria may be required. Indeed, antibiotics that block bacterial protein synthesis (chloramphenicol and tetracycline) partially reduced necrosis of infected cells when added shortly before exposure to STS or TNF/CHX, while penicillin G (which does not affect bacterial protein synthesis) had no protective effect (Fig. 3h). TNF/CHX-induced necrosis in Chlamydia-infected cells is not canonical necroptosis, but requires caspase activity When the apoptotic machinery is blocked, pro-apoptotic signals, such as TNF/CHX and STS, induce a form of regulated necrosis known as necroptosis [24,25]. We thus tested the effect of inhibitors of the necroptotic pathway, including necrostatin-1 (inhibitor of RIPK1) [26], GSK'872 (inhibitor of RIPK3) [27], and necrosulfonamide (inhibitor of MLKL) [28], on death induced in infected cells. All three inhibitors blocked death in HT29 cells in which necroptosis was induced by addition of TNF-α in the presence of the Smac mimetic BV6 and the pan-caspase inhibitor Z-VAD-FMK (a drug combination referred to as TSZ [29]) (Fig. S3). However, these inhibitors did not reduce necrosis in Chlamydia-infected HeLa or HT29 cells treated with apoptosis inducers (Fig. 4a). Consistent with these findings, RIPK3-deficient and MLKL-deficient HT29 cells were protected from TSZ-induced necroptosis (Fig. 4b, c), but not from necrotic death induced in infected cells upon exposure to pro-apoptotic drugs (Fig. 4d). We further tested more directly for the activation of the necroptotic signaling pathway. In a HT29 biosensor cell line that expresses RIPK3-YFP, induction of necroptosis with TSZ led to RIPK3-YFP aggregation, a hallmark of its activation, and elevated YFP fluorescence intensity (Fig. 4e, f). RIPK3 activation was neither observed in DMSO-treated infected cells nor was it induced when infected cells were exposed to pro-apoptotic drugs (Fig. 4e, f). A time course analysis indicated that RIPK3 activation was also not detectable at earlier time points after stimulation with STS or TNF/CHX (Fig. S4).
Consistent with these findings, RIPK3-deficient and MLKL-deficient HT29 cells were protected from TSZ-induced necroptosis (Fig. 4b, c), but not from necrotic death induced in infected cells upon exposure to pro-apoptotic drugs (Fig. 4d). We further tested more directly for the activation of the necroptotic signaling pathway. In an HT29 biosensor cell line that expresses RIPK3-YFP, induction of necroptosis with TSZ led to RIPK3-YFP aggregation, a hallmark of its activation, and elevated YFP fluorescence intensity (Fig. 4e, f). RIPK3 activation was neither observed in DMSO-treated infected cells nor was it induced when infected cells were exposed to pro-apoptotic drugs (Fig. 4e, f). A time course analysis indicated that RIPK3 activation was also not detectable at earlier time points after stimulation with STS or TNF/CHX (Fig. S4). Similarly, phosphorylation of MLKL, an indicator of MLKL activation, was not observed in infected HT29 cells that were treated with DMSO or apoptosis inducers (Fig. 4g, h). Interestingly, these experiments also revealed that infection with Chlamydia did not prevent RIPK3 or MLKL activation induced by TSZ in HT29 cells (Fig. 4e-h) and rather enhanced the resulting necrotic cell death (Fig. 4i). After excluding a major role for the canonical necroptotic cell death pathway in TNF/CHX- and STS-induced necrosis of infected cells, we next tested for a potential involvement of CASP1-mediated pyroptosis [30]. We observed that the CASP1 inhibitor Z-YVAD-FMK, like the necroptosis inhibitors, failed to significantly affect the induction of necrotic death in infected TNF/CHX- or STS-treated HeLa cells (Fig. 5a). Interestingly, the pan-caspase inhibitor Z-VAD-FMK strongly reduced LDH release from infected cultures that were exposed to TNF/CHX, while its effect on STS-induced death was only minor (Fig. 5a). By testing a panel of caspase-specific inhibitors, we determined that TNF/CHX-induced necrosis in infected cells could be blocked by Z-IETD-FMK, an inhibitor of CASP8 (Fig. 5b). CASP8 is an apoptotic initiator caspase that acts downstream of the TNF-α receptor in the canonical TNF/CHX-mediated apoptosis pathway [31], but only plays a minor role in STS-induced apoptosis [32]. Depletion of CASP8 with specific siRNAs protected infected cells from TNF/CHX-induced necrosis (Fig. 5c, d).

Fig. 2 Time-lapse video microscopy confirms that apoptosis inducers stimulate necrotic death in C. trachomatis-infected HeLa cells. a-c Apoptosis inducers trigger necrotic cell death in inclusion-bearing cells. Uninfected and Chlamydia-infected (5 IFU/cell) HeLa cells were treated with apoptosis inducers (TNF-α (50 ng/ml + 2.5 µg/ml CHX), ActD (1 µM), or STS (1.3 µM); added at 24 hpi) and monitored by time-lapse microscopy until 17 hpt. (a) Selected images from time-lapse movies (movies S1-S4). Asterisks, arrowheads, and arrows indicate examples of inclusions, necrotic cells, and apoptotic cells, respectively. (b) Quantitative assessment of the frequency of apoptosis and necrosis until 17 hpt (mean ± SD, n = 3, ANOVA (DMSO, TNF/CHX, ActD), t-test (STS); unless indicated otherwise, for each viability group significant differences compared to uninfected cells are marked). The category "uninfected" refers to cells in uninfected cultures, whereas the categories "inclusion-free" and "infected" refer to inclusion-free and inclusion-bearing cells in infected cultures. The total number of cells analyzed in each group is indicated in the figure. (c) Comparison of the frequency of dying/dead cells (necrotic + apoptotic) among uninfected and infected cells at different time points post addition of TNF/CHX (mean ± SD, n = 3, ANOVA). d Weak pro-apoptotic stimulation is sufficient to induce necrosis in infected cells. Time-lapse microscopy-based quantitative assessment of cell death (necrotic + apoptotic) in infected (5 IFU/cell) and uninfected cells exposed for 17 h to various concentrations of apoptosis inducers (TNF-α, 50 ng/ml, 1 ng/ml, 0.1 ng/ml (+2 µg/ml CHX); STS, 1 µM, 0.1 µM, 0.01 µM; ActD, 1 µM, 0.1 µM, 0.01 µM). The category "uninfected" refers to cells in uninfected cultures, whereas the category "infected" refers to inclusion-bearing cells in infected cultures (mean ± SD, n = 4 (DMSO, TNF-50, TNF-1), n = 3 (all other groups), ANOVA; nd, not determined). The total number of cells analyzed for each group was ≥145.

Fig. 3 The Chlamydia-mediated shift from apoptosis to necrosis occurs in multiple cell lines and is partially dependent on a bacterial activity. a-b C. trachomatis shifts apoptosis to necrosis in multiple human cell lines. The graphs display early release of LDH (a) and reduced induction of DEVD cleavage (b) from/in infected (10 IFU/cell) cultures treated with pro-apoptotic drugs (STS (1 µM) or TNF-α (50 ng/ml (HeLa, U2OS) or 200 ng/ml (HT29, A2EN)) + 2.5 µg/ml CHX; added at 24 hpi). Culture supernatants and cell lysates were collected/prepared at 7 hpt (HeLa) or 9 hpt (other cell lines) for measurement of LDH activity (a) and DEVD cleavage activity (b), respectively (mean ± SD, n = 3 (DEVD (all cell lines), LDH (U2OS)), n = 4 (LDH (other cell lines)), ANOVA). c-d C. trachomatis blocks the induction of DEVD cleavage activity at a time point preceding necrotic cell death. HeLa cells were treated as described for (a, b). Culture supernatants and cell lysates were collected/prepared at 4 hpt for measurement of DEVD cleavage activity (c) and LDH activity (d), respectively (mean ± SD, n = 3, ANOVA). e Representative images displaying the difference in inclusion size in HeLa cell cultures infected with Chlamydia (10 IFU/cell) for 14 h or 24 h (Hoechst, blue; CellTrace CFSE, white; Slc1 (Chlamydia), yellow; scale bar, 20 µm). f, g Reduced induction of DEVD cleavage activity (f) and enhanced release of LDH (g) in/from HeLa cultures treated with pro-apoptotic drugs at an early stage of infection. Cells were treated with apoptosis inducers (as described for (a, b)) at 14 hpi or 24 hpi (10 IFU/cell). DEVD cleavage activity in cell lysates (f) and LDH activity in culture supernatants (g) were measured at 9 hpt (mean ± SD, n = 3, ANOVA). h Inhibitors of bacterial protein synthesis partially block necrotic death of infected (10 IFU/cell) HeLa cells that were exposed to apoptosis inducers (STS (1 µM) or TNF-α (50 ng/ml + 2.5 µg/ml CHX); added at 24 hpi). Antibiotics (chloramphenicol (1.5 µg/ml), tetracycline (2 µg/ml), and penicillin G (1 U/ml)) were added prior to the addition of apoptosis inducers. LDH activity in culture supernatants was measured at 9 hpt (mean ± SD, n = 6 (no drug, chloramphenicol), n = 3 (other groups), ANOVA). Statistically significant differences marked in Fig. 3 relate to differences (within each treatment group) in relation to uninfected cells (a-d, f, g) or the no drug control (h).
Consistent with these findings, infected CASP8-deficient HeLa cells were protected from TNF/CHX-induced necrotic death (Fig. 5e, f), while uninfected CASP8-deficient cells were protected from TNF/CHX-induced apoptosis (Fig. 5g). Together these data indicate that, unlike the activation of apoptotic effector caspases (Fig. 3b), the early steps in TNF/CHX-induced apoptosis leading to CASP8 activation are not blocked in cells infected with C. trachomatis and participate in TNF/CHX-induced necrosis of infected cells. Consistent with this notion, western blot analysis confirmed that CASP8, but not CASP3, was processed into its active form in infected TNF/CHX-treated cells (Fig. S5).

Apoptosis inducers perturb Chlamydia development

The inhibition of apoptosis by Chlamydia spp. is commonly interpreted as a means to maintain host cell integrity to enable continued bacterial replication [9,10,33]. Yet, our findings suggest that C. trachomatis-infected cells undergo necrosis under pro-apoptotic conditions. We thus tested the effect of apoptosis inducers on the production of infectious EBs, a hallmark of a completed infectious cycle. For this purpose, HeLa cells infected with C. trachomatis were treated with apoptosis inducers at 24 hpi and the number of infectious particles released into the culture supernatant or contained in remaining cells was quantified at different time points post treatment. During normal development, C. trachomatis L2 RBs start to differentiate into infectious EBs at around mid-stage of the infection cycle (about 20-24 hpi), yet host cell integrity is maintained until late stages (about 40-48 hpi) when EBs are eventually released by host cell lysis [6,34]. Indeed, in DMSO-treated control cultures, the overall number of infectious particles produced until 28 hpi was low, but increased continuously thereafter (Fig. 6a). Moreover, until 38 hpi no significant release of infectious particles from infected cells was detected (Fig. 6b). Consistent with the induction of necrotic host cell death and hence a loss of host plasma membrane integrity, STS and TNF/CHX added at 24 hpi induced the release of a small amount of infectious bacterial particles (Fig. 6b). Yet, the premature loss of the replicative niche abolished further production of infectious bacteria (Fig. 6a). Importantly, inhibition of necrotic host cell death with Z-VAD-FMK or Z-IETD-FMK restored normal production of infectious bacteria in the presence of TNF/CHX (Fig. 6c).

Fig. 5 (caption fragment, panels d-g; the beginning is missing in the source): LDH activity in culture supernatants was measured at 9 hpt (mean ± SD, n = 6 (control, Casp3-A/B, Casp4-A/B, Casp8-A/B, Casp9-A/B), n = 3 (other groups), ANOVA) (d). e Western blot analysis displaying the presence or absence of CASP8 in wild-type and CASP8-deficient HeLa cells. The designation C1-C3 refers to three distinct clonal cell populations obtained after selection of transduced cells. f CASP8-deficient infected (10 IFU/cell, 24 h) HeLa cells are protected from TNF/CHX-induced necrosis. Cells were treated with STS (1 µM) or TNF-α (50 ng/ml + 2.5 µg/ml CHX). LDH activity in culture supernatants was measured at 9 hpt (mean ± SD, n = 3, ANOVA). g Uninfected CASP8-deficient HeLa cells are protected from TNF/CHX-induced, but not STS-induced, apoptosis. Cells were treated with STS (1 µM) or TNF-α (50 ng/ml + 2.5 µg/ml CHX). DEVD cleavage activity in cell lysates was measured at 7 hpt and was normalized to the activity detected in DMSO-treated cells (mean ± SD, n = 3, ANOVA). Statistically significant differences marked in Fig. 5 relate to differences in relation to no inhibitor/no depletion controls (a-b, d) or wild-type cells (f-g).

Fig. 6 Quantification of IFU production in C. trachomatis-infected HeLa cells exposed to pro-apoptotic stimuli. Confluent monolayers of HeLa cells were infected with C. trachomatis and treated with apoptosis inducers (STS (1 µM) or TNF-α (50 ng/ml + 2.5 µg/ml CHX)) at 24 hpi. Cell lysates and culture supernatants were prepared/collected at the indicated time points and the number of IFUs was quantified and normalized to the input IFUs used for the initial infection. Displayed data represent total numbers of IFUs (lysates + supernatants) (a) and numbers of IFUs in supernatants (b), respectively (mean ± SD, n = 3, ANOVA; significant differences compared to the DMSO control are displayed for each time point). c Inhibition of CASP8 restores production of infectious progeny in the presence of TNF/CHX. HeLa cells were treated as described for (a, b), yet caspase inhibitors (Z-VAD-FMK (10 µM), Z-IETD-FMK (10 µM)) were added prior to the addition of pro-apoptotic drugs. Displayed data represent IFUs in cell lysates prepared at 24 hpt (mean ± SD, n = 3, ANOVA; significant differences compared to the DMSO control are displayed for each inhibitor group).

Fig. 7 Model of pro-death signaling induced by pro-apoptotic conditions in uninfected cells or cells infected with C. trachomatis. Blue arrows indicate canonical signaling in the extrinsic and intrinsic pathways of apoptosis, as observed in uninfected cells. The gray dashed arrow indicates direct activation of effector caspases and apoptosis by CASP8 in a manner independent of MOMP (which only occurs in certain cell types [44]). Red arrows indicate deviations from canonical signaling observed in cells infected with C. trachomatis. It is currently unknown whether the pathways of TNF/CHX-induced and STS-induced necrosis in infected cells are distinct or converge (for instance at the level of BAX/BAK activation).

Discussion

The anti-apoptotic effect of C. trachomatis was first recognized two decades ago [10] and described as being of "unusual strength and quality" when compared with similar activities exerted by other intracellular bacterial pathogens [35]. Given Chlamydia's obligate intracellular nature [5], it was reasonable to conclude that one of the roles of assuming an anti-apoptotic state is to protect the host cell, the replicative niche, from cytotoxic insults and stress emanating from immune mediators or the infection itself [9,33]. […] naturally occurs at the end of the infection cycle. Indeed, the onset of necrotic death in infected cells that were exposed to pro-apoptotic stimuli occurred with similar kinetics as the manifestation of apoptotic morphologies in uninfected cultures (Fig. 2c), demonstrating that C. trachomatis does not prolong the lifespan of its host cell under these conditions. Consistent with this notion, the production of infectious Chlamydia particles was stunted by pro-apoptotic stimulation (Fig. 6a). Previous studies that described the absence of hallmarks of apoptosis in Chlamydia-infected cells exposed to pro-apoptotic stimuli focused on very early time points after treatment [10,12-14,20]. Cell death is a highly dynamic process. When exposed to a pro-death stimulus, individual cells in cultures die at different times, progress through different stages, and can be lost for analysis as they detach or disintegrate [36]. Thus, short incubation periods may be optimal for the detection of apoptotic hallmarks. However, they do not allow monitoring of the ultimate fate of these cells. While we did not conduct an exhaustive monitoring of all molecular hallmarks of apoptosis, our observation that infection blocked DEVD cleavage activity and externalization of phosphatidylserine (Fig. 3b, c and S2) is consistent with previously described anti-apoptotic activities ascribed to Chlamydia [10-14]. However, live cell imaging enabled us to monitor infected cells responding to pro-apoptotic stimuli over prolonged periods of time (Fig. 2), which uncovered an underappreciated complexity in Chlamydia's ability to modulate programmed cell death. It should be noted that Jungas and coworkers previously noticed that, although Chlamydia-infected cells were protected from STS-induced apoptosis, the drug still caused a reduction in cell numbers in cultures of adherent infected cells [37]. Although the fate of the detached cells was not further characterized, these findings are consistent with our observation that Chlamydia fails to maintain host cell viability. The pathways leading to necrotic cell death in Chlamydia-infected cells are non-canonical. The two major pro-apoptotic stimuli used in this study, STS and TNF/CHX, induce apoptosis via different routes of signaling, the intrinsic and the extrinsic pathway of apoptosis, respectively (Fig. 7). A central step in the intrinsic pathway is mitochondrial outer membrane permeabilization (MOMP) [38], which is mediated by the pro-apoptotic proteins BAX and BAK [39] and leads to the release of cytochrome c and activation of the initiator caspase CASP9 [40]. CASP9 in turn activates the effector caspases (CASP3 and CASP7) that initiate the demolition of the cell [41,42]. In the extrinsic pathway, engagement of death receptors, such as the TNF-α receptor, can lead to the formation of a signaling complex that activates the initiator caspase CASP8 [43]. While CASP8 can activate apoptotic effector caspases directly in some cell types, in most instances cell death induction requires an amplification of pro-death signaling via CASP8-dependent induction of MOMP [44]. When CASP8 is absent or inactive, TNF-α-induced signaling can induce RIPK1-RIPK3-MLKL-dependent canonical necroptosis as an alternative mode of cell death [31]. Because conflicting observations have been reported on the effect of Chlamydia infection on CASP8 activation downstream of death receptor ligation [20,45-47], we initially considered canonical necroptosis as a possible explanation for the death observed in TNF/CHX-treated infected cells. However, we observed that this cell death was not mediated by the RIPK1-RIPK3-MLKL machinery (Fig. 4a-h). Furthermore, we determined that CASP8 activation occurs in infected cells exposed to TNF/CHX (Fig. S5) and is required for the induction of necrotic cell death (Fig. 5). Further work will be required to decipher how CASP8 induces this non-canonical form of necrotic cell death and whether there is convergence between TNF/CHX- and STS-induced necrosis. For instance, both pathways may lead to activation of BAX, and BAX-dependent, caspase-independent modes of cell death have been proposed to play a role in Chlamydia exit at the late stage of infection [7,8,48]. However, this scenario would need to be reconciled with the lack of BAX activation in response to pro-apoptotic stimuli in cells infected with C. trachomatis [11,14,37]. Why does C.
trachomatis employ multiple anti-apoptotic strategies [15-18, 45, 49], if these are insufficient to keep the host cell alive? Chlamydia spp. induce necrotic host cell lysis as a natural mode of egress at the end of the infection cycle [6], although apoptotic morphological features have also been occasionally described in dying infected cells [7,8]. Apoptotic cell death, if occurring sufficiently late in the infection cycle, may be a more favorable exit mechanism for the bacteria if it enables silent spread of infection by avoiding the inflammatory responses often associated with necrosis [48]. However, Chlamydia spp. may not rely exclusively on this mechanism of spread, because they can also exit cells by extrusion [6]. During extrusion, Chlamydia inclusions or parts of them are expelled from infected cells as vesicles that expose phosphatidylserine at their surface and may promote silent spread of infection, because the vesicles are taken up by neighboring cells or phagocytes in a manner analogous to the clearance of apoptotic bodies [6,50]. It is possible that by blocking apoptotic death, the bacteria established necrosis as a second mode of egress to potentiate the number of cells that can be infected by the progeny emerging from a single infected cell. Alternatively, it is also possible that the anti-apoptotic state associated with infections with Chlamydia spp. has no specific benefit for the bacteria, but may be a secondary consequence of Chlamydia's modulation of other host cell processes, such as the reprogramming of host cell metabolism. Altogether, our results suggest that the consequences of C. trachomatis' anti-apoptotic activities on the viability of infected host cells are more nuanced than initially thought and that the role of the modulation of apoptosis in Chlamydia pathogenesis remains to be determined.

Infection with Chlamydia

The majority of experiments described were made using the C. trachomatis wild-type strain L2/434/Bu (ATCC VR-902B). Only in the initial experiments (i.e., the testing of the DEVD cleavage assay (Fig. 1)) did we use a rifampin-resistant variant, described elsewhere [56]. For the preparation of infection stocks, EBs were released from infected Vero cells by H2O-mediated lysis, purified by density gradient centrifugation, and stored at −80°C in SPG buffer [57]. Bacteria were titered (determination of inclusion-forming units (IFUs)) and tested for Mycoplasma contamination as described previously [57]. For infection, cells were seeded in multi-well plates, followed by addition of bacteria (number of IFU/cell as specified), centrifugation (1500 × g, 30 min), and incubation for the indicated periods of time. During the initial testing of the DEVD cleavage assay (Fig. 1), infections were made in suspension, i.e., cells and bacteria were mixed before seeding and centrifugation.

Induction and inhibition of cell death

Apoptosis was induced by replacing the growth medium with medium containing STS, ActD, or human TNF-α added together with CHX (each purchased from Sigma-Aldrich) at the indicated concentrations. Necroptosis was induced by replacing the growth medium with medium containing TSZ (i.e., 20 ng/ml human TNF-α (Sigma-Aldrich), 1 µM Smac mimetic BV6 (Tocris), and 50 µM Z-VAD-FMK (Tocris)). In control wells, DMSO was added instead of pro-death drugs. Treated cells were incubated in the presence of pro-death drugs until the end of the monitoring period (time-lapse microscopy) or the time point of analysis (all other experiments).
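For the infections just described, the amount of EB stock to add per well follows directly from the specified IFU/cell. A minimal sketch of that arithmetic (all numbers are illustrative; the paper does not report stock titers or cell numbers per well):

```python
def inoculum_volume_ul(cells_per_well, moi_ifu_per_cell, stock_titer_ifu_per_ml):
    """Volume of titered bacterial stock (µl) needed to reach the desired IFU/cell."""
    ifu_needed = cells_per_well * moi_ifu_per_cell
    return 1000.0 * ifu_needed / stock_titer_ifu_per_ml

# e.g., 1e5 cells/well infected at 10 IFU/cell from a 5e8 IFU/ml stock
print(f"{inoculum_volume_ul(1e5, 10, 5e8):.1f} µl per well")  # 2.0 µl
```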
When indicated, cell death inhibitors (necrosulfonamide (Tocris), necrostatin-1 (Tocris), GSK'872 (Merck), Z-VAD-FMK (Tocris), or inhibitors from the caspase inhibitor set IV (Enzo)) or antibiotics (chloramphenicol, tetracycline, and penicillin G (each purchased from Sigma-Aldrich)) were added at the indicated concentrations prior to cell death induction.

Live cell phase contrast and time-lapse microscopy

Phase contrast images not derived from time-lapse microscopy experiments were made on an EVOS FL microscope (Thermo-Fisher-Scientific). Time-lapse microscopy experiments were conducted using two different instrumental setups. For the initial experiments testing single concentrations of pro-apoptotic drugs, cells were seeded and infected (5 IFU/cell) in glass-bottom 6-well plates (In Vitro Scientific). At 22 hpi the medium was replaced with HEPES (10 mM)-buffered DMEM (without phenol red). At 24 hpi apoptosis inducers were added to the respective wells, cells were placed on an atmospherically controlled stage (37°C, 5% CO2) and imaged at 5-10 min intervals until 17 hpt on an inverted Axio Observer.Z1 microscope (Zeiss). For later experiments testing variable concentrations of pro-apoptotic drugs, cells were seeded and infected (5 IFU/cell) in black clear-bottom 96-well plates (Greiner Bio One). At 24 hpi, apoptosis inducers were added to the respective wells and cells were imaged at 12-15 min intervals until 17 hpt on an ImageXpress Micro XL system (Molecular Devices) (37°C, 5% CO2). The software used to operate these systems (Metamorph 7.8 (Molecular Devices) and MetaXpress 5.3.0.4 (Molecular Devices), respectively) was also used to generate time-lapse movies. The fate of individual cells was determined by manual inspection of these movies. Cells that left the microscopic field during the period of imaging were excluded from the analysis. Daughters of dividing cells were considered as separate individual cells. The manual inspection was conducted by a researcher knowledgeable about the treatment group. However, a reanalysis of a randomly selected set of movies by a person blinded to treatment group and expected outcome gave virtually identical results.

Annexin V/PI assay

The AlexaFluor488 Annexin V/Dead Cell Apoptosis Kit (Thermo-Fisher-Scientific) was adapted for use with adherent cells. Briefly, to each well in a 96-well plate containing treated or control cells in 50 µl growth medium, 50 µl of a 2x staining solution (PI (2 µg/ml) and Annexin V-AlexaFluor488 (1:10) in 2x binding buffer) were added. After 15 min of incubation at room temperature, 100 µl of 1x binding buffer containing 2 µg/ml Hoechst 33342 were added per well and cells were imaged on a Cellomics ArrayScan VTI HCS imaging system (Thermo-Fisher-Scientific). Fluorescence intensity thresholds for the distinction between PI- and Annexin V-positive or negative cells were set so that about 95% of all cells in untreated uninfected control wells were classified as double-negative (viable) cells.

LDH release and DEVD cleavage

Quantification of LDH release as an indicator of host cell lysis was conducted using the colorimetric in vitro toxicology assay kit from Sigma-Aldrich, as described recently [57]. LDH activity was normalized to the activity observed in total cell lysates, i.e., the maximum activity expected to be observed if 100% of the cells were to lyse.
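A minimal sketch of the LDH normalization just described, expressing supernatant activity as a percentage of the total-lysate maximum (the absorbance values and the background-subtraction scheme are illustrative assumptions; exact handling depends on the kit):

```python
def percent_lysis(a490_sample, a490_medium_blank, a490_total_lysis):
    """Normalize supernatant LDH activity to the 100%-lysis control."""
    signal = a490_sample - a490_medium_blank          # background-corrected sample
    maximum = a490_total_lysis - a490_medium_blank    # background-corrected maximum
    return 100.0 * signal / maximum

print(f"{percent_lysis(0.62, 0.08, 1.35):.1f} % of maximal release")
```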
Quantification of DEVD cleavage activity in cell lysates as an indicator of the activity of apoptotic effector caspases (CASP3/CASP7) was conducted using the fluorimetric CASP3 assay kit from Sigma-Aldrich, as described recently [57]. Fluorescence intensity values were normalized to the mean fluorescence intensity observed for DMSO-treated uninfected cells. To display the correlation between infection and inhibition of DEVD cleavage, DEVD cleavage activity detected after infection with different doses of bacteria was normalized to the activity detected in STS-treated uninfected cells. Absorbance and fluorescence readings were made on an EnSpire 2300 (PerkinElmer), SpectraMax i3 (Molecular Devices), or Tecan infinite 200 (Tecan) plate reader.

Depletion of caspases with siRNAs

HeLa cells were transfected with siRNAs according to the transfection guidelines provided by Dharmacon. DharmaFECT-1 (Dharmacon) was used as the transfection reagent; siRNAs were used at a concentration of 25 nM. To reduce toxicity, the transfection medium containing siRNAs and reagent was removed after 6 h of incubation and replaced with fresh growth medium. Cells were incubated for an additional 12 h prior to infection and further processing. In control transfections, siRNA-free siRNA buffer (Dharmacon) was added to the transfection medium instead of siRNAs. For each target, two different ON-TARGETplus siRNAs (purchased from Dharmacon) were used, including siRNAs targeting human CASP1 […].

Determination of infectious progeny

To quantify infectious progeny formation, 96-well plates with confluent HeLa cell monolayers were infected with a low dose of C. trachomatis (<50% infected cells) and treated with apoptosis inducers (STS (1 µM), TNF-α (50 ng/ml + 2.5 µg/ml CHX)) at 24 hpi. At various time points post treatment, culture supernatants and cell lysates (prepared by H2O-based cell lysis [57]) were collected. Infectious particles in the initial inoculum (input) and collected samples (output) were quantified by infecting confluent Vero cell monolayers with serial dilutions, followed by fluorescence microscopic determination of inclusion numbers at 28 hpi (as described previously [57]). From the IFUs detected in the input and the output, the number of infectious particles formed per infected cell could be determined.
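A minimal sketch of the titration arithmetic behind this progeny assay: inclusion counts on Vero monolayers at a known dilution are back-calculated to total IFUs in the harvest and normalized to the input (all numbers are illustrative, not from the paper):

```python
def total_ifu(inclusions_counted, dilution_factor, fraction_of_well_counted):
    """Back-calculate the IFUs in a harvested sample from inclusion counts."""
    return inclusions_counted * dilution_factor / fraction_of_well_counted

output_ifu = total_ifu(inclusions_counted=120, dilution_factor=1e4,
                       fraction_of_well_counted=0.05)
input_ifu = 2e5  # IFUs used for the initial infection (assumed)
print(f"{output_ifu / input_ifu:.1f} infectious particles per input IFU")
```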
Swelling-Activated Ca2+ Channels Trigger Ca2+ Signals in Merkel Cells Merkel cell-neurite complexes are highly sensitive touch receptors comprising epidermal Merkel cells and sensory afferents. Based on morphological and molecular studies, Merkel cells are proposed to be mechanosensory cells that signal afferents via neurotransmission; however, functional studies testing this hypothesis in intact skin have produced conflicting results. To test this model in a simplified system, we asked whether purified Merkel cells are directly activated by mechanical stimulation. Cell shape was manipulated with anisotonic solution changes and responses were monitored by Ca2+ imaging with fura-2. We found that hypotonic-induced cell swelling, but not hypertonic solutions, triggered cytoplasmic Ca2+ transients. Several lines of evidence indicate that these signals arise from swelling-activated Ca2+-permeable ion channels. First, transients were reversibly abolished by chelating extracellular Ca2+, demonstrating a requirement for Ca2+ influx across the plasma membrane. Second, Ca2+ transients were initially observed near the plasma membrane in cytoplasmic processes. Third, voltage-activated Ca2+ channel (VACC) antagonists reduced transients by half, suggesting that swelling-activated channels depolarize plasma membranes to activate VACCs. Finally, emptying internal Ca2+ stores attenuated transients by 80%, suggesting that Ca2+ release from stores augments swelling-activated Ca2+ signals. To identify candidate mechanotransduction channels, we used RT-PCR to amplify ion-channel transcripts whose pharmacological profiles matched those of hypotonic-evoked Ca2+ signals in Merkel cells. We found 11 amplicons, including PKD1, PKD2, and TRPC1, channels previously implicated in mechanotransduction in other cells. Collectively, these results directly demonstrate that Merkel cells are activated by hypotonic-evoked swelling, identify cellular signaling mechanisms that mediate these responses, and support the hypothesis that Merkel cells contribute to touch reception in the Merkel cell-neurite complex.

Introduction

The mechanical senses of touch, proprioception, hearing and balance are initiated by cells that transduce force into electrical signals. Though the mechanotransduction events underlying audition and balance have been extensively investigated [1,2], little is known about the signals that initiate the somatic mechanical senses of proprioception and touch [3,4]. Progress has been hampered by the anatomy of the somatosensory system. Mechanotransduction occurs in peripheral afferent terminals, which are heterogeneous and widely distributed throughout the body, making most somatosensory mechanoreceptors difficult to study directly. Merkel cell-neurite complexes, which are cutaneous mechanoreceptors critical for fine shape and texture discrimination [4], have several characteristics that overcome these experimental challenges. Unlike those of most cutaneous touch receptors, the response patterns of these receptors have been identified in semi-intact recording preparations: when the skin is displaced they generate the slowly adapting type I (SAI) response [5,6]. Furthermore, in transgenic Math1/nGFP mice, Merkel cells are specifically labeled with green fluorescent protein (GFP) [7], permitting enrichment of Merkel cells from dissociated skin using fluorescence-activated cell sorting (FACS) [8].
Since their discovery, epidermal Merkel cells have been proposed to be mechanosensory cells that transduce mechanical stimuli and then transmit sensory information to underlying sensory afferents [9]. This hypothesis stems from their location adjacent to sensory afferent terminals in highly touch-sensitive areas of skin. This anatomy resembles that of hair cells, the force-sensitive epithelial cells of the inner ear that make synaptic contacts with afferent terminals. Moreover, Merkel cells express presynaptic molecules essential for synaptic transmission [8,10,11]. Finally, Merkel cells have functional VACCs [8,12], which trigger vesicle release at neuronal synapses. Despite this evidence, physiological experiments have failed to conclusively determine whether Merkel cells are required for the SAI response [13]. Eliminating Merkel cells by laser ablation or genetic deficiency abolished the SAI response in some studies [14,15] but not others [16,17]. Previous reports exploring Merkel-cell mechanosensitivity in vivo and in vitro have been likewise inconclusive [18,19]. To determine if Merkel cells are mechanosensory cells, we asked if purified Merkel cells directly respond to mechanical stimuli in vitro. When the skin is displaced, forces must alter Merkel-cell shape, though the exact nature of this deformation is not known. To induce such a shape change in vitro, we chose to stimulate Merkel cells with osmotic stimuli because they permit simultaneous stimulation of many cells. Also, this is a robust stimulus that activates force-sensitive ion channels [20-22] as well as mechanosensitive cells such as hair cells and somatosensory neurons [23,24]. Moreover, MscL, the mechanosensitive channel whose structure and function are best understood, is directly activated by hypotonic stimuli in native cells [25]. Our findings demonstrate that hypotonic stimuli cause Ca2+ influx in purified Merkel cells, and indicate that this Ca2+ influx is initiated by swelling-activated ion channels. We used RT-PCR and pharmacology to identify candidate ion channels that may mediate this response. Our results demonstrate that Merkel cells are directly activated by a mechanical stimulus, which supports the hypothesis that they function as mechanoreceptors in Merkel cell-neurite complexes.

Cell preparation

All animal research was conducted according to protocols approved by the Institutional Animal Care and Use Committee (IACUC) of Baylor College of Medicine (BCM) and the University of California, San Francisco (UCSF). Merkel cells were dissociated from the skin of postnatal day 3-6 (P3-P6) Math1/nGFP [7,8] mice after euthanasia by decapitation with sharp scissors. The skin from the body and face was dissected and washed in 10% Hibiclens (Regent Medical) and Hank's balanced salt solution (HBSS) supplemented with penicillin, streptomycin and amphotericin B. Tissue was cut into 1-cm² pieces and incubated for 1 h at 23°C in dispase (BD Biosciences) suspended to 25 U/ml in Ca2+- and Mg2+-free HBSS. The epidermis was peeled from the dermis with sharp forceps and incubated in 0.1% trypsin and 1 mM EDTA-4Na solution (Gibco) for 15 min with periodic vortexing. Trypsin was neutralized with fetal bovine serum (FBS) and cells were triturated with a 5-ml serological pipette. Cells were filtered with 70- and 40-µm cell strainers, spun at 400 × g for 12-15 min and then resuspended in keratinocyte media (CNT-02, Chemicon) with 10% FBS.
GFP-positive Merkel cells were enriched to approximately 85% by FACS into a landing media containing 50% FBS and 50% keratinocyte media (CNT-02, Chemicon). Merkel cells were spotted onto either collagen-coated coverslips for Ca2+ imaging or collagen-coated eight-well chamber slides (LAB-TEK) for cell-volume analysis and grown with 5% CO2 at 37°C in antibiotic-free keratinocyte media (CNT-02, Chemicon).

Live-cell Ca2+ imaging

After 2 days in culture, Merkel cells were loaded for 20 min with 2 µM fura-2 acetoxymethyl ester (Molecular Probes) and 0.02% pluronic F-127 (Molecular Probes) in a modified Ringer's solution containing (in mM): 110 NaCl, 5 KCl, 10 HEPES (pH 7.4), 10 D-glucose, 2 MgCl2, 2 CaCl2 and 30 mannitol (290 mmol·kg−1). Cells were allowed to digest the ester bonds for 30 min and were imaged in modified Ringer's solution. Twenty percent hypotonic solutions contained all the same components as modified Ringer's solution except mannitol. At these concentrations, mannitol did not fluoresce significantly in the fura-2 fluorescence spectra. To make 30% hypertonic solution, modified Ringer's solution was supplemented with an additional 45 […]. Cells were viewed with a BX61WI epifluorescence upright microscope (Olympus) equipped with XLUMPlanFl 20×, 0.95 NA and 60×, 0.9 NA dipping objective lenses. Cells were illuminated with a 300-W xenon lamp equipped with a high-speed excitation filter wheel (Sutter). Emission was captured with a cooled CCD camera (Hamamatsu). Data were acquired with Metafluor software of the Meta Imaging series (version 6.4.7, Molecular Devices), and analyzed with custom algorithms written in Igor Pro (version 5.03, Wavemetrics). The Ca2+ dissociation constant of fura-2 in Merkel cells was determined by performing a three-point calibration in situ with solutions containing (in mM): 135 KCl, 2 MgCl2, 10 HEPES, and one of the following: 10 EGTA (for Rmin), 10 CaCl2 (for Rmax), and 8.5 EGTA plus 1.5 CaCl2 (for Rmid; effective free [Ca2+] = 0.9 µM as measured in vitro by fura-2 imaging). Merkel cells were rendered Ca2+-permeable by ionomycin (1 µM) and Triton X-100 (0.01-0.015%). Cellular respiration was inhibited with 2 mM 2-deoxy-D-glucose to block active pumps. Only cells with stable F340/F380 and fura-2 concentrations were considered to be clamped at extracellular [Ca2+] and included in the analysis. Resting [Ca2+] ranged from 40-150 nM in healthy Merkel cells, consistent with resting [Ca2+] in sensory cells [26,27].

Pharmacology

Unless noted, reagents were purchased from Sigma. To assess the pharmacological sensitivity of hypotonic-evoked signals, Merkel cells were bathed in modified Ringer's solution containing either 10 µM ruthenium red or 50 µM amiloride. At higher concentrations, the fluorescence of amiloride obscured fura-2 fluorescence signals. L-, P/Q- and N-type VACCs were blocked by a mixture of 10 µM nimodipine (Tocris) and 10 µM ω-conotoxin MVIIC (Tocris) [8]. Internal Ca2+ stores were depleted by application of 1 µM thapsigargin (Tocris) followed by repeated high-K+ pulses to activate store release.

Volume imaging and analysis

Merkel cells were cultured for two days in an eight-well coverglass chamber (Lab-Tek). Cells were imaged in the modified Ringer's solution described above containing 0.1-µm fluorescent microspheres (TetraSpeck, Invitrogen). Microspheres were allowed to settle onto the surfaces of Merkel cells and coverslips for 20-30 min.
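The three-point calibration described above supports converting F340/F380 ratios into free [Ca2+] via the standard Grynkiewicz relation; the mid-point at a known free [Ca2+] fixes the effective K (which absorbs the Kd and the fluorescence scaling factor). A minimal sketch with illustrative ratio values (the paper reports the calibration solutions, not these numbers):

```python
def effective_k(ca_mid_nM, r_min, r_mid, r_max):
    """Effective K (Kd·beta) from the mid-point of a three-point calibration."""
    return ca_mid_nM * (r_max - r_mid) / (r_mid - r_min)

def free_ca_nM(r, r_min, r_max, k_eff_nM):
    """Free [Ca2+] from an F340/F380 ratio: [Ca2+] = K_eff·(R − Rmin)/(Rmax − R)."""
    return k_eff_nM * (r - r_min) / (r_max - r)

# Illustrative calibration: Rmid measured at 900 nM free Ca2+
k_eff = effective_k(ca_mid_nM=900.0, r_min=0.3, r_mid=2.0, r_max=8.0)
print(f"{free_ca_nM(0.45, 0.3, 8.0, k_eff):.0f} nM")  # a resting-level example
```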
Merkel cells were imaged with an LSM 5-LIVE imaging system with a Plan-Apochromat 63×, 1.4 NA oil-immersion objective lens (Zeiss). Microsphere fluorescence was excited at 532 nm, and epifluorescence emission passed through a 535-nm dichroic beam splitter and a 550-nm long-pass filter. Stacks of confocal sections were imaged once every 7 s. Microsphere locations were determined with the "spot" identifying utility in Imaris 5.0.1 software (Bitplane AG). Volume calculations and graphs were generated in custom programs written for MATLAB (Mathworks).

Confocal imaging of Merkel-cell morphology

Cells were imaged in the modified Ringer's solution described above containing 1 µM BODIPY FL C5-ceramide (Molecular Probes) and 0.02% pluronic F-127 (Molecular Probes). Fluorescent sphingolipids were allowed to diffuse into cell membranes for 5 min and were then imaged with the system described for volume imaging.

RT-PCR

GFP+ Merkel cells were purified from P3-P6 mice using FACS with strict gating conditions to achieve ≥95% purity. Cells from two to six mice were used for each sort. Total RNA from 2-10 × 10^4 GFP+ cells was isolated using commercially available reagents (Qiagen RNeasy kit) and DNase treated according to the manufacturer's instructions to remove contaminating genomic DNA. First-strand cDNA was synthesized using oligo(dT)12-18 primers at 42°C for 2 h with SuperScript III (Invitrogen). PCR products were amplified with touchdown PCR using a PTC-200 Peltier thermal cycler (MJ Research); cDNA from ~1000 cells was used for each reaction. To evaluate reproducibility, each primer pair was tested on two to four independent biological samples, that is, cDNA produced from cells purified in separate experiments. We considered amplicons robust if they were present in at least half of the biological samples tested. In all experiments, control PCRs lacking cDNA template were performed to confirm the absence of contamination, and primer performance was verified with positive control cDNA template from brain, skin, or a mixture of liver, heart, spleen and kidney tissue. With the exception of PKD-REJ, a gene predicted to have no introns (Entrez Gene accession number NM_011105), primer pairs were designed to span introns to ensure that amplicons were not derived from genomic DNA.

Hypotonic stimuli evoke cytoplasmic Ca2+ signals in Merkel cells

In cells lacking cell walls, hypotonic extracellular solutions cause water flux across the cell membrane to induce cell swelling. If Merkel cells express ion channels activated by membrane tension, we reasoned that cell swelling might increase membrane tension to open such channels. The resulting membrane depolarization could then activate VACCs to allow Ca2+ influx. To determine if Merkel cells respond to changes in osmolality, we monitored intracellular Ca2+ with the ratiometric, fluorescent indicator fura-2. In epidermal-cell suspensions, Merkel cells represent <0.2% of dissociated cells. Using FACS, we enriched GFP+ Merkel cells to approximately 85%; the remaining 15% consisted predominantly of GFP-negative keratinocytes. Cells were subjected to Ringer's solutions of varying osmolality. Most Merkel cells showed an increase in free [Ca2+] in response to 20% hypotonic stimuli (65 ± 3% of cells, N = 19 experiments, 10-33 cells/experiment, Fig. 1A). […] To characterize the dose-response of hypotonic-triggered Ca2+ increases in Merkel cells, we challenged Merkel cells with solutions of progressively decreasing osmolality.
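The volume estimates described in the Methods above were produced with custom MATLAB programs; the following is a minimal Python sketch of the same general idea, under stated assumptions (coverslip at z = 0, bead center coordinates in µm): triangulate the bead positions in the xy-plane and sum the prisms between the bead-sampled surface and the coverslip. The synthetic dome-shaped "cell" is purely illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def volume_under_surface(points_xyz):
    """Approximate the volume between a bead-sampled surface and the coverslip (z = 0)."""
    pts = np.asarray(points_xyz, dtype=float)
    tri = Delaunay(pts[:, :2])                 # triangulate in the xy-plane
    total = 0.0
    for simplex in tri.simplices:
        (x1, y1), (x2, y2), (x3, y3) = pts[simplex, :2]
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        total += area * pts[simplex, 2].mean() # prism volume: area × mean height
    return total

rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(60, 2))
z = np.clip(6.0 - 0.15 * (xy ** 2).sum(axis=1), 0, None)  # dome-shaped test surface
print(f"~{volume_under_surface(np.column_stack([xy, z])):.0f} µm^3")
```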
Ringer's solutions of decreasing osmolality induced larger peak Ca2+ transients in individual Merkel cells (Fig. 1E). Merkel cells with the largest Ca2+ transients at 10% hypotonic Ringer's solution also had the largest Ca2+ increase at 20 or 30% hypotonic Ringer's solution. In addition, solutions of progressively lower osmotic strength elicited responses in a greater proportion of Merkel cells than mildly hypotonic solutions (261 mmol·kg−1 recruited 29% of cells, 232 mmol·kg−1 recruited 52% of cells, 203 mmol·kg−1 recruited 71% of cells, Fig. 1F). Our data indicate that Merkel cells respond to relatively mild 10% changes in osmolality; yet, even 30% hypotonic solutions do not appear to saturate Merkel cells' responses. Hypotonic solutions induce cell swelling that activates stretch-sensitive channels in bacteria [28], so we asked whether similar hypotonic-induced swelling occurred in Merkel cells. To ascertain if hypotonic solutions altered Merkel-cell volume, we monitored cell shape in three dimensions with fluorescent microspheres attached to the plasmalemma of Merkel cells while perfusing cells with a 20% hypotonic bath solution. Microspheres settled onto Merkel cells and the surrounding coverslip within 30 min of bath application and remained tightly coupled during solution changes (Fig. 2A, movie S1). Microspheres were imaged with high-speed confocal microscopy, and their positions were used to model the location of Merkel-cell surfaces in relation to the coverslip. By integrating the volume between reconstructed cell surfaces and the coverslip, we estimated that Merkel cells' volume in isotonic Ringer's solution was 334 ± 39 µm³ (mean ± SD, N = 8). To determine if Merkel cells swelled in response to a hypotonic stimulus, we imaged Merkel cells in time series while perfusing with 20% hypotonic Ringer's solution (Fig. 2B). In this condition, Merkel-cell volume was significantly higher (358 ± 43 µm³, mean ± SD, N = 8, p < 0.001, paired Student's t test), which represents an average volume increase of 7.3 ± 2.9%. Merkel cells began swelling within 7 s of the onset of hypotonic perfusion, which was the temporal resolution of the three-dimensional imaging. Merkel cells remained enlarged during 120 s of hypotonic stimulation and relaxed to their original volume after perfusion of isotonic Ringer's solution. One Merkel cell subjected to prolonged hypotonic stimulation showed volume decreases in ~300 s.

Hypotonic-induced Ca2+ transients are concentrated in processes

Mechanosensitive ion channels are often located within specialized cellular processes that are thought to leverage forces to the channels. Hair cells have mechanoelectrical transduction channels near the tips of modified microvilli called stereocilia [29], and kidney cells detect fluid flow with mechanosensitive channels located in cilia [30]. Similarly, Merkel cells in vivo have actin-filled processes that penetrate overlying keratinocytes [5,31]. To determine if Merkel cells extend cytoplasmic processes in vitro, we stained Merkel cells with fluorescent sphingolipids to visualize membrane morphology (Fig. 2C). We found that most cultured Merkel cells have processes arranged in a branch-like pattern, with numerous fine protrusions (1-8 µm in length) at the terminals of large processes (2-15 µm in length). Although the larger cytoplasmic processes were visible in fura-2 fluorescence images, fine protrusions were not (Figs. 1, 3).
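A minimal sketch of the paired comparison of isotonic versus 20% hypotonic volumes reported above (the per-cell values below are illustrative, not the measured data):

```python
import numpy as np
from scipy import stats

# Illustrative per-cell volumes (µm^3, N = 8), paired isotonic/hypotonic measurements
iso = np.array([300, 320, 335, 340, 355, 310, 380, 332], dtype=float)
hypo = iso * (1 + np.array([0.06, 0.09, 0.05, 0.08, 0.07, 0.10, 0.04, 0.09]))

t, p = stats.ttest_rel(hypo, iso)            # paired Student's t test
swell = 100 * (hypo - iso) / iso             # percent volume increase per cell
print(f"mean swelling {swell.mean():.1f} ± {swell.std(ddof=1):.1f} %, p = {p:.2g}")
```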
During hypotonic stimuli, elevated [Ca2+] was evident in Merkel-cell cytoplasmic processes and around nuclei (Fig. 3A, B, movie S2). In cytoplasmic processes, [Ca2+] increased first in regions adjacent to the plasmalemma and then in regions located deeper within the cytoplasm (Fig. 3C, D). […] (Fig. 4A, E), which recovered upon reintroduction of external Ca2+. These data indicate that extracellular Ca2+ influx is required for hypotonic-triggered Ca2+ transients. To ascertain whether VACCs also contribute to hypotonic-induced increases in free [Ca2+], we blocked these channels with a cocktail containing 10 µM conotoxin MVIIC to block N- and P/Q-type Ca2+ channels and 10 µM nimodipine to block L-type Ca2+ channels [8]. The efficacy of this cocktail was tested by depolarizing Merkel cells with high-K+ Ringer's solution (Fig. 4B). The blocking cocktail inhibited peak high-K+-induced Ca2+ transients by 96 ± 1% (mean ± SEM, N = 4 experiments). By contrast, the VACC-inhibitor cocktail only curtailed peak hypotonic-induced Ca2+ transients by 51 ± 13% (mean ± SEM, N = 4 experiments). Thus, in the presence of VACC blockers, 60% of Merkel cells had larger hypotonic-induced transients than high-K+-induced transients (Fig. 4B). This partial inhibition of hypotonic-evoked Ca2+ signals by VACC blockers indicates that VACCs contribute to, but are not the sole source of, hypotonic-induced Ca2+ influx across the plasma membrane. To determine whether internal Ca2+ stores add to the hypotonic-induced Ca2+ increases, we designed an experimental protocol to empty internal Ca2+ stores (Fig. 4C, D). We blocked Ca2+ reuptake into internal stores with 1 µM thapsigargin, which inhibits the sarco/endoplasmic reticulum Ca2+-ATPase. As expected, Merkel cells displayed an increase in cytoplasmic Ca2+ upon thapsigargin treatment (data not shown and Piskorowski et al., submitted). To ensure that Ca2+ stores were depleted after thapsigargin treatment, we depolarized cells with high-K+ Ringer's solution to activate Ca2+-induced Ca2+ release. Store depletion was verified at the end of experiments by chelating extracellular Ca2+ with EGTA and permeabilizing cell membranes with the Ca2+ ionophore ionomycin. In the absence of thapsigargin, ionomycin elicited cytoplasmic Ca2+ transients (Fig. 4D), demonstrating that Merkel cells have detectable Ca2+ stores. By contrast, ionomycin-induced transients were reduced by 92 ± 1% in cells pretreated with thapsigargin (Fig. 4C; mean ± SEM, N = 31 cells). Pre-treatment with thapsigargin also reduced Ca2+ transients evoked by hypotonic stimuli, by 80 ± 3% (mean ± SEM, N = 4 experiments). Together, these data demonstrate that the amplitudes of hypotonic-induced transients in the presence of thapsigargin and VACC antagonists, though reduced, were significantly larger than those in the presence of 10 mM EGTA (p < 0.05, paired Student's t test, N = 3-4 experiments, Fig. 4E). These results indicate that internal Ca2+ stores augment Ca2+ influx induced by hypotonic solutions.

Pharmacology of hypotonic-evoked Ca2+ signals

Hypotonic-induced extracellular Ca2+ influx implies the presence of a swelling-activated ion channel in the plasma membrane but does not speak to its identity. The most obvious candidate is TRPV4 (GenBank accession number NM_022017), which is expressed in Merkel cells [32] and is activated by hypotonic solutions when expressed in human embryonic kidney (HEK) cells [33,34].
To determine if TRPV4 is required for hypotonic activation, we analyzed hypotonic-induced Ca2+ transients in Merkel cells from TRPV4-deficient mice (Fig. 5). The magnitude and time course of Ca2+ transients in Merkel cells from TRPV4−/− mice were indistinguishable from those of heterozygous littermate controls (Fig. 5) and from wild-type responses. Thus, TRPV4 is unlikely to mediate hypotonic-induced Ca2+ influx in Merkel cells. We next broadened our search for the identity of swelling-activated channels in Merkel cells. Two ion-channel families have been implicated in mechanotransduction in mammals: transient receptor potential (TRP) channels and Degenerin/Epithelial Na+ channels (DEG/ENaC) [35]. Ruthenium red is a broad-spectrum blocker of many channels, including TRPV channels. We found that 10 µM ruthenium red inhibited peak hypotonic-induced Ca2+ transients by 71 ± 8% (mean ± SEM, N = 2 experiments, 9-10 cells/experiment); however, ruthenium red inhibited high-K+-induced Ca2+ transients to the same extent (68 ± 12%). Because ruthenium red did not fully block hypotonic-induced Ca2+ transients, we sought to determine if hypotonic-evoked Ca2+ signals were sensitive to the DEG/ENaC antagonist amiloride. Amiloride (50 µM) did not inhibit the hypotonic-induced Ca2+ transient (28 ± 13%, N = 2 experiments, 20-27 cells/experiment).

Fig. 4 (caption fragment): Hypotonic responses of Merkel cells exposed to these compounds were normalized to control hypotonic responses. Error bars indicate SEM. Asterisks denote statistically significant differences between EGTA- and VACC- or thapsigargin-treated responses (p ≤ 0.04, paired Student's t test). The un-normalized EGTA-, VACC- and thapsigargin-treated responses were significantly different from their respective controls (p ≤ 0.05, Student's t test). doi:10.1371/journal.pone.0001750.g004

Merkel cells express TRP channels

Since amiloride did not inhibit the hypotonic-evoked Ca2+ increases, we used RT-PCR to screen for amiloride-insensitive non-selective cation channels expressed in Merkel cells. We focused on channels from the TRP family because many of these channels fit this profile and because they function in diverse modes of sensory transduction. Primer efficacy was tested against cDNA derived from brain, skin, or a mixture of liver, skin, heart, spleen and kidney (Fig. 6). Merkel-cell cDNA yielded robust amplicons for six TRP channels and occasional amplicons for an additional five channels (Table 1). Notably, we detected amplicons for TRPC1 (GenBank accession number NM_011643), PKD1 (NM_013630) and PKD2 (NM_008861), channels previously implicated in mechanotransduction in other cell types [36,37].

Discussion

This study demonstrates that dissociated Merkel cells are directly activated by hypotonic-evoked cell swelling, and introduces a robust in vitro assay for analyzing the molecular mechanisms that underlie swelling-evoked signals. Based on our findings, we propose that swelling triggers Ca2+ entry through an as yet unknown cation channel; the resultant depolarization activates VACCs, and together these two sources of Ca2+ influx activate Ca2+ release from internal stores (Fig. 7). We identified 11 TRP channels expressed in Merkel cells, seven of which have pharmacological profiles matching the hypotonic-evoked Ca2+ responses we observed. Of these, PKD1 and PKD2 are promising candidates because they have been previously implicated in mechanotransduction in cilia of kidney cells [36].
Our data support a model in which skin indentation applies force to Merkel cells, whose mechanosensitive channels allow Ca2+ influx. This Ca2+ influx is augmented by VACCs and Ca2+ release from internal stores. These Ca2+ increases could trigger synaptic signaling to the underlying sensory afferent.

Merkel cells express hypotonic-activated ion channels

Several lines of evidence indicate that Ca2+-permeable ion channels generate hypotonic-induced Ca2+ transients in Merkel cells. First, the requirement for extracellular Ca2+ suggests Ca2+ ions enter the cell across the plasma membrane. Second, blocking VACCs or emptying intracellular stores curtails, but does not eliminate, hypotonic-induced Ca2+ increases, suggesting other sources of Ca2+ entry (Fig. 7). Although it is formally possible that a hypotonic-activated G-protein-coupled receptor could induce membrane depolarization and Ca2+ release from stores, such a hypotonic-activated receptor would have to be inactivated by extracellular EGTA to explain our findings. We know of no receptors that match these requirements. Thus, the most parsimonious interpretation of our data is that hypotonic solutions induce Merkel cells to swell, causing increased membrane tension that activates Ca2+-permeable ion channels (Fig. 7).

Does osmosensitivity imply that Merkel cells are mechanosensory cells?

Most cells have volume-regulating mechanisms that permit adaptation to osmotic changes in their environment. In addition, a variety of excitable and non-excitable cells display detectable Ca2+ increases in anisotonic extracellular solutions [38-40]. Two lines of evidence suggest hypotonic solutions might activate mechanotransduction machinery rather than ubiquitous volume-regulating pathways. First, in some non-excitable cells, regulatory volume decrease is preceded by hypotonic-evoked Ca2+ transients with 1-5 min latencies followed by 2-10 min times to peak. By contrast, a subset of sensory cells isolated from the trigeminal nucleus has robust hypotonic-evoked responses that develop within seconds [24]. These fast-responding neurons have been proposed to be a mechanosensitive subset of trigeminal neurons. We found that hypotonic solutions induced a Ca2+ signal in Merkel cells similar to this rapidly activating neuronal population. Furthermore, these Ca2+ signals were not temporally correlated with decreases in cell volume, as would be expected for regulatory volume decreases. The latency of hypotonic-induced Ca2+ influx in Merkel cells in vitro is ~11 s, which is much longer than the 200 ms latency of the SAI response [41]. One explanation for this difference is that distinct molecular mechanisms may transduce hypotonic stimuli in vitro and touch in vivo. Alternatively, it is possible that the same mechanotransduction molecules are differentially activated in these two contexts. For example, Merkel cells may respond rapidly to touch-evoked pressure, whereas osmotic stimuli might take longer to develop sufficient membrane distortion to activate channels. Although our volumetric data indicate that Merkel cells begin to swell within 7 s of hypotonic-solution perfusion, it may take the observed 11 s to generate sufficient membrane tension to activate mechanotransduction channels.

Fig. 6 (caption fragment): […] Table 1. Bars at the base of the figure mark lanes with products amplified from GFP+ Merkel cells or control cDNA, respectively. Control cDNA was derived from brain or skin. doi:10.1371/journal.pone.0001750.g006
Alternatively, optimal gating of the Merkel cells' transduction channels in vivo may require extracellular linkages that are not present in culture. Extracellular links are required to transmit force to mechanotransduction channels in hair cells [35]. Moreover, mutations in specific extracellular molecules disrupt touch responses in Caenorhabditis elegans [42]. We postulate that hypotonic-induced membrane tension constitutes a global mechanical stimulus that is sufficient to activate force-transducing machinery in Merkel cells, but is less efficient than touch-evoked channel gating in vivo. The lack of extracellular linkages might also explain why direct touch failed to generate membrane currents in isolated Merkel cells in a previous report [12]. Finally, the latency of mechanotransduction is unknown in Merkel cells in vivo. Although the SAI response has a latency of 200 ms, it is possible that the initial phase of the SAI response is generated by the afferent and the static phase is transduced by Merkel cells [43]. In vivo, Merkel cells' superficial surfaces are studded with dozens of actin-rich microvilli that are proposed to be sites of mechanotransduction [9,31]. In culture, we observed Merkel cells with two basic morphologies: cells with large processes 2-15 µm in length that branch into smaller protrusions, and spherical cells with only the smaller protrusions. Both cell shapes are observed in vivo, where they are termed dendritic and non-dendritic Merkel cells [44-46]. In dendritic Merkel cells in vitro, we often observed hypotonic-triggered Ca2+ transients in processes before global cytoplasmic Ca2+ levels increased. This might reflect Ca2+ influx into the confined cytoplasmic volume of the process or local differences in Ca2+ regulation. Alternatively, these data could indicate a higher density of osmotically activated channels in these processes.

Candidate transduction channels in Merkel cells

Several gene families have been implicated in vertebrate mechanotransduction, including members of the TRP family and the DEG/ENaC family. In particular, TRPV4 responds to hypotonic solutions in HEK cells [33,34] and is expressed in Merkel cells [32]. Somewhat surprisingly, our analysis of TRPV4-deficient mice indicates that this channel is not required for swelling-activated Ca2+ signals in Merkel cells. Thus, we used pharmacology to elucidate the molecular nature of Merkel cells' hypotonic-activated channel. Amiloride (50 µM), which inhibits most DEG/ENaC channels with an IC50 of 0.1-20 µM [47], does not block hypotonic-induced Ca2+ signals, making these channels unlikely candidates for Merkel-cell transduction channels. Ruthenium red is a broad-spectrum inhibitor that blocks some TRP channels, including TRPV channels. We found that 10 µM ruthenium red inhibits both hypotonic- and high-K+-induced Ca2+ transients in Merkel cells. Since ruthenium red has also been shown to inhibit both VACCs and Ca2+-store release in some cell types [48,49], it is unclear to what extent ruthenium red directly inhibits swelling-activated channels or downstream Ca2+ signaling. At any rate, because ruthenium red does not completely block hypotonic-induced Ca2+ transients, they cannot be solely mediated by ruthenium-red-sensitive channels, including the TRPV subfamily. Because other TRP channels play prominent roles in sensory transduction, including mechanotransduction in zebrafish and invertebrates, they remain attractive candidates for investigation.
Moreover, many of these channels are not blocked by ruthenium red. We demonstrated with RT-PCR that Merkel cells robustly express transcripts encoding six TRP channels, and transcripts were occasionally amplified for an additional five channels. The sporadic amplification of these latter transcripts may indicate that they are present in very low copy number [50].

[Figure 6 caption fragment (see Table 1): Bars at base of figure mark lanes with products amplified from GFP+ Merkel cells or control cDNA, respectively. Control cDNA was derived from brain or skin. doi:10.1371/journal.pone.0001750.g006]

Among the TRP channels expressed in Merkel cells, TRPC1, PKD1 and PKD2 have been previously implicated in mechanotransduction. These channels are Ca²⁺-permeable, and none is known to be blocked by ruthenium red [36,51,52]. Although PKD2 is blocked by amiloride, its IC₅₀ is 80 µM, larger than the concentration we tested here (see Methods and [53]). Interestingly, PKD1 and PKD2 form Ca²⁺-permeable complexes that transduce fluid flow in the kidney [30]. In human polycystic kidney disease, patients are heterozygous for either PKD1 or PKD2 mutations and suffer from cyst formation and eventual kidney failure [54]. As homozygous deletion of PKD1 or PKD2 in mice causes embryonic death, examination of touch sensitivity in PKD-deficient mice will require a conditional knockout [55]. TRPC1 is activated by membrane tension in CHO cells and Xenopus laevis oocytes [37]; however, the role of TRPC1 in vivo is still undetermined [56]. Many of the TRP channels we found expressed in Merkel cells are orphans: little is known of their biophysical properties, let alone their physiological roles in vivo. Consequently, they represent candidate transduction channels. Thus, our findings set the stage for future gene-disruption experiments to identify which of these channels mediate swelling-activated signals in Merkel cells in vitro and to determine whether the same channels transduce touch stimuli in vivo.

Supporting Information

Movie S1. Time series of confocal z-series projections of a Merkel cell during a 20% hypotonic perfusion. The cell surface was visualized with fluorescent microspheres (white spots). Microspheres on the coverslip surface have been removed for clarity. Z-series were collected every 7 s.
DTM GENERATION WITH UAV BASED PHOTOGRAMMETRIC POINT CLOUD

Nowadays Unmanned Aerial Vehicles (UAVs) are widely used in many applications for different purposes. Their full benefits, however, are realized only through the integration of other equipment such as a digital camera, GPS, or a laser scanner. The main scope of this paper is to evaluate the performance of a camera-equipped UAV for geomatic applications by means of Digital Terrain Model (DTM) generation in a small area. For this purpose, 7 ground control points were surveyed with RTK and 420 photographs were captured. Over 30 million georeferenced points were used in the DTM generation process. The accuracy of the DTM was evaluated with 5 check points. The root mean square error was calculated as 17.1 cm for a flight altitude of 100 m. In addition, a LiDAR-derived DTM was used as a reference in order to calculate correlation. The UAV-based DTM has a 94.5% correlation with the reference DTM. The outcomes of the study show that it is possible to use UAV photogrammetry data for map production, surveying, and other engineering applications, with the advantages of low cost, time savings, and minimal field work.

INTRODUCTION

The Digital Terrain Model (DTM) is an important topographic product and an essential requirement for many applications. Traditional methods for creating DTMs are very costly and time consuming because of land surveying. Over time, photogrammetry has become one of the major methods to generate DTMs. Recently, the airborne Light Detection and Ranging (LiDAR) system has become a powerful way to produce a DTM, due to its advantage of collecting three-dimensional information very effectively over a large area in terms of precision and time (Polat and Uysal, 2015). However, the main disadvantage of manned aerial platforms such as airplanes is that they are expensive, especially for small study areas. During the last decades, low-cost Unmanned Aerial Vehicles (UAVs) have been used to overcome this handicap. Nowadays, the use of UAVs is increasing day by day due to their advantages in cost, inspection, surveillance, reconnaissance, and mapping (Remondino et al., 2011).

A DTM is a structured surface that contains elevation data together with some critical terrain features such as ridge lines, peak points, etc. (Podopnikar et al., 2005). During the DTM generation process, higher vegetation horizons, buildings, and other non-terrain objects are removed (Lohman and Koach, 1999). Removing non-terrain objects to obtain the bare earth is called filtering. Ground filtering is an essential step for almost all topographic applications: it separates points originating from the ground surface from those originating from non-ground features. Distinguishing ground from non-ground can be a considerable challenge in regions with high surface variability (Vosselman, 2000; Axelsson, 2000). Moreover, accurate DTMs can only be generated if non-ground points are removed successfully (Shan and Sampth, 2005). In this study, the generation of a DTM from a UAV-based photogrammetric point cloud and its accuracy analysis are presented.

STUDY AREA AND DATA

The study area is located in Izmir-Bergama (2.06 km²) (Figure 1). A LiDAR-derived DTM is used as reference data. In addition, five check points are used in the calculation of the root mean square error. 420 UAV-based images were captured; the average ground sample distance is 7.59 cm. 7 Ground Control Points (GCPs) were measured before the flight for georeferencing purposes.

Image processing

The main aim of the processing is to produce a georeferenced 3D point cloud from overlapping aerial image data (Siebert and Teizer, 2014). The approach of generating a point cloud from images is called Structure from Motion (SfM). SfM operates under the same basic conditions as stereoscopic photogrammetry: it uses overlapping images in order to recover the 3D structure of the object of interest. Several existing software packages can generate a 3D point cloud; Agisoft PhotoScan (commercial software) has been used in this study. The software is well suited to UAV applications and allows generating an orthophoto in a desired coordinate system. For full performance of the software, a powerful computer is recommended due to the huge amount of data (Siebert and Teizer, 2014). The data processing is relatively easy. It starts with uploading photos from the camera to the computer and eliminating distorted or blurred ones. We used 420 selected images. The GCPs were used for georeferencing. At the end of image processing, 30,461,747 georeferenced points were obtained, with a density of 8.76 points/m² (Figure 2).

Cloth Simulation Filtering (CSF)

The CSF algorithm is designed for producing a DTM from a LiDAR point cloud (Zhang et al., 2016). It assumes that a sufficiently soft cloth draped over the surface sticks to it and thus represents the digital surface model; if the surface is turned upside down, however, the final shape of the dropped cloth defines the digital terrain model (Figure 3). At the end of the filtering process, 26,138,351 points were detected as ground points (Figure 4). The obtained ground points were used to generate a 0.5 m resolution DTM.

[Figure 3: Overview of the cloth simulation algorithm (Zhang et al., 2016).]
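To make the cloth idea concrete, the following is a deliberately simplified, cloth-simulation-inspired ground filter: it inverts the cloud, rasterizes the inverted surface, relaxes a smooth grid toward it, and labels points near the settled grid as ground. The grid size, rigidity, iteration count, and threshold are invented illustrative values; this is a sketch of the principle only, not the CSF implementation of Zhang et al. (2016).

```python
import numpy as np

def simple_csf(points, cell=1.0, rigidity=0.4, iters=200, threshold=0.5):
    """Toy cloth-simulation-style ground filter (illustrative only).

    points: (N, 3) array of x, y, z. Returns a boolean ground mask.
    """
    # Invert elevations: in the flipped cloud, the terrain becomes the
    # upper envelope and above-ground objects hang below it.
    z_inv = -points[:, 2]

    # Rasterize the inverted cloud, keeping the highest inverted value
    # (i.e. the lowest true elevation) in each grid cell.
    ix = ((points[:, 0] - points[:, 0].min()) / cell).astype(int)
    iy = ((points[:, 1] - points[:, 1].min()) / cell).astype(int)
    surface = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
    np.maximum.at(surface, (ix, iy), z_inv)
    surface[np.isinf(surface)] = z_inv.min()   # fill empty cells

    # Relax a smooth "cloth" grid toward the inverted surface: each node
    # moves up a little per iteration, neighbour coupling keeps the grid
    # smooth, and the min() clamp stops it at the inverted terrain.
    cloth = np.full(surface.shape, z_inv.min() - 10.0)
    for _ in range(iters):
        raised = cloth + 0.1
        neigh = (np.roll(cloth, 1, 0) + np.roll(cloth, -1, 0)
                 + np.roll(cloth, 1, 1) + np.roll(cloth, -1, 1)) / 4.0
        cloth = np.minimum((1 - rigidity) * raised + rigidity * neigh, surface)

    # Points close to the settled cloth are classified as ground.
    return np.abs(z_inv - cloth[ix, iy]) < threshold

# Usage on a synthetic cloud (placeholder data, not the study's points):
pts = np.random.rand(10_000, 3) * [100.0, 100.0, 5.0]
print(f"{simple_csf(pts).sum()} of {len(pts)} points classified as ground")
```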
CONCLUSION

This paper demonstrates the capability of UAVs, an alternative data collection technology, in a geomatic application over a small area by means of DTM generation. Compared with traditional manned airborne platforms, they reduce working costs and minimize the danger of reaching risky study sites, while providing sufficient accuracy. In fact, UAV systems have many advantages (low cost, real-time operation, high temporal and spatial resolution data, etc.) that are important not only for geomatics but also for various other disciplines. The application indicates that UAV-mounted digital camera systems can collect usable data for geomatic applications. The study shows that UAV-based data can be used for DTM generation by photogrammetric techniques with a vertical accuracy of 17.1 cm. It can be stated that UAV photogrammetry can be used in engineering applications with the advantages of low cost, time savings, minimal field work, and adequate accuracy. Moreover, the created 3D model is satisfactory for visualizing topography with texture. On the other hand, besides the GCPs, parameters such as weather, vibrations, lens distortions, and software directly affect the process and the model accuracy. Beyond all this, the UAV workflow is not fully automated and still needs user decisions. Future studies may offer an automated approach for UAVs that minimizes user interaction.
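The 17.1 cm vertical accuracy quoted above follows from the standard check-point RMSE formula, sketched below with five invented elevation pairs (placeholders, not the study's measurements).

```python
import numpy as np

# Hypothetical check-point elevations (metres): surveyed reference vs. DTM.
z_ref = np.array([102.41, 98.77, 105.12, 99.30, 101.85])
z_dtm = np.array([102.55, 98.60, 105.31, 99.12, 102.02])

# Vertical RMSE over n check points: sqrt(mean((z_dtm - z_ref)^2)).
rmse = np.sqrt(np.mean((z_dtm - z_ref) ** 2))
print(f"vertical RMSE: {rmse * 100:.1f} cm")
```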
Version 4 CALIPSO IIR ice and liquid water cloud microphysical properties, Part II: results over oceans

Following the release of the Version 4 Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data products from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a new version 4 (V4) of the CALIPSO Imaging Infrared Radiometer (IIR) Level 2 data products has been developed. The IIR Level 2 data products include cloud effective emissivities and cloud microphysical properties such as effective diameter (De) and ice or liquid water path estimates. This paper (Part II) shows retrievals over ocean and describes the improvements made with respect to version 3 (V3) as a result of the significant changes implemented in the V4 algorithms, which are presented in a companion paper (Part I). The analysis of the three-channel IIR observations (08.65 μm, 10.6 μm, and 12.05 μm) is informed by the scene classification provided in the V4 CALIOP 5-km cloud layer and aerosol layer products. Thanks to the reduction of inter-channel effective emissivity biases in semi-transparent (ST) clouds when the oceanic background radiance is derived from model computations, the number of unbiased emissivity retrievals is increased by a factor of 3 in V4. In V3, these biases caused inconsistencies between the effective diameters retrieved from the 12/10 and 12/08 pairs of channels at emissivities smaller than 0.5. In V4, microphysical retrievals in ST ice clouds are possible in more than 80% of the pixels down to effective emissivities of 0.05 (or visible optical depth ~ 0.1). For the month of January 2008, chosen to illustrate the results, median ice De and ice water path (IWP) are, respectively, 38 μm and 3 g⋅m⁻² in ST clouds, with random uncertainty estimates of 50%. The relationship between the V4 IIR 12/10 and 12/08 microphysical indices is in better agreement with the "severely roughened single column" ice crystal model than with the "severely roughened 8-element aggregate" model for 80% of the pixels in the coldest clouds (< 210 K) and 60% in the warmest clouds (> 230 K). Retrievals in opaque ice clouds are improved in V4, especially at night and for the 12/10 pair of channels, owing to corrections of the V3 radiative temperature estimates derived from CALIOP geometric altitudes. Median ice De and IWP are 58 μm and 97 g⋅m⁻² at night in opaque clouds, again with random uncertainty estimates of 50%. Comparisons of ice retrievals with Aqua/Moderate Resolution Imaging Spectroradiometer (MODIS) in the tropics show a better agreement of IIR De with MODIS visible/3.7 μm than with MODIS visible/2.1 μm in the coldest ST clouds, and the opposite for opaque clouds. In prevailingly supercooled liquid water clouds with centroid altitudes above 4 km, the retrieved median De and liquid water path are 13 μm and 3.4 g⋅m⁻² in ST clouds, with estimated random uncertainties of 45% and 35%, respectively. In opaque liquid clouds, these values are 18 μm and 31 g⋅m⁻² at night, with estimated uncertainties of 50%. IIR De in opaque liquid clouds is smaller than MODIS visible/2.1 and visible/3.7 by 8 μm and 3 μm, respectively.

Due to its sensitivity to small particles, the split-window technique is an attractive option for retrievals of liquid droplet sizes (Rathke and Fisher, 2000), and microphysical retrievals in liquid water clouds are now included in the V4 IIR products.
All other things being equal, the performance of the split-window technique increases with the radiative contrast between the cloud and the surface. Consequently, retrieval uncertainties are larger for liquid water clouds, which typically form relatively close to the Earth's surface, and hence these retrievals were not included in V3. Liquid water clouds such as marine stratocumulus clouds, which are an important component of the Earth system, have optical depths typically larger than 10, well beyond the range of applicability of the technique. However, infrared observations have the potential to provide new insight into the microphysical properties of thin liquid water clouds (Turner et al., 2007; Marke et al., 2016) and of supercooled mid-level liquid water clouds.

The IIR analyses start with the retrieval of cloud effective emissivities in each channel, which are then converted to effective absorption optical depths as τa,k = −ln(1 − εeff,k), where εeff,08, εeff,10, and εeff,12 are the effective emissivities retrieved in IIR channels 08.65 (k = 08), 10.6 (k = 10), and 12.05 (k = 12), respectively. Effective emissivity is mostly a measure of cloud absorption, and the term "effective" refers to the contribution from scattering, which is most significant at 08.65 µm. The first IIR microphysical index, βeff12/10 = τa,12/τa,10, is the ratio of the effective absorption optical depths at 12.05 and 10.6 µm, and the second one, βeff12/08 = τa,12/τa,08, is the ratio of the effective absorption optical depths at 12.05 and 08.65 µm. Two main pieces of information are needed to retrieve these quantities: the cloud Top Of Atmosphere (TOA) blackbody radiance, which requires a good estimate of the cloud radiative temperature, and the TOA background radiance that would be observed if no cloud were present. The former drives the accuracy at large emissivities and the latter the accuracy at small emissivities.
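Since the conversion from emissivities to indices is a one-line formula per channel, it can be sketched directly. The function below implements τa,k = −ln(1 − εeff,k) and the two ratios exactly as stated above; it is a minimal illustration of this algebra, not the operational IIR code, and the example emissivity values are placeholders.

```python
import numpy as np

def microphysical_indices(eps_08, eps_10, eps_12):
    """Convert IIR effective emissivities to absorption optical depths
    and the two microphysical indices defined in the text:

        tau_a,k        = -ln(1 - eps_eff,k)
        beta_eff_12/10 = tau_a,12 / tau_a,10
        beta_eff_12/08 = tau_a,12 / tau_a,08

    Inputs outside the physical range (0, 1) yield NaN, mirroring the
    fact that the indices are only defined for 0 < eps_eff,k < 1.
    """
    eps = np.stack([eps_08, eps_10, eps_12]).astype(float)
    eps = np.where((eps > 0.0) & (eps < 1.0), eps, np.nan)
    tau_08, tau_10, tau_12 = -np.log(1.0 - eps)
    return tau_12 / tau_10, tau_12 / tau_08

# Example with arbitrary emissivity values (placeholders):
beta_12_10, beta_12_08 = microphysical_indices(0.30, 0.35, 0.40)
print(beta_12_10, beta_12_08)
```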
The first step in any retrieval approach is the detection of a cloud and the determination of its thermodynamic phase and radiative temperature. The ability to ascertain cloud amounts and characteristics varies with the observing capabilities of different passive sensors (Stubenrauch et al., 2013). Even though IIR has only three medium-resolution channels, its crucial advantage is the quasi-perfect co-location with CALIOP observations. Indeed, as emphasized by Cooper et al. (2003), cloud boundaries measured by active instruments provide an invaluable piece of information for obtaining accurate estimates of cloud radiative temperatures. The IIR algorithm relies on CALIOP's highly sensitive layer detection to characterize the atmospheric column seen by each IIR pixel. CALIOP provides geometrical altitudes, which are converted into radiative temperatures. The radiative temperature, Tr, of a multi-layer cloud system is estimated as the thermodynamic temperature, Tc, at the centroid altitude of the CALIOP attenuated backscatter at 532 nm. In the V4 algorithm, this estimate is further corrected when single or multi-layer ice cloud systems are observed (Part I). The thermodynamic temperature is derived from interpolated temperature profiles of the Global Modeling and Assimilation Office (GMAO) Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) model (Gelaro et al., 2017).

The second retrieval step is the determination of the TOA background radiance, which often requires simulations using ancillary meteorological profiles and surface data. These simulations are generally more accurate over oceans than over land, because the surface emissivities in the various channels are better known and less variable over oceans, and the skin temperature data are usually more accurate. In this paper, we therefore focus on retrievals over oceans. In the IIR algorithm, the TOA background radiance is preferentially determined using observations in neighboring pixels in those cases when clear-sky conditions, as determined by CALIOP, can be found. Otherwise, it is computed using the FASRAD radiative transfer model (Garnier et al., 2012; Dubuisson et al., 2005). In V3, IIR microphysical retrievals over oceans were possible down to εeff,12 ~ 0.05 (or optical depth ~ 0.1) when the background radiance could be measured in neighboring pixels (G13). When the background radiance had to be computed by FASRAD, which represents about 75% of the cases, inter-channel biases in the model simulations caused discernable flaws in the microphysical retrievals. The inter-channel biases in the FASRAD simulations have been significantly reduced in V4, as discussed in Part I.

This paper aims at demonstrating the improved accuracy of the V4 effective emissivities and of the subsequent microphysical indices that result from the changes implemented in the V4 algorithm (Part I), and at illustrating the changes in the retrieved microphysical properties. Our assessment is carried out after carefully selecting the relevant cloudy scenes, following the rationale presented in Sect. 2. Retrievals in ice clouds are presented in Sect. 3, which includes step-by-step comparisons between V3 and V4, examples of V4 retrievals, and comparisons with MODIS retrievals. Section 4 is dedicated to retrievals in liquid water clouds, which were added in V4, and Sect. 5 concludes the presentation.

Cloudy scenes selection

The analysis of the IIR observations is informed by the scene classification provided by the V4 CALIOP cloud and aerosol 5-km layer products. This scene classification is established for layers detected by the CALIOP algorithm at 5-km and 20-km horizontal averaging intervals (Vaughan et al., 2009). An example is shown in Fig. 1, which was extracted from nighttime granule 2008-01-30T09-15-45ZN on January 30th, 2008.

[Figure 1: (a) CALIOP attenuated backscatter (see text); (b) number of cloud layers in the cloud system, where cases with the Earth surface as a reference are denoted with black lines (thin: semi-transparent (ST) layers; thick: one opaque layer) and cases with the lowest opaque cloud as a reference are in red; (c) CALIOP "Was Cleared Flag" at 1-km IIR pixel resolution; (d) Ice Water Flag of the cloud system; (e) temperatures at cloud top and cloud base (black) and radiative temperature used by the IIR algorithm (red); (f) effective emissivity of the cloud system at 12.05 µm. See text for details.]

Figure 1a shows the Level 1 CALIOP attenuated backscatter averaged at 5-km horizontal resolution, with the top and base altitudes of the cloud system shown in black. Cloudy scenes can include one or several layers (Fig. 1b). When the lowest of at least two layers is opaque to CALIOP, this opaque layer is used as a reference, assuming it behaves as a blackbody source, and the algorithm retrieves the properties of the overlying semi-transparent (ST) layers.
An example is found between latitudes -36.45° and -36.7°, highlighted in red in Fig. 1b, where the algorithm retrieves the properties of two ST layers overlying the opaque cloud located at about 8 km altitude. South of -36.7° and down to -37.2°, the portion of this cloud which is used as an opaque reference between -36.45° and -36.7° is included in a single opaque cloud with a top altitude of 11.5 km, which extends down to the southernmost latitudes. North of -36.45° and up to -34.45°, the atmospheric column includes 1 to 3 semi-transparent clouds. Finally, no cloud layers are seen north of -34.45°, where the scenes contain only low ST non-depolarizing aerosol layers (not shown).

The atmospheric column might also contain clouds with top altitudes below 4 km that are detected at single-shot resolution and then cleared before searching for the more tenuous layers typically reported in the 5-km products (Vaughan et al., 2009). These single-shot detections are not included in Fig. 1b. The number of these single-shot cleared clouds seen within each IIR pixel is shown in Fig. 1c. We showed in Part I (Fig. 5 in Part I) that the presence of these cleared clouds modifies the background radiance compared to the radiance due to the ocean surface and ultimately biases the effective emissivity retrievals. Because these biases cannot be quantified a priori, scenes that contain single-shot cleared clouds should be treated with caution. The Ice Water Flag shown in Fig. 1d characterizes the ice/water phase of the cloud layers included in the cloud system. These layers are classified as ice, liquid water, or "unknown" by the V4 CALIOP Ice/Water phase algorithm (Avery et al., 2020). Most of the ice clouds are composed of randomly oriented ice (ROI) crystals. Clouds containing significant fractions of horizontally oriented ice (HOI) crystals are also detected, mainly before the end of November 2007, when the platform tilt angle was changed from its initial 0.3° orientation to a view angle of 3° (Avery et al., 2020). In Fig. 1 we find cloud systems composed of ROI only (flag = 1), liquid water (WAT) only (flag = 2), ice and WAT (flag = 4), and some systems that include at least one layer of unknown phase (flag = 9). IIR effective emissivities are reported for all single- or multi-layer scenes, regardless of the phase. In V4, the phase information is used to adjust the radiative temperature estimates (Fig. 1e) in cases containing ice clouds (Part I). For illustration purposes, the V4 retrieved effective emissivities at 12.05 µm are shown in Fig. 1f. In this example, emissivity values in the opaque cloud are mostly around 1, the lowest value being 0.91 at -39.5°, where the CALIOP image suggests the presence of a faint signal below the cloud. Effective emissivities in ST clouds vary between 0 and 0.9. This example shows that a cloudy scene can include a variety of conditions for the IIR retrievals. Because the goal here is to present the cloud microphysical properties as retrieved with the IIR V4 algorithm and the improvements with respect to V3, we chose to limit the analyses to scenes that contain only ROI, only HOI, or only WAT clouds, with background radiances from the ocean surface.
Furthermore, in order to facilitate the interpretation of the results, we require that the CALIOP cloud-aerosol discrimination algorithm (Liu et al., 2019) assign high confidence to the cloud classifications, and likewise that the ice/water phase algorithm determine the phase classifications with high confidence. Finally, scenes containing single-shot cleared clouds are discarded. Table 1 reports the fraction of scenes that fall into these categories. The statistics are for IIR pixels between 60° S and 60° N in January and July 2008. The ROI scenes represent 13% to 16% of all the IIR pixels. The HOI scenes represent less than 0.1% of all the IIR pixels, and we found that they represent less than 1% at the beginning of the mission, when the platform tilt angle was 0.3°. Thus, in the rest of the paper, ice clouds will refer to scenes containing only ROI layers. The WAT scenes represent 14 to 19% of all the IIR pixels.

Clear-sky conditions are defined as cloud-free scenes with a Was Cleared Flag at 1-km resolution equal to zero, with no aerosol layers or only low (< 7 km) semi-transparent "not dusty" layers. Dusty layers are those identified as dust, polluted dust, or dusty marine (Kim et al., 2018) and are discarded because they may have a signature in the IIR channels (Chen et al., 2010). For comparison with the previous categories, the clear-sky conditions represent 20% of the cases for daytime data and 15% for nighttime data. It is noted that 6 to 10% of the pixels are rejected as "clear sky" in V4 due to the presence of single-shot cleared clouds. These pixels would have been accepted by the V3 algorithm: they represent 25% and 35% of the V3 clear-sky conditions for daytime and nighttime data, respectively.

Table 1: Total number of IIR pixels, fraction of IIR pixels with only high-confidence ROI, WAT, and HOI layers in the column and no single-shot cleared clouds for retrievals with background radiance from the ocean surface between 60° S and 60° N, and fraction of clear-sky pixels.

The selected cloudy scenes can contain either opaque layers or only ST layers. This is quantified in Table 2 for the months of January and July 2008. For these months, 45 to 53% of the selected ROIs are opaque to CALIOP, while opaque clouds represent 67 to 90% of the WATs. Daytime fractions of opaque clouds are larger than nighttime ones, which is likely due to daytime surface detection issues. Scenes with only ST layers are spread into three main categories: only one layer, two vertically overlapping layers, and multi-layer configurations with two non-overlapping layers or more than two layers. For both ROI and WAT clouds, the vast majority of the ST scenes have only one layer in the column, which is explained by the fact that we required all the layers to be characterized with high confidence. Thus, for simplicity, the study will be carried out for single-layer cases.
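The scene selection described in this section reduces to a chain of boolean masks over per-pixel flags. The sketch below illustrates that structure with invented field names and synthetic values; the actual CALIOP/IIR product variables and encodings differ, so this is a sketch of the filtering logic only, not code for reading the real data products.

```python
import numpy as np

# Hypothetical per-pixel arrays standing in for CALIOP/IIR product fields;
# the real product variable names and encodings differ.
n = 1_000_000
phase_flag  = np.random.randint(1, 10, n)   # 1=ROI, 2=WAT, 4=ice+WAT, 9=unknown
high_conf   = np.random.rand(n) > 0.2       # high-confidence CAD and phase
cleared_cnt = np.random.poisson(0.3, n)     # single-shot cleared clouds per pixel
ocean_ref   = np.random.rand(n) > 0.1       # background radiance from ocean surface
n_layers    = np.random.randint(1, 4, n)    # cloud layers in the column

# Selection used throughout the paper: single-layer, high-confidence ROI-only
# scenes over ocean with no single-shot cleared clouds.
roi_selected = (
    (phase_flag == 1) & high_conf & (cleared_cnt == 0) & ocean_ref & (n_layers == 1)
)
print(f"selected ROI fraction: {roi_selected.mean():.1%}")
```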
Retrievals in ice clouds

The accuracy of the effective emissivity in each IIR channel and of the subsequent microphysical indices is a prerequisite for successful retrievals of cloud microphysical properties. In Section 3.1, we use internal quality criteria to demonstrate the improvements in the V4 effective emissivities in ice clouds that result from the revised computed background radiances over oceans and from the revised radiative temperature estimates (Part I). After examining the changes in εeff,12 (at 12.05 µm), the inter-channel effective emissivity differences, Δεeff12-k = εeff,12 − εeff,k, are assessed, keeping in mind that they should tend towards zero on average when εeff,12 tends towards 0 and towards 1 (G13; Part I). Changes in the visible cloud optical depth, τvis, inferred from the summation of the absorption optical depths at 12.05 µm and 10.6 µm (τa,12 + τa,10, Part I), are shown in Sect. 3.2. The subsequent improvements in the microphysical indices and in the performance of the microphysical algorithm are discussed in Section 3.3, where we also illustrate the changes in the effective diameters (De) reported in V3 and V4. We recall that De is defined as De = (3/2) × (V/A), where V is the total volume of the size distribution and A is the corresponding projected area (Foot, 1988; Mitchell et al., 2002). The V4 algorithm uses two ice crystal models from the "TAMUice2016" database (Bi and Yang, 2017; Yang et al., 2013), namely the severely roughened solid column (SCO) and the severely roughened 8-element column aggregate (CO8) models, and the model used for the retrievals is selected according to the relationship between βeff12/10 and βeff12/08. The IIR retrieved De is the mean of the De12/10 and De12/08 effective diameters when these two values can be retrieved from the respective βeff12/k. Both De12/10 and De12/08 are reported in the product for users interested in specific analyses. The V4 look-up tables (LUTs) that relate microphysical index and effective diameter are computed using the FASDOM model (Dubuisson et al., 2008) and bulk single-scattering properties derived using an idealized gamma particle size distribution. As illustrated in Part I, the microphysical indices are very sensitive to De smaller than 50 µm, and the sensitivity decreases progressively up to De = 120 µm, which is considered the sensitivity limit of our retrievals in ice clouds.

Effective emissivity: V4 vs. V3

Because of numerous changes in the CALIOP V4 algorithms, the cloud layers reported in the V3 and V4 CALIOP data products are not identical, so that direct comparisons of the V3 and V4 IIR data products could be misleading. In order to isolate the changes due to the IIR algorithm, the V3 emissivities (hereafter V3_comp) for clouds reported in CALIOP V4 were recomputed using the V3 computed background radiances reported in the V3 product and the V3-like blackbody temperatures derived directly from the centroid temperatures, Tc, which are available in the V4 product along with the V4 blackbody temperatures. The exercise was carried out for V4 scenes over oceans that contain one single cloud layer classified as high-confidence ROI with no cleared cloud, as discussed previously in Sect. 2. Illustrations are shown for the month of January 2008 between 60° S and 60° N.

Effective emissivity in channel 12.05

The nighttime (blue) and daytime (red) distributions of εeff,12 are shown in Fig. 2. The median random uncertainties shown in Fig. 2c and 2d are of the order of 0.015 at εeff,12 < 0.6 and increase up to 0.03 at the largest emissivities, where the uncertainty in εeff,12 is prevailingly due to the uncertainty in the radiative temperature, taken equal to ± 2 K (Part I). Because of retrieval errors, εeff,12 can be found outside the range of physically possible values (i.e., 0 to 1). For ST clouds (Fig. 2a), the V3 and the V4 histograms differ mostly at εeff,12 < 0.05, where the changes in the background radiances have the largest impact.
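Out-of-range emissivities of the kind quantified next are a convenient internal quality diagnostic. As a minimal sketch, with a synthetic Gaussian sample standing in for real retrievals:

```python
import numpy as np

# Synthetic stand-in for a population of eps_eff,12 retrievals: noise in the
# background radiance and radiative temperature can push values outside [0, 1].
rng = np.random.default_rng(0)
eps_12 = rng.normal(loc=0.3, scale=0.2, size=100_000)

frac_negative  = np.mean(eps_12 < 0.0)   # non-physical low tail
frac_above_one = np.mean(eps_12 > 1.0)   # non-physical high tail
print(f"eps_eff,12 < 0: {frac_negative:.1%};  eps_eff,12 > 1: {frac_above_one:.2%}")
```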
In this example, the fraction of ST clouds with negative εeff,12 values is reduced from 12% in V3 to 3.5% in V4. For opaque clouds (Fig. 2b), the larger V4 εeff,12 values are due to the radiative temperature corrections introduced in the V4 algorithm (these corrections have little to no impact for ST clouds). For the range of εeff,12 values found in opaque clouds, the corrections are prevailingly a function of the "apparent" cloud thickness, which is larger and closer to the true geometric thickness at night (Part I). The nighttime and daytime εeff,12 distributions peak at larger εeff,12 in V4 (εeff,12 = 0.99 and 0.97, respectively) than in V3 (εeff,12 = 0.94). Overcorrections combined with uncertainties cause an increase of the fraction of samples with εeff,12 > 1, from 3% in V3 to 12% in V4 at night, and from 1.2% to 3.3% for daytime data. At night, 98% of the opaque clouds have V4 εeff,12 > 0.8, or cloud optical depth > 3.2. This lower range of optical depths is consistent with V4 CALIOP optical depth retrievals, even though it is recognized that direct comparisons with V4 CALIOP optical depths in opaque clouds are difficult (Young et al., 2018). The nighttime εeff,12 distributions for ST and opaque clouds are essentially mutually exclusive, with an εeff,12 threshold around 0.7. In contrast, these distributions overlap between 0.4 and 0.7 for daytime data. The tail down to εeff,12 = 0.4 (τvis ~ 1) in the daytime opaque cloud data is explained by the greater difficulty for the CALIOP algorithm to detect faint surface echoes during the day due to large solar background noise, so that some clouds of moderate emissivity may be misclassified as opaque by CALIOP. Effective emissivities close to 1 are found in clouds where the CALIOP integrated attenuated backscatter (IAB) is larger than 0.04 sr⁻¹, which is in the upper range of values typically observed in opaque ice clouds (Young et al., 2018). Platt et al. (2011) showed that these large IABs, which are often coupled with small apparent geometric thicknesses, are observed when the CALIPSO overpass is close to the center of a mesoscale convective system. Using cloud retrievals based on AIRS thermal infrared data, Protopapadaki et al. (2017) demonstrated that emissivities close to 1 in the tropics are most often indicative of convection cores reaching the upper troposphere, which confirms our observations based on CALIPSO.

Inter-channel effective emissivity differences

We recall that effective emissivity retrievals preferably use background radiances observed in neighboring clear-sky pixels and otherwise use radiances computed by FASRAD. In order to evaluate the V4 computed background radiances, we first examined Δεeff12-k at εeff,12 ~ 0 in ST clouds by separating retrievals that used observed radiances (V4_obs) from those that used computed radiances (V4_comp). The results are reported in Table 3, where Δεeff12-k at εeff,12 ≈ 0 is also reported for V3_comp for reference. As in V3 (G13), the V4 inter-channel biases are minimum when the background radiance can be determined from observations (V4_obs), which represents 30% of the retrievals in ST clouds for this dataset. When the background radiance is computed (V4_comp, 70% of the cases), the median Δεeff12-k is similar for both channel pairs and smaller than 0.0025 in absolute value.
This indicates residual inter-channel biases smaller than 0.1 K in V4 according to the simulations shown in Fig. 1c of Part I, which is consistent with the residual inter-channel differences seen in clear-sky conditions (Part I). Because these biases are very small, retrievals using computed and observed radiances are consistent in V4; hereafter, the two methods will be referred to collectively as "V4" for clarity. The Δεeff12-k differences were unambiguously too low in V3_comp, especially for the 08-12 pair, so that reliable retrievals were possible only when observed radiances were available (G13). Including retrievals using computed radiances in V4 increases the number of retrievals in ST clouds by a factor of 3.3.

Table 3: Inter-channel effective emissivity differences at εeff,12 ~ 0 for retrievals in single-layered ST ice clouds over oceans between 60° S and 60° N in January 2008.

The variations with εeff,12 of the Δεeff12-k inter-channel effective emissivity differences for the 12-10 and 12-08 pairs are shown in Figs. 3a and 3b, respectively. The curves are median values, and the shaded gray areas lie between the V4 nighttime 25th and 75th percentiles. The first observation is that the median Δεeff12-k are larger in V4 (solid lines) than in V3_comp (dashed lines) at any emissivity. When εeff,12 tends towards 1, Δεeff12-k is minimum at the εeff,12 corresponding to the peak of the distributions shown in Fig. 2, which suggests that the peaks should be closer to εeff,12 = 1. This shows that V4 is improved compared to V3, more convincingly for nighttime data, but also that the radiative temperature corrections are likely not sufficient. Consistent with the simulations shown in Fig. 1 of Part I, the Δεeff12-k are increased from V3 to V4 at large emissivities, because the radiative temperatures are increased, and the changes are more important for the 12/08 pair than for the 12/10 pair.

Visible cloud optical depth: V4 vs. V3

The V3-V4 changes in the visible cloud optical depths inferred from εeff,12 and εeff,10 are shown in Figs. 4a and 4b for nighttime and daytime data, respectively. The changes in τvis are smaller than 0.02 on average and not significant for τvis smaller than 2 (or εeff,12 < ~ 0.6), that is, for most of the ST clouds. For τvis > 2, V4 τvis is increasingly larger than V3 τvis, owing to the warmer radiative temperature estimates in V4. Consistent with the previous observations regarding εeff,12, the τvis increase from V3 to V4 is larger at night (Fig. 4a) than during the day (Fig. 4b).

The changes in the βeff12/10 and βeff12/08 microphysical indices resulting from the changes in Δεeff12-10 and Δεeff12-08 (Fig. 3) are illustrated in Figs. 5a and 5b. The sharp variations of the V4 median microphysical indices (solid lines) at εeff,12 < 0.03 and εeff,12 > 0.96 are due to the increasing truncation of the distributions, because both βeff12/k indices can be computed only when 0 < εeff,k < 1 in the three channels. Over-plotted in Fig. 5 are the median V4 random absolute uncertainty estimates, which are minimum and around 0.02 for intermediate emissivity values (G13). The noticeably large dispersion of the βeff12/k values at εeff,12 < 0.1 is largely explained by the random uncertainties. The median βeff12/k values are overall larger in V4 than in V3_comp, with larger changes for the 12/08 pair than for the 12/10 pair.
The consequences for the De retrievals are twofold. First, the fraction of βeff12/k values that are larger than the low sensitivity limit (close to 1) is increased in V4, which means that the fraction of samples for which microphysical retrievals can be attempted is augmented. Secondly, the larger V4 βeff12/k yield smaller De12/k. These two main changes are detailed and quantified in the following sub-sections.

Fraction of samples in the sensitivity range

Figures 6a and 6b show the fractions of samples for which βeff12/10 and βeff12/08 are larger than their respective theoretical lower limits, which were derived for De = 120 µm using the V4 SCO LUT and are in practice close to 1. For both βeff12/10 and βeff12/08, V4 retrievals are possible more than 80% of the time for εeff,12 between 0.05 and 0.80 (or about 0.1-3.2 in terms of τvis). In contrast, the εeff,12 80% range in V3_comp was only 0.15-0.7 for the 12/10 pair and only 0.25-0.7 for the 12/08 pair. As εeff,12 increases from 0.8 to 0.95 (τvis ~ 6), which corresponds to clouds that are opaque to CALIOP (see Fig. 2), the βeff12/k indices decrease and approach the sensitivity limit, and the fraction of possible retrievals in opaque clouds decreases. This fraction is notably increased in V4 and is larger at night than for daytime data, reflecting the impact of the cloud radiative temperature corrections introduced in V4. As in V3, this fraction remains lower for the 12/08 pair. One hypothesis is that cloud heterogeneities in dense clouds could induce a larger low bias in the 12/08 pair than in the 12/10 pair (Fauchez et al., 2015). The V4 nighttime retrieval rate is larger than 70% up to εeff,12 = 0.95 for the 12/10 pair and up to εeff,12 = 0.9 (τvis ~ 4.6) for the 12/08 pair.

Figure 6: Fraction of (a) βeff12/10 and (b) βeff12/08 values above the effective diameter retrieval sensitivity limit vs. effective emissivity at 12.05 µm in single-layered ice clouds over oceans between 60° S and 60° N in January 2008, in V4 (solid lines) and in V3_comp (dashed lines), during night (blue) and day (red).

Changes in effective diameters

Because the changes in the microphysical indices are larger for the 12/08 pair than for the 12/10 pair, we now assess the changes in the respective diameters, De12/08 and De12/10. For meaningful comparisons, the exercise is carried out only for clouds for which both βeff12/10 and βeff12/08 are found above the lower sensitivity limit, both in V3 and in V4. The changes in De12/10 and De12/08 are illustrated in Figs. 7a and 7b, respectively. The solid lines represent the median De12/k derived from the V4 βeff12/k and the V4 SCO LUT. The dashed lines represent the median De12/k derived from the V3_comp βeff12/k and the same V4 SCO LUT, so that the differences between the solid and dashed lines are due only to the different microphysical indices. As a result of changes of different amplitude for De12/10 and De12/08, the consistency between these two diameters is drastically improved in V4 at εeff,12 smaller than 0.5. Similar conclusions would be drawn using the V4 CO8 model.

For a complete analysis of the differences between the V3_comp and V4 diameters, the dot-dashed lines show De12/k derived using V3_comp and the V3 solid column LUT (Part I), so that the differences between the dot-dashed lines and the dashed lines are due only to the different LUTs.
The changes resulting from the LUTs and from the microphysical indices have opposite effects, regardless of the specific V3 and V4 LUTs chosen for the analysis. As a result, De12/10 is overall not changed significantly in V4 (solid lines) compared to V3_comp (dot-dashed lines). In contrast, De12/08 is smaller in V4 by up to 15 µm at εeff,12 < 0.2, because the improved (and increased) βeff12/08 has the largest impact, and conversely V4 De12/08 is larger by up to 10 µm at εeff,12 between 0.2 and 0.9.

V4 microphysical retrievals

We showed in Sect. 3.3 that the fraction of samples with possible microphysical retrievals is significantly increased in V4 (Fig. 6), and that the consistency between the De12/10 and De12/08 diameters is drastically improved (Fig. 7). The significant disagreement between De12/10 and De12/08 in V3_comp was due to biases of different amplitude in βeff12/10 and βeff12/08, and could not be explained by the possible use of an inappropriate ice crystal model. Both in V3 and in V4, De is retrieved using the ice crystal model found in best agreement with IIR in terms of the relationship between βeff12/10 and βeff12/08. Because the accuracy of the IIR βeff12/k is improved in V4, the residual discrepancies with respect to the ice crystal models are expected to be a genuine piece of information about ice crystal shape. This requires both βeff12/k to be found within the sensitivity range; such retrievals will hereafter be called "confident" retrievals. Because the population of clouds meeting this requirement is larger in V4 than in V3 and covers a larger range of optical depths, the results in this section will be shown for V4 only.

Theoretically, confident retrievals should be found when De is smaller than 120 µm, and βeff12/k should tend to the upper sensitivity limit for De > 120 µm. In practice, uncertainties in βeff12/k can trigger non-confident retrievals even if De is truly smaller than the sensitivity limit, and this is more likely to occur when De is close to this limit. Requiring both βeff12/k to be in the expected range of values is meant to reinforce confidence in the retrievals, but doing so assumes no systematic bias between the two pairs of channels. This is not exactly true for opaque clouds with εeff,12 > ~ 0.8 (Fig. 6), and consequently the fraction of confident retrievals in opaque clouds is often constrained by the 12/08 pair. Furthermore, the fraction of confident retrievals at large emissivities is larger at night.

Effective diameter and ice water path

The histograms of confident De and ice water path (IWP) retrievals are shown in Figs. 8a and 8b, respectively, for ST and opaque clouds, and statistics are reported in Table 4. The IWP histograms are computed in logarithmic scale between 0.01 and 1000 g⋅m⁻², with log10(IWP) bins equal to 0.1. The random uncertainty in De, noted ΔDe, is computed based on the LUT selected for the retrieval and the estimated random uncertainty in the βeff12/k indices. The median ΔDe/De values reported in Table 4 are between 34% and 49%. The uncertainty in IWP is in large part driven by the uncertainty in De.
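The retrieval chain from index to De and IWP can be sketched as a LUT inversion plus a geometric conversion. In the sketch below, the LUT values are invented monotonic placeholders (the operational algorithm uses the TAMUice2016-based LUTs), requiring both channel pairs to yield a valid De mirrors the "confident retrieval" criterion above, and the IWP step uses the common geometric approximation IWP ≈ ρice·De·τvis/3, which follows from De = (3/2)V/A and a visible extinction efficiency of 2 and is not necessarily the exact formulation of Part I.

```python
import numpy as np

# Illustrative (made-up) LUT: beta_eff decreases monotonically as De grows,
# approaching 1 at the De = 120 um sensitivity limit.
LUT_DE   = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0, 120.0])  # um
LUT_BETA = np.array([1.50, 1.35, 1.20, 1.12, 1.07, 1.04, 1.02])

def de_from_beta(beta):
    """Invert the beta_eff -> De LUT; NaN outside the sensitivity range."""
    if not (LUT_BETA[-1] <= beta <= LUT_BETA[0]):
        return np.nan
    # np.interp needs increasing x, so flip the descending LUT.
    return np.interp(beta, LUT_BETA[::-1], LUT_DE[::-1])

def iir_retrieval(beta_12_10, beta_12_08, tau_vis, rho_ice=0.917e6):
    """Mean De (um) from the two channel pairs plus a geometric IWP (g m-2).

    rho_ice is in g m-3; IWP ~ rho * De * tau / 3 with De in metres.
    """
    de_10, de_08 = de_from_beta(beta_12_10), de_from_beta(beta_12_08)
    if np.isnan(de_10) or np.isnan(de_08):
        return np.nan, np.nan          # not a "confident" retrieval
    de = 0.5 * (de_10 + de_08)
    iwp = rho_ice * (de * 1e-6) * tau_vis / 3.0
    return de, iwp

de, iwp = iir_retrieval(1.25, 1.22, tau_vis=1.5)
print(f"De = {de:.0f} um, IWP = {iwp:.1f} g m^-2")
```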
Table 4: Statistics associated with the V4 effective diameter (De) and ice water path (IWP) retrievals in single-layered ice clouds between 60° S and 60° N over oceans in January 2008 (see Fig. 8).

The De retrievals in ST clouds (Fig. 8a) can be compared with values reported for clouds with no detectable precipitation as retrieved using the combined CloudSat-CALIPSO 2C-ICE product. IWP (Fig. 8b) is found between 0.03 and 100 g⋅m⁻² in ST clouds, with the slightly larger daytime values being explained by the cloud selection and the larger occurrence of medium emissivities in the daytime dataset (Fig. 2). The median values are around 3 g⋅m⁻², with peaks in the distributions at 3 g⋅m⁻² and 8 g⋅m⁻² for nighttime and daytime data, respectively, and the median relative uncertainty is 50%. As noted by Berry and Mace (2014), the CloudSat radar is typically insensitive to these thin layers, so that microphysical retrievals in combined CALIPSO-CloudSat products such as 2C-ICE rely on a parameterization of the radar reflectivity (Deng et al., 2015) rather than on actual observations. Combining CALIOP and IIR observations appears to be a suitable alternative approach to characterize these thin layers.

The estimated cloud radiative temperature (Tr) corresponds to an equivalent altitude located between the CALIOP cloud base and cloud top (Part I). While IIR De is a layer-average diameter in the case of ST clouds, IIR De in opaque clouds is mostly representative of the portion of the cloud seen by CALIOP before the signal is totally attenuated. The median De in opaque clouds is around 60 µm, and the distributions peak at 50 µm. The different nighttime and daytime De and IWP distributions in opaque clouds are explained by the different ranges of optical depth and the different amplitudes of the radiative temperature correction (Figs. 2 and 4). In opaque clouds, the retrieved IWP lies only between 10 and 300 g⋅m⁻². The upper limit is due to the fact that De cannot be larger than 120 µm and because cloud optical depths inferred from IIR effective emissivities saturate and are typically smaller than 15 (Fig. 4).

Ice crystal model selection

Recall that De is retrieved using the crystal model (SCO or CO8) that best agrees with IIR in terms of the relationship between βeff12/10 and βeff12/08. As seen in Fig. 9, the SCO crystal model is selected in 80% of the ST clouds with Tr < 205 K. This fraction steadily decreases down to 60% as Tr increases up to 230 K (Fig. 9b) and remains stable above 230 K. This result is qualitatively consistent with previous findings using V3, and, as discussed in that earlier work, both the IIR model selection and the mean CALIOP integrated particulate depolarization ratio (in black in Fig. 9b) indicate changes of crystal habit with temperature. The difference between the mean De12/10 and mean De12/08, in black and grey in Fig. 9c, is a measure of the residual mismatch between the IIR observations and the selected model. We see two temperature regimes, that is, below and above 225 K, with a better agreement between IIR and the LUTs at the warmer temperatures. This suggests that the V4 models are better suited for warmer clouds and that they do not perfectly reproduce the infrared spectral signatures of colder clouds composed of small crystals. It is acknowledged that the highly variable ice particle shapes found in ice clouds (Lawson et al., 2019, and references therein) are likely not fully reproduced by the two models chosen for the V4 algorithm.
It is further noted that the Clouds and the Earth's Radiant Energy System (CERES) science team is planning to use a two-habit model for retrievals in the visible/near-infrared spectral domain (Liu et al., 2014; Loeb et al., 2018). This model would be a mixture of two habits (a single column and an ensemble of aggregates) whose mixing ratio would vary with ice crystal maximum dimension, with single columns prevailing for the smaller dimensions. Interestingly, our findings appear to be consistent with this approach.

The increase of De with temperature (Fig. 9c) is in general agreement with numerous previous findings (e.g., Hong and Liu, 2015). In this example, the mean De increases from 17 µm at 185 K to 53 µm at 245 K. The decrease between Tr = 250 K and 260 K is possibly due to an increasing fraction of small liquid droplets in these prevailingly ice layers, which would be consistent with the fact that the CALIOP integrated particulate depolarization ratio decreases from 0.37 to 0.30 (Fig. 9b).

Retrievals using parameterizations from in situ measurements

The IIR algorithm takes advantage of the relationship between βeff12/10 and βeff12/08 to identify the ice crystal model that best matches the observations and thereby provides information about both ice crystal shape and effective diameter. Another approach would be to use only βeff12/10 and prescribed LUTs. This approach was adopted by Mitchell et al. (2018), who derived four sets of LUTs using extensive in situ measurements rather than pure modeling. In Part I, we compared these four sets of βeff12/10-De relationships with the relationships derived from the V4 SCO and CO8 models. The four sets of De derived from βeff12/10 using this independent approach are reported in the IIR product for the user's convenience. Figure 10 compares De computed by the analytic function derived by Mitchell et al. (2018) with De12/10 from the CO8 and SCO models. Relationships derived from the SPARTICUS (blue) and TC4 (red) field campaigns were computed in two ways: by setting the first bin of the measured particle size distribution (PSD) (D < 15 μm) to 0 (i.e., N(D)1 = 0, dashed lines) and without modifying the distribution (i.e., N(D)1 unmodified, solid lines). As discussed in Part I, the differences between the six sets of retrievals illustrate the possible impacts of the LUTs and of the PSDs. Because the presence of small particles in the unmodified PSD causes βeff12/10 to increase faster than De, assuming N(D)1 = 0 yields smaller values of De for a given βeff12/10 than when N(D)1 is not modified. Even though this was not the original intent, comparing median De with or without setting N(D)1 to 0 also illustrates the impact of possible vertical inhomogeneities of De within the cloud layer. Nevertheless, the overall impact of vertical variations on βeff12/10 also depends on the in-cloud IIR weighting function, which is related to the cloud extinction profile (Part I). The mean De calculated from the SPARTICUS unmodified βeff12/10-De relationship (applied at mid-latitudes) and the TC4 N(D)1 = 0 βeff12/10-De relationship (applied in the tropics) was compared against the in situ climatology of mean volume radius, Rv, reported in Krämer et al. (2020), after converting De to Rv. The retrieved Rv tended to be no more than ~ 20% smaller than the in situ Rv for temperatures between 208 and 233 K.

Comparisons with MODIS
Figure 11 compares IIR confident retrievals with co-located Aqua/MODIS Collection 6 daytime retrievals from the visible/2.1 μm and visible/3.7 μm pairs of channels (Platnick et al., 2017, and references therein) in single-layered clouds classified as high-confidence ROI by CALIOP and as ice clouds by MODIS. MODIS τvis and De at 1-km resolution are from the MYD06 product, and the co-location with CALIPSO is from the AERIS/ICARE CALTRACK product. The analyses are over oceans between 30° S and 30° N in January 2008, separately for CALIPSO ST and opaque clouds. Figures 11a and 11b characterize the compared cloud populations. In ST clouds, the three De retrievals agree at the coldest temperatures, where De is < 40 µm and IIR τvis is < 0.5, and they progressively depart from each other as Tr increases, with MODIS 3.7 De increasing and approaching MODIS 2.1. MODIS τvis is larger than IIR τvis by 0.2 to 0.3. This small but systematic bias is not seen when comparing CALIOP and IIR (not shown). The MODIS 2.1 De-Tr relationships are similar for ST and opaque clouds, which is not the case for MODIS 3.7 and IIR. For opaque clouds, IIR De is larger than in ST clouds and is in good agreement with MODIS 2.1 at Tr < 225 K. MODIS 3.7 De exhibits a similar increase with temperature as seen in the two other datasets, but it is shifted by about -10 μm. At Tr > 225 K, MODIS 2.1 De continues to increase up to 100 μm at 255 K, whereas IIR remains stable around 60 μm and MODIS 3.7 increases slowly to approach the same plateau as IIR around 60 μm. As seen in Fig. 11f, both MODIS and IIR indicate moderate optical depths in these opaque clouds where comparisons are possible, with median values ranging between 2.5 and 6 at Tr < 250 K, IIR being smaller than MODIS by about 0.4.

Kahn et al. (2015) found that MODIS 2.1 De is typically larger than AIRS De by 10-20 μm, and that MODIS 3.7 is in better agreement with AIRS on average. These results, which were for clouds of optical depth between 0.5 and 2 over oceans, are consistent with our findings for ST clouds. The MODIS and IIR techniques exhibit different non-linear sensitivities to particle size, so that vertical inhomogeneities of the effective diameter can yield three different retrieved De. This could explain why IIR De is found in better agreement with MODIS 3.7 in ST clouds while MODIS 2.1 is clearly larger (Zhang et al., 2010). For clouds of moderate optical depth, as found in our population of opaque clouds, MODIS 3.7 is very sensitive to the cloud top while MODIS 2.1 senses deeper into the cloud (Platnick, 2000), and the smaller MODIS 3.7 De observed in Fig. 11e suggests that the effective diameter is smaller at cloud top than deeper into the cloud. IIR De might be larger than MODIS 3.7 and in better agreement with MODIS 2.1 for opaque clouds at Tr < 220 K because the IIR weighting function reaches deeper into the cloud than at 3.7 µm, which is in agreement with simulations by Zhang et al. (2010). In conclusion, distinct sensitivities to possible cloud vertical and horizontal (Fauchez et al., 2018) inhomogeneity likely contribute to the observed differences.

Retrievals in liquid water clouds

The only difference between the V4 effective emissivity retrievals in liquid and ice clouds is that Tr is taken as the temperature at the CALIOP centroid altitude (Tc) in the case of liquid water clouds, whereas this initial temperature estimate is further corrected in the case of ice clouds.
It is recalled that the De of liquid droplets is retrieved using the water LUTs (Part I) and that the liquid water path is derived from De and εeff,12 (Eq. 10 in Part I). Following a similar approach as for ice clouds, the results are shown for scenes over oceans between 60° S and 60° N that contain one single cloud layer classified as high-confidence water by the CALIOP phase algorithm. Because liquid water clouds are statistically warmer than ice clouds, the radiative contrast is typically smaller than for ice clouds. Because the uncertainties are inversely proportional to this radiative contrast (Part I), they increase very rapidly when the radiative temperature contrast, that is, the difference between the clear-air TOA background brightness temperature and the TOA blackbody brightness temperature, is smaller than 10 K. In order to prevent the very large uncertainties associated with very small radiative contrasts, the results are presented for clouds in the free troposphere with centroid altitudes above 4 km. For this cloud population, the radiative temperature contrast is larger than 10 K, and it increases on average from 15 K at 4 km to 50 K at 10 km, where the highest water clouds are found (not shown). Most of these sampled liquid clouds are composed of supercooled water droplets.

Figures 12a and 12b show the distributions of V4 εeff,12 in ST and opaque liquid water clouds, respectively, for the month of January 2008 between 60° S and 60° N over ocean, for clouds with centroid altitudes > 4 km. Figures 12c and 12d show the respective median random uncertainties, which are about twice as large as the uncertainties in ice clouds (Figs. 2c and 2d) because of the smaller radiative contrast. Only 17% of these clouds are ST (Figs. 12a and 12c). Unlike in ST ice clouds, the distributions peak at εeff,12 ~ 0.2, and non-physical negative emissivity values are found in only 2% of the pixels. The εeff,12 distributions in opaque clouds peak at 1.02 at night and at 0.99 for daytime data, with an estimated uncertainty of ± 0.06. The spread around these peaks is larger than for ice clouds, which is explained by the larger uncertainties and specifically by a larger sensitivity to a wrong estimate of Tr. Thus, the nighttime and daytime fractions of samples with εeff,12 > 1, for which no microphysical retrievals are possible, are 45% and 27%, respectively. The daytime distributions in opaque clouds exhibit a tail down to εeff,12 ~ 0.4, while at night the lowest εeff,12 is ~ 0.65, which is very similar to what was observed for opaque ice clouds (Fig. 2b). This similarity suggests that the emissivity retrievals in ice and liquid water clouds are consistent, notwithstanding the unavoidably larger uncertainties in the latter.

Inter-channel effective emissivity differences

The variations with εeff,12 of the V4 Δεeff12-k inter-channel effective emissivity differences for the 12-10 and 12-08 pairs are shown in Figs. 13a and 13b, respectively. The nighttime (blue) and daytime (red) curves are median values, and the shaded gray areas lie between the V4 nighttime 25th and 75th percentiles. As for ice clouds, both Δεeff12-k tend nicely to 0 at εeff,12 ~ 0, owing to the improved computed background radiances demonstrated previously, which has a beneficial effect on retrievals in any ST layer.
Both Δεeff12-k have a second minimum at εeff,12 ~ 1, as expected, and this minimum is found slightly larger than 0. Both Δεeff12-k, and therefore both βeff12/k, are notably larger than for ice clouds (see Fig. 3), reflecting the presence of smaller particles in the liquid water distributions (Giraud et al., 2001; Mitchell and d'Entremont, 2012). As shown by Avery et al. (2020), the IIR microphysical indices are unambiguously larger in clouds classified as liquid water by the CALIOP phase algorithm than in clouds classified as ice. As previously, retrievals are deemed confident when both βeff12/k are found within the sensitivity range, which corresponds to De up to 60 μm for liquid clouds. The fraction of confident retrievals is found similar in liquid water clouds of centroid altitude > 4 km and in ice clouds. Following the same presentation as for ice clouds, the histograms of confident De and liquid water path (LWP) retrievals are shown in Figs. 14a and 14b, respectively, for ST and opaque clouds, and statistics are reported in Table 5. Both in ST and in opaque clouds, the nighttime and daytime De histograms are similar. In ST clouds, median De is 13 µm and median liquid water path is 3.4 g m-2 with a median random uncertainty of 1.2 g m-2. In opaque clouds, median De is 18 µm and median liquid water path is 25-31 g m-2 with a median random uncertainty of 10-15 g m-2. The maximum retrieved LWP is about 100 g m-2, consistent with the infrared saturation range of 40-60 g m-2 reported by Marke et al. (2016), who combined microwave and infrared ground-based observations to improve LWP and De retrievals in "thin" clouds that they defined as LWP < 100 g m-2. The authors report De between 10 and 14 μm in "thin" clouds of top altitude < ~ 1 km, which agrees well with the peaks of our distributions. Table 5: Statistics associated with V4 effective diameter (De) and liquid water path (LWP) retrievals in single-layered liquid water clouds of centroid altitude > 4 km between 60° S and 60° N over oceans in January 2008 (see Fig. 14). The retrievals are shown in Fig. 15 as a function of Tr, highlighting that most of these liquid clouds of centroid altitude > 4 km are supercooled, with Tr ranging between 235 and 280 K (Fig. 15a). Mean IIR De (Fig. 15b, red) increases steadily from 11 μm at 242 K to 18 μm at 270 K, while the mean CALIOP particulate depolarization ratio (Fig. 15c) is constant at around 0.1. At Tr > 270 K, De continues to increase up to 20 µm, while the CALIOP integrated particulate depolarization ratio decreases. As Tr decreases from 242 K to 235 K and the number of samples drops quickly, De increases up to 24 μm, and the CALIOP depolarization ratio increases up to 0.15, indicating a progressive transition to the ice phase. As seen in Fig. 15b, De12/10 and De12/08 are in fair agreement. The mean De12/10-De12/08 difference increases from -2 µm at 275 K to +3 µm at 245 K. This slight temperature-dependent discrepancy between the IIR observations and the water LUT could be explained by the fact that the complex refractive index is temperature dependent, as reported by Zasetsky et al. (2005) and Wagner et al. (2005), the complex refractive index of supercooled water being intermediate between that of warm water and that of ice (Rowe et al., 2013).
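The LWP values in Table 5 above follow from De and εeff,12 via Eq. 10 of Part I, which is not reproduced in this excerpt. As a hedged stand-in, a simple geometric-optics relation LWP = ρw De τvis / 3, with τvis obtained from the emissivity as before, reproduces the order of magnitude of the reported medians:

```python
import numpy as np

RHO_W = 1.0e6  # density of liquid water [g m^-3]

def lwp_from_emissivity(eps_eff_12, de_um, r=2.0):
    """Liquid water path from 12 um effective emissivity and effective
    diameter De. Assumed geometric-optics relation LWP = rho_w*De*tau_vis/3
    with tau_vis = -r*ln(1 - eps); a stand-in for Eq. 10 of Part I, which
    is not reproduced in this excerpt."""
    tau_vis = -r * np.log(1.0 - np.clip(eps_eff_12, None, 1.0 - 1e-9))
    de_m = de_um * 1.0e-6
    return RHO_W * de_m * tau_vis / 3.0  # [g m^-2]

# Example with typical ST values from Table 5 (De ~ 13 um, eps ~ 0.2):
print(lwp_from_emissivity(0.2, 13.0))  # ~2 g m^-2, same order as the median
```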
Further investigations will be carried out to establish whether the residual discrepancy between De12/10 and De12/08 would be reduced by using a new set of temperature-dependent indices, following the approach in Rowe et al. (2013). Nevertheless, these simple observations give confidence in the new V4 IIR De retrievals in ST liquid clouds. Comparisons with MODIS IIR confident retrievals in liquid water clouds were compared with MODIS Collection 6 retrievals from the visible/2.1 µm and visible/3.7 µm pairs of channels for clouds also classified as liquid water by MODIS. The results are shown in Fig. 16, following the same presentation as in Fig. 11 for ice clouds. Again, the cloud centroid altitude is chosen to be higher than 4 km, and, as previously for ice clouds, the comparisons shown in Figs. 16c-f are limited to those pixels for which the IIR, MODIS 2.1 and MODIS 3.7 retrievals (orange curves in Figs. 16a and 16b) were all successful. As seen in Fig. 16a, Tr spans between 235 K and 280 K for these sampled liquid clouds. In ST clouds, the three sets of median De (Fig. 16c) have different variations with temperature at Tr < 270 K: IIR De increases with Tr from 10 to 20 µm, whereas both MODIS 2.1 and 3.7 are larger than about 20 µm. In addition, MODIS τvis is larger than IIR τvis by about 50 % (Fig. 16d). In contrast, the three sets of De exhibit similar variations with Tr in opaque clouds (Fig. 16e). IIR De (red) is systematically smaller than MODIS 2.1 (green), by 8 μm on average. This is fairly consistent with findings by Di Noia et al. (2019), who compared MODIS 2.1 with new retrievals from POLDER-3 measurements and found that the MODIS 2.1 effective radius was larger by about 3 μm (De larger by 6 μm) for high oceanic clouds having pressures lower than 600 hPa. MODIS 3.7 retrievals (blue) are weighted closer to the top of the cloud than the corresponding MODIS 2.1 retrievals (Platnick, 2000), and are larger than IIR De estimates by only 3 μm. This is encouraging, despite the seemingly temperature-dependent discrepancy between MODIS and IIR τvis (Fig. 16f), where median IIR τvis (red) saturates around τvis = 5 while median MODIS τvis increases up to 15 at 240 K. Conclusions and perspectives This paper describes the impacts of the various changes implemented in the V4 IIR Level 2 algorithm on the effective emissivities and microphysical retrievals in ice clouds. We chose to illustrate and discuss the changes for one month's worth of data over ocean using a step-by-step approach so that data users can understand the differences and improvements that they should expect when using the recently released V4 IIR Level 2 data products. Retrievals in liquid water clouds, which were added in V4, are also presented. The IIR retrievals rely heavily on the scene classification reported for exactly co-located CALIOP observations. The results are presented for single-layer cases having the ocean surface as a reference and for which the CALIOP cloud classification and ice-water phase identification are determined with high confidence.
We show that in tenuous ST clouds, emissivity retrievals derived from both observed and computed background radiances are fully consistent in V4, whereas the inter-channel biases that were observed in V3 when the background radiance had to be computed introduced significant biases into the V3 microphysical retrievals. Our assessment is based on internal control criteria, i.e., the analysis of retrieved inter-channel effective emissivity differences at εeff,12 ~ 0. Because the background radiance has to be computed for approximately 70 % of the retrievals in ST clouds, the number of unbiased emissivity retrievals is increased by a factor of 3 in V4. In V4, the lowest effective emissivity for which microphysical retrievals are possible in more than 80 % of the pixels is reduced to ~0.05 (or τvis ~ 0.1). In contrast, this lowest emissivity limit in V3 was as high as 0.25 in those cases of computed background radiances and was driven by the large biases in the 12/08 pair. Furthermore, when microphysical retrievals were possible in V3, the different 12-10 and 12-08 inter-channel biases induced large differences between the De12/10 and De12/08 diameters retrieved from the respective microphysical indices. Perhaps one unique feature of the IIR algorithm is that the ice crystal model is selected according to the relationship between the IIR βeff12/10 and βeff12/08 microphysical indices. In V4, the "TAMUice 2016" SCO (severely roughened single column) model is selected in 80 % of the cases in ST clouds at Tr < 210 K, and this fraction decreases at larger temperatures. The "TAMUice 2016" CO8 (severely roughened 8-element column aggregate) model is selected in 40 % of the cases when clouds have radiative temperatures larger than 230 K. In ice clouds, De12/10 is on average smaller than De12/08, with larger discrepancies below 230 K than above. Employing a technique similar to the IIR algorithm, Heidinger et al. (2015) also noticed differences between effective diameters retrieved from the Aqua/MODIS 32/31 and 31/29 pairs of channels when using the "TAMUice 2013" CO8 model, which was chosen for the MODIS Collection 6 data products for its consistency between visible and thermal infrared optical depth retrievals (Holz et al., 2016). We could not find a perfect agreement between De12/10 and De12/08 in liquid water clouds supposedly composed of spherical droplets. In the range of temperature between 240 K and 260 K, where both ice and liquid water clouds are found, De12/10 is larger than De12/08 in liquid water clouds while it is smaller in ice clouds, suggesting that these mismatches are not due to undetected residual biases in the IIR microphysical indices but instead to our LUTs. As noted earlier, the residual mismatch in liquid water clouds could be explained by inaccuracies in the refractive indices, which are taken to be constant whereas temperature-dependent indices have been reported (Zasetsky et al., 2005; Wagner et al., 2005). Likewise, the "TAMUice 2016" single-scattering properties are derived using refractive indices at 266 K (Warren and Brandt, 2008), but Iwabuchi and Yang (2011) reported that the temperature dependence of these properties in the thermal infrared is small but not negligible. While in V3 mismatches between IIR retrievals and the LUTs were largely due to inter-channel biases in the IIR retrievals, the improved accuracy in V4 opens the possibility for more detailed comparisons with theory or modeling.
Retrievals in opaque ice clouds are improved in V4, especially at night and for the 12/10 pair of channels, owing to corrections of the radiative temperature estimates. Refining the relationship between lidar geometric altitudes and infrared radiative temperature based on theoretical considerations (Part I) is deemed important per se, and quasi-perfectly co-located IIR and CALIOP observations offer a unique opportunity to test our theoretical approach. To make further progress on this topic and assess the V4 radiative temperature estimates in opaque clouds, the next step will be to use CloudSat extinction profiles from the lower parts of the clouds not seen by CALIOP. Daytime comparisons with Aqua/MODIS Collection 6 data products are presented for co-located pixels where V4 IIR, MODIS 2.1 and MODIS 3.7 all have successful retrievals. This comparison demonstrated that IIR is best suited for retrievals in tenuous clouds of emissivity < 0.2, while MODIS is more efficient for denser clouds of emissivity > 0.8. IIR De is in better agreement with MODIS 3.7 than with MODIS 2.1 in tropical ST ice clouds at Tr < 200 K. In contrast, IIR De is in agreement with MODIS 2.1 in tropical opaque ice clouds at Tr < 205 K and in fair agreement with MODIS 3.7 at warmer temperatures. For opaque liquid water clouds having centroid altitudes greater than 4 km, so chosen to ensure sufficient radiative temperature contrast for the IIR retrievals, IIR De is systematically smaller than MODIS 2.1 by 8 µm and smaller than MODIS 3.7 by 3 µm. The IIR technique appears to be perfectly suited for retrievals in ST supercooled liquid water clouds. Author contribution AG and JP defined the content and methodology of the paper and wrote the original draft. AG performed the data analysis and prepared the figures. NP was in charge of software development. MV provided assistance for the use of the CALIOP data. PD provided the FASRAD and FASDOM radiative transfer models and bulk scattering properties. PY provided the ice crystal models from the "TAMUice 2016" database. DM provided the analytical functions derived from in situ measurements. All authors contributed to the review and editing of this paper. Competing interests Author Jacques Pelon is a co-guest editor for the "CALIPSO Version 4 Algorithms and Data Products" special issue in Atmospheric Measurement Techniques but will not participate in any aspects of the editorial review of this manuscript. All other authors declare that they have no conflicts of interest.
14,225.4
2020-11-09T00:00:00.000
[ "Environmental Science", "Physics" ]
Coiling free electron matter waves Here we demonstrate particle beams that spiral in free space devoid of external fields. The beams consist of electrons in two lobes that twist around each other along the optical axis, such that each electron can be described by a two-lobed probability distribution that rotates as it propagates. Furthermore, we demonstrate that this twisting distribution can undergo programmed periods of angular acceleration. These unusual states are produced by preparing each free electron wavefunction in a superposition of non-diffracting Bessel modes that carry orbital angular momentum using nanofabricated diffraction holograms. The holograms can encode a nonlinear azimuthal phase so that the resulting electron probability distribution twists in space in a controllable manner, accelerating and decelerating during propagation, without any radiation. This work provides a new platform to explore the dynamics of electrons in magnetic fields, and opens the possibility of producing other types of particle beams with coiling geometries. Introduction Recent work with diffractive electron optics in transmission electron microscopes (TEMs) has allowed for the reliable creation and study of structured electron beams [1]. One such structure is a freely-propagating electron matter wave with a helical phase e^{iℓφ}, where ℓ is an integer and φ is the azimuthal coordinate, winding about the azimuth, called an electron vortex [2][3][4]. The phase vortex has an associated quantized orbital angular momentum (OAM). Electron beams with OAM-carrying phase vortices result from a long tradition within physics dating back to Dirac or even earlier [5,6]. The first experimental demonstration of electron vortices was in 2010 by Uchida and Tonomura [7], who employed a graphite staircase to approximate a spiral phase plate, which is commonly used in optics [8]. Current work utilizing off-axis holograms within a TEM allows for a high degree of control over the structure of a diffracted electron beam [9][10][11][12]. Electron vortex beams have applications in numerous scientific studies, including potential magnetic monopole detection [13][14][15], measuring magnetic properties [16][17][18], atomic scale resolution techniques [19], and magnetic dichroism experiments [20]. It should be noted that in all these works the term 'vortex' denotes the phase of the wavefunction only, not the motion. The movement of an electron vortex probability distribution deviates only slightly from that of a normal, non-vortex electron beam, and can be described by a collection of straight trajectories with a very slight azimuthal skew between them [6,21]. Here we create an electron beam with a probability distribution that observably rotates with a controllable angular velocity as it propagates, forming a coil. The particle beam follows a helical trajectory in free space, shown experimentally in figure 1, without interacting with an external force or potential. We achieve this by preparing each electron in the beam in a coherent superposition of quantized orbital states. Nanofabricated diffraction holograms are used to manipulate the phase of electron matter waves such that they are described by combinations of Bessel states with a nonlinear azimuthal phase.
The resulting electron matter wave is non-diffracting over a finite range, with a probability density that exhibits local angular acceleration in a controllable manner. The matter wave domain allows us to investigate properties not evident in the optical domain: we study the electrodynamic properties of these angularly accelerating waves and confirm zero radiation (zero Poynting vector) outward from the accelerating charged beam, consistent with the view that the acceleration is local and arises through interference, while the particle trajectories remain straight lines even though the wave appears to be spiralling. The motion of the spiralling lobes is akin to that in an artificial magnetic field, but is resolved quantum mechanically by a simple model of an electron as an extended wavefunction, thus providing deeper insight into angular motion. Description of coiling electron beams The coiling electron beams we demonstrate can be described by a combination of four Bessel wavefunctions [12,22]. A pure Bessel mode is described in cylindrical coordinates (r, φ, z) by ψ_ℓ(r, φ, z) = J_ℓ(k_r r) e^{iℓφ} e^{ik_z z} (equation (1)). This describes a collection of plane waves with wavenumber k = √(2mE)/ℏ all converging along the optical axis in a cone of semi-angle α, such that the radial and longitudinal components of the wavevector are given by k_r = k sin α and k_z = k cos α, respectively. The amplitude of the wave is described by a Bessel function J_ℓ, where the topological charge ℓ gives rise to an OAM of ℓℏ per electron. For a single Bessel mode, the phase varies linearly by 2πℓ about the azimuth, and the mode is referred to as a canonical vortex field [23]. A superposition of Bessel states of opposite helicities and different convergence angles creates angularly rotating and accelerating electron waves (equation (2)), where D is a real parameter between 0 and 1, referred to as the anisotropic parameter. The difference in convergence angles, α_1 and α_2, gives rise to angular rotation of the wave. This can most readily be seen in the simplest case, where D=0, in which the states are defined by a balanced superposition of canonical vortices. In this case, the total wavefunction becomes proportional to cos(ℓφ + Δk_z z), up to a radial envelope and an overall propagation phase. As illustrated in figure 2(a), this state is defined by a petal structure rotating at a constant rate [24][25][26][27] with propagation distance, given by Φ = Δk_z z/ℓ. When the anisotropic parameter is non-zero (D>0), the rate at which the wavefunction's petal structure rotates begins to vary upon propagation [28][29][30], thus resulting in an angularly accelerating and decelerating electron wave, as shown in figures 2(b), (c). The dynamics of this wavefunction can be more easily grasped when equation (2) is rearranged into the form of equation (3), with Δk_z = (k_z1 − k_z2)/2. Here we see that the anisotropic parameter D introduces a nonlinear variation of the azimuthal phase profile j_ℓ(φ) of the wavefunction, given by equation (4). To demonstrate the accelerating motion of this state, we set the phase term Δk_z z + ℓ j_ℓ(φ) to a constant, which in turn allows us to track any point of interest within our wavefunction, for example, one of the petals, as tracked in figure 2. From this we find that the rotation (Φ), angular velocity (∂_z Φ), and angular acceleration (∂²_z Φ) as a function of the propagation distance are given by equation (5).
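The constant-rate rotation predicted for D=0 can be checked with a short numerical sketch that propagates the two-mode superposition on a ring of fixed radius and compares the lobe orientation with the predicted drift Δk_z z/ℓ; all parameter values below are illustrative and are not those of the experiment:

```python
import numpy as np
from scipy.special import jv

# Two-mode Bessel superposition sampled on a ring of fixed radius;
# parameter values are illustrative, not the experimental ones.
l = 1
kz1, kz2 = 1.00, 1.05          # longitudinal wavenumbers (arb. units)
kr1, kr2 = 0.30, 0.25          # radial wavenumbers (arb. units)
dkz = 0.5 * (kz1 - kz2)        # Delta k_z as defined in the text

r = 5.0
phi = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)

def intensity(z):
    """|psi|^2 of J_l e^{i(l phi + kz1 z)} + J_{-l} e^{i(-l phi + kz2 z)}."""
    psi = (jv(l, kr1 * r) * np.exp(1j * (l * phi + kz1 * z))
           + jv(-l, kr2 * r) * np.exp(1j * (-l * phi + kz2 * z)))
    return np.abs(psi) ** 2

lobe0 = phi[np.argmax(intensity(0.0))]
for z in (0.0, 10.0, 20.0):
    lobe = phi[np.argmax(intensity(z))]
    drift = ((lobe - lobe0) + np.pi / 2) % np.pi - np.pi / 2  # modulo pi
    print(f"z={z:5.1f}  measured drift {np.degrees(drift):6.2f} deg, "
          f"predicted {np.degrees(-dkz * z / l):6.2f} deg")
```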
Results We produced these beams using diffraction from nanofabricated holograms, such as the one shown in figure 3. The hologram was placed in the front focal plane of a lens (one focal length before the lens) in a TEM, such that the Fourier transform of the hologram occurred in the Fourier plane of the lens (one focal length after the lens). The TEM was adjusted to demagnify and project an image of this diffracted electron pattern onto a detector. The Fourier transform of a Bessel mode forms an azimuthal phase around a ring of infinitesimal thickness we call a 'delta' ring. The azimuthal phase, when encoded by a sinusoidal carrier, results in a two-dimensional sinusoidal pattern featuring a fork dislocation [10]. In an actual hologram, the delta ring must have a finite thickness in order to host the sinusoidal carrier and diffract a reasonable electron current into the desired beam. However, the non-zero ring thickness approximating the radial delta function results in a diffracted beam of finite width, which affects the quality of the generated Bessel beam and the focal range over which it is approximately Bessel-like. Thus we chose a width such that the diffracted beam was very close to a true Bessel beam within the experimental focal range, while still having sufficient intensity. To create the twisting electron beam composed of superpositions of Bessel beams, the hologram was designed with two distinct delta rings with slightly different radial parameters r′. The thickness of each ring is 200 nm, while the diameters of the rings are 8.0 and 7.0 μm. To separate the diffracted beams, the period of the carrier wave is 75 nm. The relatively small diameter of the holograms was chosen so that the resulting diffracted beam would be large enough at our detector to observe the detail of the beam. With this hologram we are able to observe spiralling electron waves for the first time, as shown in figure 1. The results are shown for the ℓ = ±1 superposition of non-canonical vortex beams. To determine the angular velocity and acceleration, images of the beam's intensity profile were taken for ℓ=1 and D=0, 0.158, 0.325, and 0.510 at equally spaced locations along the optical axis in a focal series. The angular orientation of the probability distribution was extracted from these images, with the results shown in figure 4. The anisotropic parameter that we encoded into the diffraction grating is given by D_exp in the figures and serves as the 'expected' anisotropic parameter, while D_fit is retrieved by fitting equation (5) to the data. The expected and fitted acceleration points align very closely. We note that small changes in D make almost imperceptible differences in the rotation unless D is close to one, as seen in the difference between the fitted and predicted curves in figure 4. With these considerations in mind, we claim that the measured values given by D_fit align well with the encoded D_exp. Our measured velocity and acceleration closely match the predicted data, as shown in figure 5. The data represented in figure 4 were used to reconstruct the three-dimensional structure of the beam shown in figure 1. Figure 3. Hologram. Our hologram is placed in the front focal plane of a lens (a focal length before the lens) and consists of a double ring slit of differing radii, producing an accelerating wave in the Fourier plane (a focal length after the lens) (see Supplementary Information). Each ring is designed to produce a non-canonical Bessel vortex beam from a superposition of two canonical Bessel vortex beams.
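A simplified version of such a double-ring hologram can be synthesized numerically. In the sketch below, each annulus carries a single canonical +ℓ or -ℓ azimuthal phase on a binarized sinusoidal carrier, whereas the experimental rings each encode a non-canonical phase; the ring dimensions follow the text, but the grid size and binarization are arbitrary choices:

```python
import numpy as np

# Binary double-ring hologram: two annular ("delta") apertures carrying
# azimuthal phases on a binarized sinusoidal carrier. For simplicity each
# ring here carries a single canonical +l or -l phase; the experimental
# rings each encode a non-canonical phase instead.
l = 1
period = 0.075        # carrier period [um]
width = 0.200         # ring width [um]
radii = (4.0, 3.5)    # ring radii [um] (8.0 and 7.0 um diameters)

n = 1024
x = np.linspace(-5.0, 5.0, n)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
PHI = np.arctan2(Y, X)

holo = np.zeros((n, n))
for r0, sign in zip(radii, (+1, -1)):
    ring = np.abs(R - r0) < width / 2                      # annular aperture
    carrier = np.cos(2 * np.pi * X / period + sign * l * PHI)
    holo += ring * (carrier > 0)                           # fork grating

print("open-area fraction:", holo.clip(0, 1).mean())
```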
Discussion The energy flow associated with spontaneous angular accelerations of electron matter waves can be understood by considering the probability current distribution j = (ℏ/m) Im(ψ*∇ψ). The vectorial component of the wave's probability current density, and thus the momentum trajectories inside the wave, can be attributed to the gradient of its phase. That is, for a wavefunction ψ, the corresponding current density satisfies j ∝ ∇arg ψ, where arg ψ refers to the argument, or the phase, of the wavefunction [31]. The transverse phase of the angularly accelerating wavefunction (equation (3)) takes only the values 0 and π. This reveals that the two intertwined lobes in the accelerating wavefunction are π out of phase. Despite an apparent helical probability current density, we find that the transverse gradient of the phase is zero everywhere, and thus the current lines correspond to straight trajectories in the longitudinal direction, as shown in figure 6(a). To emphasize this claim, we simulate the propagation of a wave ψ_acc, where ℓ=1 and D=0.5, through an aperture aligned such that one of the main lobes of the wave's probability density passes through it. The results of these simulations are shown in figure 6(b), where we observe that the wave ceases rotating once it goes through the aperture. We also model the electric and magnetic fields emanating from these matter waves. We adopt a semiclassical model in which we equate the waves' electric charge ρ_e(r) and current j_e(r) densities to their probability densities [2]. Explicitly, we let ρ_e(r) = eρ(r) and j_e(r) = e j(r), where e is the electron's charge and ρ(r) = |ψ(r)|². Here we consider a more realistic model of a coiling electron wavepacket of finite extent in both the longitudinal and transverse directions. The finite energy spread of the electrons is accounted for by modulating the longitudinal component of our wavefunctions by a Gaussian function. We also replace any occurrences of Bessel wavefunctions by Bessel-Gauss solutions [32] to the Schrödinger equation, which account for the propagation of Bessel waves modulated by a radial Gaussian function. We numerically compute the electromagnetic potentials due to these charge and current distributions, and derive the resulting electric and magnetic fields. In figure 6(c), we consider the case of a finite version of the accelerating wave introduced in equation (3), where we let k_z2 = 2k_z1, ℓ=1, and D=0.5. In both cases, we can see that the radial component of the waves' Poynting vector, which is proportional to E×B, falls off faster than r⁻², thus implying that the wave's static fields do not lead to radiation.
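The statement that the two π-out-of-phase lobes carry no transverse probability current can be verified directly: a real-valued transverse profile has zero transverse phase gradient. A minimal sketch, assuming the standard definition j = (ℏ/m) Im(ψ*∇ψ) evaluated by finite differences on a toy two-lobed profile:

```python
import numpy as np

HBAR = 1.0545718e-34  # reduced Planck constant [J s]
M_E = 9.1093837e-31   # electron mass [kg]

def probability_current(psi, dx):
    """Transverse probability current j = (hbar/m) Im(psi* grad psi),
    evaluated with centered finite differences on a 2D grid."""
    gy, gx = np.gradient(psi, dx)  # axis 0 is y, axis 1 is x
    jx = (HBAR / M_E) * np.imag(np.conj(psi) * gx)
    jy = (HBAR / M_E) * np.imag(np.conj(psi) * gy)
    return jx, jy

# Toy two-lobed profile: the sign flip across x=0 is a pi phase jump,
# so the wavefunction is real and the transverse current vanishes.
dx = 1e-9
x = np.arange(-128, 128) * dx
X, Y = np.meshgrid(x, x)
psi = (X / dx) * np.exp(-(X**2 + Y**2) / (2 * (20 * dx) ** 2))
jx, jy = probability_current(psi.astype(complex), dx)
print(np.abs(jx).max(), np.abs(jy).max())  # both 0: straight trajectories
```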
Conclusion We have demonstrated a twisted electron beam with a rotating intensity distribution within the cavity of a TEM. In addition to producing lobed charge distributions that rotate constantly in time without an external field, we also demonstrate charge distributions that periodically exhibit angular acceleration. While this behavior has been demonstrated before in beams of light, it is not immediately obvious that the same local angular accelerations can be produced in beams of charged, massive particles. Indeed, classical physics prohibits such behavior. Our result is understood by considering that each electron is described by a symmetric matter wave, the centroid of which does not accelerate, and that the rotations and angular accelerations of individual components of this wavefunction are local in nature. The angular velocity and acceleration of the field could be inferred and showed good agreement with the theoretical predictions. The results challenge conventional notions of rotational kinetics, but the paradoxes they raise can be resolved by a simple model of the electron as an extended wavefunction, thus providing deeper insight into angular motion. Our work introduces a new class of matter wave based on controlling the angular acceleration of an electron's wavefunction in a predictable manner. Earlier reported electron vortex states undergo only partial orbits and can be described by straight trajectories of classical particles [2,10]; their helical structure only represents the timing of the wave, whereas here we demonstrate electron states with an actual winding probability current density. A series of previous investigations showed that the OAM degree of freedom of electron wavefunctions in a longitudinal magnetic field can couple to Landau states, which can result in a nonuniform angular rotation through nearly 90° [33][34][35][36], but the size of the beam changes as well, and the authors note that this behavior can be described by classical charged particles following ray trajectories within an external field. The Gouy phase may also be used to rotate beams and has been demonstrated in the optical [29] and matter wave [33] regimes. Our work differs from these previous approaches in that we demonstrate rapidly coiling currents in free space devoid of external fields, in a beam that does not change size. Furthermore, unlike previous works, our approach allows for the angular acceleration to be tuned in a controllable way both in magnitude and over an extended distance. In addition to providing a rich new system to explore OAM in the quantum regime, the rotating electron beams could potentially be used for several applications. Figure 6. Semiclassical dynamics and electromagnetic fields of angularly accelerating electron waves. (a) Current lines of an angularly accelerating wavefunction which, in this case, correspond to the wave's ray optics classical trajectories. The arrows form a contour of the wave's probability density at the focal plane and are colored based on the phase at the corresponding points, illustrating the two-lobed structure. In the limit where the two converging waves forming the spiralling distribution have very similar convergence angles α, there are no phase gradients in the wave's profile. Therefore, the probability current lines are straight and they correspond to the electron's classical trajectories. (b) Simulated behavior of an angularly accelerating electron wave upon propagating through an aperture, where k_z1 and k_z2 are scaled to observe diffraction-related effects. The shown surfaces are defined by the half-maximum value of the wave's maximum probability density in a given plane. (c) Probability density (|ψ|²) of a coiling electron wavepacket, where ℓ=1 and D=0.5, shown in the xy, xz, and yz planes. Its static electric (E) and magnetic (B) fields along with their cross product are also shown. The vectorial component that is transverse to the plane, v_ij (v_xy, v_xz, v_yz), is shown as black arrows plotted on top of the perpendicular component, v_k (v_z, v_y, v_x). The opacity of the arrows and the scheme's intensity are jointly normalized to faithfully represent the relative magnitudes of each vector component. There have been reports of using canonical electron
vortex beams to rotate nanoparticles on dry substrates [37,38] as well as in liquid environments [39]. In optical traps, coiling light beams have been used to create multiple traps that can each manipulate microparticles [40]. Perhaps these techniques can now be combined to manipulate multiple particles at the nanoscale. Furthermore, self-coiling electron beams could provide a rich system to explore resonance conditions in high-power beam systems, such as helical undulators in a synchrotron or free electron laser, or inside high-power vacuum electronics.
4,018
2019-04-15T00:00:00.000
[ "Physics" ]
Simulations of Policy Responses and Interventions to Promote Inclusive Adaptation to and Recovery from the COVID-19 Crisis in Ecuador COVID-19 has had a devastating effect on the economy and the health of households around the world. In this study, we evaluate the economic impact of COVID-19, as well as the effect of government interventions aimed at alleviating it, on the welfare of Ecuadorian households in terms of income shocks, poverty rates, and inequality. The empirical strategy used is to measure the mean income shock by gender and economic sector based on cross-sectional data from December 2019, May 2020, and September 2020, and use these estimates to simulate individual income shocks from the December 2019 data. This allows us to disaggregate our analysis by demographic and employment profile in order to identify groups at risk and help guide future government COVID recovery programs. We find that by May 2020, poverty had more than doubled, reaching 57%, and average income had fallen by more than 50%. Informal workers, rural populations, indigenous households, and households with young children were among those most affected. Government interventions thus far have had a negligible effect in the aggregate, but they may have been crucial for the subsistence of households below the poverty line. DOI: https://doi.org/10.34196/ijm.00271 Introduction The sudden appearance and rapid spread of the COVID-19 virus pushed governments around the world to partially shut down their economies in order to limit contact and suppress transmission. During the first trimester of the pandemic, Ecuador was among the countries hardest hit by the virus. Even though it was one of the first countries to impose lockdown measures, according to an analysis of mortality data by The New York Times, the overall number of deaths in Ecuador between March and October 2020 was 36,800 higher than in the same period in previous years - that is, 2.97 times higher than the number of deaths officially reported. The economic effects of the pandemic are widely felt in the country, which was also dealing with one of its worst economic crises in decades at the time of the virus outbreak. Most Ecuadorian households are economically vulnerable to income shocks, and COVID-19's dual impact on both supply and demand has exacerbated this vulnerability. The social distancing and lockdown measures needed to reduce the spread of the virus have had important consequences for the labour market and private transfers, and thus directly affected households' economic well-being. In addition, a large share of workers are informal workers (they accounted for 66% of total employment in 2019). The government responded with two main interventions: 1. The Humanitarian Support Law, which introduces minor tax relief measures and labour reforms along with other minor amendments to renegotiate commercial debt. In terms of labour reforms, it allows for the modification of existing economic conditions in current labour contracts, in particular, the reduction of employees' working time by up to 50 per cent of normal working hours, and thus a reduction in payments. In terms of social security coverage and unemployment protection, it allows salaried workers who have been laid off to apply for unemployment insurance after 10 rather than 60 days of unemployment, which was the previous eligibility requirement. The government increased its unemployment insurance expenditure by $372 million. 2.
The Family Protection Bond for Emergencies, which is a temporary emergency program targeting families whose income is below the minimum wage and who do not have access to social security (informal workers). The government spent $250 million on this program. Yet, these policies seem quite modest compared to the economic impact of COVID-19. It is therefore key to evaluate the impact of COVID-19 on Ecuadorians' economic well-being and the effectiveness of current policies. These evaluations must consider differences across key demographic and employment profiles, including gender, age, ethnicity, education, rural/urban area, formal/informal worker status, income decile, and firm size, to identify vulnerable groups and help policymakers shape future efforts to alleviate the economic impact of the pandemic. This project is divided into three parts. First, we use cross-sectional data from household labour surveys to estimate COVID-19's impact on labour income by gender and economic sector from December 2019 to May 2020 and September 2020, as well as its overall impact on non-labour income. Then, we use these estimates to simulate individual income and household per capita income post-COVID-19. We analyse COVID's average impact on income and poverty rates among key subpopulations, as well as its overall effect on inequality. Finally, we run simulations of the effect of existing alleviation policies. The remainder of this paper is organised as follows: Section 2 presents Ecuador's economic context pre-COVID; in Section 3, we discuss related studies; Section 4 describes the data and the empirical strategy used; Section 5 presents our results; and Section 6 concludes this paper. Ecuador's economic context Ecuador already had a fragile economy when it became one of the countries most affected by COVID-19. Since 2015, its average economic (GDP) growth has been almost zero and its per capita GDP has decreased every year, except in 2017 when it grew marginally (see Table 1). Unemployment fell from 4.7% to 3.8% in 2019, but this was because of lower participation in the labour market and an increase in informal work from 58.4% to 66.07%. Part of this growth in informality is driven by growth in self-employment, from 34% to 38%. This increase is likely attributable to workers who could not find salaried work starting low-productivity subsistence activities. Labour income also fell, and fiscal accounting deteriorated. The country also has high levels of income inequality (its Gini coefficient was 0.459 in 2017, 0.469 in 2018, and 0.473 in 2019). Furthermore, the incidence of poverty is high and trending upward (21.5% in 2017, 23.2% in 2018, and 25% in 2019). For the vast majority of households in the country, labour income accounts for the main, if not the only, source of income. In December 2019, labour income accounted for 82% of total household income. Other sources of income include conditional cash transfers (CCT) from the government (CCT accounted for 15% of eligible households' total income) and private transfers. In terms of social security coverage, only 53.9% of wage workers, or 24.7% of the working population, were registered with the system's contributory scheme in 2019. If we take into account unpaid workers (16%) and self-employed individuals (38%), on average 75% of the working population was not covered by the social security system in 2019.
Given that these workers have limited savings capacity to cope with economic shocks and do not have access to unemployment protection, changes in labour income that affect this group of workers are particularly important for economic policy. Related literature Several recent studies have focused on evaluating COVID-19's impact on the world economy. For instance, the ILO (2021) estimated that the COVID pandemic resulted in 114 million lost jobs worldwide in 2020 compared to 2019. Bottan et al. (2020) found from online surveys conducted in 17 countries in Latin America and the Caribbean that 45% of respondents reported that a household member had lost their job and 58% of respondents from business-owning families reported that a household member had closed their business. More strikingly, they found that, among respondents whose pre-COVID household income was below their national minimum wage, 71% reported that a household member had lost their job and 61% reported that a household member had closed their business. The authors show that the crisis due to the pandemic has exacerbated economic inequality. At the macro level, Barro et al. (2020) used data from the great influenza pandemic of 1918-1920 to provide upper bounds for COVID-19 outcomes for a set of 48 countries. The authors predict a major global economic contraction of about 6 per cent for GDP and 8 per cent for consumption in a typical country. Sumner et al. (2020) investigated three different scenarios for economic contractions due to COVID-19 and their impact on poverty headcounts using international poverty lines. The authors estimate that in the most extreme scenario, i.e., a 20 per cent contraction of income or consumption, the number of people living in poverty could potentially increase by 420-580 million relative to 2018. Regarding Ecuador in particular, Jara et al. (2021) used a microsimulation model to study the role of tax-benefit policies in mitigating the immediate impact of the economic shock. They found that the policies do little to mitigate losses in household income due to COVID-19. They also report - in line with our findings - that inequality increased and poverty more than doubled. Yet, their study considers aggregate measures. In contrast, this paper analyses how COVID has impacted subgroups such as different demographic groups and types of workers. This is important as there are large differences in the level of vulnerability of different groups. For instance, gender and ethnic disparities in Latin American and Caribbean (LAC) countries are well documented, and Ecuador is no exception. Canelas and Salazar (2014) used household surveys from Bolivia, Ecuador, and Guatemala to show that women are highly discriminated against in the labour market and undertake most domestic household activities in those three countries. Atal et al. (2009) used data from eighteen Latin American countries and found that in most of them, women are more likely than men to hold low-paid occupations and gender earning gaps remain substantial. Cunningham and Jacobsen (2008) analysed data on Bolivia, Brazil, Guatemala, and Guyana, and used simulations to show that there is significant income inequality across genders and ethnic groups in these countries. Data The data used in this paper are drawn from the 2019 and 2020 editions of the National Survey of Employment and Unemployment conducted by the Ecuadorian National Institute of Statistics (INEC). 6
The ENEMDU is a nationally representative cross-sectional survey that collects detailed information on household demographics, occupations and labour force participation, housing and asset ownership, and labour and non-labour income. Examples of non-labour income are contributions from social assistance and private transfers. The data also allow us to identify formal workers who are registered with the national social security system. Workers who are registered with the public contributory pension scheme have access to health care services and unemployment and retirement benefits. They also have the right to earn at least the minimum wage, be paid for overtime, and receive mandated benefits such as a Christmas bonus and profit sharing at the end of the fiscal year (Canelas, 2014; Canelas, 2019). In contrast, informal workers do not have any social security coverage. Empirical strategy The aim is to estimate individual income post-COVID and use it to analyse changes for key demographic groups. 7 We start by computing the actual change in average income for each economic sector in May 2020 and September 2020 (the two post-COVID cross-sectional data sets) with respect to December 2019 (the last pre-COVID cross-sectional data set) (see Table 2). To do so, we compute mean labour income, Ȳ_{t,g,s}, and total employment, N_{t,g,s}, by gender, g, and economic sector, s, for each cross-sectional period, t. We then compute the change in labour income, ΔY_{t,g,s}, at time t (May 2020 and September 2020) with respect to December 2019 pre-COVID levels, Ȳ_{0,g,s}, while treating jobs lost in a given sector as zero income, so as to determine expected post-COVID income accounting for the probability of unemployment in each sector: 8 ΔY_{t,g,s} = (Ȳ_{t,g,s} N_{t,g,s}/N_{0,g,s}) / Ȳ_{0,g,s} − 1. We also compute the change in mean individual non-labour income, ΔZ_t, again accounting for zeros. Non-labour income includes remittances, government transfers, and private transfers. 9 We use these shocks to estimate an individual's expected post-COVID income, y_{i,t,g,s}, based on their gender and economic sector and the projected change in individual non-labour income: y_{i,t,g,s} = y_{i,0,g,s}(1 + ΔY_{t,g,s}). 6. Encuesta Nacional de Empleo y Desempleo (ENEMDU). 7. We do not have data on price changes per category, nor consumption per category. Thus, we capture the effect of price changes by considering changes in real income. 8. We rely on shocks by gender/economic sector because the sampling strategy had to change to adapt to lockdown in 2020 and, thus, the data sets are not comparable at the micro level - comparing income regressions with full demographics in each period would be misleading. Yet, all the data sets are representative at the macro level, and thus shocks by sector are more transparent. 9. Unfortunately, there was a problem with the remittance variable in the 2019 data; we can thus consider only total non-labour income. We then use these individual-level estimates to compute the change in total household per capita income, as well as household per capita labour and non-labour income. We use household per capita income to compute poverty rates (the percentage of individuals with per capita household income below the official poverty line of $84.81) and inequality. Finally, we simulate the impact of current government cash transfers in response to the crisis and additional unemployment insurance expenditures.
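The shock estimation and income simulation described above reduce to a few grouped operations. A minimal sketch, in which the column names ('gender', 'sector', 'labour_income', 'nonlabour_income', 'household_id') are hypothetical placeholders rather than actual ENEMDU variable names, and both data frames are assumed to hold one row per employed worker:

```python
import pandas as pd

def sector_shocks(base: pd.DataFrame, post: pd.DataFrame) -> pd.Series:
    """Delta Y_{t,g,s} = (Ybar_t * N_t / N_0) / Ybar_0 - 1 by gender/sector."""
    g0 = base.groupby(["gender", "sector"])["labour_income"].agg(["mean", "size"])
    g1 = post.groupby(["gender", "sector"])["labour_income"].agg(["mean", "size"])
    return (g1["mean"] * g1["size"] / g0["size"]) / g0["mean"] - 1.0

def simulate_post_covid(base: pd.DataFrame, shocks: pd.Series, dZ: float) -> pd.DataFrame:
    """Apply y_i (1 + Delta Y_{g,s}) to labour income and (1 + Delta Z) to
    non-labour income, then rebuild household per capita income."""
    out = base.copy()
    keys = list(zip(out["gender"], out["sector"]))
    out["labour_income"] = out["labour_income"] * (1.0 + shocks.loc[keys].to_numpy())
    out["nonlabour_income"] = out["nonlabour_income"] * (1.0 + dZ)
    out["income"] = out["labour_income"] + out["nonlabour_income"]
    hh_total = out.groupby("household_id")["income"].transform("sum")
    hh_size = out.groupby("household_id")["income"].transform("size")
    out["pc_income"] = hh_total / hh_size
    return out

# Poverty rate against the official line quoted in the text:
# poverty = (sim["pc_income"] < 84.81).mean()
```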
Our strategy to evaluate the Family Protection Bond for Emergencies consists of considering the government's total transfer expenditure ($250 million) and distributing it evenly across the simulated post-COVID incomes of the individuals who qualify - those who are below the extreme poverty line. To simulate the effect of the additional unemployment coverage, we consider the government's total additional expenditure on unemployment ($372 million) and distribute it equally among formal workers. 10,11 The advantage of this exercise is that we can then analyse income shocks and poverty rates for key demographic and employment profiles. Evaluation of the impact of COVID-19 We start by reporting the empirical impact of COVID-19 on mean labour income and employment by gender and economic sector (see Table 2). In May 2020, aggregate mean income was down 51% compared to December 2019, and employment, 21%. Among females, the economic sectors that were most affected during this period were restaurants/hotels, personal services, and real estate, with drops in average income of 80%, 72%, and 62%, respectively. Among males, the most affected sectors were construction, restaurants/hotels, and personal services, with drops in average income of 86%, 71%, and 67%, respectively. By September 2020, the change in mean income across sectors had improved (-12% compared with December 2019) and average income had recovered in several sectors such as education, health, and services. There are gender differences in the change in employment. In May 2020, female employment was down 26%, while male employment had dropped 20%. By September 2020, the change in employment across genders and sectors had improved, although it remained 7% lower than in December 2019 for both male and female workers. Individual non-labour income had dropped 13% in May 2020 - despite a 34% increase in government transfers, albeit from a very low base - but had returned to its pre-COVID level by September 2020 (see Table 3). We use the estimates from Table 2 to simulate the change in individual labour income, and the last two columns of Table 3 to simulate the change in individual non-labour income, as explained in the methodology. We then construct simulated household per capita income, for which we estimate a drop of 44% in May 2020 and 10% in September 2020 (see Table 4). As expected, we can see that the drop in household per capita income was driven mainly by the drop in individual labour income. Heterogeneity in individual labour income shocks In Table 5, we analyse the shocks to simulated labour income for key subgroups. We find that informal workers' labour income was affected considerably more than formal workers' (-60% vs. -42% in May 2020, and -17% vs. -8% in September 2020). 12 This is particularly severe considering that 75% of workers are in the informal subgroup. Workers in urban and rural areas experienced a similar labour income shock (-52%), but income in urban areas recovered more by September (-12% vs. -16%). Also, rural workers earned considerably less than urban workers in all periods.
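The evaluation of the two programs described at the start of this passage amounts to adding an equal share of each budget to the simulated incomes of qualifying individuals. A minimal sketch, continuing the hypothetical columns of the previous snippet, with a placeholder value for the extreme poverty line (not stated in this excerpt):

```python
import pandas as pd

def add_transfers(sim: pd.DataFrame, extreme_poverty_line: float = 47.0) -> pd.DataFrame:
    """Distribute the Family Protection Bond ($250M) evenly among individuals
    below the extreme poverty line and the extra unemployment budget ($372M)
    evenly among formal workers. The line value of $47 is a placeholder, and
    'formal' is a hypothetical 0/1 column; for brevity the shares are added
    directly to per capita income rather than re-aggregated by household."""
    out = sim.copy()
    poor = out["pc_income"] < extreme_poverty_line
    formal = out["formal"] == 1
    out.loc[poor, "pc_income"] += 250e6 / max(poor.sum(), 1)
    out.loc[formal, "pc_income"] += 372e6 / max(formal.sum(), 1)
    return out
```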
10. Since we do not know who lost their job, we give all qualifying workers an equal share of the unemployment budget. 11. We also estimated the probability of unemployment in May 2020 and September 2020 with logit regressions on available demographics and used these regressions to estimate the probability of unemployment of each worker in the 2019 data set; we then distributed the funds to all individuals who were formal workers in December 2019 proportionally to their probability of unemployment in 2020. The results remain virtually unchanged and are available upon request. 12. Informal worker: an individual is considered an informal worker if (s)he lacks social security coverage. When it comes to firm size, workers at small private firms saw their income affected more than those at big private firms did (-60% vs. -53% in May 2020 and -17% vs. -14% in September 2020), and public workers were least affected (-23% in May 2020 and +3% in September 2020). Note that 69% of workers are in small private firms, and 7% are public workers. There are striking differences when it comes to education level. Workers with higher education were much less affected (-39% in May 2020 and -4% in September 2020) than high school graduates (-55% in May 2020 and -14% in September 2020) and those without a high school diploma (-59% in May 2020 and -18% in September 2020). In terms of age, the income shock was slightly worse for youth and seniors than for adults in May 2020 (-55%, -55%, and -51%, respectively) and in September 2020 (-14%, -14%, and -12%, respectively). In terms of ethnicity, the income shock was slightly worse for indigenous people (-55%) than for those in the other three ethnicity subgroups (-52%) in May 2020, and mestizos/whites recovered more than those in the other three ethnicity subgroups by September 2020 (-12% vs. -15%). Males' labour income was affected slightly more than females' (-54% vs. -47% in May 2020, and -13% vs. -12% in September 2020), yet females still earned less than males in all periods. In Figure 1, we can see the gender gap in each decile. The income distributions for both males and females dropped severely in May 2020, which pushed the gender gap down; yet males recovered considerably more by September 2020. It is also worth noting that in December 2019, 17% of the labour force corresponded to unpaid workers, 62% of whom were female workers with zero income. Lastly, in the bottom panel of Table 5 we can see that the shock to labour income was considerable in all income deciles in May 2020 (around -57%), with deciles nine and ten experiencing the smallest drop (-49% and -46%, respectively). By September 2020, the change in income was around -16% for the lowest eight deciles, and -11% and -8% for deciles nine and ten, respectively. Government interventions Next, we consider the effect of government interventions - direct cash transfers under the Family Protection Bond for Emergencies program and additional spending on unemployment insurance. We add qualifying individuals' corresponding share of these transfers to their simulated income, as described in the methodology, and use those values to compute household per capita income with transfers. Table 6 compares mean household per capita income in December 2019 with simulated income in September 2020 with and without transfers. We see that the transfers had very little effect in the aggregate, with average monthly income increasing by only $4.
Yet, if we consider only individuals below the poverty line, the transfers increase their average monthly income by $6, which represents about 11% of their pre-COVID (December 2019) average monthly income and a recovery of 87% of the loss in their average household per capita income. Figure 2 shows the distribution of incomes below minimum wage in December 2019 and simulated incomes in September 2020 with and without transfers. We can see how the COVID-19 crisis increased the density of the distribution below the minimum wage line and that the transfers had a small effect around the poverty line. The transfers compressed the left tail of the distribution, helping some households to get out of extreme poverty. This is also shown by the small bump between the extreme poverty line and the poverty line. Table 7 shows the poverty rates in December 2019 and in May 2020 and September 2020 with and without transfers. The overall poverty rate more than doubled, climbing from 24% in December 2019 to 57% in May 2020 before falling back down to 30% in September 2020, which is still six percentage points higher than pre-COVID. In the aggregate, current government interventions had almost no impact on the poverty rate - around one percentage point. Poverty and inequality Poverty is particularly severe in rural areas. The rural poverty rate reached 68% in May 2020 and 44% in September 2020 - around 20 percentage points higher than in urban areas in all periods. In terms of geographic region, poverty is very severe in Amazonia, where the poverty rate was 68% in May 2020 and 50% in September 2020. Poverty is also particularly severe for informal workers, for whom the poverty rate was 62% in May 2020 and 33% in September 2020. In sharp contrast, the poverty rate for formal workers was 22% in May 2020 and 4% in September 2020. Note also that government interventions decreased the poverty rate of formal workers by 4.38 percentage points in May 2020 - their largest impact among the categories in the table - mostly through unemployment benefits. There are again big differences when it comes to education level. Those without a high school diploma had a poverty rate of 65% in May 2020 and 37% in September 2020. High school graduates are also vulnerable and had a poverty rate of 44% in May 2020 and 17% in September 2020. In contrast, the poverty rate for individuals with higher education was 12% in May 2020 and 5% in September 2020. In terms of age, sadly, children fourteen and younger are the most vulnerable group, with a poverty rate of 70% in May 2020 and 42% in September 2020.
Poverty rates drop steadily for each consecutive age group, with the oldest group (65 and older) having a poverty rate of 31% in May 2020 and 16% in September 2020. In terms of ethnicity, indigenous populations are the poorest group, with a poverty rate of 80% in May 2020 and 57% in September 2020. In contrast, the poverty rate for mestizos/whites was 51% in May 2020 and 25% in September 2020. We do not find much difference between the poverty rates of males and females in any subcategory (see Table A1 in the appendix for poverty rates by gender for each subcategory). However, since poverty is measured at the household level (using household per capita income), the gender gap in poverty rates may be underestimated. Indeed, the measurement implicitly assumes that all household members enjoy the same standard of living, which may not necessarily be true (see Munoz Boudet et al., 2018, for a discussion of gender differences in poverty). In terms of inequality, Figure 3 shows Ecuador's Lorenz curves and Gini coefficients before and after the COVID crisis. In December 2019, Ecuador's Gini coefficient was 0.457, which was higher than the average Gini coefficient of the other Andean countries (Bolivia, Colombia, and Peru). 13,14 The Gini coefficient increased considerably in May 2020 and went back down to 0.48 in September 2020. We also look at percentile ratios of the distribution of household per capita income pre-COVID and post-COVID with and without transfers to differentiate changes among the poorest, the middle class, and the richest (see Table 8). 13. Data was not available for Venezuela. 14. World Bank World Development Indicators. The first column in the table (p90/p10) shows that in December 2019 the average household per capita income of individuals in the top decile of the distribution (p90) was around eight times higher than that of those in the bottom decile (p10). If government interventions are not accounted for, the difference between these two deciles was tenfold in May 2020, but public transfers seem to have had a small equalising effect and reduce the ratio to 8.77. In September 2020, the p90/p10 ratio was down to 8.42 without public transfers and 7.59 with them. We see similar patterns, albeit with less variation, when looking at the income ratios of closer deciles.
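The inequality measures used in this section, the Gini coefficient and p90/p10-type percentile ratios, are standard and can be computed in a few lines; the sketch below uses toy lognormal incomes rather than the ENEMDU data:

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted cumulative-sum identity."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def percentile_ratio(x, hi=90, lo=10):
    """Ratio of the hi-th to the lo-th percentile of the distribution."""
    p_hi, p_lo = np.percentile(x, [hi, lo])
    return p_hi / p_lo

incomes = np.random.lognormal(mean=4.0, sigma=0.9, size=10_000)  # toy data
print(round(gini(incomes), 3), round(percentile_ratio(incomes), 2))
```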
Conclusion

This study shows the delicate economic situation in Ecuador. Mean labour income dropped by more than half in May 2020, while the poverty rate more than doubled compared to the pre-COVID (i.e., December 2019) level. The economic situation had improved by September 2020, when the drop in income represented 10% of the pre-COVID level and the poverty rate was 6 percentage points above its pre-COVID level. Income inequality was up considerably in May 2020 and had partially improved by September 2020. When evaluating government transfers in response to the crisis, we see that they had very limited effects on average income in the aggregate. Yet, they may have been crucial for the subsistence of individuals below the poverty line. The crisis affected individuals across the income distribution; only the top two deciles experienced a somewhat smaller initial shock and faster recovery. The populations most affected were informal workers, workers in small private firms, workers in rural areas, indigenous populations, households with young children, and households in the Amazonia region. In terms of gender, males' labour income was affected slightly more than females', yet females still earned considerably less than males across the income distribution in all periods. Furthermore, during the first trimester of the crisis (by May 2020), employment decreased 26% for females vs. 20% for males. At the time this article was written (2021), our main recommendation for the Ecuadorian government was to invest in more vaccines in order to normalise economic activity, because only 15% of the population had received a first dose and the government was still struggling to procure more. Over the past year, however, the government has made substantial progress on vaccination: more than 79% of the population has received two doses, and 25% has received a third dose. Beyond vaccines, future relief efforts should pay particular attention to informal workers, rural workers, and poor households with young children. Since 75% of the labour force is in the informal subgroup with no access to social security benefits, investment in direct cash transfers is likely to be more effective than wage subsidies or unemployment benefits. By evaluating the impact of COVID-19 among different groups of the population and by simulating the effectiveness of government interventions, we hope to guide policymakers in developing more efficient interventions to alleviate the economic impact of the pandemic.
6,542.2
2022-12-31T00:00:00.000
[ "Economics" ]
Existence of Open Loop Equilibria for Disturbed Stackelberg Games

In this work, we derive necessary and sufficient conditions for the existence of a hierarchic equilibrium of a disturbed two-player linear quadratic game with open loop information structure. A convexity condition guarantees the existence of a unique Stackelberg equilibrium; this solution is first obtained in terms of a pair of symmetric Riccati equations and then in terms of a coupled system of Riccati equations. In the latter case, the obtained equilibrium controls are of feedback type.

Introduction

The study of linear quadratic (LQ) games has been addressed by many authors [1-4]. This type of game is often used as a benchmark to assess game equilibrium strategies and their respective outcomes. In a disturbed differential game, each player calculates its strategy taking into account a worst-case unknown disturbance. In non-cooperative game theory, the concept of hierarchical or Stackelberg games is very important, since there are many applications in economics and engineering [1,5]. This is also the case of gas networks, where a hierarchy may be assigned to the controllable elements: compressors, sources, reductors, etc. For this application, modelling as a disturbed game also makes sense, since the unknown offtakes of the network can be modelled as unknown disturbances. Further research on Stackelberg games can be found, for instance, in AbouKandil and Bertrand [6]; Medanic [7]; Yong [8]; Tolwinski [9]. No assumptions or constraints are placed on the disturbance. To make the hierarchical concept easier to follow, we consider only two players. We therefore study an LQ game of two players with open loop (OL) information structure, where the players choose their strategies according to a modified Stackelberg equilibrium. Player 1 is the follower and chooses its strategy after the nomination of the strategy of the leader. Player 2, the leader, chooses its strategy assuming rationality of the follower. Both players find their strategies assuming a worst-case disturbance. In this work, we consider a finite time horizon, which in applications is chosen according to the periodicity of operation of the problem being studied. The disturbed case of the representation of optimal equilibria for non-cooperative games has been studied [10,11] for a Nash equilibrium. The aim of this paper is to generalise the work of Jank and Kun to Stackelberg games and to extend the results presented in Freiling and Jank [12] and Freiling et al. [13] to the disturbed case. To calculate the controls, we use an appropriately guessed value function approach. Thence, we obtain sufficient conditions for the existence of these controls and their representation in terms of the solutions of certain Riccati equations. Furthermore, a feedback form of the worst-case Stackelberg equilibrium is obtained. In a future paper, we expect to present analogous conditions using an operator approach. In Section 2, we define the disturbed LQ game and the Stackelberg worst-case equilibrium. In Section 3, we derive sufficient conditions for the existence of a worst-case Stackelberg equilibrium under OL information structure and investigate how these solutions are related to certain Riccati differential equations. Section 4 concludes the paper and outlines some directions for future work.

Fundamental notions

We start with the concept of best reply: we say that γ̂_i is the best reply against γ_{-i} if J_i(γ̂_i, γ_{-i}) ≤ J_i(γ_i, γ_{-i}) holds for any strategy γ_i ∈ U_i.
We denote the set of all best replies by R_i(γ_{-i}). We study games with quadratic criteria, defined on a finite time horizon [t_0, t_f] ⊂ ℝ and subject to linear dynamics, controlled by the players and also by an unknown disturbance. Hereby we also consider u_j = γ_j(t, η_j), where η_j is the information structure of Player j. In this case, η_j, j = 1, …, N, is of OL type.

Definition 2.2 (Linear quadratic (LQ) differential game). Let Γ_N be an N-player differential game with finite time horizon T = [t_0, t_f]. Suppose further that:

i. the dynamics of the game obey a linear differential equation (1). In this equation, t ∈ T, where the initial time t_0 and the final time t_f are finite and fixed, and the state x(t) is an n-dimensional vector of continuous functions defined on T with x(t_f) = x_f. The controls u_i, i = 1, …, N, are square (Lebesgue) integrable m_i-dimensional vector functions defined on T. Also, the disturbance satisfies w(t) ∈ L^m(T). The various matrices are of adequate dimension, with elements continuous on T.

ii. the performance criteria J_i are quadratic, with symmetric terminal weight matrices K_{if} ∈ ℝ^{n×n} and symmetric, piecewise continuous and bounded matrix-valued weighting functions.

We observe that no cost functional is assigned to the disturbance term because no constraints can be applied to an "unpredictable" parameter. In what follows, we consider N = 2. To extend the theory to N > 2, since this is a hierarchical solution, we need to define the structure of the leaders and followers in the game; we can even have more than two hierarchy levels. We assume that Player 2 is the leader and Player 1 is the follower. The leader seeks a strategy u_2*(t) in OL information structure and announces it before the game starts. This strategy is found knowing how the follower reacts to his choices. The follower calculates its strategy as a best reply to the strategy announced by the leader, subject to (1) and considering a worst-case disturbance. Consider U_i, i = 1, 2, the sets of functions such that (1) is solvable and J_i exists; with u_i, i = 1, 2, in these conditions, U_i, i = 1, 2, and W are said to be the sets of admissible controls and disturbances, respectively.

Definition 2.3 (Stackelberg equilibrium). Let Γ_2 be a 2-person differential game. We define the Stackelberg/worst-case equilibrium in two stages.

1. A function ŵ_i(u) ∈ W is called the worst-case disturbance from the point of view of the i-th player, for controls u belonging to the set of admissible controls, if J_i(u, ŵ_i(u)) ≥ J_i(u, w) holds for each w ∈ W. There exists exactly one worst-case disturbance from the point of view of the i-th player for every set of controls.

2. We say that the controls u_1*, u_2* constitute a worst-case Stackelberg equilibrium if the follower's control is a best reply to the leader's announced strategy and the leader's control minimises its cost given that reply, both evaluated under the respective worst-case disturbances.

To guarantee the uniqueness of OL Stackelberg solutions, the matrices are assumed to satisfy the convexity conditions of [14]. In what follows, we drop the dependence of the parameters on t to reduce the length of the formulas.

Sufficient conditions for the existence of OL Stackelberg equilibria

In this section, we derive sufficient conditions for the existence of the worst-case Stackelberg equilibrium using a value function approach. A disturbed differential LQ game as defined in Definition 2.2 is said to be playable if there exists a unique Stackelberg worst-case equilibrium.

Theorem 3.1. Let the solution of the Riccati differential equation (6), with the corresponding terminal condition, exist on T. Then the corresponding identity holds, where ∥z_1∥² denotes a weighted squared norm and x is a solution of (1).

Proof: The proof is similar to the analogous result for the non-disturbed case, Freiling et al. [13].

Theorem 3.2. Let the solution E_1 of (6) exist on T.
Then the unique response of the follower to the leader's OL strategy u_2(t) is given by (10), where the maximal disturbance (11) was considered. E_1 and e_1 are the solutions of (6)-(7), and x is then the solution of the corresponding closed loop trajectory equation. The corresponding minimal costs are then given by (14).

Proof: The unique OL response of the follower to the leader's announced strategy u_2 is (10) under the worst-case disturbance (11), which we substitute into the trajectory (1) to obtain the closed loop dynamics. The minimal value of the cost functional is obtained when we substitute the minimal control and the maximal disturbance into (9). Notice that J_10(u_2) does not depend on u_1. This, as a matter of fact, is only true if we consider the OL information structure, since otherwise u_2 would depend on the trajectory x and hence, via (1), also on u_1. In OL Stackelberg games, the leader next tries to find an optimal OL control u_2 that minimises J_2(u_1(u_2), u_2), where u_1(u_2) is defined by (10).

Theorem 3.3. Let the solution of the Riccati differential equation (6) and the solution of (15) exist on T. For any given control u_2 of the leader, define functions e_2 ∈ ℝ^{3n}, v_1, v_w, x ∈ ℝ^n and d_2 on T by the corresponding initial and terminal value problems, with v_1 := E_1 x + (1/2) e_1. Then we obtain (16) and the identity (17), where 0_{m_i×n}, i = 1, 2, denotes the m_i × n zero matrix and ∥z∥²_{P_2} = zᵀ P_2 z.

Proof: Consider (10). Differentiate v_1 and substitute the derivatives into the obtained expression using (6), (7) and (8), together with the optimal control u_1* and disturbance w_1* from (11). Hence we can write these two equations in the compact form used below. Next, we consider a value function Ṽ_2 for some mappings E_2 : T → ℝ^{2n×2n}, e_2 : T → ℝ^{2n}, and d_2, where E_2 is symmetric for each t ∈ T. We consider (21), where we substitute (20). Now we group certain terms: consider the quantities y_2 and α, and substitute them into the calculations. Integrating the resulting expression yields (23). Further, we assume the mappings E_2, e_2, d_2 to be chosen in such a way that the appropriate terminal values hold. Then we obtain Ṽ_2(t_f) = yᵀ(t_f) K^y_f y(t_f), and, substituting, we observe that the right-hand side of (23) does not depend on u|_{[t_0, t]} and the left-hand side of (23) does not depend on u_2|_{[t, t_f]}. Considering now the infimal value, we substitute into (23) and take the infimum over all possible control functions on [t, t_f]; equality is attained if u_2 − y_2 = 0 for all t ∈ T and w − α = 0. As the leader chooses his strategy assuming rationality of the follower and a worst-case disturbance, the follower should also take the worst-case disturbance into account. To conclude, consider t = t_0; then, from (21), we substitute y_0 = (x_0, v_10). The leader may choose its best answer either by accounting directly for its worst-case disturbance or by considering that the follower knows that there is a worst-case disturbance. In this work, the leader takes the worst-case disturbance directly into account.
Notice that x_0 and E_2(t_0) do not depend on the choice of u_1, u_2. Since we shall study the situation for Player 2 when Player 1 applies his optimal response control defined in (10), we have to set v_1 = E_1 x + (1/2) e_1. From (7), we can see that v_1(t_0) = v_10 depends on e_1(t_0) and hence also on u_2. In order to derive from Theorems 3.1 and 3.3 sufficient conditions for the existence of a unique worst-case Stackelberg equilibrium, we must get rid of the u_2-dependence of v_10. Therefore, we propose to restrict the set of admissible controls to functions representable in linear feedback form. This is what we do next.

Theorem 3.4. Let the solutions E_1(t) ∈ ℝ^{n×n} and E_2(t) ∈ ℝ^{2n×2n} of (6) and (15), respectively, exist on T. Let, further, the coupled system of equations (25)-(27) admit a solution on T. Then there exists a unique open loop disturbed Stackelberg equilibrium in feedback synthesis, obtained by considering the worst-case disturbances w_i*, where x(t) is a solution of the closed loop equation. The minimal cost for the follower, J_10(u_2*), is as in (14), and the minimal costs for the leader are given by (16) and (17), respectively.

Proof: The proof is similar to the analogous result for the non-disturbed case [13].

From the convexity assumptions, it follows that S_1, S, Q_1 and Q are all semidefinite. Therefore, as long as the convexity conditions hold, the standard Riccati matrix equations (6) and (15) are globally solvable on (−∞, t_f] [15]. The following questions remain to be answered: (i) direct criteria for the solvability of these equations when the convexity assumption is guaranteed, and (ii) the solvability of the coupled system of equations (25)-(27). Actually, this system of equations can also be written as a single, nonsymmetric Riccati matrix differential equation. As can easily be observed, all these Riccati equations are of nonsymmetric type, where W is a matrix of order k × n whose coefficient blocks are of adequate size. See AbouKandil et al. [16] for results on the existence of solutions of Riccati equations.

Discussion and conclusions

High-dimension problems appeal to the use of hierarchic and decentralised models such as differential games. One example of such problems is large networks, for instance the management and control of high-pressure gas networks. Since this is a large-dimension and geographically dispersed problem, a decentralised formulation captures the non-cooperative, and sometimes even antagonistic, nature of the different stakeholders in the network. The network's controllable elements can be seen as players that seek their best settings and then interact among themselves to check for network feasibility. The equilibrium sought by the players depends on the way the players are organised among themselves. It makes sense to have some autonomous elements that run the network while others follow, as is the case of a main inlet point of a country, such as the inlet of Sines in the Portuguese network. The ultimate goal of the network is to meet customers' demand at the lowest cost. As the main variation of the problem is due to the off-takes, these may be seen as perturbations to the nominal consumption levels of a deterministic model. Therefore, it makes sense to view the gas transportation and distribution system as a disturbed Stackelberg game where the players play against a worst-case disturbance, that is, a sudden change in weather conditions from one period of operation to the next.
Nevertheless, the theory is not complete, and, also with the development of algorithms in mind, direct solution methods and explicit solution representations need to be further investigated. In this work, we have obtained sufficient conditions for the existence of the solution of a two-player game. However, direct criteria for the solvability of this problem need more work, and the solvability of the coupled system of equations (25)-(27) has to be further investigated. We would also like to solve the same problem using an operator approach. Similarly to what we have done in the past for Nash games, we would like to study this problem considering the underlying dynamics as a repetitive process, which seems adequate to capture the seemingly periodic behaviour of the network. Also, the boundary control of the network depends on the type of strategy sought by the players; the structure of these versions of the problem needs to be examined. The obtained results should, at every stage of the work, be applied to a single pipe, ideally using some operational data. Furthermore, we expect to apply the work to a simple network, which is not exactly a straightforward extension.
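The existence conditions above hinge on the solvability of Riccati differential equations such as (6) and (15). As a complement, here is a minimal numerical sketch (ours, not from the paper) that integrates a generic symmetric matrix Riccati equation Ė = −AᵀE − EA + ESE − Q backward from a terminal condition E(t_f) = K_f, the standard form underlying such equations; the system matrices, horizon, and sign conventions are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative problem data (assumptions, not from the paper)
n = 2
A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # dynamics matrix
S = np.array([[0.5, 0.0], [0.0, 0.5]])     # control/disturbance weighting term
Q = np.eye(n)                               # state cost
K_f = np.eye(n)                             # terminal condition E(t_f) = K_f
t0, tf = 0.0, 5.0

def riccati_rhs(t, e_flat):
    """Symmetric Riccati ODE: dE/dt = -A'E - E A + E S E - Q."""
    E = e_flat.reshape(n, n)
    dE = -A.T @ E - E @ A + E @ S @ E - Q
    return dE.ravel()

# Integrate backward in time, from t_f to t_0
sol = solve_ivp(riccati_rhs, (tf, t0), K_f.ravel(),
                rtol=1e-8, atol=1e-10)

E0 = sol.y[:, -1].reshape(n, n)
print("E(t_0) =\n", E0)
# A feedback synthesis of the kind in Theorem 3.4 would then use E(t), e.g.
# u*(t) = -R^{-1} B' E(t) x(t) for suitable R and B (also assumptions here).
```

Checking that E(t) stays bounded on the horizon is a practical counterpart of the global-solvability question raised above.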
3,773
2020-05-11T00:00:00.000
[ "Mathematics", "Economics" ]
Negative Times of the Davey-Stewartson Integrable Hierarchy

We use the example of the Davey-Stewartson hierarchy to show that, in addition to the standard equations given by the Lax operator and time evolutions with positive numbers, one can consider time evolutions with negative numbers and the same Lax operator. We derive the corresponding Lax pairs and integrable equations.

Introduction

In [3], we proposed a method of derivation of (2 + 1)-dimensional nonlinear integrable equations based on commutator identities on associative algebras. Taking into account the algebraic similarity of operator commutators and derivatives, we transformed commutator identities into linear partial differential equations. A characteristic property of these linear equations is the possibility of lifting them up to nonlinear, integrable ones. In [4,5], this approach was extended to differential-difference and difference equations, where the analogy of similarity transformations and shifts of independent variables was used. In [6], we developed this result for non-Abelian identities of commutators. To formulate the main aspects of this approach, we start here with the simplest examples. Let A and B be arbitrary elements of an arbitrary associative algebra A. Then they obey the commutator identity (1.1). Being a trivial consequence of associativity, this identity easily proves that the function B(t_1, t_2, t_3) defined by (1.2), i.e., such that B_{t_n} = [A^n, B], n = 1, 2, 3, obeys the linearized Kadomtsev-Petviashvili (KP) equation with respect to the variables t_j. (This paper is a contribution to the Special Issue on Mathematics of Integrable Systems: Classical and Quantum in honor of Leon Takhtajan.) It was stated in [3,7] that there are similar relations for higher commutators. In the case of KP, they lead to the higher linear equations

2^n ∂_{t_n} ∂_{t_1}^n B = (∂_{t_2} + ∂_{t_1}^2)^n B,  n = 3, 4, …

Similar results were obtained in [4,5,6] for difference and differential-difference equations. In that case, we replace (1.1) with, say, a commutator identity in which the element A is assumed to be invertible. Thus, in addition to commutators of the kind (1.1), we get similarity transformations here (commutators in the group sense). Therefore, we introduce the element B depending on the integer n_1 and the continuous variables t_1 and t_{-1} by means of (1.3), and denote the shift with respect to the variable n_1 as B^(1) = ABA^{-1}. Accordingly, this element B obeys a linear differential-difference equation which gives a linearized version of the two-dimensional Toda system [8,9]. In [3,4,5], we proved that any linear equation resulting from a commutator identity can be lifted up to a nonlinear integrable equation using a special dressing procedure. In this paper, our goal is to extend the class of commutator identities. For this purpose one can use arbitrary functions f(A), with commutativity being the only condition they should obey: [f(A), g(A)] = 0. A natural generalization of the choice of functions of the element A was suggested in [3,7]. We assume that in the algebra A there exists an element σ such that (1.4) holds, where {·, ·} denotes the anticommutator. In particular, we can consider elements of A as 2 × 2 matrices, where A is proportional to the unit matrix I and B is off-diagonal. This yields two sets of commutator identities, (1.6) and (1.7), where n ≥ 1. These two sets of commutator identities give two sets of differential hierarchies if, in addition to (1.2), we introduce two sets of variables, t = {t_1, t_2, …} and x = {x_1, x_2,
…}, given by the equations (1.8a) and (1.8b). Taking n = 1 here and using (1.6) and (1.7), we get the linear differential equations (1.10) and (1.11) for B(t, x, z). For n = 2, these equalities read σB_{t_2} = B_{t_1 x_1} and σB_{x_2} = B_{t_1 t_1} + B_{x_1 x_1}, respectively. In [7], these linear equations were lifted to the Davey-Stewartson equation (see [1]) and the higher equations of its hierarchy. Here we consider a "negative" version of this hierarchy, i.e., we assume negative values of n in (1.8). In Section 2, we derive the corresponding commutator identities and the corresponding linear differential equations. In Section 3, we introduce a realization of the elements of the associative algebra A using pseudo-differential operators. On this basis, in Section 4, we consider the dressing procedure that enables the introduction of the dressing operator and its time evolutions. The Lax pair and nonlinear equations are derived in Section 5. Section 6 is devoted to (1 + 1)-dimensional reductions of the systems under consideration. Some concluding remarks are given in Section 7.

Commutator identities and linear equations

Taking into account that all these commutators mutually commute, we consider B as a function of t_1, x_1 and t_{-1}, or x_{-1}, such that (2.1) and (2.2) hold. Thus, we again have two versions of the equations: one involving ∂_{t_{-1}} and the other involving ∂_{x_{-1}}. Taking into account the symmetry of these two equations with respect to the substitution x_{-1} ↔ −t_{-1}, we study here mainly (2.4). By extending (1.8) to negative values of n, we arrive at a hierarchy of commutator identities and linear equations. We can use (1.10) and (1.11), substitute n → −n into these equations and multiply them both by ∂², obtaining (2.6) and (2.7), where n = 1, 2, …, and where, by analogy with (2.3b), the corresponding negative times appear. We omit here the form of (2.6) and (2.7) in terms of commutator identities; it can be easily restored with the help of (1.9). In the case of n = 1, equations (2.6) and (2.7) reduce to (2.1) and (2.2). Now we have to show that all these linear equations can be lifted up to nonlinear integrable ones.

Realization of elements of the associative algebra

To this end, we consider a special realization of the elements of the associative algebra A; see [3,4,5,6]. By analogy with the standard definition of pseudo-differential operators, we define an element F of A by its symbol F(t, x, z). Here t and x denote (finite) subsets of the variables {t_1, t_2, …} and {x_1, x_2, …}, and z ∈ ℂ denotes a complex parameter. The subsets t and x definitely include the variables t_1 and x_1 and at least one of the other variables of these lists; in the following, we call such subsets minimal. The symbol of the composition of two elements of the algebra is given by means of the symbols of the cofactors in the form (3.1), where t′ denotes the subset t without the variable t_1. We see that the variable t_1 plays a special role here: the composition with respect to the other variables is pointwise. In what follows, we consider elements of the algebra A whose symbols belong to the space of tempered distributions of their arguments. The symbol of the unity operator is 1, and we choose the symbol of the operator A as (3.2). Thanks to (3.1), we have for any F that A^n is understood as the n-th power with respect to the composition (3.1), where now n ∈ ℤ. Then, for n = 1, we get [A, F] = ∂_{t_1} F according to (1.8a).
Further relations of these equalities are given in terms of symbols. Because of our assumption, the symbol B(t, x, z) admits a Fourier transform with respect to the variable t_1, so the above relations show that B takes the form (3.3), where n ∈ ℤ and f(p, z) is an arbitrary 2 × 2 off-diagonal matrix function independent of all t_n and x_n. Note that here we do not specify the set of "times" t_i and x_i involved in the evolution equation. We know that this set includes at least three times: t_1, x_1 and one of the times t_n or x_n with n ≠ 0, 1. It can include more times, but t_1, x_1 and every third time give an evolution equation generated by the commutator identity. Thus, in (3.3), the summation in the exponent runs over a finite number of terms, corresponding to the times that are "switched on", while the other times are set equal to zero. It is natural to impose on B(t, x, z) the conditions of convergence of the integral and boundedness of the limits of B(t, x, z) as t or x tends to infinity. Two obvious conditions are sufficient for this. The first is given by the choice f(p, z) = δ(p + 2z_Im) g(z), where δ denotes the delta-function, so that (3.3) takes the form (3.4), where g(z) is an arbitrary bounded function of its argument. But in order to get B(t, x, z) bounded with respect to the variables x_n, it is necessary to perform the substitution (3.5), where the new x_n are real. The second case is given by the reduction f(p, z) = δ(z_Re) h(p, z_Im), where z = z_Re + i z_Im and h(p, z_Im) is an arbitrary function of its arguments. Then (3.3) takes the form

B(t, x, z) = ∫ dp exp{ Σ_n [ i^n ((z_Im + p)^n − z_Im^n) t_n + σ i^n ((z_Im + p)^n + z_Im^n) x_n ] } h(p, z_Im) δ(z_Re).  (3.6)

Here we see that B(t, x, z) is bounded with respect to the variables t_n and x_n with odd numbers, and in order to make it bounded for the variables with even numbers, we need to make the substitution (3.7). Thus we have two types of systems, defined by the choices (3.4) and (3.6).

Dressing procedure

A specific property of the above set of operators is the possibility of defining the operation of ∂̄-differentiation with respect to the complex variable z, F → ∂̄F. In terms of symbols, this is defined (see [3]) as (4.1), where the derivative is understood in the sense of distributions. Thanks to (3.2), we get the equality (4.2), which plays an essential role in what follows. Now we can define a dressing operator K with symbol K(t, x, z) by means of the ∂̄-problem (4.3), where the product on the r.h.s. is understood in the sense of the composition law (3.1). Thanks to (3.1) and (4.1), the equality (4.3) takes the explicit form (4.4) for time evolutions given by (3.4), and the form (4.5) for time evolutions given by (3.6). Thus, in the case of (4.4), equation (4.3) gives a ∂̄-problem, while in the case of (4.5) we get a Riemann-Hilbert problem. In both cases, we normalize the solution K of equation (4.3) by the asymptotic condition (4.6). In what follows, we assume unique solvability of the problem (4.3), (4.6). The time evolution of the dressing operator follows from these equations: due to (1.8) and (2.3) we get (4.7). Accordingly, taking into account the commutativity of A^m and A^n, we get ∂̄(K_{t_m t_n} − K_{t_n t_m}) = (K_{t_m t_n} − K_{t_n t_m}) B by (4.3). Thus the commutativity of derivatives,

K_{t_m t_n} = K_{t_n t_m},  (4.8)

follows from the unique solvability of the problem (4.3), (4.6). Similarly, we prove that K_{x_m t_n} = K_{t_n x_m} and K_{x_m x_n} = K_{x_n x_m}.
In [7], the time derivatives of the dressing operator for positive times (n > 0 in (1.8)) were calculated in terms of the asymptotic decomposition (4.9) of the dressing operator K, where u, v, and w are multiplication operators, i.e., their symbols do not depend on z. Say, using (4.7) for n = 1, we get ∂̄K_{t_1} = K_{t_1} B + K[A, B]. This can be written as ∂̄(K_{t_1} + KA) = (K_{t_1} + KA) B, where (4.2) and (4.3) were used. Due to the unique solvability of (4.3), (4.6), we derive that there exists a multiplication operator X such that K_{t_1} + KA = (A + X) K. Thanks to (4.9), it is easy to see that X equals zero, so we have (4.10). The situation with K_{x_1} is more involved: here the analogous multiplication operator does not vanish, and by (4.6) we get (4.11), where the multiplication operator u is defined in (4.9). Combining (4.10) and (4.11), we get (4.12).

Our goal here is to extend the approach of [7] to negative numbers of times in (1.8). More exactly, we start with the times t_1 and x_1 as above, and we choose either t_{-1} or x_{-1} as the third time, according to (2.3b). To determine the evolutions of the dressing operator with respect to t_{-1} or x_{-1}, we differentiate (4.3) and use (2.3b), which gives (4.13); so, for the first equality, we have (4.14). We see that the situation here is more complicated than in the case of positive time numbers: there we were able to reduce the equations to the form ∂̄(K_{t_n} + KA^n) = (K_{t_n} + KA^n) B due to (4.2), while for negative n this equality acquires an additional delta-term. Therefore, to use relation (4.14), we must find a replacement for A^{-1}BA. This can be done by introducing a discrete variable, cf. [4] and (1.3) here. We assume that the symbols of B, K, etc. depend on an intermediate variable n ∈ ℤ. We denote B^(1)(t, x, n, z) = B(t, x, n + 1, z) and K^(1)(t, x, n, z) = K(t, x, n + 1, z), and set (4.15). It is easy to see that these shifts commute with the times t and x, and we extend the definition of the composition law (3.1) to symbols depending on n pointwise with respect to this variable. Now ∂̄K^(1) = K^(1) ABA^{-1} because of (4.3), so that, due to the unique solvability of the problem (4.3), (4.6), there exists a multiplication operator ψ such that (4.16), and thanks to (4.9) we get (4.17), where u^(1)(t, x, n) = u(t, x, n + 1). Let us shift n → n + 1 in (4.14), which due to (4.15) gives an equation for ∂̄K^(1), so that, because of (4.6), there exists a multiplication operator Z satisfying the corresponding relation; thanks to (4.9), Z is expressed in terms of u.

It looks like we have constructed a (3+1)-dimensional integrable system with the independent variables t_1, x_1, t_{-1}, and n. But in fact we have two different systems here: one in t_1, x_1, n (see (4.16)) and one in t_1, x_1, t_{-1}, because the dependence on n can be excluded. Indeed, substituting K^(1) for K by means of (4.16) and using ψ as the new dependent variable in (4.17) instead of u^(1), we get (4.18). Equation (4.19) is derived by analogy using the second equality in (4.13). The compatibility of either of these equations with (4.12) can be proved as in (4.8). The compatible evolutions (4.12) and (4.18), or (4.12) and (4.19), admit higher (in fact, lower) versions that involve the times t_{-n} and x_{-n}, n > 1; see (2.8). By analogy with (4.13), for this case we get, by (2.8), (4.20). Multiplying this equality by A^n from the right, we use an n-fold application of (4.15): B^[-n] = A^{-n} B A^n. Thus (4.20) takes the form (4.21), cf. (4.14). Again, thanks to the assumed unique solvability of the inverse problem (4.3), (4.6), we get that there exist multiplication operators α_0, …
, α_{n-1} such that (4.22), where we applied the n-fold shift operation. The operators α_j are defined in terms of the operators u, v, w, etc. of (4.9); we omit these calculations here. Next, we perform an (n − 1)-fold shift of the discrete variable in equation (4.16), which gives (4.23), where the multiplication operator ψ was defined in (4.17). The final expression follows by inserting K^[n] from (4.22) into (4.21), which again cancels the dependence on the auxiliary variable n. The consideration of the dependence on x_{-n} is similar.

Lax pair and nonlinear equations

Here we omit the dependence on the discrete variable n, since it was excluded from (4.18) and (4.19). Thanks to the substitution (5.1), the coefficients of the equations (4.12), (4.18), and (4.19) become independent of z, and we arrive at (5.2)-(5.4), where the first equation is the famous two-dimensional linear Zakharov-Shabat problem. One can also rewrite (4.3) in terms of the Jost solutions. Say, by means of (3.4) we get

∂ϕ(t, x, z)/∂z̄ = ϕ(t, x, z) g(z),  (5.5)

and by means of (3.6)

∂ϕ(t, x, z)/∂z̄ = δ(z_Re) ∫ dp ϕ(t, x, ip) h(p − z_Im, z_Im).  (5.6)

We see that the equations for the Jost solutions are independent of all "time" variables t and x. The dependence on them, as well as on z in (5.2)-(5.4), is given by (4.6), which, thanks to (5.1), takes the form (5.7). Note that (5.5) is a standard ∂̄-problem with the normalization condition (5.7), where we must perform the substitution mentioned in (3.5). At the same time, (5.6) shows that the Jost solution in this case is analytic in the left and right half-planes of z, with a discontinuity on the imaginary axis. Thus the inverse problem is given here in terms of a Riemann-Hilbert problem, i.e., we define the boundary values of the Jost solution as ϕ_±(t, x, i z_Im) = lim_{z_Re → ±0} ϕ(t, x, z), and set the corresponding jump condition under the condition (5.7) and the substitution given in (3.7). The difference between these two formulations of the inverse problem results from the condition of boundedness of the symbol of the operator B in (3.4) and (3.6): in the case of (5.5), the t_n are real and the x_n are pure imaginary, while in the case of (5.6), the t_n and x_n with odd n are real and those with even n are pure imaginary. The compatibility of (5.2) with (5.3) and (5.4) follows from (4.8) and (5.1). Thus, we get the following theorem.

It is natural to decompose both matrices u and ψ into their diagonal and anti-diagonal parts, so that [σ, u_d] = 0 and [σ, u_a] = 2σu_a thanks to (1.4). Then the anti-diagonal parts of the equations (5.8) and (5.10) give the evolution equations, while their diagonal parts reduce to the derivative, with respect to t_{-1} of (5.8) and with respect to x_{-1} of (5.10), of one and the same equation, which we have integrated here with respect to t_{-1} (or, correspondingly, x_{-1}) under the assumption of rapid decay of u as (t_1, x_1) → ∞.

Dimensional reductions

Here we introduce (1 + 1)-dimensional reductions of the (2 + 1)-dimensional nonlinear integrable equations constructed above. Such reductions follow from the time evolutions (3.4), (3.6), which, due to (4.3) and (4.6), lead to the same reductions of the dressing operator K, and then to reductions of all coefficients of the series (4.9). The reduction of the time dependence of the operator B, in turn, results from conditions on the supports of the functions g(z) and h(p, z_Im) in (3.4) and (3.6), which reduce the number of independent time variables.
For example, for the operator B(t, x, z) in (3.4), depending on the times t_1, x_1, and t_{-1}, we can cancel the dependence on x_1 by imposing the condition (6.1). Thanks to (3.4), this gives a symbol depending on two variables only. It is clear that this dependence on two variables is preserved by the evolution and that, thanks to the ∂̄-problem (4.3) and (4.6) and the composition law (3.1) (or due to (4.4)), the symbol of the operator K is also independent of x_1. Moreover, this operator is now an analytic function for z_Re ≠ 0. Taking into account the independence of the operator K of x_1, we must change the definition of the Jost solution, cf. (5.1), to (6.2). We see that the ∂̄-problem in this case is a Riemann-Hilbert problem for a function analytic in the right and left half-planes of the complex z-plane, with a discontinuity given by (6.1) on the imaginary axis. The function K is normalized by the condition (4.6) at z → ∞. Summarizing, (6.2) is nothing but the Zakharov-Shabat linear problem [10], which has been extensively studied in the literature, e.g., [2]. This is not the only reduction applicable to (3.4). Setting there g(z) = δ(|z| − 1) g(z_Im), we get the scattering data, i.e., the symbol of the operator B, depending on the two variables t_1 − t_{-1} and x_1:

B(t, x, z) = δ(|z| − 1) exp[ −2i z_Im (t_1 − t_{-1}) + 2σ z_Re x_1 ] g(z_Im).  (6.3)

Thus, after shifting t_1 → t_1 + t_{-1}, we exclude the dependence on t_{-1} from B, and then from K. Now, because of the delta-function in (6.3), we reduce the inverse problem (4.3) to a Riemann-Hilbert problem on the circle |z| = 1 together with the normalization condition (4.6). We now define the Jost solution by means of the relation ϕ(t_1, x_1, z) = K(t_1 + t_{-1}, t_{-1}, x_1, z) e^{z t_1 + σ z x_1}, where the r.h.s. does not depend on t_{-1}. The integrable equation follows from (5.8) as the reduced equation, while the second equation, (5.9), is left unchanged. By analogy, we can consider the reductions of the symbol of the operator B in (4.5), i.e., when t_1, x_1, and x_{-1} are chosen as the independent variables.

Concluding remarks

In the above derivation of nonlinear integrable equations we needed some essential assumptions, the main one being the condition of unique solvability of the ∂̄-problem (4.3), (4.6). But once a nonlinear equation is derived, these assumptions are no longer necessary: the nonlinear equation is given as the compatibility condition of a Lax pair. On the other hand, the existence of linear equations given by commutator identities always leads to nonlinear integrable equations, as was shown above.
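The whole construction starts from flows of the form B_{t_n} = [A^n, B] generated by commutator identities. The following is a minimal numerical sketch (ours, not from the paper) verifying, for random matrices, that B(t_1, t_2) = e^{t_1 A + t_2 A²} B_0 e^{−t_1 A − t_2 A²} satisfies B_{t_1} = [A, B] and B_{t_2} = [A², B] to finite-difference accuracy; matrices here stand in for the abstract associative algebra elements.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N))
B0 = rng.standard_normal((N, N))

def B(t1, t2):
    """Element evolved by the commuting flows B_{t_n} = [A^n, B]."""
    G = expm(t1 * A + t2 * A @ A)
    return G @ B0 @ np.linalg.inv(G)

t1, t2, h = 0.3, 0.1, 1e-6

# Central finite-difference time derivatives
dB_dt1 = (B(t1 + h, t2) - B(t1 - h, t2)) / (2 * h)
dB_dt2 = (B(t1, t2 + h) - B(t1, t2 - h)) / (2 * h)

Bv = B(t1, t2)
comm1 = A @ Bv - Bv @ A                # [A, B]
A2 = A @ A
comm2 = A2 @ Bv - Bv @ A2              # [A^2, B]

print(np.max(np.abs(dB_dt1 - comm1)))  # close to zero: B_{t_1} = [A, B]
print(np.max(np.abs(dB_dt2 - comm2)))  # close to zero: B_{t_2} = [A^2, B]
```

Since A commutes with the group element G, differentiating G B_0 G^{-1} with respect to t_n produces exactly the commutator [A^n, B], which is what the two printed residuals confirm.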
5,143.6
2021-06-07T00:00:00.000
[ "Mathematics" ]
Pre-grouting for Leakage Control and Rock Improvement

In underground construction, ground water inflow and rock mass stability have been challenging problems. In particular, tunnels constructed below the ground water table are often exposed to the risk of ground water inflow. Protection of underground structures against water ingress and rock mass failure is of paramount importance during construction as well as operation of the tunnel. Risks associated with these hydrogeological issues often pose a challenge to tunneling engineers. Mitigation of these risks by pragmatic measures such as pre-grouting can improve the rock mass strength and restrict water ingress to acceptable levels. The primary purpose of pre-grouting is to improve the rock mass strength and establish an impervious zone around the tunnel periphery, leading to a reduced support system and lower rock mass permeability. The aim of this article is to discuss the concept of pre-grouting as an effective mitigation measure for leakage control and rock mass improvement, which is being successfully implemented in many countries.

Introduction

Control of groundwater leakage and rock mass stability has always been a challenging task for tunneling engineers. Slowing of tunnel advance rates, installation of support systems, durability of lining systems and hindrance to safe operation of the tunnel are some of the key risks faced in most tunneling projects due to water ingress and rock mass instability. These problems often lead to schedule delays and cost overruns. Envisaging the magnitude of these highly variable risks and developing appropriate measures to mitigate them is the key to successful construction of a tunnel. The general perception is to consider grouting as a contingency measure, but pre-grouting has been successfully adopted in many countries to mitigate the aforementioned risks beforehand. A properly planned pre-grouting approach can improve the rock mass strength and establish an impermeable zone around the tunnel periphery [1]. Recent developments in grouting materials and grouting technology are increasing the possibility of achieving tangible improvements (impermeability and strengthening) in rock quality by pre-grouting [2]. This article presents some guidelines on pre-grouting methodology, material selection, and the concept of leakage control and rock mass improvement, with illustrations.

A conceptual explanation of grouting

Pre-grouting is the process of injecting grout material into boreholes drilled into the rock mass around an excavation, with the purpose of sealing any fissures or joints that intersect the borehole. Pre-grouting thus results in a less permeable and more stable rock mass around the excavation. The main objectives of pre-grouting are:

• To enhance the rock mass strength, thus leading to a reduced support system and increased productivity.
• To reduce the rock mass permeability and thereby control the water leakage to acceptable levels.

Permissible leakage rate

The intensity of leakage into a tunnel depends on factors such as the permeability of the surrounding rock mass, the permeability of the tunnel lining, and groundwater conditions [3]. The permissible leakage rates along a cavern or tunnel alignment must be decided prior to the excavation phase, taking the actual circumstances into account. Permissible leakage rates for tunnels are often fixed by the owner for the safe operation of the particular tunnel. International standards have been established for the specification of permissible leakage rates; a compendium of permissible leakage rates is tabulated below [4].

Pre-grouting pattern

In underground projects, a detailed hydrogeological assessment is carried out prior to arriving at a feasible pre-grouting pattern. Probe and grout hole orientations are selected and further optimized to cross as many of the rock fractures as possible. Holes should be oriented to intercept the rock fractures with the greatest potential impact on the tunnel, at angles as near normal as possible to the feature's strike and dip. Generally, grout holes with a spacing of 1 m to 1.5 m, an angle of 4 to 7 degrees and a length of 15 m to 25 m are adopted [5].

Pre-grouting strategy

A balanced approach to pre-grouting has the potential to reduce ground water inflow substantially. The strategy for pre-grouting as per Asting is explained below [6].

1. A predetermined number of probe holes are drilled ahead of the tunnel face.
2. A water loss test is carried out by pressurizing the holes with water.
3. The water losses are expressed in terms of Lugeon (L). The measurements give an indication of the water tightness of the rock and are taken into account when determining the type of grout material to be used.
4. The probe holes are grouted until a predetermined counter pressure is achieved.
5. If the leakages measured in the probe holes are greater than acceptable, a new round of control holes between the first ones is drilled. Water losses are measured and grouting is carried out according to the same criteria as for the probe holes.
6. The procedure is repeated until the criteria for water tightness are fulfilled.

Execution of pre-grouting

Pre-grouting is generally done either from the surface or from inside the tunnel. The method of drilling and injection of grout depends on the site conditions. In untreated rock, partial hole collapse and difficulty in retracting the drilling rods are possibilities. For this reason, one practice is to establish the drill holes through pipes grouted in place up to a certain extent. A generic grouting methodology is summarized below [7] (Figure 1).

1. Drilling of a 40-75 mm diameter hole to the required length and inclination as per the site conditions. The pattern and spacing of the grout holes are based on a groutability test.
2. Installation of a pipe with an internal diameter to fit the expandable packers.
3. Placement of a packer at the very end of the pipe and injection of a cement-based grout which fills the annular space between the rock and the pipe, followed by hardening for about 12 hours.
4. When the grout has hardened, drilling through the pipe to a feasible length.
5. Placement of the packer and pressure injection with an appropriate cement grout for penetration into the rock mass over the drilled length of the hole, with termination criteria as per the grout mix design.
6. After hardening of the injected grout, re-drilling through the pipe and the injected area to the design length beyond the last drilled length.
7. Placement of the packer in the pipe and injection (repetition of step 5).

Grout materials

The selection of grout material is an essential part of grouting. The grout must have the ability to penetrate the fractures in order to seal them [8]. The selection of an appropriate grout material depends on the size, frequency and configuration of the fractures and pores in the rock mass. The particle size of the cementitious material plays a key role in the selection criteria, and the grout material is selected based on the fissure size in the rock mass. Many methods, such as mercury injection, gas expansion, optical methods and X-ray tomography, have been developed for determining the fissure size/porosity of rock mass samples. This paper primarily focuses on cementitious grouts, which are elaborated below.

Cement: Generally, to achieve cost effectiveness, stable grouts formulated with locally available ordinary Portland cement (OPC) are used. This cement has an average particle size of 45 μm and a fineness of about 225 m²/kg. Micro-cement is used for finer fissures where it is not feasible to use OPC. Micro-cement is finely ground ordinary cement with a large specific surface and a fine particle size of about 15 μm. As an alternative to cement-based materials, various solutions such as colloidal silica have been evaluated as grouting materials. Colloidal silica is an aqueous dispersion of discrete colloidal amorphous silica particles; its particle size is less than 0.015 μm. A schematic illustration of different cement particle sizes (μm) with respect to rock aperture is portrayed in Figure 2. As a general rule of thumb, the type of cement can be chosen based on the groutability ratio. The groutability ratio for rock depends on the width of the crack and the grain size of the grout material, expressed as

Groutability Ratio = (Width of Fissure) / (D95 of Grout),

where D95 is the particle size below which 95% of the grout material falls. For groutability ratios > 5, grouting is considered consistently possible, whereas for groutability ratios < 2, grouting is not considered possible [9].

Water cement ratio: The properties of cement-based grouts vary with the w/c ratio. Generally, the grout mix starts at a w/c ratio of 3:1 and is gradually thickened to 2:1, 1:1 and 0.5:1 until refusal is reached.

Additives: Different additives can be used to change the properties of the grout. The most common are superplasticizers, which are used to increase the flowability of the grout and to improve dispersion. Other additives are accelerators, non-shrink and thixotropic admixtures.

Pre-grouting - leakage control

The rock mass is a discontinuous material and its hydraulic characteristics vary widely, from an impervious medium to a highly conductive zone. The permeability of a rock mass is defined as the ability of the rock to allow the passage of water through its pores, fissures and joints. Generally, the amount of water flowing through a certain area can be represented by the coefficient of permeability. For a rock mass, the coefficient of permeability is a function of rock type, pore size and related factors (Table 1). The primary purpose of a pre-grouting scheme is to establish an impervious zone around the tunnel periphery, where the hydraulic conductivity is reduced.
This zone ensures that the all-round hydrostatic pressure surrounding the tunnel is distanced from the tunnel periphery to the outskirts of the pre-grouted zone, as depicted in Figure 3 [10]. The water pressure is gradually reduced through the grouted zone, and the water pressure acting on the tunnel lining can be close to nil. Studies conducted in Norway suggest that tunneling in urban areas may be subject to a maximum allowable water inflow of 2-4 liters per minute per 100 meters of tunnel, whilst in remote areas, such as subsea tunnels, the maximum allowable level can be fixed at 30 liters per minute per 100 meters. Table 2 illustrates the reduction in ground water inflow due to pre-grouting in a few Norwegian tunneling projects [11]. Barton observed that pre-grouting reduces the maximum permeability of the rock mass by a factor of 17 and the minimum permeability to one-tenth. Neby noted that for a water treatment plant at Oslo, the water leakage at the tunnel face was reduced from 100 liters/min to 27 liters/min due to pre-grouting.

Pre-grouting - rock improvement

Barton considered a 10 m diameter tunnel in quartzite with a UCS of 50 MPa under 200 m of overburden to demonstrate the effect of pre-grouting, analysing various engineering parameters before and after grouting. The improvement in some of the key rock mass parameters is shown in Table 3. With these rock mass parameters, Singh and Goel noted an improvement in about 25 engineering parameters due to the implementation of pre-grouting, a notable few of which are depicted in Figure 4. A typical calculation of the improvement in rock mass quality due to pre-grouting is illustrated below, using the parameters of Table 3:

Rock Mass Quality, Q = (RQD/Jn) x (Jr/Ja) x (Jw/SRF)

Before grouting: Q = (30/9) x (1/2) x (0.5/1) = 0.83
After grouting: Q = (50/6) x (2/1) x (1/1) = 16.7

High Q-values indicate good stability and low values mean poor stability. The quality of the rock mass thus increased from 0.83 to 16.7 with the implementation of pre-grouting. Pre-grouting enhances the rock mass strength, thereby reducing the deformation and support pressure and leading to an economical support system. As per the Barton model, for the example case discussed earlier, the required thickness of SFRS reduces from 100 mm to none and the spacing of rock bolts increases from 1.6 m to 2.4 m c/c.

Conclusion

In this article, an effort has been made to showcase pre-grouting as an effective mitigation measure for leakage control and rock mass stability. Some noteworthy points pertaining to pre-grouting are mentioned below.

1. It has been noted that permeability can be reduced to about 1 Lugeon using cement grout. With recent developments in grout materials, achieving permeability close to 0.1 Lugeon is possible.
2. Barton observed that pre-grouting reduces the maximum permeability of the rock mass by a factor of 17 and the minimum permeability to one-tenth.
3. Neby noted that for a water treatment plant at Oslo, water leakage at the tunnel face was reduced from 100 liters/min to 27 liters/min due to pre-grouting.
4. As evidenced above, pre-grouting effectively shifts the hydrostatic pressure from the tunnel periphery to the outskirts of the pre-grouted zone. Thus, pre-grouting can be considered to provide an effective leakage control mechanism.
5. The Q value in one case improved from 0.8 to 16.7 because of pre-grouting; this indicates that the rock mass category changed from 'very poor' to 'good'.
6. Singh and Goel observed improvement in 25 rock parameters, such as rock mass strength, modulus of deformation and support pressure.
Table 3: Rock mass parameters before and after pre-grouting.

Parameter                             | Before Grouting | After Grouting
Rock Quality Designation, RQD (%)     | 30              | 50
Joint set number, Jn                  | 9               | 6
Joint roughness number, Jr            | 1               | 2
Joint alteration number, Ja           | 2               | 1
Joint water reduction factor, Jw      | 0.5             | 1
Stress reduction factor, SRF          | 1               | 1
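As a quick check of the Q-value arithmetic above, here is a minimal Python sketch (the function and variable names are ours, not the article's) that evaluates Barton's Q-system formula for the Table 3 parameters:

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton Q-system rock mass quality: Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF)."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Table 3 parameters before and after pre-grouting
before = q_value(rqd=30, jn=9, jr=1, ja=2, jw=0.5, srf=1)
after = q_value(rqd=50, jn=6, jr=2, ja=1, jw=1, srf=1)

print(f"Q before grouting: {before:.2f}")   # 0.83 -> 'very poor' rock
print(f"Q after grouting:  {after:.1f}")    # 16.7 -> 'good' rock
```

The two printed values reproduce the 0.83 and 16.7 quoted in the article, confirming the claimed jump of roughly one and a half rock mass classes.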
3,151.4
2016-04-28T00:00:00.000
[ "Geology" ]
Quantum Simulation of Conical Intersections

We explore the simulation of conical intersections (CIs) on quantum devices, setting the groundwork for potential applications in nonadiabatic quantum dynamics within molecular systems. The intersecting potential energy surfaces of H₃⁺ are computed from a variance-based contracted quantum eigensolver. We show how the CIs can be correctly described on quantum devices using wavefunctions generated by the anti-Hermitian contracted Schrödinger equation ansatz, which is a unitary transformation of wavefunctions that preserves the topography of CIs. A hybrid quantum-classical procedure is used to locate the seam of CIs. Additionally, we discuss the quantum implementation of the adiabatic to diabatic transformation and its relation to the geometric phase effect. Results on noisy intermediate-scale quantum devices showcase the potential of quantum computers in dealing with problems in nonadiabatic chemistry.

I. INTRODUCTION

Nonadiabatic processes involve nuclear motion on multiple potential energy surfaces (PESs). These processes are ubiquitous in nature and have been studied extensively in diverse areas such as spectroscopy, solar energy conversion, chemiluminescence, photosynthesis, and the photostability of biomolecules. [1-14] Different potential energy surfaces can intersect at regions that exhibit a conical-shaped topography, known as conical intersections (CIs). [15-18] In the vicinity of CIs, the Born-Oppenheimer approximation, which assumes adiabaticity, breaks down. Systems with nonadiabaticity can undergo sudden changes in their dominant configurations at CIs, leading to the classical and well-known picture of "hops" between different electronic states. [19] CIs act as highly efficient channels for converting external excitation energy, usually carried by a photon, into internal electronic energy. Their characterization is crucial to understanding the rich photochemistry and photobiology of processes involving energy conversion.
CIs are in general difficult to treat with quantum mechanical methods for several reasons. First, from the perspective of electronic structure theory, the excited states are harder to compute than the ground state, as they correspond to first-order critical points rather than the global minimum. Moreover, the nonunitary ansatz for the wavefunction employed in some methods, such as standard coupled cluster (CC) methods, gives an incorrect topography of CIs. [20-23] Second, since most electronic structure programs work under the Born-Oppenheimer approximation, results obtained from these programs are not readily applicable to subsequent chemical dynamics studies, especially near CIs. The process of converting the original adiabatic electronic structure data to a diabatic representation, referred to as diabatization, is an active yet non-unified field due to the non-uniqueness of quasi-diabatic representations. [24-38] Third, the dynamics of nonadiabatic systems typically require a more complex treatment than the dynamics on a single potential energy surface. For example, we need to expand the wavefunctions in the basis of every diabatic state to account for effective state transitions in quantum dynamics. [1]

Quantum computers could be a natural solution for nonadiabatic chemistry. [39-43] To address some of the concerns in the last paragraph, we observe first that the gate operations are unitary, which makes it convenient to implement a unitary ansatz for the wavefunctions (e.g., the unitary coupled cluster (UCC) ansatz [44,45] or the anti-Hermitian contracted Schrödinger equation (ACSE) ansatz [46-54]). These ansätze offer robust and accurate solutions for the electronic structure data near CIs. In fact, for the ACSE ansatz used in this paper, classical calculations of CIs are well established. [55-58] Second, quantum computers are ideal tools for performing unitary and even nonunitary propagation [59,60], with a possible polynomial scaling advantage over classical computers when the coupling potential term can be expressed as an entanglement of encoded qubits. [39] Third, the transformation from adiabatic wavefunctions to diabatic wavefunctions is unitary and can easily be implemented as parametric gates during state preparation on quantum computers. The geometric phase, [42,43,61] a global phase factor dressing the wavefunctions near CIs, can also be encoded with simple rotations in the Pauli basis, which is a natural advantage of quantum computers.

In this paper, we evaluate the performance of quantum computers in describing CIs. Some key issues associated with CIs, such as seam curvature, optimization and the geometric phase, are discussed. We implement the electronic structure simulation of H₃⁺, with and without noise, using the excited-state contracted quantum eigensolver (CQE) proposed in Ref. [62], in which the wavefunctions are generated by the ACSE ansatz. The theory and methodology of CIs, including an overview of the CQE, are presented in Section II. Results and outlook are discussed in Sections III and IV, respectively.

II. THEORY

A. Diabatic Hamiltonian matrix and CIs

The electronic Schrödinger equation can be written as

H_d(R) d_J(R) = E_J(R) d_J(R),

in which H_d is the quasi-diabatic Hamiltonian matrix, and E_J and d_J are the adiabatic energies and wavefunctions of the J-th state, respectively, corresponding to the nuclear coordinates R.
We consider a two-state example in which H_d is a 2 × 2 matrix throughout this work; most conclusions can be readily extended to additional electronic states. To obtain degenerate eigenvalues, we require

(H_11 − H_22)² + 4 H_12 H_21 = 0,

where H_IJ are the matrix elements of H_d. When the wavefunctions are non-Hermitian, as generated from a nonunitary exponential ansatz (for example, the standard coupled cluster ansatz), the above equation is the only constraint needed to form CIs. However, this creates a nonphysical (N − 1)-dimensional artifact, accompanied by complex eigenvalues, in the vicinity of true CIs, where N is the number of molecular degrees of freedom. The true CI is a submanifold of the potential energy surfaces where the following two constraint equations, one for the diagonal and one for the off-diagonal term, are simultaneously satisfied in the diabatic representation:

H_11 = H_22,  H_12 = 0.

It is then easily recognized that the dimension of the CI seam is (N − 2). While the diagonal condition is easy to constrain, the off-diagonal condition is subtle because it is not directly available in electronic structure programs. For some molecules, as we will show in this paper, it is possible to find two states of sufficiently high symmetry that the couplings between them are strictly zero by symmetry, a situation known as symmetry-required CIs. Symmetry, however, is not a necessary condition for the existence of CIs, as some CIs occur in the more general category of "accidental" CIs. [16]

B. Geometric phase effect

Most modern quantum chemistry programs assume the Born-Oppenheimer approximation and thus produce electronic structure data in the adiabatic representation. The adiabatic representation, however, fails to describe nonadiabatic dynamics because the state couplings are always zero. The diabatic representation, on the contrary, produces well-described state couplings and smooth diabatic wavefunctions. There has been significant research effort directed towards determining the transformation from the adiabatic representation to the diabatic representation. One reason for the abundance of such diabatization techniques is that a strict diabatic representation does not exist for polyatomic molecules; hence, we refer to these states as "quasi-diabatic". [24] For two-state diabatization, the transformation is written in terms of a unitary matrix U. Some literature expresses the unitary as a rotation matrix parameterized by an angle θ,

U(θ) = [[cos θ, sin θ], [−sin θ, cos θ]].

We remind the reader that this expression might naturally lead to the assumption that θ is a continuous function of R, but this is not necessarily true in the presence of CIs due to the geometric phase effect. The geometric phase effect requires that wavefunctions transported around a path enclosing a CI acquire an additional phase factor. [61,63,64] A geometry-dependent and state-dependent factor e^{iA_K(R)} (K = I, J) must be included in the adiabatic wavefunction. The natural advantage of using qubits to represent this two-state diabatization is that both are isomorphic to the SU(2) group. Indeed, on quantum computers, the phase factor can be implemented as a simple rotation gate parametrized by A_K(R). One of the authors has shown in previous work [65] that A_J(R) can be evaluated from a line integral whose integrand is the well-known derivative coupling vector. [16] As this paper focuses on the topography of CIs, namely a more "static" description, a detailed analysis of the nonadiabatic quantum dynamics, the geometric phase factor and its implementation on quantum platforms is reserved for future work.
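To make the two-state constructions above concrete, here is a minimal numerical sketch (our own illustration, not code from the paper): it builds a linear-coupling 2 × 2 diabatic Hamiltonian H_d(x, y) = [[x, y], [y, −x]], whose adiabatic surfaces ±sqrt(x² + y²) form a conical intersection at the origin, and demonstrates the sign change of a real adiabatic state transported around the CI, the effect that the geometric phase factor e^{iA_K(R)} compensates. The linear model and the loop radius are illustrative assumptions.

```python
import numpy as np

def adiabatic(x, y):
    """Diagonalize the 2x2 diabatic Hamiltonian H_d = [[x, y], [y, -x]].

    Eigenvalues are +/- sqrt(x^2 + y^2); they touch at (0, 0), the CI.
    """
    H = np.array([[x, y], [y, -x]])
    evals, evecs = np.linalg.eigh(H)
    return evals, evecs

# Degeneracy requires BOTH conditions: H11 = H22 and H12 = 0,
# i.e. x = 0 and y = 0 -- an (N - 2)-dimensional seam in general.
print(adiabatic(0.0, 0.0)[0])          # [0. 0.]: degenerate at the CI

# Transport the lower adiabatic state around a loop enclosing the CI.
phis = np.linspace(0.0, 2.0 * np.pi, 2001)
start = adiabatic(np.cos(phis[0]), np.sin(phis[0]))[1][:, 0]
state = start.copy()
for phi in phis[1:]:
    vec = adiabatic(np.cos(phi), np.sin(phi))[1][:, 0]
    if np.dot(vec, state) < 0.0:       # enforce continuity along the path
        vec = -vec
    state = vec

# After one full loop the real wavefunction returns with a sign flip:
print(np.dot(start, state))            # ~ -1: the geometric phase effect
```

Equivalently, the mixing angle θ of U(θ) winds by π around the loop, so a single-valued description requires the extra phase discussed above.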
C. Variance-based contracted quantum eigensolver

The electronic structure calculation in this work is performed with a variance-based contracted quantum eigensolver (CQE) proposed in a previous paper [62]. The algorithm is briefly reviewed here. The variance (denoted Var) of the system is defined as

Var = ⟨ψ|Ĥ²|ψ⟩ − ⟨ψ|Ĥ|ψ⟩².

We minimize the variance with respect to the parametric two-body anti-Hermitian operator F̂_m, where the wavefunction at the mth iteration is given by the unitary ansatz

|ψ_m⟩ = e^{F̂_m} |ψ_{m−1}⟩, with F̂_m = Σ_{pq;st} ²F_m^{pq;st} â†_p â†_q â_t â_s,

in which â†_p and â_p are the creation and annihilation operators, respectively. The key equation guiding the optimization is derived by taking the gradient of the variance with respect to F̂_m,

∂Var/∂ ²F_m^{pq;st} = ⟨ψ_m| [(Ĥ − ⟨Ĥ⟩)², Γ̂^{pq}_{st}] |ψ_m⟩,

in which Γ̂^{pq}_{st} = â†_p â†_q â_t â_s; the expectation values of these operators define the elements of the 2-RDM. Through a self-consistent update of the energy and ²F_m, we can converge the variance to a minimum, which corresponds to an excited or ground state. More details, including an ancilla-assisted measurement of the variance, have been reported in previous work [62].

We provide additional comments on why the variance-based CQE is suitable for describing CIs. The convergence depends on the choice of the initial guess, which can be generated from a single Slater determinant or a linear combination of determinants. The method converges to the nearest minimum of the variance without knowledge of the lower states; here, by nearest, we mean the most similar in configuration composition. This state-specific feature can be beneficial in studying CIs: it allows us to track a specific state during a slow variation of the molecular geometry without concern that the adiabatic states will cross. Note that this also coincides with the idea of configurational-uniformity-based diabatization as first proposed by Nakamura and Truhlar [26].

III. RESULTS

We demonstrate the approach by computing the CI of the H₃⁺ molecule. The relative positions of the three co-planar hydrogen atoms are described in polar coordinates as (R, 0), (R, π) and (ρ, θ), where R ≥ 0, ρ ≥ 0, 0 ≤ θ < 2π, allowing us to represent the molecular geometry by the set of coordinates (R, ρ, θ). Calculations are performed with the IBMQ statevector simulator and the FakeLagosV2 backend. The quantum simulation results are benchmarked against full configuration interaction calculations. All computations are performed in the minimal Slater-type orbital (STO-3G) basis set. Here and below we denote full configuration interaction as FCI to distinguish it from the abbreviation for the conical intersection (CI).
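The following classical sketch emulates the variance minimization at the heart of the CQE on a small matrix Hamiltonian. It is our own toy stand-in: a generic anti-Hermitian generator replaces the two-body operator F̂_m, and the 3-level Hamiltonian is a placeholder, not H₃⁺:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def variance(params, H, psi0):
    """Var = <H^2> - <H>^2 for psi = exp(A) psi0, where A is a real
    antisymmetric (hence anti-Hermitian) generator built from params,
    standing in for the two-body operator F_m of the CQE."""
    n = H.shape[0]
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = params
    A -= A.T                      # exp(A) is then orthogonal (unitary)
    psi = expm(A) @ psi0
    e = psi @ (H @ psi)
    e2 = psi @ (H @ (H @ psi))
    return e2 - e ** 2

# Toy 3-level Hamiltonian; starting from the middle basis state, the
# optimizer drives the variance to ~0 at a nearby eigenstate, mirroring
# the state-specific convergence described above.
H = np.array([[0.0, 0.2, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.3, 2.0]])
psi0 = np.array([0.0, 1.0, 0.0])
result = minimize(variance, x0=np.zeros(3), args=(H, psi0))
print(result.fun)  # ~ 0: an eigenstate reached without knowing lower states
```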
A. Electronic structure of H₃⁺

The H₃⁺ molecule exhibits arguably the simplest CI. Nonetheless, despite its simplicity, the molecule is an important species in astrochemistry, providing a useful benchmark for the study of CIs [66-73]. We compute the first three states of H₃⁺ with S_z = 0. A compact mapping is used to reduce the number of required qubits to three for the first and second excited states (denoted E1 and E2) of H₃⁺. The mapping is as follows. We denote the configuration state function as |ij⟩ (1 ≤ i, j ≤ 3), with the S_z = +1/2 electron occupying the ith molecular orbital and the S_z = −1/2 electron occupying the jth molecular orbital. The dimension of the FCI matrix is 9. A further reduction is performed by eliminating |11⟩, observing that it has almost no coupling to the E1 and E2 states. Although |11⟩ can couple to other, higher states and in principle affect the diagonalization result, the truncation has a negligible effect on the energies of E1 and E2 (< 10⁻⁸ hartree), resulting in a total qubit number of log₂ 8 = 3.

We analyze the electronic structure properties of H₃⁺ using the highly symmetric D3h point group. The first and second excited states correspond to the two components of an E′ irreducible representation and thus form a symmetry-required CI. We plot the potential energy curves in Fig. 1, obtained from the statevector simulator and a fake-backend simulator; a zoomed region of the degeneracy is given in the figure as well. The two excited states coincide everywhere in the FCI scheme, which is consistent with our electronic structure knowledge of the system. On a noiseless statevector simulator we achieve an energy accuracy of 10⁻⁶ hartree, where the only error comes from the Trotterization, demonstrating the exactness of the ACSE ansatz. After introducing device noise, the error for each individual state is around 12 mhartree. It is worth noting that since the error is quite uniform for both states, the error in their energy gap is significantly smaller, which is quite promising for predicting the energetic degeneracy.

We next plot the dissociation curve at a lower symmetry, namely C2v, in Fig. 2. The discontinuity of the E2 curve is due to crossings with intruder states. We are particularly interested in the CIs between E1 and E2, where the two states coincide at a D3h geometry. Despite the relatively large error of the individual states, the prediction of the location of the CIs is surprisingly accurate (< 0.01 bohr). As mentioned before, if the error induced by noise is nearly uniform for both states and for all geometries, then the effect of noise is only to shift both potential energy surfaces by a similar amount, which should not significantly affect the topography of the CIs. To verify this, in Fig. 3 we plot the coupled potential energy surfaces as a function of the coordinates of the third hydrogen atom. It can be seen that the topography of the CIs is well reproduced; the cusps expected from random noise are barely discernible because of the uniformity of the noise. We note, however, that although the potential energy surfaces and the relative energy gap are well reproduced, the absolute error remains challenging on noisy intermediate-scale quantum (NISQ) devices, and further error mitigation techniques are needed.
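A small geometry helper makes the coordinate convention explicit (our own sketch; the parameterization follows the polar coordinates stated above and in the caption of Fig. 1):

```python
import numpy as np

def h3_cartesian(R, rho, theta):
    """Cartesian positions from the polar parameterization (R, rho, theta):
    two hydrogens fixed at (R, 0) and (R, pi), the third at (rho, theta)."""
    return np.array([[R, 0.0],
                     [-R, 0.0],
                     [rho * np.cos(theta), rho * np.sin(theta)]])

def is_d3h(positions, tol=1e-8):
    """The E1/E2 seam of H3+ lies on D3h (equilateral-triangle) geometries."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    d = [np.linalg.norm(positions[i] - positions[j]) for i, j in pairs]
    return max(d) - min(d) < tol

# Fig. 1 parameterizes the D3h curve as (R, 0), (R, pi), (sqrt(3) R, pi/2):
print(is_d3h(h3_cartesian(R=1.0, rho=np.sqrt(3.0), theta=np.pi / 2)))  # True
```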
B. Locating the seam of the CI

The search for the minimum-energy CI (MEX) is done by minimizing the constrained Lagrangian, written schematically as

L(R) = Ē_IJ(R) + λ ΔE_IJ(R) + Σ_k μ_k C_k(R),

where the C_k are geometry constraint equations and ΔE_IJ = 0 constrains the CIs in the adiabatic representation. In previous work, we showed that the gradient of the Lagrangian with respect to the geometry parameter set R can be obtained with classical calculations [74]. Here we use a hybrid method, in which single-point calculations are performed on quantum computers and the gradient is obtained numerically and classically by varying the geometry.

The dimension of the CI seam for a triatomic molecule is only one. By varying one molecular coordinate and fixing the rest, we should obtain a one-dimensional curve that corresponds to the seam of the CI. For the special case of E1 and E2 of H₃⁺, we know that such a curve is unique and corresponds to the D3h geometries in Fig. 1. We report the optimization results obtained by setting R to 2.0 bohr and optimizing over the position of the third hydrogen (ρ, θ). To keep things simple, we used a gradient-like Newton-Raphson method with a fixed step size of 0.1, in which the Hessian of the Lagrangian is approximated by the identity matrix. A typical optimization process is shown in Table I. The performance is quite robust despite the simple setup, suggesting that gradients from quantum devices are resilient enough for geometry optimization purposes. The energy difference decreases during the optimization, except in the first iteration; this exception can occur because the Lagrangian includes contributions beyond the energy difference. An important observation is that the noise on NISQ simulators introduces a small oscillation around our targeted D3h CI. The errors in the bond distance and bond angle, however, are quite small, which helps to demonstrate the accuracy of our description of the CIs.

IV. CONCLUSIONS AND OUTLOOK

Current state-of-the-art nonadiabatic quantum dynamics simulations are limited to small molecules due to their exponential scaling with respect to the number of active vibrational modes. Quantum computers may potentially provide a solution: in the future, quantum devices with hundreds of qubits may be able to perform nonadiabatic quantum dynamics simulations that are either too expensive or intractable on high-performance classical computers. This paper provides a foundation for simulating CIs on quantum computers, paving the way for advancements in quantum-based simulations of nonadiabatic molecular systems. Using a variance-based CQE, the energies of the intersecting potential energy surfaces of molecular H₃⁺ are accurately computed. The study achieves a correct representation of the CI topography through a Hermitian wavefunction generated by the ACSE ansatz. Future work includes realizing diabatization and quantum dynamics of complex nonadiabatic molecular systems on quantum devices.

FIG. 1. Potential energy curves calculated by FCI as well as by the noiseless and noisy simulators. The molecule is treated in D3h symmetry, where the polar coordinates of the three hydrogen atoms are (R, 0), (R, π) and (√3 R, π/2). Note that E1 and E2 are degenerate in the FCI result due to symmetry.

FIG. 3. Intersecting adiabatic potential energy surfaces calculated from the variance CQE. The grid size is 20×12, with additional points placed in the vicinity of the CI. For better illustration, we use Cartesian coordinates with the three atoms at (−1, 0, 0), (1, 0, 0) and (x, y, 0).

TABLE I. Energy gap during a geometry optimization with respect to θ and ρ, starting from random guesses for these variables. The remaining parameter R is fixed at 2.0 bohr.
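To illustrate the optimization loop of Section III B, here is a schematic penalty-based sketch. It is our own construction: the surrogate energies are an artificial model, not the CQE surfaces, and the penalty weight is arbitrary; it mimics the fixed-step, identity-Hessian descent toward the seam:

```python
import numpy as np

def lagrangian(x, energy_pair, sigma=1.0):
    """Penalty form of the CI search: average energy plus a quadratic
    penalty enforcing Delta E_IJ = 0. energy_pair(x) returns (E1, E2)
    at geometry x = (rho, theta); in the hybrid method these would come
    from single-point CQE evaluations on the quantum device."""
    e1, e2 = energy_pair(x)
    return 0.5 * (e1 + e2) + sigma * (e2 - e1) ** 2

def descend(x0, energy_pair, step=0.1, iters=60, h=1e-4):
    """Fixed-step steepest descent (Hessian approximated by identity),
    with finite-difference gradients from repeated single points."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = h
            grad[i] = (lagrangian(x + e, energy_pair)
                       - lagrangian(x - e, energy_pair)) / (2 * h)
        x -= step * grad
    return x

# Artificial surrogate with a degeneracy at x = (0, 0): E1,2 = x0^2 -/+ x1.
surrogate = lambda x: (x[0] ** 2 - x[1], x[0] ** 2 + x[1])
print(descend([1.0, 0.5], surrogate))  # -> approaches (0, 0), where dE = 0
```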
3,897.6
2024-01-28T00:00:00.000
[ "Chemistry", "Physics" ]
Blue-Violet Laser Modification of Titania-Treated Titanium: Antibacterial and Osteo-Inductive Effects

Background: Many studies on surface modifications of titanium have been performed in an attempt to accelerate osseointegration. Recently, anatase titanium dioxide has been found to act as a photocatalyst that expresses antibiotic properties and exhibits hydrophilicity after ultraviolet exposure. A blue-violet semiconductor laser (BV-LD) has been developed as a near-ultraviolet light source. The purpose of this study was to investigate the effects of exposure to this BV-LD on surface modifications of titanium, with the goal of enhancing osteoconductive and antibacterial properties.

Methods: The surfaces of commercially pure titanium were polished with #800 waterproof polishing papers and treated with anatase titania solution. Specimens were exposed to the BV-LD (λ = 405 nm) or to an ultraviolet light-emitting diode (UV-LED, λ = 365 nm) at 6 mW/cm² for 3 h. The surface modification was evaluated physically and biologically using the following parameters or tests: surface roughness, surface temperature during exposure, X-ray diffraction (XRD) analysis, contact angle, methylene blue degradation tests, adherence of Porphyromonas gingivalis, osteoblast and fibroblast proliferation, and histological examination after implantation in rats.

Results: No significant changes were found in the surface roughness or XRD profiles after exposure. BV-LD exposure did not raise the surface temperature of titanium. The contact angle was significantly decreased, and methylene blue was significantly degraded. The number of attached P. gingivalis organisms was significantly reduced after BV-LD exposure compared to that in the no-exposure group. New bone was observed around exposed specimens in the histological evaluation, and both the bone-to-specimen contact ratio and the new bone area increased significantly in the exposed groups.

Conclusions: This study suggested that exposure of titanium to BV-LD can enhance the osteoconductivity of the titanium surface and induce antibacterial properties, similar to the properties observed following exposure to UV-LED.

Introduction

Photocatalysis, as exemplified by titanium dioxide (TiO₂), was originally reported by Fujishima and Honda [1], and the effect has been applied to various fields ever since. Wang et al. reported the photogeneration of a highly amphiphilic TiO₂ surface, i.e., one with both hydrophilic and oleophilic effects [2]. Recently, the ultraviolet (UV) light-induced photocatalytic activity of titanium has attracted considerable attention in environmental and clean-energy sciences [3-6]. Titanium is highly biocompatible, and both pure titanium and titanium alloys have been used in the medical field, particularly in orthopedics and as dental implants. Aita et al. reported that pretreatment of titanium with UV light substantially enhances its osteoconductive capacity in association with the UV-catalytic progressive removal of hydrocarbons from the TiO₂ surface, resulting in photofunctionalization of titanium and thereby enabling more rapid and complete establishment of bone-titanium integration [7]. On the other hand, exposure of titanium surfaces to UV also results in changes to the microbial growth properties of the metal [8,9]. Indeed, photocatalysis as exemplified by TiO₂ has two major effects: oxidation and hydrophilicity [10].
It is generally accepted that hydrophilicity of the titanium surface enhances rapid bone contact, while strong oxidation due to photocatalysis results in cytotoxic activity and bactericidal effects [11,12]. However, the biological responses to these two opposite effects, i.e., oxidation and hydrophilicity, on cells and bacteria have not been investigated under the same conditions. In the clinical setting, UV light-aided treatment of titanium has mainly been studied in the context of manufacturing medical devices to be implanted within the human body [7,13-15]. Therefore, reactivation of the titanium implant at the bedside/chairside or in the oral cavity is also necessary; exposure of a limited area and the development of a mobile apparatus are needed to achieve this goal. A low-pressure mercury lamp that emits germicidal UVC light (wavelength shorter than 280 nm) has conventionally been used as a light source for this purpose. Moreover, UV light-emitting diodes (UV-LEDs) with a wavelength of 365 nm have been developed and offer the following advantages: low cost, energy efficiency, long life, easy control of emission, and no production of mercury waste [15,16]. Visible blue-violet lasers with a wavelength of 405 nm have also been developed, and such devices may facilitate bedside/chairside use of light sources for the modification of titanium. The purpose of this study was to examine the effects of a visible blue-violet laser and anatase titania solution on titanium, particularly the bone and microbiological responses, and to determine the potential of titanium treatment at the bedside.

Titanium specimens, light sources, and exposures

Commercially pure titanium (grade II) was used for the specimens. Three types of specimens were prepared: disk plates (φ33 mm × 1 mm) for the contact angle and methylene blue degradation tests, square plates (10 mm × 10 mm × 2 mm) for the cell proliferation and bacterial adherence tests, and cylindrical rods (φ2 mm × 5 mm) for the animal test. The specimens were serially ground and manually polished with #800 abrasive silica papers and ultrasonically cleaned with acetone, ethanol, and distilled water for 10 min each. Specimens were then dried in a desiccator for 12 h in the dark. In addition, 30 μL of anatase titania solution (TITANNEXT 21, 0.85%, Marutomi, Gifu, Japan) was used to evenly coat the irrigated titanium surface according to the methods described in a standardized manual, and the samples were then dried in the desiccator for 12 h in the dark. Five specimens were prepared for every condition.

Physical evaluation

Roughness of the modified surfaces. Average surface roughness (Ra) was measured using a surface measuring device (HANDYSURF, Tokyo Seimitu, Tokyo, Japan). Three areas of each disk specimen and one lateral side of each cylindrical rod specimen were measured, and the data were averaged.

Structure of the modified surfaces. The surface crystalline structure of the specimens was analyzed using X-ray diffraction analysis (XRD; MINIFLEX, RIGAKU, Tokyo, Japan).

Surface temperature during exposure. A thermocouple (C520-13K, CHINO, Tokyo, Japan) was fixed tightly in a tube (φ2.0 mm × 4.0 mm) made of a copper plate (0.3 mm in thickness) and fixed on the center of the specimens by soldering. The temperature at the center of the specimen during exposure was measured with the thermocouple and a temperature monitoring device (NR-250, Keyence, Osaka, Japan).

Hydrophilicity.
Static contact angles were measured using a contact angle measuring device (IMC-159D, IMOTO, Tokyo, Japan) before and after each light exposure.

Photocatalytic effects. Photocatalytic effects were evaluated by the degradation of methylene blue. In this assay, 10 μL of 0.2% methylene blue solution (MB) was dropped onto 5 points of the specimen. After exposure to each light source, the specimens were soaked in 10 mL of distilled water to allow the MB on the specimen to dissolve. After pipetting 30 times, the absorbance of the solution was measured at 664 nm using a spectrophotometer (Novaspec II, Amersham Pharmacia Biotech, Cambridge, UK).

Biological evaluation

Antibacterial properties. Porphyromonas gingivalis (ATCC 33277), a pathogenic bacterium involved in peri-implantitis, was employed. The bacterial strain was cultured for 72 h, and colonies were resuspended in phosphate-buffered saline (PBS) to a final concentration of 1.0 × 10⁷ cfu/mL. Sterilized saliva was prepared by filtration through membrane filters, and the specimens were then soaked in the saliva for 10 min to allow a pellicle to form on the specimens. After rinsing in PBS 3 times, 30 μL of the bacterial suspension was dropped onto each specimen, and the specimens were then maintained at room temperature for 2 h. After rinsing again in PBS to remove loosely adhered microorganisms, the specimens were fixed in 2 mL of glutaraldehyde solution, formaldehyde solution, and PBS for 1 h at room temperature, then rinsed with PBS 3 times and dehydrated with a series of ethanol solutions. All samples were critical-point dried, coated with gold using an ion coater (IB-3, Eiko Engineering, Hitachinaka, Japan), and observed by scanning electron microscopy (Miniscope TM-1000, Hitachi, Tokyo, Japan). Five representative areas of each specimen were photographed at a magnification of 5,000×. Finally, the bacterial attachment ratio was calculated by dividing the number of adhered cells on the exposed specimens by that on the nonexposed specimens.

Cell proliferation. Titanium specimens were placed in 24-well polystyrene plates. When cells reached 70% confluence, they were detached by trypsinization and seeded onto the specimens at a density of 1.3 × 10⁵ cells/mL. After 48 or 96 h, the specimens were transferred to new plates containing fresh medium. Cell proliferation assay reagent (Cell Counting Kit-8, Dojindo Laboratories, Kumamoto, Japan) was added at a concentration of 100 μL/mL. After incubation at 37 °C for 3 h in a humidified CO₂ incubator, the absorbance at 450 nm was measured using a microplate reader (Multiskan JX Version 1.1, Thermo Labsystems, Thermo Fisher Scientific, Kanagawa, Japan).

Animal experiments. Ten 12-week-old male Wistar rats (Charles River Laboratories Japan, Yokohama, Japan) were used in this study. The rats were divided into 2 groups (BV-LD and UV-LED exposure, 5 rats in each group). The rats were anesthetized with 1%-2% pentobarbital sodium (Somnopentyl, Kyoritsu, Tokyo, Japan), and the implant site was prepared 10 mm from the distal edge of the left femur by drilling with a φ2 mm pilot bur after marking the site with a round bur. Control specimens were placed into the left legs of the rats, while irradiated specimens were placed into the right legs. Every specimen was inserted into the implantation site such that there was no gap between the upper end of the specimen and the femur bone surface. Four weeks after implantation, the rats were sacrificed using an overdose of pentobarbital sodium.
Specimens were excised with the surrounding tissue and preserved in 10% neutral buffered formalin solution for 1 week. After embedding and polymerization, the specimens were cut into 200-μm-thick sections parallel to the axis of the femur using a microcutter (MC201, Maruto, Tokyo, Japan). Slices were then stained with 0.1% toluidine blue solution for histological observation. Typical sections were chosen from those containing the center region of the cylindrical specimens. Their histological states were recorded, and the bone-to-specimen contact ratio (%) and the new bone area around the specimens were measured by a previously adopted method [17] using NIH Image 1.55 software (National Institutes of Health, USA). The Animal Care and Use Committee of the University of Tokushima approved this protocol, and all experiments were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals of the University of Tokushima.

Statistical analysis

Significant differences were determined using analysis of variance (ANOVA) followed by Tukey post-hoc tests for multiple comparisons.

Physical evaluation

First, we evaluated the characteristics of titanium following exposure to light. The average surface roughness (Ra) of anatase titania solution-treated titanium was significantly lower than that of untreated titanium (Figure 1). However, no significant differences were found between specimens exposed to light and those not exposed. XRD patterns revealed typical pure anatase peaks in anatase titania solution-treated titanium (Figure 2), and exposure of the specimens to each light source enhanced this anatase peak. Moreover, the surface temperature of the specimens varied only slightly during the 3-h BV-LD and UV-LED exposures and never rose above 36 °C (Figure 3). We also measured static contact angles before and after exposure to light (Figure 4). Before exposure, the contact angles on non-titania-treated and titania-treated specimens were 59° and 38°, respectively. UV-LED exposure decreased the contact angle to 3° within 3 h, and BV-LD exposure also gradually decreased the contact angle with increasing exposure time. Finally, in the methylene blue degradation test, absorbance decreased monotonically after UV-LED and BV-LD exposure (Figure 5).

Biological evaluation

Next, we investigated the biological properties of light-treated titanium. As shown in Figure 6, the P. gingivalis adherence ratio decreased by 49% and 35% following BV-LD and UV-LED exposure, respectively. Additionally, both osteoblasts and fibroblasts grew on all specimens over time (Figure 7). UV-LED exposure significantly enhanced the proliferation of MC3T3-E1 cells at 96 h, whereas BV-LD had less of an effect on cell proliferation. All light exposures also had minimal effects on the proliferation of NIH3T3 cells. We then analyzed the osteoconductive properties of the treated titanium specimens. Interestingly, histological analysis of bone around each specimen at 1 month after specimen placement revealed an increase in new bone around specimens that had been exposed to BV-LD or UV-LED (Figure 8). Additionally, the bone contact ratio of the specimens without exposure was 60%, and BV-LD and UV-LED exposure increased the bone contact ratio to 86% and 89%, respectively (Figure 9). Compared to unexposed specimens, these differences were statistically significant; however, there was no significant difference between the light sources.
Finally, we found that light exposure increased new bone formation around the titanium specimens (Figure 10). The area of new bone formation without exposure was less than 0.1 mm², but increased to more than 0.2 mm² after BV-LD or UV-LED exposure. Again, these differences were significant compared to the unexposed control, but no significant differences were observed between the light sources.

Discussion

Titanium dioxide, particularly in the anatase form, is a photocatalyst under UV light. Many reports have investigated the biological response to titanium following UV exposure, and these studies have shown that bone-titanium integration can be established more rapidly and completely following UV treatment [7,9,13-15]. Aita et al. stated that the osseo-inductive capacity of the titanium surface depends on the UV-dose-responsive removal of hydrocarbons from the TiO₂ surface, and not on its hydrophilic status [7]. In addition, it has been suggested that UV-C, with a wavelength of 250 nm, is more effective than UV-A, with a wavelength of 360 nm. Thus, modification of the titanium surface depends on the energy of the UV exposure and the chemistry of the TiO₂. In contrast, semiconductor light sources, i.e., lasers and LEDs, are convenient for use in clinical settings because they are compact, offer directionality, and have a long light-source lifetime [16,18]. In addition, UV-A and visible light are preferable with regard to biological hazards: UV-C damages cellular DNA, while UV-A attacks cells through the generation of radicals [19,20].

In this study, we attempted to modify the titanium surface (grade II) to improve its biocompatibility for clinical use. The application of a specific anatase titania solution and LD/LED exposure were combined, and the surface was photocatalytically activated. Medical devices usually use grade IV titanium and titanium alloys, not the grade II titanium used in this study; however, the titanium grade has little effect on the characteristics of the thin anatase layer formed on the titanium surface. Generally, a wavelength of less than 400 nm is required for photocatalysis on anatase-type TiO₂, although photocatalysis can also be driven by visible light, with varying efficiencies. In this study, we examined whether near-ultraviolet exposure, i.e., BV-LD with a wavelength of 405 nm, had both antimicrobial and osteoconductive effects on the titanium surface similar to those of UV exposure. The application of anatase titania solution decreased the contact angle and enhanced the anatase crystal structure on the titanium surface. Interestingly, no changes in the physical nature of the titanium surface (roughness, temperature, and structure) were observed after exposure to BV-LD or UV-LED. The contact angle decreased linearly with increasing exposure time and eventually became less than 10°, indicating a change to a superhydrophilic state. Photocatalytic effects, as demonstrated by methylene blue degradation, also increased with prolonged exposure time. In these tests, we observed that the effects of BV-LD exposure were slightly inferior to those of UV-LED exposure.

We also examined the influence of the light-exposed titanium surfaces, with their hydrophilic and photocatalytic effects, both in vitro and in vivo. First, the antibacterial effects of the modified titanium surface were evaluated using P. gingivalis in a bacterial adhesion assay. P. gingivalis is an important bacterium causing peri-implant and periodontal diseases.
BV-LD and UV-LED exposure decreased bacterial adherence to the titanium surface. Photocatalysis depends on the ability of the catalyst to create electron-hole pairs, which generate free radicals able to undergo secondary reactions. The proliferation of osteoblasts on the exposed specimens was significantly increased compared to that on unexposed specimens, whereas no change was observed in the proliferation of fibroblasts, regardless of exposure. Changes in the hydrophilicity of the specimen could enhance osteoblast adherence, while the adherence of P. gingivalis appeared to be unassociated with the hydrophilicity of the material. Hence, this difference between bacteria and mammalian cells may be related to the protective mechanisms the different cell types possess to combat the formation of toxic radicals [21-23]. Finally, histological and quantitative analyses of bone formation suggested that UV-LED/BV-LD exposure enhanced new bone formation compared to unexposed specimens. These results are in agreement with many reports on the effects of UV exposure on the titanium surface [7,13]. In conclusion, our data suggested that exposure to BV-LD can enhance the osteoconductivity of a titanium surface and induce antibacterial properties similar to the effects observed after exposure to UV-LED. However, the exposure time used in this study is too long for clinical use due to the low exposure intensity (6 mW/cm²). Thus, for clinical use, the exposure time could be shortened by increasing the current to the laser and LED and by focusing the exposure area.
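The statistical workflow named in the methods (one-way ANOVA followed by Tukey post-hoc tests) can be reproduced with standard Python tooling; a brief sketch follows, using placeholder numbers rather than the study's actual measurements:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements (e.g., bone-to-specimen contact ratios, %);
# the actual data are reported in the figures of the paper.
control = np.array([58.0, 61.0, 60.5, 59.0, 62.0])
bv_ld   = np.array([84.0, 87.5, 86.0, 85.5, 88.0])
uv_led  = np.array([88.0, 90.0, 89.5, 88.5, 89.0])

# One-way ANOVA across the three exposure groups
f_stat, p_value = f_oneway(control, bv_ld, uv_led)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3g}")

# Tukey post-hoc test for pairwise multiple comparisons
values = np.concatenate([control, bv_ld, uv_led])
groups = ["control"] * 5 + ["BV-LD"] * 5 + ["UV-LED"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```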
3,977.4
2013-12-17T00:00:00.000
[ "Engineering", "Materials Science", "Medicine" ]
Computational Medicinal Chemistry: A Useful Tool for Pharmaceutical Sciences and Drug Development

Opinion

Historically, the use of modern medicines for the treatment of health problems represented a major change in the paradigms of medical science. Serious problems such as infections, pain, and disorders of the central nervous system have come to find effective relief in the use of small organic molecules, incorporated into pharmaceutical forms, with curative or palliative purpose [1]. Since the establishment of modern medicinal chemistry in the XIX century [2], the efforts of emerging pharmaceutical companies and government laboratories have focused on the isolation of compounds from natural sources and the synthesis or semi-synthesis of bioactive compounds. However, this development has been accompanied by some drawbacks, such as the thalidomide tragedy of the 1960s and the phenomenon of acquired resistance to antibiotics [3], generating the imperative need to produce New Chemical Entities and to better understand their mechanisms of action at the molecular level.

Corroborating this scenario, the commercial pressure for the development of innovative molecules, allied with the development of modern computing, robotics, and automation, led to strategies that allow diverse libraries of compounds to be obtained in a shorter time (combinatorial chemistry) [4], automated high-performance tests such as High Throughput Screening (HTS) [5], and the application of theoretical and computational chemistry approaches to the development and optimization of lead compounds [6]. Basically, computational chemistry approaches consist of the implementation of different levels of theory for the construction and visualization of molecular structures and the calculation of optimized geometries, physicochemical properties, and reactivity, among other tasks, using computer programs [7]. Thus, by understanding the forces acting on a collection of atoms and their most stable geometries, one can estimate various physicochemical properties related to the electronic structure (orbital energies, dipole moment, molecular polarizability), parameters related to geometry (constitutional descriptors, correlations between atoms), and empirical descriptors (LogP, pKa), among other quantities. In this sense, we observe that these descriptions at the molecular level can be applied to the understanding of the properties of bioactive molecules.
The scientific literature points to several successful examples of the use of computational chemistry in the design and optimization of lead compounds, with applications in the pharmaceutical sciences [8-11]. In the field of computational medicinal chemistry, two strategies are widely used: molecular docking and QSAR modeling studies. Molecular docking consists of searching for the best orientation and conformation of a ligand (a small organic molecule) in the binding site of a macromolecule (for example, a protein) [12]. Considering that most drugs act by binding to specific biological targets, docking can be exploited to understand the mechanism of action of molecules. Several routines can be used [12], such as virtual screening, which tests, within a virtual library of compounds, which molecules have the best chance of interacting with a chosen target; pharmacophoric mapping, which consists of identifying the functional regions or groups of a series of ligands responsible for recognition and pharmacological effect; and the confirmation and elucidation of the mechanism of action of molecules submitted to bench tests, among others.

On the other hand, the strategy known as quantitative structure-activity relationship (QSAR) modeling aims to obtain a mathematical model relating a dependent variable (in this case, the biological activity) to independent variables (molecular properties), with the purpose of assisting in the design and prediction of new compounds or analogues not yet tested [13]. These methods involve molecular descriptors generated by various computational chemistry methods, biological activities obtained experimentally, chemometric methods to select which molecular properties relate best to the activity, and mathematical techniques of multivariate regression [13]. As we can see, these strategies are complementary to synthetic and pharmacological bench efforts, providing savings of cost and time in research and development. Another great contribution of computational chemistry methods applied to the pharmaceutical sciences is a better understanding of the relationships between small organic molecules and their biological properties. These approaches are also considered in aspects of environmental regulation and public health, in models that predict the potential toxicity of chemicals.

Given these observations, as well as the advances that computational medicinal chemistry has achieved over the years, we consider that the dissemination of these strategies in undergraduate and postgraduate courses related to chemistry and the pharmaceutical sciences is still insufficient. Academic research and the development of new drugs are expensive and time-consuming processes; the more these strategies are disseminated and used, the more space they will gain and the more they can be improved, assisting in the teaching of chemistry as well as in the research and development of new medicines.
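As a concrete (and deliberately simplified) illustration of the QSAR idea described above, the following sketch fits a toy multivariate linear model relating descriptor values to activities by ordinary least squares; all numbers are invented placeholders, not data from any real compound series:

```python
import numpy as np

# Toy QSAR fit: biological activity (dependent variable) regressed on
# molecular descriptors (independent variables).
descriptors = np.array([  # columns: e.g., logP, dipole moment, polarizability
    [1.2, 2.1, 10.5],
    [2.3, 1.8, 12.0],
    [0.8, 3.0,  9.7],
    [3.1, 1.2, 14.2],
    [1.9, 2.6, 11.1],
])
activity = np.array([5.1, 6.0, 4.6, 7.2, 5.7])  # e.g., pIC50 values

# Add an intercept column and solve the least-squares problem.
X = np.hstack([np.ones((descriptors.shape[0], 1)), descriptors])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)

# Coefficient of determination (R^2) as a simple quality-of-fit measure.
predicted = X @ coef
r2 = 1 - np.sum((activity - predicted) ** 2) / np.sum((activity - activity.mean()) ** 2)
print("coefficients:", coef, "R^2:", round(r2, 3))
```

In practice, chemometric variable selection and external validation would precede any interpretation of such a model; the sketch only shows the core regression step.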
1,290.8
2017-11-10T00:00:00.000
[ "Chemistry", "Medicine", "Computer Science" ]
A Phylogenetic and Taxonomic Study on Phellodon (Bankeraceae, Thelephorales) from China

In this study, phylogenetic analyses of Phellodon from China were carried out based on sequences of the internal transcribed spacer (ITS) regions, the large subunit of the nuclear ribosomal RNA gene (nLSU), the small subunit of the nuclear ribosomal RNA gene (nSSU), the largest subunit of RNA polymerase II (RPB1), and the second largest subunit of RNA polymerase II (RPB2), combined with the morphological characters of the specimens collected in China. The fruiting bodies of the specimens were used to observe their characteristics, and three new species of Phellodon are discovered. Phellodon crassipileatus is characterized by its pale brown to dark brown pileal surface, tomentose pileal margin, white spines, and the presence of clamp connections in the generative hyphae of the pileal surface, context, and stipe. Phellodon griseofuscus is characterized by its dark brown to black pileal surface, white to pale brown pileal margin, the presence of both simple septa and clamp connections in the generative hyphae of the spines, and moderately long basidia. Phellodon perchocolatus is characterized by its woody and broad pileus, brown to greyish brown pileal surface when fresh, a pileal margin that is tomentose when young and becomes glabrous with age, and the presence of both simple septa and clamp connections in the generative hyphae of the spines. This is the first time that both single-gene and multi-gene analyses have been used in such a phylogenetic and taxonomic study of Phellodon, providing a basis for phylogenetic studies of the genus.

Introduction

Phellodon P. Karst. was established by Petter Adolf Karsten and is typified by P. niger (Fr.) P. Karst. [1]. The genus, together with Hydnellum P. Karst. and Sarcodon Quél. ex P. Karst., comprises stipitate hydnoid fungi affiliated with the Bankeraceae of the Thelephorales. All three genera are ectomycorrhizal fungi associated with broad-leaved or coniferous trees in forest ecosystems [2-4]. Ectomycorrhizal fungi are symbionts of forest trees and can reflect the conservation state of forest ecosystems [5]. They connect plant roots to the soil, promoting the decomposition of soil organic matter and the absorption of organic and inorganic elements by the host plants [6]. Therefore, they are of great significance to the growth of plants and the material cycling of ecosystems. During the second half of the 20th century, the numbers of most species of stipitate hydnoid fungi declined [7], and many species have been included in national red lists [8]. This is most likely ascribed to habitat loss due to forestry operations, such as massive logging, the disappearance of old Picea forests on calcareous soils, the transformation of deciduous forest into coniferous forest, the direct effects of air pollutants, and forest soil acidification [7,9]. In addition, sulfur and nitrogen deposition and soil acidification also contribute to the decline of stipitate hydnoid fungi [7,9,10]. The number of stipitate hydnoid fungi has dropped significantly in recent decades, which indicates that we need to pay more attention to protecting them [4]. Meanwhile, discovering new species of stipitate hydnoid fungi is also of great significance in helping us to further recognize and protect them. Macro-morphologically, species of Phellodon, Hydnellum, and Sarcodon are relatively similar in having single to concrescent basidiomata and spines.
However, the three genera can be distinguished by the color of their basidiospores. Traditionally, species in Hydnellum and Sarcodon have brown basidiospores, while species in Phellodon have white basidiospores [4]. However, in a recent comprehensive study, Larsson et al. [11] suggested that basidiospore size can distinguish Hydnellum from Sarcodon: species in Hydnellum have basidiospore lengths in the range 4.45-6.95 µm, while the corresponding range for Sarcodon is 7.4-9 µm. Species in Phellodon are characterized by solitary to gregarious or concrescent, stipitate basidiomata, a hydnoid hymenophore, and echinulate basidiospores [12], and often occur in forests of Fagaceae and Pinaceae [2,6]. In 1881, Karsten divided the genus Hydnellum into two parts, the white-toothed and the dark-toothed, and the former was named Phellodon [13]. Banker [13] revised all of the Hydnaceae found on the continent of North America and its adjacent areas, including Hydnellum, Phellodon, and Sarcodon, and described 10 species of Phellodon based on morphological features. For the following decades, species of Phellodon were described only on the basis of morphological characteristics, which resulted in the lack of a molecular basis for taxonomic studies of the genus [11,13-25]. In recent years, combined morphological and phylogenetic studies have been used to delimit the genus. Parfitt et al. [4] carried out a systematic study of Hydnellum and Phellodon based on molecular and morphological analyses, which clarified the taxonomic status of the known Phellodon species from Britain. Ainsworth et al. [26] revealed cryptic taxa in the genera Hydnellum and Phellodon based on combined molecular and morphological analyses. Moreover, Baird et al. [27] re-evaluated the stipitate hydnums from the southern United States, determining 41 distinct taxa of Hydnellum, Phellodon, and Sarcodon and describing 10 species of Phellodon. They provided phylogenetic analyses of Phellodon based on ITS sequences, which supplied a morphological and molecular basis for taxonomic and phylogenetic studies of the genus. Furthermore, Bankera fuligineoalba (J.C. Schmidt) Pouzar, the type species of Bankera Coker and Beers, was recombined into Phellodon in their study, which indicates that the genus Bankera has already been merged into Phellodon. In recent years, the genus has also been studied in China. Mu et al. [12] described Phellodon subconfluens H.S. Yuan and F. Wu from Liaoning Province based on morphological characters and molecular data. Later, Song et al. [28] described four species of Phellodon from southwestern China based on morphological characters and ITS sequence data: P. atroardesiacus B.K. Cui and C.G. Song, P. cinereofuscus B.K. Cui and C.G. Song, P. stramineus B.K. Cui and C.G. Song, and P. yunnanensis B.K. Cui and C.G. Song [28]. During investigations of stipitate hydnoid fungi in China, abundant fruiting bodies were obtained, and three undescribed species of Phellodon were discovered. To confirm the affinity of the undescribed species to Phellodon, phylogenetic analyses were carried out based on ITS and combined ITS + nLSU + nSSU + RPB1 + RPB2 sequences. The new species are described based on combined morphological and phylogenetic analyses.

Morphological Studies

Methods of specimen collection and preservation followed Wang [29].
The specimens used in this study were collected during the annual growing season of macrofungi. Specimen information (host trees, ecological habits, location, altitude, collector, and date) was recorded, and photographs of the fruiting bodies and their growth environments were taken. The specimens were then dried and bagged promptly for preservation, registered, and deposited at the herbarium of the Institute of Microbiology, Beijing Forestry University (BJFC). Macromorphological descriptions were based on field notes and measurements of the herbarium specimens. Microscopic characteristics, measurements, and drawings were made from slide preparations stained with Cotton Blue and Melzer's reagent and observed at magnifications up to 1000× under a light microscope (Nikon Eclipse 80i, Nikon, Tokyo, Japan), following Liu et al. [30]. Basidiospores were measured from sections cut from the spines. The following abbreviations are used: IKI, Melzer's reagent; IKI-, neither amyloid nor dextrinoid; KOH, 5% potassium hydroxide; CB, Cotton Blue; CB-, acyanophilous; L, mean spore length (arithmetic average of all spores); W, mean spore width (arithmetic average of all spores); Q, variation in the L/W ratios between the specimens studied; and n (a/b), number of spores (a) measured from a given number (b) of specimens. A field emission scanning electron microscope (FESEM, Hitachi SU-8010, Hitachi, Ltd., Tokyo, Japan) was used to image spore morphology; sections were studied at up to 2200× magnification, following Sun et al. [31].

Molecular Study and Phylogenetic Analysis

DNA extraction, amplification, and sequencing: the CTAB rapid plant genome extraction kit (Aidlab Biotechnologies Co., Ltd., Beijing, China) was used to extract DNA from dried specimens for the polymerase chain reaction (PCR), according to the manufacturer's instructions with some modifications [32]. The primer pairs ITS5/ITS4, LR0R/LR7, NS1/NS4, AF/Cr, and 5F/7Cr were used to amplify the ITS, nLSU, nSSU, RPB1, and RPB2 sequences, respectively [28]. The PCR program for ITS was as follows: initial denaturation at 95 °C for 3 min, followed by 35 cycles at 94 °C for 40 s, 56 °C for 45 s, and 72 °C for 1 min, and a final extension at 72 °C for 10 min. The PCR program for nLSU and nSSU was as follows: initial denaturation at 94 °C for 1 min, followed by 35 cycles at 94 °C for 30 s, 50 °C for 1 min, and 72 °C for 90 s, and a final extension at 72 °C for 10 min. The PCR program for RPB1 and RPB2 was as follows: initial denaturation at 94 °C for 2 min, 9 cycles at 94 °C for 45 s and 60 °C for 45 s, followed by 36 cycles at 94 °C for 45 s, 53 °C for 1 min, and 72 °C for 90 s, and a final extension at 72 °C for 10 min. The PCR products were purified and sequenced at the Beijing Genomics Institute, China, with the same primers. All newly generated sequences were submitted to GenBank and are listed in Table 1; additional sequences in the dataset for phylogenetic analysis were downloaded from GenBank (http://www.ncbi.nlm.nih.gov/genbank/php, accessed on 15 October 2021). New sequences generated in this study were aligned with the downloaded sequences (Table 1) using ClustalX [33] and manually adjusted in BioEdit [34]. The sequences of Amaurodon aquicoeruleus Agerer and A. viridis (Alb. and Schwein.) J. Schröt. were used as outgroups, following Mu et al. [12].
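Before turning to tree inference, note that the spore-measurement abbreviations defined in the morphological methods above (L, W, and the L/W ratios underlying Q) amount to simple arithmetic; a small sketch with invented measurements illustrates them:

```python
import numpy as np

def spore_statistics(lengths, widths):
    """L (mean spore length), W (mean spore width), and the range of the
    per-spore L/W ratios, whose variation across specimens gives Q.
    Inputs are in micrometres; the values used below are invented."""
    lengths = np.asarray(lengths, dtype=float)
    widths = np.asarray(widths, dtype=float)
    ratios = lengths / widths
    return lengths.mean(), widths.mean(), ratios.min(), ratios.max()

L, W, q_lo, q_hi = spore_statistics(
    lengths=[3.8, 4.0, 4.1, 3.9],
    widths=[3.5, 3.6, 3.8, 3.5])
print(f"L = {L:.2f} um, W = {W:.2f} um, Q = {q_lo:.2f}-{q_hi:.2f}")
```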
Maximum parsimony (MP) analysis was applied to the sequence datasets using PAUP* version 4.0b10 [35], and the congruence of the 5 genes (ITS, nLSU, nSSU, RPB1, and RPB2) was evaluated with the incongruence length difference (ILD) test [36]. Gaps in the alignments were treated as missing data. Maxtrees was set to 5000, branches of zero length were collapsed, and all parsimonious trees were saved. Clade robustness was assessed using bootstrap (BS) analysis with 1000 replicates [37]. Descriptive tree statistics (tree length (TL), consistency index (CI), retention index (RI), rescaled consistency index (RC), and homoplasy index (HI)) were calculated for each maximum parsimonious tree generated. Maximum likelihood (ML) analysis was conducted with RAxML-HPC2 on Abe through the CIPRES Science Gateway (www.phylo.org, accessed on 18 October 2021), with 100 ML searches and all model parameters estimated by the program. Maximum likelihood bootstrap (ML-BS) values were obtained by rapid bootstrapping with 1000 replicates. Phylogenetic trees were viewed using FigTree v1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/, accessed on 18 October 2021).

Table 1. A list of species, specimens, and GenBank accession numbers of sequences used in this study. [The table, with columns Species, Specimen No., Locality, and GenBank accession numbers (ITS, nLSU, nSSU, RPB1, RPB2), is not reproduced here.]

MrModeltest 2.3 [38,39] was used to determine the best-fit evolution model of the combined dataset for Bayesian inference (BI). BI was performed using MrBayes 3.2.6 on Abe through the CIPRES Science Gateway (www.phylo.org, accessed on 19 October 2021) with 2 independent runs, each beginning from random trees with 4 simultaneous independent chains, running for 2 million generations and sampling 1 tree every 100 generations. The burn-in was set to discard 25% of the trees; the remaining trees were used to construct a majority-rule consensus and to calculate the Bayesian posterior probabilities (BPP) of the clades. Branches that received bootstrap support for maximum parsimony (MP) and maximum likelihood (ML) greater than or equal to 50% and Bayesian posterior probabilities (BPP) greater than or equal to 0.95 were regarded as significantly supported.

Phylogenetic Analyses

The ITS dataset included 73 sequences representing 32 taxa, with an aligned length of 873 characters, of which 376 characters were constant, 46 were variable and parsimony-uninformative, and 451 were parsimony-informative. Maximum parsimony analysis yielded 516 equally parsimonious trees (TL = 1516, CI = 0.547, RI = 0.839, RC = 0.459, HI = 0.453), one of which is shown in Figure 1. The best-fit models selected for the three partitions of the ITS sequences were GTR + G for ITS1, JC for 5.8S, and HKY + G for ITS2. BI resulted in a topology similar to the MP analysis, with an average standard deviation of split frequencies of 0.007630. The MP topology is shown with MP (≥75%), ML (≥75%), and BPP (≥0.95) support values at the nodes (Figure 1). In the ITS-based phylogenetic tree (Figure 1), the three new species, P. crassipileatus, P. griseofuscus, and P. perchocolatus, formed distinct, well-supported lineages distant from other species of Phellodon. The combined ITS + nLSU + nSSU + RPB1 + RPB2 dataset included sequences from 73 fungal samples representing 32 taxa.
The combined dataset had an aligned length of 5639 characters including gaps (873 characters for ITS, 1379 for nLSU, 1097 for nSSU, 1203 for RPB1, and 1087 for RPB2), of which 4599 characters were constant, 192 were variable and parsimony-uninformative, and 848 were parsimony-informative. Maximum parsimony analysis yielded 12 equally parsimonious trees (TL = 2195, CI = 0.643, RI = 0.866, RC = 0.557, HI = 0.357), one of which is shown in Figure 2. The best-fit model selected for the combined ITS + nLSU + nSSU + RPB1 + RPB2 dataset was GTR + I + G with equal nucleotide frequencies. BI resulted in a topology similar to the MP analysis, with an average standard deviation of split frequencies of 0.008906. The MP topology is shown with MP (≥75%), ML (≥75%), and BPP (≥0.95) support values at the nodes (Figure 2). The ITS + nLSU + nSSU + RPB1 + RPB2 based phylogenetic tree (Figure 2) produced a topology similar to that generated by the ITS-based tree and confirmed the affinities of the three new species within Phellodon.

Phellodon crassipileatus, sp. nov. MycoBank: 843670.

Diagnosis-This species is characterized by its pale brown to dark brown pileal surface, thick pileus, tomentose pileal margin, white spines, and the presence of clamp connections in the generative hyphae of the pileal surface, context, and stipe.

Fruiting body-Basidiomata annual, centrally or eccentrically stipitate, solitary or gregarious, with a fenugreek odor when dry. Pileus infundibuliform, up to 6.5 cm in diameter, 2 cm thick at the center. Pileal surface pale brown to dark brown when fresh, becoming dark brown upon drying, azonate, tomentose at the margin; pileal margin blunt or irregular, white when fresh, becoming cream upon drying, up to 1.2 cm wide. Spines soft, white when fresh, becoming fragile and cream to clay-buff upon drying, up to 3 mm long. Context vinaceous grey, tough, up to 6 mm thick. Stipe brown to dark brown in the outer layer, fuscous in the inner layer, cylindrical, glabrous, up to 1.5 cm long and 1 cm in diameter.

Hyphal structure-Hyphal system monomitic; generative hyphae mostly with simple septa, occasionally with clamp connections; all hyphae IKI-, CB-; tissues turn olive green in KOH. Generative hyphae of the pileal surface pale brown, thick-walled, rarely branched, mostly with simple septa, occasionally with clamp connections, parallel, 2-6 µm in diameter. Generative hyphae of the context clay-buff to pale brown, thick-walled, occasionally branched, mostly with simple septa, occasionally with clamp connections, 2-5 µm in diameter. Generative hyphae of the spines clay-buff to pale brown, thin-walled, branched, with simple septa, more or less parallel along the spines, 2-4 µm in diameter. Generative hyphae of the stipe clay-buff to brown, thick-walled, rarely branched, mostly bearing simple septa, occasionally with clamp connections, parallel along the stipe, 2-6 µm in diameter.

Ecological habits-P. crassipileatus was found on the ground of a forest dominated by Quercus sp., under a humid monsoon climate in the northern subtropical region.

Phellodon griseofuscus, sp. nov. MycoBank: 843671. Figures 3b, 4c,d and 6.

Diagnosis-This species is characterized by its dark brown to black pileal surface, white to pale brown pileal margin, short spines, generative hyphae with both simple septa and clamp connections in the spines, and moderately long basidia.

Fruiting body-Basidiomata annual, centrally or eccentrically stipitate, solitary or gregarious, with a strong odor when dry. Pileus infundibuliform, up to 4 cm in diameter, 5 mm thick at the center. Pileal surface pale brown to dark brown or black when fresh, becoming dark grey to mouse-grey upon drying, azonate, fibrillose; margin blunt or irregular, white to pale brown when fresh, vinaceous grey with age, becoming fuscous upon drying, up to 3 mm wide. Spines soft, white when young, brown with age when fresh, becoming fragile and pale mouse-grey upon drying, up to 1 mm long. Context dark grey, tough, up to 2 mm thick. Stipe fuscous in the outer layer, fuscous to black in the inner layer, cylindrical, glabrous, up to 1.5 cm long and 0.6 cm in diameter.

Hyphal structure-Hyphal system monomitic; generative hyphae mostly with simple septa, occasionally with clamp connections; all hyphae IKI-, CB-; tissues turn olive green in KOH. Generative hyphae of the pileal surface greyish brown, thick-walled, rarely branched, with simple septa, parallel, 3-6 µm in diameter. Generative hyphae of the context pale brown, thick-walled, occasionally branched, with simple septa, parallel, 3-5 µm in diameter. Generative hyphae of the spines clay-buff, thin-walled, branched, mostly with simple septa, occasionally with clamp connections, more or less parallel along the spines, 2-4 µm in diameter. Generative hyphae of the stipe greyish brown, slightly thick-walled, rarely branched, bearing simple septa, subparallel along the stipe, 2-6 µm in diameter.

Ecological habits-P. griseofuscus was found on the ground of a forest dominated by Pinus sp., under the humid climate of the plateau. This species grows among well-watered bryophytes, which are often interspersed with pine needles.

Phellodon perchocolatus, sp. nov.

Fruiting body-Basidiomata annual, centrally or eccentrically stipitate, solitary or gregarious, with a fenugreek odor when dry. Pileus infundibuliform, woody, up to 9 cm in diameter, 6.5 mm thick at the center. Pileal surface brown to greyish brown when fresh, becoming fuscous to black upon drying, zonate, tomentose when young, glabrous with age; margin blunt or irregular, white when fresh, becoming buff upon drying, up to 5 mm wide. Spines soft, white when fresh, becoming fragile and pinkish buff to olivaceous buff upon drying, up to 3 mm long. Context vinaceous grey to greyish brown, tough, up to 3 mm thick. Stipe dark brown to fuscous in the outer layer, fuscous in the inner layer, cylindrical, glabrous, up to 4.8 cm long and 1.9 cm in diameter.
Discussion

In this study, phylogenetic analyses of Phellodon were conducted based on ITS sequences and on combined ITS + nLSU + nSSU + RPB1 + RPB2 sequences to confirm the affinities of the new species and to reveal the relationships among Phellodon species. Phellodon crassipileatus formed a single lineage distinct from other species of Phellodon in our phylogenetic analyses (Figures 1 and 2). Morphologically, P. crassipileatus is similar to P. griseofuscus in having an infundibuliform and dark brown pileus. However, P. griseofuscus can be distinguished from P. crassipileatus by its fibrillose pileus, brown spines after maturity, the presence of clamp connections in the generative hyphae of the spines, and longer basidia (22-55 × 5-6 µm). Phellodon perchocolatus and P. cinereofuscus B.K. Cui and C.G. Song clustered together and then grouped with P. mississippiensis R.E. Baird, L.E. Wallace and G. Baker, forming a highly supported lineage (98% MP, 100% ML, 1.00 BPP) in our phylogenetic trees (Figures 1 and 2). Morphologically, P. cinereofuscus is similar to P. perchocolatus in having infundibuliform basidiomata and a white pileal margin. However, P. cinereofuscus differs from P. perchocolatus by its reddish brown to cinnamon brown pileal surface, glabrous basidiomata, and lack of clamp connections [28]. Phellodon mississippiensis is similar to P. perchocolatus in having solitary or gregarious basidiomata and subglobose to globose basidiospores. However, P. mississippiensis can be distinguished from P. perchocolatus by its white, light orange to light brown pileal surface and shorter basidia measuring 16-22 × 5-6 µm [27]. Moreover, tissues of P. mississippiensis turn light to dark brown in KOH, while those of P. perchocolatus turn olive green.

Conclusions

This study not only fills the gap in multi-gene data for Phellodon but also enriches the known species diversity of the genus, which will promote its taxonomy and phylogeny. This is the first step towards inferring the phylogeny of Phellodon on the basis of multiple genes rather than ITS sequences alone. Therefore, this study provides a basis for further research on Phellodon. However, only a few species of Phellodon with multiple genes available could be used for the analyses, which limited the systematic study of the genus. For the time being, the best gene marker for the identification of Phellodon is ITS, while more samples with more gene markers, including TEF, RPB1, and RPB2, are needed to further investigate the species diversity and phylogenetic relationships of Phellodon species.
5,055.4
2022-04-22T00:00:00.000
[ "Biology" ]
State-Aware Stochastic Optimal Power Flow The increase in distributed generation (DG) and variable load mandates that system operators perform decision-making under uncertainty. This paper introduces a novel state-aware stochastic optimal power flow (SA-SOPF) problem formulation. The proposed SA-SOPF has the objective of finding a day-ahead base-solution that minimizes the generation cost and the expectation of deviations in generation and node voltage set-points during real-time operation. We formulate SA-SOPF for a given affine policy and employ Gaussian process (GP) learning to obtain a distributionally robust (DR) affine policy for generation and voltage set-point changes in real time. In simulations, the GP-based affine policy shows distributional robustness over three different uncertainty distributions for the IEEE 14-bus system. The results also show that the proposed SA-SOPF formulation can reduce the expectation of voltage and generation deviation by more than 60% in real-time operation with an additional day-ahead scheduling cost of only 4.68% for the 14-bus system. For a 30-bus system, a reduction in the expectation of generation and voltage deviation greater than 90% is achieved for 1.195% extra generation cost. These results are strong indicators that a day-ahead solution leading to lower real-time deviation can be achieved with minimal cost increase. Introduction The power system operation is going through a major change due to increased uncertainties via renewable source-based distributed generation (DG) and electric vehicle (EV) load. These uncertainties pose challenges to a compact formulation of alternating current optimal power flow (ACOPF) under uncertainty that obtains an optimal-cost solution while satisfying the operational and physical constraints [1]. Multiple formulations of uncertain ACOPF have been proposed in the literature, with different constraint-satisfaction notions such as robust [2], chance-constrained [3], and risk-aware [4]. All these methods fall under the class of stochastic optimal power flow (SOPF) [5]. The literature reveals two main directions for SOPF solution: (i) linearization of AC power flow (ACPF) (DC, Taylor expansion, or partial linearization [6]), and (ii) Monte-Carlo simulation (MCS) methods [7]. The linear methods trade accuracy for better computational performance. MCS provides higher accuracy with poor tractability, as errors directly depend upon the number of ACPF samples used. Recently, to include the non-linear ACPF relations in SOPF, polynomial chaos expansion (PCE) has been employed [8]. PCE expresses the uncertain state and decision variables as functions of inputs following a known probability distribution function (PDF) [8]. The SOPF solution methods in the literature, other than MCS, are built with assumptions about the type of PDF followed by the uncertain DG generation or load. For example, in PCE, perfect information about the random input variable's PDF is needed to construct the orthogonal basis [8]. The collection or estimation of PDF information is more challenging with solar generation and EV loads, as they do not always follow well-known distributions. Mostly, the SOPF works use an affine policy to shift the generation set-points under uncertainty of load demand or injection [1,9].
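As a concrete illustration of such an affine policy, consider the minimal sketch below. All numbers in it are invented for illustration only; in this paper the policy coefficients M_p and C_p are learned from data, as described later.

```python
import numpy as np

# Minimal illustration of an affine recourse policy p(xi) = M_p @ xi + C_p.
# All numbers here are made up; the paper learns M_p, C_p from data via
# Gaussian process regression (see the policy-learning section).
rng = np.random.default_rng(0)

n_load, n_gen = 3, 2
M_p = rng.uniform(0.0, 0.5, size=(n_gen, n_load))  # sensitivities to net load
C_p = np.array([1.0, 0.8])                         # intercepts (p.u.)

mu = np.array([0.9, 1.1, 0.6])       # forecast (mean) net load
p_base = M_p @ mu + C_p              # day-ahead set-points at the forecast

xi = mu + rng.normal(0.0, 0.05, n_load)  # one real-time load realization
p_rt = M_p @ xi + C_p                    # redispatched set-points
delta_p = p_rt - p_base                  # AGC-style adjustment applied in RT
print("base:", p_base, "RT adjustment:", delta_p)
```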
There are two rationales behind focusing on an affine policy: (i) it is easy to implement via automatic generation control (AGC) [1], and (ii) it makes the problem tractable, as calculating the expectation over a linear function is computationally easy. The conventional SOPF formulation is concerned only with minimizing the cost of generation in day-ahead and real-time operation. The optimal affine policy (found by any method) attempts to achieve the minimal generation cost while satisfying the physical and operational constraints for every input point. Following such a cost-optimal affine policy may lead to considerable variation in states, particularly node voltages. This variation will result in frequent tap-changing requirements and large variations in power flow across lines [10]. A higher number of control operations means higher overall control costs in terms of operation and maintenance of transformers and tap-changing devices. Further, the optimal affine policy obtained only to minimize generation cost can also result in higher variations in power flow across transmission lines. This large change in power flow during operation means higher variations in locational marginal prices (LMPs), making economic market operation difficult. Recently, in [11], it has been highlighted that cost-minimizing participation factors can lead to large variations in power flows. In [11], the objective is modified to incorporate the variance of power flow using the DCOPF formulation, without voltage variables. Therefore, this work focuses on addressing two challenges in the SOPF problem formulation and solution domain. First, the proposed SA-SOPF brings state awareness into the formulation to obtain a more holistic solution. Second, this work provides a way to dispense with the assumption that the net-load uncertainty follows a fixed, known PDF, and provides a distributionally robust policy. In this paper, we formulate a novel state-aware stochastic optimal power flow (SA-SOPF). The objective is to minimize the generation cost together with the expectation of generation and voltage deviation in real-time (RT) operation. The classical two-stage SOPF formulation is adopted to convey the idea clearly. We present the proposed SA-SOPF in the form of a standard single-period formulation for day-ahead and intra-day scheduling [1,12-14]. The first stage is concerned with finding the optimal day-ahead solution, while the second stage deals with the updates or changes needed in set-points when the value of the uncertain input is realized (becomes known). Our purpose is to investigate the possibility of obtaining a day-ahead (DA) base-solution with the joint objective of minimizing the expectation of deviation in the generation and voltage set-points and minimizing the DA cost of generation. The minimization of the expectation of voltage deviation will help reduce the total number of control operations required and lower the cost of control. We formulate the RT objective in terms of the expectation of deviation in generation and voltage for a given uncertain set of net demand or load. The proposed work also presents a distributionally robust (DR) method to obtain an affine policy for RT operation and to solve the second stage of the SA-SOPF problem. We employ Gaussian process (GP) learning to obtain the affine policy of RT operation and express the RT deviation in generation and voltage as a function of the DA base-solution. Further, to incorporate the voltage, we work with a full ACPF-based formulation of SA-SOPF.
To deal with the non-convexity of the problem, we present a convex relaxation of the ACPF-based SA-SOPF with the DA base-solution as a decision variable. The relaxation includes objective penalties that incorporate the RT operation objective and improve the convex ACOPF feasibility. The main contributions of this work can be summarized as: 1. Proposing and formulating the novel state-aware stochastic optimal power flow (SA-SOPF) problem with a given affine feedback policy. The formulation aims to minimize the joint objective of the expectation of state deviation and the generation cost; 2. Learning a distributionally robust (DR) affine policy using the Gaussian process. This DR policy can be employed for different uncertainty distributions without retraining. The analytical form of the policy is then expressed as a function of the DA base-solution to be incorporated in SA-SOPF; 3. Developing a convex relaxation of SA-SOPF with a modified objective function that incorporates the real-time objective. This relaxation handles the non-convexity of the proposed SA-SOPF, which is formulated based on the complete AC power flow to incorporate the voltage variables. There are different day-ahead optimization problem formulations proposed over the years. One category is multi-period SOPF [15,16], which solves a 24-h scheduling problem. Another class of problems, called single-period SOPF, includes works such as [1,12-14,17]. The single-period SOPF can be considered a snapshot of the multi-period SOPF obtained by fixing one time instance. Further, single-period problems can also be used for hour-ahead scheduling (intra-day scheduling) for upcoming time instances. In this work, we propose the SA-SOPF problem as a single-period SOPF problem. We follow the structure and timing of single-period SOPF similar to those described for robust OPF in [14]. The remainder of the paper starts with the formulation of SA-SOPF and lays out the differences between the proposed SA-SOPF formulation and traditional SOPF. We also highlight the major challenges in solving the proposed SA-SOPF problem, which are addressed in the subsequent sections. Then, we present the GP-based distributionally robust affine policy learning mechanism. That section also contains the description of the RT affine policy as a function of the DA base-solution. Section 4 presents the convex formulation of the SA-SOPF problem. First, we present the theorem that provides the analytical solution of the RT-stage objective of SA-SOPF, and then we develop a reformulation of SA-SOPF. The objective penalization-based semi-definite programming formulation then follows. In the results and discussion, we provide case studies with four different cases on the IEEE 14-bus system, designed so that the load follows different distributions, thus establishing the proposed method's applicability. The conclusion section then summarizes the work and identifies future research directions. Next, we introduce some notation. The i-th node complex voltage, in rectangular form, is v_i = Re(v_i) + j Im(v_i), where Re(·) is the real and Im(·) the imaginary part. We consider a network having n nodes and ng generators. At the i-th node, the complex power demand is given as s^d_i = p^d_i + j q^d_i, while generation is indicated by s^g_i. To indicate the uncertain load vector during real-time operation, we use ξ. Capital letters, such as W and M, represent matrices of appropriate dimensions, while column vectors are indicated using bold letters, such as the voltage v. The norm operator ‖·‖ indicates the 2-norm of the quantity.
We use "base-solution" to indicate the DA schedule set-points, while RT stands for real-time and DA for day-ahead. The words policy and recourse function are used interchangeably. In this work, we consider an uncertain load, where load means the net demand with DG treated as negative load. State-Aware Stochastic Optimal Power Flow The conventional two-stage SOPF formulation attempts to minimize the expected RT operational cost (of real power generation) along with the DA generation schedule cost [4]. This means that during RT operation there is no control over the state deviations. In other words, the set-point adjustments made to preserve cost optimality can lead to larger variations in states, such as voltage. In this section, our objective is to define the SA-SOPF problem and highlight how it addresses the need to be aware of state deviation in RT operation. The non-linear ACPF-based formulation of SOPF and SA-SOPF under discussion is motivated by two-stage stochastic non-linear programs with recourse [18]. The core idea of a two-stage SOPF is to find an optimal recourse function and base-solution with the objective of cost minimization, respecting constraints for both day-ahead (first-stage) and real-time (second-stage) operation. The recourse function, or policy, is a rule by which we update the DA set-points of generation in real time upon realizing the uncertainty in net demand (load minus DG injection). The conventional two-stage SOPF problem, with uncertain load vector ξ, is [4]

	min  cost_D(p^g_o) + E_ξ[ cost_R(Δp^g(ξ)) ]
	s.t.  DA constraints on the base-solution, RT constraints for every realization of ξ.    (1)

Here, cost_D is the day-ahead cost and cost_R is the real-time operation cost; ξ represents the uncertain net-load vector, while E is the expectation operator. The RT and DA constraints are very similar to those of standard ACOPF [19]. The difference is that the RT constraints need to be satisfied for each realization of the uncertainty, while the DA constraints apply to the base-solution. Constraints Let N be the set of nodes in the network, G ⊆ N the set of generator nodes, and L the set of branches. For the i-th node, the generated power is denoted as s^g_i = p^g_i + j q^g_i and the load demand is given by s^d_i = p^d_i + j q^d_i, respectively. Further, Y is the network admittance matrix with elements y_ij, where (i, j) ∈ L. The notation v̄ represents the complex conjugate of the variable v. Then, the set of constraints applied at each node of the network for a complex load demand s^d_i is

	s^g_i − s^d_i = Σ_{j:(i,j)∈L} v_i (v̄_i − v̄_j) ȳ_ij,
	p^g_i,min ≤ p^g_i ≤ p^g_i,max,   q^g_i,min ≤ q^g_i ≤ q^g_i,max,
	v_i,min ≤ |v_i| ≤ v_i,max,   |s_ij| ≤ s_ij,max.    (2)

The first constraint in (2) is the power balance at node i, where the right-hand side is the sum of the power flows through all the branches connected to node i, i.e., for all j such that (i, j) ∈ L. The next four inequality constraints bound the control and state variables within the operational and physical limits of the system and equipment. These constraints are imposed at each node of the system. Now, there are two different constraint sets in (1). For the DA constraint set, the demand s^d_i in (2) is taken as the day-ahead forecasted demand s^d_o,i. The s^d_o,i can also be interpreted as the expected value of demand for any future scheduling instance, such as day-ahead or hour-ahead. For real-time operation, we consider the demand or load to be uncertain. Thus, the demand s^d_i is replaced by an uncertain load variable ξ_i. Further, the real-time constraints must be imposed on each realization of the uncertain load. The SOPF formulation we follow is similar to the one presented in [1].
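As a sanity check on the nodal balance in (2), the short sketch below evaluates the branch-flow expression v_i (v̄_i − v̄_j) ȳ_ij on a toy 3-bus network; the voltages and admittances are purely illustrative, not a real test case.

```python
import numpy as np

# Nodal power balance of (2) on a toy 3-bus network: s_i^g - s_i^d must
# equal the sum of branch flows v_i * conj(v_i - v_j) * conj(y_ij) over
# all branches incident to node i. Values are illustrative only.
v = np.array([1.02, 1.00 - 0.02j, 0.99 + 0.01j])   # complex node voltages
y = {(0, 1): 1 - 5j, (1, 2): 1 - 4j, (0, 2): 0.5 - 3j}

def injection(i):
    s = 0j
    for (a, b), y_ab in y.items():
        if a == i:
            s += v[i] * np.conj(v[i] - v[b]) * np.conj(y_ab)
        elif b == i:
            s += v[i] * np.conj(v[i] - v[a]) * np.conj(y_ab)
    return s                      # total complex power leaving node i

for i in range(3):
    print(i, injection(i))        # must match s_i^g - s_i^d at each node
```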
As explained before, we need to change the DA set-points (generation dispatch, controlled voltages) while satisfying constraints and maintaining optimality upon the realization of uncertainty. The function that establishes the relationship between the uncertainty and the set-point updates is called the recourse function or policy. Conventionally, this policy is obtained to minimize the real-time generation cost only [1,4]. The proposed problem differs from the traditional SOPF formulations in [1,4] in terms of the objective. The proposed SA-SOPF is targeted at finding a base-solution that minimizes the combined objective of DA cost and the expectation of RT state deviation while following a given RT affine policy. This implies that the SA-SOPF solution will sacrifice optimality in generation cost, if needed, to achieve a lower expectation of state (voltage, in the proposed work) deviation. Formally, the proposed SA-SOPF is Problem 1. Given an uncertain net-load ξ with mean forecast µ, find the optimal day-ahead base-solution

	{p^g_o, v_o} = argmin  g(p^g_o) + E_ξ[ ‖Δp^g‖² ] + E_ξ[ ‖Δv‖² ]
	s.t.  {p^g_o, v_o} ∈ X, with Δp^g and Δv following the given affine policy.    (3)

In (3), we opt for an affine policy for the RT adjustments in generation and voltage, as in [1,9]. The policy is a function that relates the uncertain load to the change and base-solution of generation and state. The affine relationship is expressed as v_o + Δv = M_v ξ + C_v for voltage, and p^g_o + Δp^g = M_p ξ + C_p for generation. Here, M_p and M_v represent the sensitivities with respect to the uncertain load, while C_p and C_v are the intercepts of the affine policies of generation and voltage, respectively. Further, {p^g_o, v_o} ∈ X indicates that the base-solution lies inside the ACOPF feasible space, and g(·) is the generation cost function. The set X represents the feasible space of the day-ahead optimization problem, constructed by imposing the power balance constraint at the base load together with the operational limits, as in the DA constraint set in (2). Basically, X represents the feasible space of the ACOPF problem used to find the DA base-solution. With a known policy, we just need to plug in the numerical value of ξ, upon realization (once its numerical value is known), to obtain the changes needed in the voltage Δv and generation Δp^g set-points. The proposed SA-SOPF problem (3) presents two main challenges. First, even with a fixed, known optimal policy, the expectation operator makes the problem non-deterministic in nature. Second, problem (3) is a generalization of the conventional ACOPF problem: it reduces to ACOPF when the uncertainty-related decision and input variables are set to zero. Thus, as the ACOPF problem is NP-hard in nature [19], the proposed general stochastic problem (3) also belongs to the NP-hard category. In the following, before dealing with these issues, we obtain the distributionally robust affine policy. Later, we cast the RT objective without the expectation operator using the analytical solution that minimizes the RT objective. Then we develop a convex relaxation of the proposed SA-SOPF problem (3) with a deterministic form of the RT objective. Affine Policy for RT Operation The GP is a non-parametric modeling method allowing the modeling of prior information and performing regression over a subspace of inputs [20-22]. The non-parametric behavior means that we can employ the model with different distributions, once it is trained for an uncertain load subspace, without retraining. The concept is similar to the idea of distributional robustness (DR) [23].
The main difference between the non-parametric approach and DR is that the former applies over an entire subspace irrespective of the distribution type, while traditionally DR is defined on an ambiguity set developed using different distributions [13]. Therefore, the proposed method can work with an unknown PDF within the given load subspace. The implementation error does not depend on the type of uncertainty distribution. In GP, the covariance function k(·,·) is employed to obtain the input-output relationship [20]. The selection of the covariance function determines the accuracy and complexity of the model. In the following, we employ the linear covariance function to obtain an affine policy for the RT stage of the SOPF (1). We term the affine policy distributionally robust, as this is widespread terminology in the community. The same can also be interpreted as a non-parametric affine policy. For details of ACOPF learning via GP, see [24]. Distributionally Robust Affine Policy Learning In this section, the target is to learn an affine function that relates the generation and voltage set-points to the uncertain load ξ. In power systems, it is difficult to accurately estimate the PDF type and parameters for solar PV-based DGs and loads such as electric vehicles. This motivates researchers to develop methods that are independent of the PDF type. In this work, we employ non-parametric GP regression to learn the policy. This means that, after learning, the policy can be used with any type of distribution function within the given load subspace. This property is similar to distributional robustness, and it provides subspace-wise robustness. Therefore, we term the policy subspace-wise distributionally robust, or simply distributionally robust for brevity. Further, we use the complete ACOPF as the RT-stage problem to minimize the cost of generation while satisfying the operational and physical constraints. For a given realization ξ of the uncertain load, the RT-stage problem can be cast as a deterministic ACOPF:

	min g(p^g)   s.t.   f_1(p^g, v, ξ) = 0,   f_2(p^g, v) ≤ 0.    (4)

Here, g(·) is the cost function, f_1(·) is the power balance equality, and f_2(·) represents the lower and upper limits indicated in (2). As discussed before, the constraint set (2) has to be applied for each demand or load realization ξ in RT operation (4). As we consider the load uncertainty to be bounded within a range, we assume the generators have sufficient ramping rates to meet the demand. However, the inequality constraint set f_2(·) can be modified to handle ramp constraints as well. Now, we employ the linear covariance function to learn an optimal set-point p^g_i as an affine function of the uncertain load ξ, i.e., p^g_i(ξ). The learning mechanism is similar to the one used recently in [24]. The core concept in designing the affine policy is to learn the optimal generation set-points using GP regression with a linear covariance function and then obtain the standard affine form with sensitivity and intercept coefficients. At the learning stage, we first construct a learning data set {Ξ, p^g_i} for all i ∈ G by solving (4) for N input instances. Here, Ξ is a matrix with N rows and as many columns as there are uncertain loads in the system. Each row of Ξ represents an uncertain input vector, while p^g_i is a column vector containing the N optimal generation set-points of the i-th generator. The uncertain load samples in Ξ are sampled from a uniform distribution within a given load subspace of ±δ variation in the real-power load; a sketch of this step is given below.
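The following Python sketch illustrates this data-collection and fitting step under stated assumptions: solve_acopf is a hypothetical stand-in for the RT-stage problem (4), and scikit-learn's ConstantKernel + DotProduct kernel plays the role of the paper's linear covariance, up to reparameterization. The final lines anticipate the standard affine form derived next by probing the (globally affine) posterior mean.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, DotProduct

# Hypothetical stand-in for the RT-stage ACOPF (4): returns the optimal
# set-point of one generator for a load vector xi (fake affine response).
def solve_acopf(xi):
    return 1.0 + 0.3 * xi.sum()

rng = np.random.default_rng(0)
n_loads, N = 11, 450                   # 11 uncertain loads; 450 samples as in the paper
p_d0 = rng.uniform(0.2, 1.0, n_loads)  # base (forecast) real-power load
delta = 0.30                           # +/-30% load subspace (Case I)

# Each row of Xi is one load vector drawn uniformly from the subspace.
Xi = p_d0 * (1.0 + rng.uniform(-delta, delta, (N, n_loads)))
p_g = np.array([solve_acopf(xi) for xi in Xi])

# Linear (affine) covariance; hyperparameters are tuned by maximizing the
# log marginal likelihood inside GaussianProcessRegressor.fit.
kernel = ConstantKernel(1.0) + DotProduct(sigma_0=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
gp.fit(Xi, p_g)

# With a linear kernel the posterior mean is affine in xi, so the intercept
# c_i and sensitivity row m_i of the policy can be read off by probing:
c_i = gp.predict(np.zeros((1, n_loads)))[0]
m_i = np.array([gp.predict(np.eye(n_loads)[[j]])[0] - c_i
                for j in range(n_loads)])
print(c_i, m_i[:3])                    # affine policy: m_i @ xi + c_i
```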
Thus, if the mean prediction of the real-power load is p^d_o, then each random load vector lies in the subspace [(1 − δ) p^d_o, (1 + δ) p^d_o] (elementwise). The network topology is assumed unchanged, implying a constant admittance matrix Y. The optimal hyperparameters of the linear covariance function k_LN are obtained by maximizing the log marginal likelihood [20]. The linear covariance vector between the training inputs Ξ and a test input ξ is

	k_LN(Ξ, ξ) = τ_c² (1 + Ξ ξ / l²),    (5)

where τ_c and l are hyperparameters, the design matrix is Ξ, and ξ is a variable vector. Upon training the GP model on the dataset {Ξ, p^g_i}, the i-th generator's optimal generation, as a function of ξ, is [20]

	p̂^g_i(ξ) = α_i^T k_LN(Ξ, ξ) = τ_c² α_i^T (1 + Ξ ξ / l²),    (6)

where α_i is the vector of GP regression weights obtained from the training data. By taking the transpose of p^g_i, expression (6) becomes

	p̂^g_i(ξ) = τ_c² (α_i^T 1) + (τ_c² α_i^T Ξ / l²) ξ.    (7)

To obtain the standard form, we define an optimal generation intercept value c_i = τ_c² (α_i^T 1) along with the sensitivity vector m_i = (τ_c² α_i^T Ξ)/l², with m_i being a row vector of the same length as the uncertain load ξ. Therefore, an approximate linear policy for the optimal generation is

	p^g_i(ξ) ≈ m_i ξ + c_i.    (8)

The relation (8) is the equation of a line if the uncertainty vector ξ has only one dimension. Further, by generalizing (8), we define M_p = [m_1; ...; m_ng], with ng being the number of generators. The linear policy for all the optimal generation set-points, with matrix M_p ∈ R^{ng×n} and vector C_p = [c_1; ...; c_ng], is

	p^g = M_p ξ + C_p.    (9)

Here, we omit (ξ) on the left-hand side for representational simplicity. Similarly, following (9), the voltage policy is v = M_v ξ + C_v. Interestingly, the function in (9) can be used directly, in its current form, to obtain the optimal set-points under load uncertainty. However, it is important to note that we do not know the base-solution for the generation p^g_o and voltage set-point v_o. This is because (6) (and the voltage policy) is obtained using the complete ACOPF model, and the policy provides the set-points, such as p^g, not the deviations of set-points, such as Δp^g. Moreover, as explained in the problem formulation (3), we need the affine policy as a function of the base-points p^g_o and v_o, as this will allow us to obtain the base-solution that achieves the objective described in (3). Therefore, we express (9) (and the voltage policy) as functions of the unknown base-solution p^g_o and v_o as

	p^g_o + Δp^g = M_p ξ + C_p,    v_o + Δv = M_v ξ + C_v.    (10)

The representation in (10) allows us to solve the SA-SOPF problem (3), as we now have the RT-stage objective as a function of the DA base-solution. In the next section, we solve SA-SOPF for the optimal DA base-solution {p^g_o, v_o}, which minimizes the cost of the day-ahead generation schedule and the expectation of the deviation in generation and voltage following the affine RT policy (10). Notably, this affine policy is not optimal; it is an approximation of the true policy. The remark below explains this in detail. Remark 1. The affine policy used to update the set-points is an approximation and will have some optimality gap compared to the accurate, complete-information optimal solution. The main reasons for using the affine policy are (1) easy implementation and interpretation as participation factors, and (2) reduced computational complexity due to the straightforward expectation calculation over linear functions. It is easy to see that if a perfect feedback policy exists, it is likely to be a non-linear function, as the power flow manifold in ACOPF is non-linear. However, the idea behind using the affine policy is to get close to optimal while keeping easy implementation and low formulation complexity. The proposed GP-based distributionally robust policy learning framework can also be used to obtain more accurate and complex non-linear policies.
Nevertheless, using such a policy to formulate and solve the state-aware SOPF needs detailed work, which we will explore in the future. Convex Relaxation of SA-SOPF In this section, we present a convex relaxation of the SA-SOPF problem to deal with the non-convexity arising from the AC power flow equations in SA-SOPF. First, for the RT stage, we present the analytically obtained optimal solution, which resolves the issue of the non-deterministic RT-stage objective caused by the expectation operator. Here, it is important to understand that, in real time, the shifting of set-points has a much smaller numerical value compared to the day-ahead (DA) solution, i.e., Δp^g_i ≪ p^g_o,i. This also implies that any optimality gap in the DA solution will have more impact than the optimality gap induced by the RT affine policy. Therefore, it is very important to find a DA solution close to the true optimal solution of the original non-convex problem. RT-Stage Reformulation In this subsection, we derive a convex deterministic equivalent of the expectation-based RT objective. This is essential because, even with a known affine policy, problem (3) is intractable due to the expectation operator. The core idea is to find an optimal solution vector that minimizes the expectation of generation and voltage deviation under the RT policy (10), expressed as a function of the base-solution. We refer to such a solution as the RT optimal solution. The RT optimal solution then replaces the RT objective term in (3) via the distance between the DA solution and the RT optimal solution. We present this section in terms of obtaining the RT optimal voltage solution; the generation solution can be obtained similarly. Let the RT optimal solution for voltage be v^r_o, given as v^r_o = argmin_{v_o} E_ξ ‖Δv‖². Now, with the RT voltage objective term minimized at v^r_o, the expectation-operator term in (3) can be replaced with the distance between v^r_o and v_o. This means that the formulation will try to bring the day-ahead solution v_o close to the set-point v^r_o, which minimizes the voltage-deviation expectation term in the RT objective. Importantly, in the proposed work we do not solve a non-deterministic optimization problem involving the expectation operator to obtain v^r_o. We analytically derive the RT optimal solution v^r_o for the voltage term in (3) below. Theorem 1. If the affine policy is Δv = M_v ξ + C_v − v_o, with the uncertain load vector ξ having mean forecast µ, then the RT optimal solution v^r_o is given as

	v^r_o = M_v µ + C_v.    (11)

Proof of Theorem 1. Using the given RT affine policy, we can write the expectation as E_ξ ‖Δv‖² = E_ξ ‖M_v ξ + C_v − v_o‖². Further, by taking the derivative of this expression with respect to v_o and setting it to zero, we can easily show that the optimum occurs at v^r_o = M_v µ + C_v. Similarly, we obtain the RT optimal solution for the generation vector as p^{g,r}_o = M_p µ + C_p. In (11), the right-hand side is constant for a given affine policy coefficient matrix M_v and intercept C_v. Further, the mean forecast µ of the uncertain variable ξ is known to the system operator, who can obtain the day-ahead solution accordingly. As discussed, due to the non-parametric property of the GP, the affine policy we have shown is distributionally robust within a given subspace. This means that as long as we know the mean vector µ in a given subspace of the uncertain load, we can use the proposed method.
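Theorem 1 is also easy to verify numerically: for any sampling distribution with mean µ, the empirical value of E‖Δv‖² is smallest at M_v µ + C_v. A quick Monte-Carlo check with random stand-in matrices (not system data):

```python
import numpy as np

# Monte-Carlo sanity check of Theorem 1: among candidate base voltages v_o,
# E||M_v xi + C_v - v_o||^2 is minimized at v_o = M_v mu + C_v.
rng = np.random.default_rng(1)
n_bus, n_load = 5, 3
M_v = rng.normal(0, 0.1, (n_bus, n_load))
C_v = np.ones(n_bus)
mu = np.array([0.9, 1.1, 0.6])

xi = mu + rng.normal(0, 0.05, (20000, n_load))   # samples; any PDF with mean mu works
v_r = M_v @ mu + C_v                             # claimed optimum, eq. (11)

def exp_dev(v_o):
    dv = xi @ M_v.T + C_v - v_o                  # Delta v per sample
    return np.mean(np.sum(dv**2, axis=1))

print(exp_dev(v_r))                                   # minimal value
print(exp_dev(v_r + 0.01 * rng.normal(size=n_bus)))   # any perturbation is worse
```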
Now, using the result of Theorem 1 as {p^{g,r}_o, v^r_o}, we obtain an equivalent, tractable and deterministic formulation of problem (3) by replacing the expectation-operator-based RT objective term with the convex Euclidean norm of the difference (or distance), as follows:

	x_o = argmin_{ {p^g_o, v_o} ∈ X }  g(p^g_o) + ‖p^g_o − p^{g,r}_o‖² + ‖v_o − v^r_o‖².    (12)

Here, x_o = {p^g_o, v_o} denotes the optimal day-ahead base-solution of (3). The x_o is the base-solution at which the joint objective, minimum generation cost together with minimum distance from the RT optimal solution obtained via (11), is minimized. Thus, the expectation operator's issue is solved, and we now have a convex objective term instead, which is deterministic in nature and convenient to optimize using convex optimization methods. In OPF works under uncertainty, the cost of the change in generation set-point is also considered [13]. In problem (3), we do not use a cost-factor multiplier for the RT generation, as we formulate a state-aware SOPF in which voltage-deviation minimization is also an objective. Further, the minimization of the expectation of generation deviation will also indirectly minimize the RT generation costs; it represents the case in which all generators have the same RT cost coefficient. However, our formulation is suitable for incorporating different versions of the objective function, as suggested by the remark below. The comparative cost-benefit analysis of different objective formulations will be explored in future work. Remark 2. The RT cost appears in various formulations of the SOPF [13]. The RT objective in problem (3) can also be modified to include a cost coefficient vector. Let r be a column vector of known RT generation cost coefficients; then the modified RT objective is E_ξ[ r^T Δp^g ]. As the constant multiplier r does not affect the calculation of the expectation, the optimal base-point solution p^{g,r}_o for the minimization of the RT-stage objective is the same as obtained via (11). It is important to note here that it is difficult to obtain combined and comparable cost coefficients for generation and voltage deviation. Thus, it is useful to have separate cost coefficients for voltage and generation deviation, which can be interpreted as the weights of a multi-objective function. The adequate selection of these weights depends on the system operator's requirements and is governed by the trade-off between generation cost and operation-maintenance cost. As mentioned earlier, the non-convexity of the ACOPF feasible space X in (3) poses another major computational challenge. To address this, we present a convex relaxation of problem (12) in the following subsection. We use the well-established SDP relaxation of ACOPF [19], with some modifications required to solve (12), which has an additional RT objective term along with the generation cost function. Convexification of SA-SOPF In this section, we build the objective-penalization-based convex relaxation of (12) using a modified form of the SDP relaxation of ACOPF [19]. The major computational issue in ACOPF is the non-convexity arising from the apparent power flow equation describing the power flow in the branch connecting nodes i and j:

	s_ij = v_i (v̄_i − v̄_j) ȳ_ij.    (13)

Here, v_i is the complex node voltage at the i-th node, and an over-line, as in v̄, indicates the complex conjugate of the quantity. The y_ij is the admittance of the branch connecting nodes i and j, y_ij ∈ Y. The non-convexity in (13) is due to the multiplication of the voltage with its complex conjugate, which makes (13) a quadratic equality. In [19,25], the authors have given a convex relaxation method using lifting variables.
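The deterministic objective in (12), including separate deviation weights of the kind discussed in Remark 2, is straightforward to express in a convex modeling tool. The CVXPY sketch below replaces the feasible set X with simple box constraints purely for illustration (all data are made up) and sweeps the two weights to trace a small Pareto set of the three objective terms:

```python
import numpy as np
import cvxpy as cp

# Schematic rendering of (12) with deviation weights: DA cost plus weighted
# squared distances to the RT optimal solution of Theorem 1. The ACOPF
# feasible set X is replaced by box constraints; all data are made up.
a, b = np.array([0.02, 0.025]), np.array([2.0, 1.8])  # quadratic cost coeffs
p_r = np.array([1.2, 0.9])          # p_o^{g,r} = M_p mu + C_p (given)
v_r = np.full(5, 1.0)               # v_o^r    = M_v mu + C_v (given)

def solve_sa_sopf(beta_p, beta_v):
    p, v = cp.Variable(2), cp.Variable(5)
    cost_da = cp.sum(cp.multiply(a, cp.square(p)) + cp.multiply(b, p))
    obj = (cost_da
           + beta_p * cp.sum_squares(p - p_r)
           + beta_v * cp.sum_squares(v - v_r))
    cons = [p >= 0.1, p <= 2.0, v >= 0.94, v <= 1.06]  # stand-in for X
    cp.Problem(cp.Minimize(obj), cons).solve()
    return (cost_da.value,
            np.sum((p.value - p_r) ** 2),
            np.sum((v.value - v_r) ** 2))

# Sweeping the weights traces the trade-off between DA cost and deviations.
for beta_p in (0.1, 1.0, 10.0):
    for beta_v in (0.1, 1.0, 10.0):
        print(beta_p, beta_v, solve_sa_sopf(beta_p, beta_v))
```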
The lifting variable W, a symmetric matrix, replaces the voltage product as [19]

	W = v_o v_o^T.    (14)

Using the relation (14), the quadratic voltage equalities can be formulated as linear matrix inequalities with the positive semi-definite condition W ⪰ 0 on the variable matrix [19], instead of the exact relation W = v_o v_o^T. However, there are some major differences in the proposed problem that make this type of relaxation inadequate here. Below, we present the modifications to the standard SDP relaxation of ACOPF [19] that are required to solve the proposed SA-SOPF. The RT objective in (12) involves the voltage in such a way that the expectation of voltage deviation is minimized along with the generation deviation. Thus, a complete replacement of v_o with W is not possible. Further, with the standard relaxed constraint W ⪰ 0 [19], there is no bound on the gap between the decomposed voltage vector v_w = [Re(v_w); Im(v_w)] obtained from W = v_w v_w^T and the voltage variable v_o appearing in the objective. To build a coupling between these two different voltage variables, we employ a modified convex relaxation of ACOPF. Instead of the positive semi-definite condition W ⪰ 0, we apply the convex inequality [26]

	W ⪰ v_o v_o^T.    (15)

Now, consider the RT objective term in (12). The day-ahead generation p^g_o is a variable, and the generation term in (12) is convex. Thus, the real-power objective term can be included directly in the SDP formulation of SA-SOPF. The voltage term of the RT objective in (12) can be expanded as [26]

	‖v_o − v^r_o‖² = v_o^T v_o − 2 (v^r_o)^T v_o + ‖v^r_o‖².    (16)

Further, we can modify (16) to bound the gap induced by the lifting variable, as in (15), which will also improve feasibility. Here we do not rigorously guarantee the feasibility improvement provided by the penalty, but readers can find the motivation and ideas behind it in [26]. Now, we propose an upper bound on the term v_o^T v_o in (16), relating the voltage variable vector v_o to the matrix W, by applying the trace operator to (15):

	v_o^T v_o ≤ Tr(W).    (17)

Thus, with (17), (16) and (12), we obtain a convex upper bound on the RT objective, as a function of the DA base-solution vectors:

	Tr(W) − 2 (v^r_o)^T v_o + ‖v^r_o‖².    (18)

Now, following the model in [19,26] and using (14) with (18), we present a complete convex relaxation of (3) as

	min  g(p^g_o) + β_p ‖p^g_o − p^{g,r}_o‖² + β_v ( Tr(W) − 2 (v^r_o)^T v_o + ‖v^r_o‖² )
	s.t.  the SDP-relaxed ACOPF constraints on {W, p^g_o} together with the coupling constraint (15).    (19)

Here, the DA base-solution for real power generation is p^g_o, g(·) is a quadratic cost function, and β_p, β_v are weights determining the relative importance of the different objective terms. Further, Y_k, Ȳ_k, Y_kl, Ȳ_kl are admittance matrices and M_k is a diagonal incidence matrix. These are constructed similarly to the ones presented in [19] and are not presented here for brevity. The effect and interpretation of the objective weights (β_p, β_v) are important to understand. Their numerical values decide the significance of the different objectives. Unlike generator-wise cost coefficients, which act on individual generation variables, these weights directly set the relative significance of generation deviation versus voltage deviation. Further, they can also be used to balance the significance of the real-time and day-ahead cost objectives. In the next section, we present the simulation results and a discussion of the proposed SA-SOPF problem formulation. We show evidence of the distributional robustness of the affine policy, as well as the effect of the state-aware objective on the optimal generation cost under different cases. Results and Discussion In this section, we use the IEEE 14-bus and 30-bus systems [27], with all the non-zero load buses having an uncertain injection or load.
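Before turning to the numerical results, note that the coupling inequality (15) and the trace bound (18) can be encoded compactly via a Schur complement, since the block matrix [[W, v],[v^T, 1]] is positive semi-definite exactly when W ⪰ v v^T. A schematic CVXPY fragment follows; all network constraints are omitted and the values are illustrative:

```python
import cvxpy as cp
import numpy as np

# Coupling constraint (15), W >= v v^T, encoded via a Schur complement:
# the joint PSD variable Z = [[W, v], [v^T, 1]] enforces W >= v v^T.
# Network constraints are omitted; this only shows how (15)-(18) enter a model.
n = 4                                    # size of stacked [Re(v); Im(v)] vector
Z = cp.Variable((n + 1, n + 1), PSD=True)
W, v = Z[:n, :n], Z[:n, n]               # lifted matrix and voltage vector

v_r = np.full(n, 1.0)                    # RT optimal voltage v_o^r (given)
beta_v = 10.0                            # voltage-deviation weight

# Convex upper bound (18) on ||v - v_r||^2 using the trace inequality (17).
penalty = beta_v * (cp.trace(W) - 2 * v_r @ v + v_r @ v_r)

constraints = [Z[n, n] == 1, v >= 0.9, v <= 1.1]
prob = cp.Problem(cp.Minimize(penalty), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)
```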
First, we show the subspace-wise robustness, or distributional robustness, of the affine policy learned via GP for the 14-bus system. Later, we present results and a discussion of the SA-SOPF numerical studies and the comparative effect of the different objective terms. We use the well-established runopf code of MATPOWER [27] for benchmarking the SA-SOPF solution and for error estimation. ACOPF refers to the case in which only the DA cost objective g(p^g_o) is used to obtain the base-solution. 14-Bus System To show the applicability of the proposed method over a variety of uncertainty distributions, we use different cases in the simulations. The details of the cases, in terms of the PDF parameters of the net load (uncertain demand minus uncertain distributed (renewable source) generation), are as follows: • Case I: Uniform distribution of the uncertain net-load vector ξ, within ±30% variation of the base load of each node; • Case II: Normal distribution of the uncertain net-load vector ξ, with the base load as mean and 10% of the base load as standard deviation for each node; • Case III: Weibull distribution of the uncertain net-load vector ξ, with the base load as scale and 1.5 times the base load as shape parameter for each node. It is important to note that we consider the load as net load, meaning that the uncertain distributed generation (from renewable sources) is subtracted from the uncertain demand at each node. Therefore, the simulation cases above are explicitly designed to showcase the proposed method's ability to work with both uncertain load and renewable injection together. Now, to show the distributional robustness of the RT policy (10), we draw %L1 error histograms for the generation vector in Figure 1 and for the voltage vector in Figure 2. The %L1 error is defined as ‖v − v_s‖₁/‖v_s‖₁ × 100, with v_s being the true solution obtained via runopf. We use Case I to train the GP and obtain the policy for all three cases without retraining the model. It is clear that in Case I the mean of the error histogram is lower than in the other cases in Figures 1 and 2. Note that the %L1 error in all three cases is < 1%. This is indicative of the distributional robustness of the proposed GP-based policy. The importance of SA-SOPF lies in its ability to outperform the conventional SOPF in terms of minimizing the expectation of voltage deviation. Table 1 compares the SA-SOPF results with SOPF for Cases I-III. It shows that with very little extra cost (< 1% of the SOPF cost of 6429.31 $/hr), SA-SOPF can significantly reduce the voltage deviation. This evidence suggests that there exist DA optimal solutions that lead to lower state deviation without a significant increase in cost. Further, Cases I-III have the base load as the mean, p^d_o = µ. This means that the DA optimal base-solution without the RT objective will already lead to a very small value of E{Δp^g}. In other words, the DA scheduling ACOPF solution will be close to the RT optimal generation solution p^{g,r}_o. Therefore, we test our method's robustness on a case where the mean of the uncertain load differs from the base load, µ ≠ p^d_o: • Case IV: Normal distribution of the uncertain net-load vector ξ, with 1.1 times the base load as mean and 10% of the base load as standard deviation for each node. Case IV is used, as a harsh condition, to show the performance of the proposed SA-SOPF formulation in achieving its intended goal of minimizing state deviation along with the DA generation cost.
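For reference, the %L1 metric used here and in Figures 1 and 2 amounts to the following small helper; the values shown are illustrative only:

```python
import numpy as np

# The %L1 metric used to assess the policy against MATPOWER's runopf output:
# 100 * ||v - v_s||_1 / ||v_s||_1, with v_s the true ACOPF solution.
def pct_l1_error(v_policy: np.ndarray, v_true: np.ndarray) -> float:
    return 100.0 * np.abs(v_policy - v_true).sum() / np.abs(v_true).sum()

# Illustrative values only: a policy output vs. a "true" solver output.
v_true = np.array([1.02, 1.01, 0.99, 1.00])
v_policy = v_true + np.array([1e-3, -2e-3, 5e-4, 0.0])
print(f"{pct_l1_error(v_policy, v_true):.4f} %")   # well under 1%
```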
The Pareto front between the three objective terms of (3) (solved via (19)) for Case IV is given in Figure 3, while the results comparing the numerical values of the day-ahead-cost-based ACOPF and the SA-SOPF Pareto solution are given in Table 2. Both results show that the proposed SA-SOPF formulation achieves an order-of-magnitude lower expectation of voltage deviation compared with the ACOPF results. Further, the reduction in the expectation of generation deviation is significant compared to the increase in DA schedule cost. This proves the proposed SA-SOPF's applicability under worst cases such as Case IV. 30-Bus System In this subsection, we present the results for the 30-bus system [27,28]. This test system has a total of six generators and 21 load buses. We consider all 21 loads as random variables, thus effectively having a 21-dimensional vector Ξ. We consider the base load to be 0.8 times the load given in the data file of [27] to ensure feasibility within the uncertain load space. For learning the affine policy, a load space of ±20% is considered, assuming that all the points within this subspace are feasible for ACOPF. We check this assumption by simulating ACOPF for 10^4 samples. For testing this case study, we consider an uncorrelated normal distribution with a 5% standard deviation of the base load. Figure 4 shows the %L1 norm error for the generation and voltage magnitude vectors. The affine policy quality is assessed against 10^4 samples of the true ACOPF solution. The left sub-figure of Figure 4 shows that the affine policy learned using GP has a %L1 norm error of less than 0.03%. This indicates that the proposed method approximates the relationship accurately. The error plot for |V| shows that the percentage error in the voltage magnitude is of the order of 10^-3, which is considerably low. Table 3 compares the proposed SA-SOPF Pareto optimal results with the perfect-information ACOPF results. It shows that with 1.195% extra cost, there is a 90.08% reduction in the expectation of generator set-point deviation. Further, a 99.91% reduction in the expectation of voltage deviation is also reported in Table 3. To indicate the increase in generation cost due to the different penalties, we present the Pareto front considering all three objective terms in Figure 5. The figure shows that a significant decrease in the deviation of generation set-points is obtained at the expense of an increase in cost and voltage set-point deviation. However, the cost increment of approximately 1% at the Pareto optimum shows the possibility of obtaining a low state-deviation solution for a very small cost increase. Optimality Gap and Computation Time One aspect of the policy accuracy is the optimality gap between the solutions obtained using the proposed affine policy and the perfect-information ACOPF solution. There will be a non-zero optimality gap under most conditions due to the non-linearity of the power flow manifold. In Figure 6, we present a box plot of the percentage optimality gap between the cost obtained using the affine policy and the true, complete-information ACOPF. It is clear that the affine policy achieves a significantly small optimality gap (< 0.1%) for all three cases. This shows that the affine decision rule is a highly accurate approximation of the actual decision rule, in a distributionally robust manner. Table 4 contains the results for the time taken to learn the voltage and generation set-point affine policies.
The table also shows the time taken to solve the SA-SOPF problem using MOSEK 9.2 with YALMIP [29] on MATLAB R2020b. The time is given in seconds. All the simulations are performed using the GPML toolbox [30] on a PC with an Intel Xeon CPU and 16 GB RAM. These timing results show that the proposed method obtains the DR affine policy for the different systems within 1 min, and the SA-SOPF solution is obtained in negligible time. We use only 450 ACOPF training samples, which makes the proposed method computationally light compared to large-scale MCS-based methods. Importantly, the proposed work focuses on finding an affine recourse policy and a solution to the SOPF problem that minimize the cost and the state deviation for a future scheduling period. This future scheduling period can be a few hours ahead or a day ahead. We do not propose solving the SA-SOPF problem or learning the distributionally robust affine policy in real time. Therefore, the time taken by the proposed method is much less significant, as the framework under which the SA-SOPF problem is solved is not time-constrained. We will explore the possibilities of using scalable Gaussian process learning methods for large-scale systems in future work. Figure 6. Percentage optimality gap between the cost of generation using the true, complete-information ACOPF and the affine policy in (8) for the IEEE 14-bus system. The test is conducted for three different cases with 10^4 samples. The cost coefficients are taken from [27]. Conclusions A novel state-aware stochastic optimal power flow (SA-SOPF) problem is formulated and solved using convex relaxation. The proposed SA-SOPF problem minimizes the day-ahead generation schedule cost and the expectation of the deviation in generation and voltage for ACOPF under uncertainty. The recourse functions, or policies, for shifting the generation and voltage set-points under load uncertainty are obtained using GP learning; the proposed method results in a distributionally robust affine policy. The policy produces accurate results for different types of load uncertainty distributions, with a %L1 error below 1%. The proposed formulation is able to reduce the expectation of generation and voltage deviation significantly, by more than 60%, with an additional day-ahead cost of no more than 5% for the 14-bus system. The proposed SA-SOPF attains at least a 90% reduction in the expectation of set-point variation with only 1.195% extra generation cost for the 30-bus system. Therefore, the proposed formulation opens up the possibility of obtaining SOPF solutions that improve system performance in real time. The lower state variation in real time will lead to lower power-flow changes, lower locational marginal price changes, and fewer control operations, such as tap changing. Future work will explore the possibility of combining policy learning and the optimization problem. The learning of distributionally robust affine policies for larger systems will also be considered in the future.
10,106.6
2021-07-07T00:00:00.000
[ "Engineering", "Computer Science" ]